Migrate dirinstall gating test from STI to TMT

The Fedora Rawhide CI tests are used.

This pull request enables tests in RHEL gating using `tmt`, which also
makes it easy to execute and debug the tests from your laptop:

Run tests directly on your localhost:

    sudo dnf install -y tmt
    tmt run --all provision --how local

Run tests in a virtual machine:

    sudo dnf install -y tmt-provision-virtual
    tmt run

Check the documentation to learn more about the tool:
https://docs.fedoraproject.org/en-US/ci/tmt/
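
For example, to limit a run to just the new integration plan and the
dirinstall test (a sketch: plan and test names are regular expressions,
and the exact names depend on the fmf tree):

    tmt run --all plan --name integration test --name dirinstall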

Related: rhbz#2006718
Radek Vykydal 2021-09-02 15:56:59 +02:00
parent ae4ab28174
commit ffec7909cc
35 changed files with 104 additions and 939 deletions

plans/integration.fmf Normal file

@@ -0,0 +1,12 @@
summary: Integration tests for anaconda
discover:
    how: fmf
    filter: 'tag: integration'
execute:
    how: tmt
finish:
    how: shell
    script: command -v journalctl && journalctl -a || true
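
As a quick sanity check, something like the following should list the
tests that this plan's discover filter picks up (assuming a checkout
with the fmf metadata initialized):

    tmt tests ls --filter 'tag: integration'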

@@ -1,56 +0,0 @@
[Anaconda](https://github.com/rhinstaller/anaconda) installer gating tests.

In addition to the tests (`tests*.yml`), this directory also contains playbooks for running the tests from localhost on a remote test runner. The runner can be provisioned by `linchpin`. See the [run_tests_remotely.sh](run_tests_remotely.sh) script for an example of the playbooks' usage.

Running the tests remotely
--------------------------

### Test runner

The remote test runner can be provided in any way. To be usable by the playbooks:

* It has to allow ssh access as a remote user that is allowed to become root, using the ssh key configured by `private_key_file` in the [ansible config](remote_config/ansible.cfg). By default the `remote_user` for running the tests is `root`.
* The test runner host name / IP should be configured for the playbooks in the `gating_test_runner` group of [remote_config/inventory](remote_config/inventory).

The runner can be provisioned in a cloud by linchpin as in the [script](run_tests_remotely.sh):

* The cloud credentials need to be configured in the file and profile referred to by the `credentials` variable of the [topology](linchpin/topologies/gating-test.yml). So the credentials file [`clouds.yml`](linchpin/credentials/clouds.yml) should contain the profile `ci-rhos`. The file can be placed in the `~/.config/linchpin` directory, or the directory containing the file can be set by the `linchpin` `--creds-path` option.
* The ssh key is set by the `keypair` value of the linchpin [topology](linchpin/topologies/gating-test.yml) file. It should correspond to the key defined in the [ansible config](remote_config/ansible.cfg). The [topology](linchpin/topologies/gating-test.yml) file also defines the image to be used for the test runner.
* The script populates the [inventory](remote_config/inventory) for the playbooks with the [inventory](linchpin/layouts/gating-test.yml) generated by linchpin.
* The script tries to find out which remote user should be used (`root`, `fedora`, `cloud-user`) and updates the [ansible config](remote_config/ansible.cfg) with the value.

### Test runner environment

The test runner environment is prepared by the [`prepare-test-runner.yml`](prepare-test-runner.yml) playbook:

* It is possible to add repositories to the runner by defining the [`test_runner_repos`](roles/prepare-test-runner/defaults/main.yml) variable. This can be useful, for example, for adding a repository with a scratch build to be tested, or for adding repositories with test dependencies missing on the remote runner.
* An empty directory for storing test artifacts is created on the test runner based on the [`artifacts`](roles/prepare-test-runner/vars/main.yml) variable.

### Test playbooks configuration

#### Running on the remote runner:

Normally the testing system runs all the `tests*.yml` playbooks.

**WARNING:**
The test playbooks are run on `localhost` (the test runner provided by the testing system). They change the test runner environment (e.g. install packages), so you most probably don't want to run them as they are on your local host.

The [script](run_tests_remotely.sh) updates the `hosts` value of the test playbooks to use a remote host from the [`gating_test_runner`](remote_config/inventory/hosts) group as the test runner (using a [playbook](set_tests_to_run_on_remote.yml)).

If you want to run the test playbooks separately, make sure the `hosts` variable in the test playbook is set to the remote test runner (e.g. `gating_test_runner`).

The test playbooks need the [`artifacts`](roles/prepare-test-runner/vars/main.yml) variable supplied, as can be seen in the [script](run_tests_remotely.sh). (Normally the testing system takes care of this.)

#### Installation repositories:

The repositories (base and additional) used for the installation test are defined in the [repos](roles/installation-repos/defaults/main.yml) configuration. Their URLs can either be defined explicitly or looked up in specified repositories of the test runner.

#### dirinstall test

There are text and vnc variants of the dirinstall test. Both run all the kickstarts found in [roles/dirinstall/templates/kickstarts](roles/dirinstall/templates/kickstarts).

### The results

The results and logs are fetched from the remote host into a local host directory defined by the `artifacts` variable passed to the playbooks. This value can also be passed to the [script](run_tests_remotely.sh) with the `-a` option.

@@ -1,25 +0,0 @@
---
# Check if remote_user is reachable by ansible and set ansible.cfg
# if so.
- hosts: gating_test_runner
  become: True
  gather_facts: False
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Try a raw command as a check
      raw: echo "CHECK OK"
      register: result
    - debug:
        msg: "{{ result }}"
    - name: Set ansible.cfg remote user to "{{ remote_user }}"
      become: no
      local_action:
        module: lineinfile
        path: ./remote_config/ansible.cfg
        regexp: ^remote_user
        line: "remote_user = {{ remote_user }}"
      when: result.stdout_lines[0] == "CHECK OK"
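
The playbook was invoked once per candidate user, as in
run_tests_remotely.sh:

    ansible-playbook --extra-vars="remote_user=fedora" check_and_set_remote_user.yml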

@@ -0,0 +1,22 @@
summary: Dirinstall test on regular os
contact: Radek Vykydal <rvykydal@redhat.com>
path: /tests/dirinstall
test: ./dirinstall.sh
duration: 1h
tag: [integration]

/text:
    summary: Dirinstall test on regular os - text UI
    require:
        - anaconda
    environment:
        ANACONDA_UI_MODE: text

/vnc:
    summary: Dirinstall test on regular os - vnc UI
    enabled: false
    require:
        - anaconda
        - gnome-kiosk
    environment:
        ANACONDA_UI_MODE: vnc
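
Note that only the /text variant runs by default, since /vnc is marked
`enabled: false`. To run a single variant, something along these lines
should work (the exact test name depends on where the metadata sits in
the fmf tree):

    tmt run --all provision --how local test --name /text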

tests/dirinstall/dirinstall.sh Executable file

@@ -0,0 +1,42 @@
#!/bin/sh -eux

# Prepare test work directory
WORK_DIR=$(mktemp -d /var/tmp/dirinstall.XXXXXX)

# Create kickstart
KICKSTART_PATH=${WORK_DIR}/ks.cfg
. ./repositories
TEST_KICKSTART=./ks.dirinstall.cfg
# Dump URLs of installation repositories found in local repositories
# whose names are configured in the 'repositories' file
echo "url --metalink=$(dnf repoinfo $BASE_REPO | grep ^Repo-metalink | cut -d: -f2- | sed 's/^ *//')" > ${KICKSTART_PATH}
for repo in $REPOS; do
    echo "repo --name=$repo --metalink=$(dnf repoinfo $repo | grep ^Repo-metalink | cut -d: -f2- | sed 's/^ *//')" >> ${KICKSTART_PATH}
done
cat ${TEST_KICKSTART} >> ${KICKSTART_PATH}

# Log the kickstart
cat ${KICKSTART_PATH}

# Run dirinstall
INSTALL_DIR=${WORK_DIR}/install_dir
mkdir ${INSTALL_DIR}
anaconda --dirinstall ${INSTALL_DIR} --kickstart ${KICKSTART_PATH} --${ANACONDA_UI_MODE} --noninteractive 2>&1

# Remove test work directory
rm -rf ${WORK_DIR}

# Show and remove the logs for this anaconda run
./show_logs.sh
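
The script can also be run by hand, outside tmt. A sketch, assuming
anaconda is installed and the repositories file matches your release:

    cd tests/dirinstall
    sudo ANACONDA_UI_MODE=text ./dirinstall.sh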

@@ -1,6 +1,6 @@
-{{ base_repo_command }}
-{{ "\n".join(repo_commands) }}
-{{ "\n".join(additional_repo_commands) }}
+# The repository configuration (url, repo) needs to be added here.
+# It varies by the product and version we are running on / testing
 lang en_US.UTF-8
 keyboard --vckeymap=us --xlayouts='us'
 rootpw --plaintext redhat

@@ -0,0 +1,4 @@
# Names of local repositories whose urls will be used for installation
# Repositories for "Fedora X" release:
BASE_REPO="fedora"
REPOS="fedora-modular"

tests/dirinstall/show_logs.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/sh -x
ls /tmp
LOG_DIR=/tmp
cd ${LOG_DIR}
KS_SCRIPT_LOGS=$(ls ks-script-*.log)
cd -
ANACONDA_LOGS="anaconda.log storage.log packaging.log program.log dbus.log dnf.librepo.log ${KS_SCRIPT_LOGS}"
for log in ${ANACONDA_LOGS} ; do
    LOG_PATH=${LOG_DIR}/${log}
    if [ -f ${LOG_PATH} ]; then
        echo "----------------------- Dumping log file $LOG_PATH:"
        cat $LOG_PATH
        # clear for the following test
        rm $LOG_PATH
    fi
done

@@ -1,4 +0,0 @@
---
gating-test:
  topology: gating-test.yml
  layout: gating-test.yml

@@ -1,7 +0,0 @@
clouds:
  ci-rhos:
    auth:
      auth_url:
      project_name:
      username:
      password:

@@ -1 +0,0 @@
[gating_test_runner]

@@ -1,11 +0,0 @@
---
inventory_layout:
  #inventory_file: "{% raw -%}{{ workspace }}/inventories/gating-test.inventory{%- endraw%}"
  vars:
    hostname: __IP__
  hosts:
    gating_test_runner:
      count: 1
      host_groups:
        - gating_test_runner

@@ -1,295 +0,0 @@
# This file is a well-documented, and commented out (mostly) file, which
# covers the configuration options available in LinchPin
#
# Used to override default configuration settings for LinchPin
# Defaults exist in linchpin/linchpin.constants file
#
# Uncommented options enable features found in v1.5.1 or newer and
# can be turned off by commenting them out.
#
# structured in INI style
# use %% to allow code interpolation
# use % to use config interpolation
#
[DEFAULT]
# name of the python package (Redundant, but easier than programmatically
# obtaining the value. It's very unlikely to change.)
pkg = linchpin
# Useful for storing the RunDB or other components like global credentials
# travis-ci doesn't like ~/.config/linchpin, use /tmp
#default_config_path = ~/.config/linchpin
# When creating a provider not already included in LinchPin, this path
# extends where LinchPin will look to run the appropriate playbooks
#external_providers_path = %(default_config_path)s/linchpin-x
# When adding anything to the lp section, it should be general for
# the entire application.
[lp]
# load custom ansible modules from here
#module_folder = library
# rundb tracks provisioning transactions
# If you add a new one, rundb/drivers.py needs to be updated to match
# rundb_conn is the location of the run database.
# A common reason to move it is to use the rundb centrally across
# the entire system, or in a central db on a shared filesystem.
# System-wide RunDB: rundb_conn = ~/.config/linchpin/rundb-::mac::.json
#rundb_conn = {{ workspace }}/.rundb/rundb-::mac::.json
rundb_conn = ~/.config/linchpin/rundb-::mac::.json
# name the type of Run database. Currently only TinyRunDB exists
#rundb_type = TinyRunDB
# How to connect to the RunDB, if it's on a separate server,
# it may be tcp or ssh
#rundb_conn_type = file
# The schema is used because TinyDB is a NoSQL db. Another DB
# may use this as a way to manage fields in a specific table.
#rundb_schema = {"action": "",
# "inputs": [],
# "outputs": [],
# "start": "",
# "end": "",
# "rc": 0,
# "uhash": ""}
# each entry in the RunDB contains a unique-ish hash (uhash). This
# sets the hashing mechanism used to generate the uhash.
#rundb_hash = sha256
# The default dateformat used in LinchPin. Specifically used in the
# RunDB for recording start and end dates, but also used elsewhere.
#dateformat = %%m/%%d/%%Y %%I:%%M:%%S %%p
# The name of the pinfile. Someone could adjust this and use TopFile
# or somesuch. The ramifications of this would mean that the file in
# the workspace that linchpin reads would change to this value.
#default_pinfile = PinFile
# By default, whenever linchpin performs an action
# (linchpin up/linchpin destroy), the data is read from the PinFile.
# Enabling 'use_rundb_for_actions' will allow destroy and certain up
# actions (specifically when using --run-id or --tx-id) to pull data
# from the RunDB instead.
#use_rundb_for_actions = False
use_rundb_for_actions = True
# A user can request specific data distilled from the RunDB. This flag
# enables the Context Distiller.
# NOTE: This flag requires generate_resources = False.
#distill_data = False
distill_data = True
# If desired, enabling distill_on_error will distill any successfully (and
# possibly failed) provisioned resources. This is predicated on the data
# being written to the RunDB (usually means _async tasks may never record
# data upon failure).
distill_on_error = False
# User can make linchpin use the actual return codes for final return code
# if enabled True, even if one target provision is successful linchpin
# returns exit code zero else returns the sum of all the return codes
# use_actual_rcs = False
# LinchPin sets several extra_vars (evars) that are passed to the playbooks.
# This section controls those items.
[evars]
# enables the ansible --check option
# _check_mode = False
# enables the ansible async ability. For some providers, it allows multiple
# provisioning tasks to happen at once, then will collect the data afterward.
# The default is perform the provision actions in serial.
#_async = False
# How long to wait before failing (in seconds) for an async task.
#async_timeout = 1000
# the uhash value will still exist, but will not be added to
# instances or the inventory_path
#enable_uhash = False
enable_uhash = True
# in older versions of linchpin (<v1.0.4), a resources folder exists, which
# dumped the data that is now stored in the RunDB. To disable the resources
# output, set the value to False.
#generate_resources = True
generate_resources = False
# default paths in playbooks
#
# lp_path = <src_dir>/linchpin
# determined in the load_config method of # linchpin.cli.LinchpinCliContext
# Each of the following items controls the path (usually along with the
# default values below) to the corresponding item.
# In the workspace (generally), this is the location of the layouts and
# topologies looked up by the PinFile. If either of these change, the
# value in linchpin/templates must also change.
#layouts_folder = layouts
#topologies_folder = topologies
# The relative location for hooks
#hooks_folder = hooks
# The relative location for provider roles
#roles_folder = roles
# The relative location for storing inventories
#inventories_folder = inventories
# The relative location for resources output (deprecated)
#resources_folder = resources
# The relative location to find schemas (deprecated)
#schemas_folder = schemas
# The relative location to find playbooks
#playbooks_folder = provision
# The default path to schemas for validation (deprecated)
#default_schemas_path = {{ lp_path }}/defaults/%(schemas_folder)s
# The default path to topologies if they aren't in the workspace
#default_topologies_path = {{ lp_path }}/defaults/%(topologies_folder)s
# The default path to inventory layouts if they aren't in the workspace
#default_layouts_path = {{ lp_path }}/defaults/%(layouts_folder)s
# The default path for outputting ansible static inventories
#default_inventories_path = {{ lp_path }}/defaults/%(inventories_folder)s
# The default path to the ansible roles which control the providers
#default_roles_path = {{ lp_path }}/%(playbooks_folder)s/%(roles_folder)s
# In older versions (<1.2.x), the schema was held here. These schemas are
# deprecated.
#schema_v3 = %(default_schemas_path)s/schema_v3.json
#schema_v4 = %(default_schemas_path)s/schema_v4.json
# The location where default credentials data would exist. This path doesn't
# automatically exist
#default_credentials_path = %(default_config_path)s
# If desired, one could overwrite the location of the generated inventory path
#inventory_path = {{ workspace }}/{{inventories_folder}}/happy.inventory
# Libvirt images can be stored almost anywhere (not /tmp).
# Unprivileged users need not setup sudo to manage a path to which they have rights.
# The following are specific settings to manage libvirt images and instances
# the location to store generated ssh keys and the like
#default_ssh_key_path = ~/.ssh
# Where to store the libvirt images for copying/booting instances
#libvirt_image_path = /var/lib/libvirt/images/
# What user to use to access libvirt.
# Using root means sudo without password must be setup
#libvirt_user = root
# When using root or any privileged user, this must be set to yes.
# sudo without password must also be setup
#libvirt_become = yes
# This section covers settings for the `linchpin init` command
#[init]
# source path of files generated by linchpin init
#source = templates/
# formal name of the generated PinFile. Can be changed as desired.
#pinfile = PinFile
# This section covers logging setup
[logger]
# Turns off and on the logger functionality
#enable = True
# Full path to the location of the linchpin log file
file = ~/.config/linchpin/linchpin.log
# Log format used. See https://docs.python.org/2/howto/logging-cookbook.html
#format = %%(levelname)s %%(asctime)s %%(message)s
# Date format used. See https://docs.python.org/2/howto/logging-cookbook.html
#dateformat = %%m/%%d/%%Y %%I:%%M:%%S %%p
# Level of logging provided
#level = logging.DEBUG
# Logging to the console via STDERR
#[console]
# logging to the console should also be possible
# NOTE: Placeholder only, cannot disable.
#enable = True
# Log format used. See https://docs.python.org/2/howto/logging-cookbook.html
#format = %%(message)s
# Level of logging provided
#level = logging.INFO
# LinchPin hooks have several states depending on the action. Currently, there
# are three hook states relating to tasks being completed.
# * up - when performing the up (provision) action
# * destroy - when performing the destroy (teardown) action
# * inv - when performing the internal inventory generation action
# (currently unimplemented)
#[hookstates]
# when performing the up action, these hooks states are run
#up = pre,post,inv
# when performing the inv action, these hooks states are run
#inv = post
# when performing the destroy action, these hooks states are run
#destroy = pre,post
# This section covers file extensions for generating or looking
# up specific files
#[extensions]
# When looking for provider playbooks, use this extension
#playbooks = .yml
# When generating inventory files, use this extension
#inventory = .inventory
# This section controls the ansible settings for display or other settings
#[ansible]
# If set to true, this enables verbose output automatically to the screen.
# This is equivalent of passing `-v` to the linchpin command line shell.
#console = False
# When linchpin is run, certain states are called at certain points along the
# execution timeline. These STATES are defined below.
#[states]
# in future each state will have comma separated substates
# The name of the state which occurs before (pre) provisioning (up)
#preup = preup
# The name of the state which occurs before (pre) teardown (destroy)
#predestroy = predestroy
# The name of the state which occurs after (post) provisioning (up)
#postup = postup
# The name of the state which occurs after (post) teardown (destroy)
#postdestroy = postdestroy
# The name of the state which occurs after (post) inventory is generated (inv)
#postinv = inventory

@@ -1,18 +0,0 @@
---
topology_name: gating-test
resource_groups:
  - resource_group_name: gating-test
    resource_group_type: openstack
    resource_definitions:
      - name: "gating_test_runner"
        role: os_server
        flavor: m1.small
        image: RHEL-8.0-x86_64-nightly-latest
        count: 1
        keypair: kstests
        fip_pool: 10.8.240.0
        networks:
          - installer-jenkins-priv-network
    credentials:
      filename: clouds.yml
      profile: ci-rhos

@@ -1,7 +0,0 @@
---
# prepare test runner
- hosts: gating_test_runner
  become: true
  roles:
    - prepare-test-runner

@@ -1,5 +0,0 @@
---
standard-inventory-qcow2:
  qemu:
    m: 2G

@@ -1,5 +0,0 @@
[defaults]
inventory = inventory
remote_user = root
host_key_checking = False
private_key_file = /path/to/private_key

@@ -1,3 +0,0 @@
[gating_test_runner]
[gating_test_runner:vars]
ansible_python_interpreter=/usr/libexec/platform-python

@@ -1,78 +0,0 @@
---
- set_fact:
    kickstart: "{{ kickstart_template | basename }}"
- set_fact:
    test_name_with_ks: "{{ test_name }}.{{ kickstart }}"
- debug:
    msg: "Running '{{ test_name }}' with kickstart '{{ kickstart }}'"
- name: Copy installation kickstart
  template:
    src: "templates/kickstarts/{{ kickstart }}"
    dest: "{{ kickstart_dest }}"
    mode: 0755
- name: Clean target directory
  file:
    path: "{{ install_dir }}"
    state: "{{ item }}"
    mode: 0755
  with_items:
    - absent
    - directory
- name: Clean installation logs
  file:
    path: "/tmp/{{ item }}"
    state: absent
  with_items: "{{ installation_logs }}"
- name: Run dirinstall
  shell: timeout -k 10s 3600s anaconda --dirinstall {{ install_dir }} --kickstart {{ kickstart_dest }} {{ method }} --noninteractive 2>&1
  register: result
  ignore_errors: True
- debug:
    msg: "{{ result }}"
- set_fact:
    result_str: "PASS"
- set_fact:
    result_str: "FAIL"
    global_result: "FAIL"
  when: result.rc != 0
- name: Update global test.log
  lineinfile:
    path: "{{ local_artifacts }}/test.log"
    line: "{{ result_str }} {{ test_name_with_ks }}"
    create: yes
    insertafter: EOF
- name: Create this test log
  copy:
    content: "{{ result.stdout }}"
    dest: "{{ local_artifacts }}/{{ result_str }}_{{ test_name_with_ks }}.log"
- name: Create installation logs dir in artifacts
  file:
    path: "{{ local_artifacts }}/{{ test_name_with_ks }}"
    state: directory
- name: Copy input kickstart to artifacts
  copy:
    remote_src: True
    src: "{{ kickstart_dest }}"
    dest: "{{ local_artifacts }}/{{ test_name_with_ks }}/{{ kickstart_dest | basename }}"
- name: Copy installation logs to artifacts
  copy:
    remote_src: True
    src: "/tmp/{{ item }}"
    dest: "{{ local_artifacts }}/{{ test_name_with_ks }}/{{ item }}"
  with_items: "{{ installation_logs }}"
  ignore_errors: True

@@ -1,13 +0,0 @@
---
- name: Install vnc install dependencies
  dnf:
    name:
      - metacity
    state: latest
  when: method == "--vnc"
- include_tasks: ks-run.yml
  with_fileglob:
    - templates/kickstarts/*
  loop_control:
    loop_var: kickstart_template

@@ -1,5 +0,0 @@
---
install_dir: "/root/installdir"
kickstart_dest: "/root/ks.dirinstall.cfg"

@@ -1,6 +0,0 @@
---
# Make the test fail based on individual test failures
- fail:
    msg: "The test has failed."
  when: global_result is defined and global_result == "FAIL"

@@ -1,43 +0,0 @@
---
### Base repository
# Base repository command for kickstart
#base_repo_command: "url --url=http://download.englab.brq.redhat.com/pub/fedora/development-rawhide/Everything/x86_64/os/"
# If base_repo_command is not defined, look for the base repo url
# in the [base_repo_from_runner.repo] repository of the
# /etc/yum.repos.d/base_repo_from_runner.file on the test runner
base_repo_from_runner:
  file: rhel.repo
  repo: rhel

### Additional repositories
# Additional repo commands for kickstart:
# - undefine to allow detection of repos from the test runner by
#   the repos_from_runner variable
# - set to [] for no additional repositories
#repo_commands: []
#repo_commands:
#  - "repo --baseurl=http://download.englab.brq.redhat.com/pub/fedora/development-rawhide/Everything/x86_64/os/"
# If repo_commands is not defined, look for additional repositories
# in the [repo] repository of the /etc/yum.repos.d/file of the test runner.
# Multiple repositories can be defined here.
repos_from_runner:
  - file: rhel.repo
    repo: rhel
  - file: rhel.repo
    repo: rhel-AppStream

# Additional repo commands to be used in any case,
# i.e. even in case additional repos are detected by repos_from_runner
additional_repo_commands: []

@@ -1,87 +0,0 @@
---
### Set up local facts from system repositories
- name: Create facts directory for repository custom facts
  file:
    state: directory
    recurse: yes
    path: /etc/ansible/facts.d
- name: Install base repository facts
  copy:
    remote_src: yes
    src: "/etc/yum.repos.d/{{ base_repo_from_runner.file }}"
    dest: "/etc/ansible/facts.d/{{ base_repo_from_runner.file }}.fact"
  when: base_repo_command is not defined and base_repo_from_runner is defined
- name: Install additional repositories facts
  copy:
    remote_src: yes
    src: "/etc/yum.repos.d/{{ item.file }}"
    dest: "/etc/ansible/facts.d/{{ item.file }}.fact"
  with_items: "{{ repos_from_runner }}"
  when: repo_commands is not defined and repos_from_runner is defined
- name: Setup repository facts
  setup:
    filter: ansible_local

### Base repository
- name: Set base installation repository from system base metalink repository
  set_fact:
    base_repo_command: "url --metalink={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['metalink'] }}"
  when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['metalink'] is defined
- name: Set base installation repository from system base mirrorlist repository
  set_fact:
    base_repo_command: "url --mirrorlist={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['mirrorlist'] }}"
  when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['mirrorlist'] is defined
- name: Set base installation repository from system base url repository
  set_fact:
    base_repo_command: "url --url={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['baseurl'] }}"
  when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['baseurl'] is defined

### Additional repositories
- name: Look for system metalink repositories
  set_fact:
    repos_metalink: "{{ repos_metalink | default([]) + [ 'repo --name=' + item.repo + ' --metalink=' + ansible_local[item.file][item.repo]['metalink'] ] }}"
  #ignore_errors: true
  with_items: "{{ repos_from_runner }}"
  when: repo_commands is not defined and ansible_local[item.file][item.repo]['metalink'] is defined
- name: Look for system mirrorlist repositories
  set_fact:
    repos_mirrorlist: "{{ repos_mirrorlist | default([]) + [ 'repo --name=' + item.repo + ' --mirrorlist=' + ansible_local[item.file][item.repo]['mirrorlist'] ] }}"
  #ignore_errors: true
  with_items: "{{ repos_from_runner }}"
  when: repo_commands is not defined and ansible_local[item.file][item.repo]['mirrorlist'] is defined
- name: Look for system baseurl repositories
  set_fact:
    repos_baseurl: "{{ repos_baseurl | default([]) + [ 'repo --name=' + item.repo + ' --baseurl=' + ansible_local[item.file][item.repo]['baseurl'] ] }}"
  #ignore_errors: true
  with_items: "{{ repos_from_runner }}"
  when: repo_commands is not defined and ansible_local[item.file][item.repo]['baseurl'] is defined
- name: Set additional metalink installation repositories from system
  set_fact:
    repo_commands: "{{ repo_commands | default([]) + [ item ] }}"
  with_items: "{{ repos_metalink }}"
  when: repos_metalink is defined
- name: Set additional mirrorlist installation repositories from system
  set_fact:
    repo_commands: "{{ repo_commands | default([]) + [ item ] }}"
  with_items: "{{ repos_mirrorlist }}"
  when: repos_mirrorlist is defined
- name: Set additional baseurl installation repositories from system
  set_fact:
    repo_commands: "{{ repo_commands | default([]) + [ item ] }}"
  with_items: "{{ repos_baseurl }}"
  when: repos_baseurl is defined

@@ -1,6 +0,0 @@
---
- name: Prepare testing environment
  dnf:
    name:
      - anaconda
    state: latest

@@ -1,10 +0,0 @@
---
# Additional repos added to the test runner, e.g. a repo with builds to be tested
test_runner_repos:
  rhel:
    name: rhel
    source: "baseurl=http://download.eng.bos.redhat.com/rhel-9/nightly/RHEL-9-Beta/latest-RHEL-9/compose/BaseOS/x86_64/os/"
  rhel-AppStream:
    name: rhel-AppStream
    source: "baseurl=http://download.eng.bos.redhat.com/rhel-9/nightly/RHEL-9-Beta/latest-RHEL-9/compose/AppStream/x86_64/os/"

@@ -1,18 +0,0 @@
---
- name: Add repositories
  template:
    src: repo.j2
    dest: "/etc/yum.repos.d/{{ test_runner_repos[item]['name'] }}.repo"
  with_items: "{{ test_runner_repos }}"
- name: Create empty artifacts directory on local host
  become: no
  local_action:
    module: file
    path: "{{ artifacts }}"
    state: "{{ item }}"
    mode: 0755
  with_items:
    - absent
    - directory

@@ -1,5 +0,0 @@
[{{ test_runner_repos[item]['name'] }}]
name={{ test_runner_repos[item]['name'] }}
{{ test_runner_repos[item]['source'] }}
enabled=1
gpgcheck=0
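
With the defaults above, the rendered repo file would look like this
sketch:

    [rhel]
    name=rhel
    baseurl=http://download.eng.bos.redhat.com/rhel-9/nightly/RHEL-9-Beta/latest-RHEL-9/compose/BaseOS/x86_64/os/
    enabled=1
    gpgcheck=0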

@@ -1,3 +0,0 @@
---
artifacts: "{{ lookup('env', 'TEST_ARTIFACTS')|default('./artifacts', true) }}"

@@ -1,13 +0,0 @@
---
- name: Make sure rsync required to fetch artifacts is installed
  dnf:
    name:
      - rsync
- name: Fetch artifacts
  synchronize:
    mode: pull
    delete: yes
    src: "{{ local_artifacts }}/"
    dest: "{{ artifacts }}"

@@ -1,161 +0,0 @@
#!/bin/bash

usage () {
    cat <<HELP_USAGE
$0 [-c] [-a <ARTIFACTS DIR>]

Run gating tests on test runners provisioned by linchpin and deployed with ansible,
syncing artifacts to localhost.

-c  Run configuration check.
-a  Local host directory for fetching artifacts from test runner.
HELP_USAGE
}

CHECK_ONLY="no"
ARTIFACTS="/tmp/artifacts"

while getopts "ca:" opt; do
    case $opt in
        c)
            # Run only configuration check
            CHECK_ONLY="yes"
            ;;
        a)
            # Set up directory for fetching artifacts
            ARTIFACTS="${OPTARG}"
            ;;
        *)
            echo "Usage:"
            usage
            exit 1
            ;;
    esac
done

DEFAULT_CRED_FILENAME="clouds.yml"
CRED_DIR="${HOME}/.config/linchpin"
CRED_FILE_PATH=${CRED_DIR}/${DEFAULT_CRED_FILENAME}
TOPOLOGY_FILE_PATH="linchpin/topologies/gating-test.yml"
ANSIBLE_CFG_PATH="remote_config/ansible.cfg"

CHECK_RESULT=0

############################## Check the configuration

echo
echo "========= Dependencies are installed"
echo "linchpin and ansible are required to be installed."
echo "For linchpin installation instructions see:"
echo "https://linchpin.readthedocs.io/en/latest/installation.html"
echo

if ! type ansible &> /dev/null; then
    echo "=> FAILED: ansible package is not installed"
    CHECK_RESULT=1
else
    echo "=> OK: ansible is installed"
fi

if ! type linchpin &> /dev/null; then
    echo "=> FAILED: linchpin is not installed"
    CHECK_RESULT=1
else
    echo "=> OK: linchpin is installed"
fi

echo
echo "========= Linchpin cloud credentials configuration"
echo "The credentials file for linchpin provisioner should be in ${CRED_DIR}"
echo "The name of the file and the profile to be used is defined by"
echo " resource_groups.credentials variables in the topology file"
echo " (${TOPOLOGY_FILE_PATH})"
echo

config_changed=0
if [[ -f ${TOPOLOGY_FILE_PATH} ]]; then
    grep -q 'filename:.*'${DEFAULT_CRED_FILENAME} ${TOPOLOGY_FILE_PATH}
    config_changed=$?
fi
if [[ ${config_changed} -eq 0 ]]; then
    if [[ -f ${CRED_FILE_PATH} ]]; then
        echo "=> OK: ${CRED_FILE_PATH} exists"
    else
        echo "=> FAILED: ${CRED_FILE_PATH} does not exist"
        CHECK_RESULT=1
    fi
else
    echo "=> NOT CHECKING: seems like this has been configured in a different way"
fi

echo
echo "========== Deployment ssh key configuration"
echo "The ssh key used for deployment with ansible has to be defined by"
echo "private_key_file variable in ${ANSIBLE_CFG_PATH}"
echo "and match the key used for provisioning of the machines with linchpin"
echo "which is defined by resource_groups.resource_definitions.keypair variable"
echo "in topology file (${TOPOLOGY_FILE_PATH})."
echo

deployment_key_defined_line=$(grep 'private_key_file.*=.*[^\S]' ${ANSIBLE_CFG_PATH})
if [[ -n "${deployment_key_defined_line}" ]]; then
    echo "=> OK: ${ANSIBLE_CFG_PATH}: ${deployment_key_defined_line}"
else
    echo "=> FAILED: deployment ssh key not defined in ${ANSIBLE_CFG_PATH}"
    CHECK_RESULT=1
fi
linchpin_keypair=$(grep "keypair:" ${TOPOLOGY_FILE_PATH} | uniq)
echo "=> INFO: should be the same key as ${TOPOLOGY_FILE_PATH}: ${linchpin_keypair}"

if [[ ${CHECK_RESULT} -ne 0 ]]; then
    echo
    echo "=> Configuration check FAILED, see FAILED messages above."
    echo
fi

if [[ ${CHECK_ONLY} == "yes" || ${CHECK_RESULT} -ne 0 ]]; then
    exit ${CHECK_RESULT}
fi

############################## Run the tests

set -x

### Clean the linchpin generated inventory
rm -rf linchpin/inventories/*.inventory

### Provision test runner
linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf up

### Pass inventory generated by linchpin to ansible
cp linchpin/inventories/*.inventory remote_config/inventory/linchpin.inventory

### Use remote hosts in tests playbooks
ansible-playbook set_tests_to_run_on_remote.yml

### Use the ansible configuration for running tests on remote host
export ANSIBLE_CONFIG=${ANSIBLE_CFG_PATH}

### Configure remote user for playbooks
# By default root is used but it can be fedora or cloud-user for cloud images
for USER in root fedora cloud-user; do
    ansible-playbook --extra-vars="remote_user=$USER" check_and_set_remote_user.yml
done

### Prepare test runner
ansible-playbook --extra-vars="artifacts=${ARTIFACTS}" prepare-test-runner.yml

### Run test on test runner (supply artifacts variable which is testing system's job)
ansible-playbook --extra-vars="artifacts=${ARTIFACTS}" tests.yml

### Destroy the test runner
linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf destroy
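
Typical usage, per the option parsing above:

    # check the configuration only
    ./run_tests_remotely.sh -c

    # provision, run the tests, and fetch artifacts into ./artifacts
    ./run_tests_remotely.sh -a ./artifacts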

@@ -1,17 +0,0 @@
---
# Replace hosts in test playbooks to run on remote host instead of localhost
- hosts: localhost
  become: False
  gather_facts: False
  tasks:
    - name: Replace hosts in tests*.yml playbooks
      lineinfile:
        path: "{{ item }}"
        regexp: "- hosts: localhost\\S*"
        line: "- hosts: gating_test_runner"
        backrefs: yes
      with_fileglob:
        - tests*.yml

@@ -1,22 +0,0 @@
---
# test anaconda
- hosts: localhost
  become: true
  tags:
    - classic
  vars_files:
    - vars_tests.yml
  roles:
    - role: prepare-env
      tags:
        - prepare-env
    - role: installation-repos
    - role: dirinstall
      vars:
        method: "--text"
        test_name: dirinstall-text
      tags:
        - dirinstall-text
    - role: sync-artifacts
    - role: global-result

@@ -1,12 +0,0 @@
---
# variables for tests.yml
installation_logs:
- anaconda.log
- dbus.log
- dnf.librepo.log
- hawkey.log
- ifcfg.log
- packaging.log
- program.log
- storage.log
local_artifacts: /tmp/artifacts