import CS containers-common-1-55.el9

This commit is contained in:
eabdullin 2023-09-21 18:12:41 +00:00
parent 1f340d2b0b
commit 769087e347
18 changed files with 2051 additions and 574 deletions

View File

@ -1 +0,0 @@
a72daf8585b41529269cdffcca3a0b3d4e2f21cd SOURCES/RPM-GPG-KEY-redhat-beta

1
.gitignore vendored
View File

@ -1 +0,0 @@
SOURCES/RPM-GPG-KEY-redhat-beta

File diff suppressed because it is too large

View File

@ -54,11 +54,11 @@ A Containerfile is similar to a Makefile.
# FORMAT
`FROM image`
`FROM image [AS <name>]`
`FROM image:tag`
`FROM image:tag [AS <name>]`
`FROM image@digest`
`FROM image@digest [AS <name>]`
-- The **FROM** instruction sets the base image for subsequent instructions. A
valid Containerfile must have either **ARG** or **FROM** as its first instruction.
@ -82,6 +82,9 @@ A Containerfile is similar to a Makefile.
-- If no digest is given to the **FROM** instruction, container engines apply the
`latest` tag. If the used tag does not exist, an error is returned.
-- A name can be assigned to a build stage by adding **AS name** to the instruction.
The name can be referenced later in the Containerfile using the **FROM** or **COPY --from=<name>** instructions.
**MAINTAINER**
-- **MAINTAINER** sets the Author field for the generated images.
Useful for providing users with an email or URL for support.
@ -154,6 +157,47 @@ Current supported mount TYPES are bind, cache, secret and tmpfs.
· rw, read-write: allows writes on the mount.
**RUN --network**
`RUN --network` allows control over which networking environment the command
is run in.
Syntax: `--network=<TYPE>`
**Network types**
| Type | Description |
|----------------------------------------------|----------------------------------------|
| [`default`](#run---networkdefault) (default) | Run in the default network. |
| [`none`](#run---networknone) | Run with no network access. |
| [`host`](#run---networkhost) | Run in the host's network environment. |
##### RUN --network=default
Equivalent to not supplying a flag at all, the command is run in the default
network for the build.
##### RUN --network=none
The command is run with no network access (`lo` is still available, but is
isolated to this process).
##### Example: isolating external effects
```dockerfile
FROM python:3.6
ADD mypackage.tgz wheels/
RUN --network=none pip install --find-links wheels mypackage
```
`pip` will only be able to install the packages provided in the tarfile, which
can be controlled by an earlier build stage.
##### RUN --network=host
The command is run in the host's network environment (similar to
`buildah build --network=host`, but on a per-instruction basis).
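For instance, a build step that needs to reach a service listening only on the build host could be written as follows (an illustrative sketch; the base image, port, and endpoint are placeholders):
```dockerfile
FROM registry.fedoraproject.org/fedora
# Query a service bound to the build host's loopback interface
RUN --network=host curl -sf http://localhost:8080/healthz
```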
**RUN Secrets**
@ -321,10 +365,10 @@ The secret needs to be passed to the build using the --secret flag. The final im
-- **COPY** has two forms:
```
COPY <src> <dest>
COPY [--chown=<user>:<group>] [--chmod=<mode>] <src> <dest>
# Required for paths with whitespace
COPY ["<src>",... "<dest>"]
COPY [--chown=<user>:<group>] [--chmod=<mode>] ["<src>",... "<dest>"]
```
The **COPY** instruction copies new files from `<src>` and
@ -337,6 +381,16 @@ The secret needs to be passed to the build using the --secret flag. The final im
attempt to unpack it. All new files and directories are created with mode **0755**
and with the uid and gid of **0**.
`--chown=<user>:<group>` changes the ownership of new files and directories.
Supports names, if defined in the container's `/etc/passwd` and `/etc/group` files, or using
uid and gid integers. The build will fail if a user or group name can't be mapped in the container.
Numeric IDs are set without looking them up in the container.
`--chmod=<mode>` changes the mode of new files and directories.
The optional flag `--from=name` can be used to copy files from a named previous build stage. It
changes the context of `<src>` from the build context to the named build stage.
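Putting these flags together, a multi-stage Containerfile can copy an artifact out of a named stage while adjusting its ownership and permissions (an illustrative sketch; image names and paths are placeholders):
```dockerfile
FROM golang AS builder
WORKDIR /src
COPY . .
RUN go build -o /src/app .

FROM registry.access.redhat.com/ubi9/ubi-minimal
# Copy the binary from the named "builder" stage with an explicit owner and mode
COPY --from=builder --chown=1001:0 --chmod=0755 /src/app /usr/local/bin/app
```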
**ENTRYPOINT**
-- **ENTRYPOINT** has two forms:

View File

@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)
mQINBEmkAzABEAC2/c7bP1lHQ3XScxbIk0LQWe1YOiibQBRLwf8Si5PktgtuPibT
kKpZjw8p4D+fM7jD1WUzUE0X7tXg2l/eUlMM4dw6XJAQ1AmEOtlwSg7rrMtTvM0A
BEtI7Km6fC6sU6RtBMdcqD1cH/6dbsfh8muznVA7UlX+PRBHVzdWzj6y8h84dBjo
gzcbYu9Hezqgj/lLzicqsSZPz9UdXiRTRAIhp8V30BD8uRaaa0KDDnD6IzJv3D9P
xQWbFM4Z12GN9LyeZqmD7bpKzZmXG/3drvfXVisXaXp3M07t3NlBa3Dt8NFIKZ0D
FRXBz5bvzxRVmdH6DtkDWXDPOt+Wdm1rZrCOrySFpBZQRpHw12eo1M1lirANIov7
Z+V1Qh/aBxj5EUu32u9ZpjAPPNtQF6F/KjaoHHHmEQAuj4DLex4LY646Hv1rcv2i
QFuCdvLKQGSiFBrfZH0j/IX3/0JXQlZzb3MuMFPxLXGAoAV9UP/Sw/WTmAuTzFVm
G13UYFeMwrToOiqcX2VcK0aC1FCcTP2z4JW3PsWvU8rUDRUYfoXovc7eg4Vn5wHt
0NBYsNhYiAAf320AUIHzQZYi38JgVwuJfFu43tJZE4Vig++RQq6tsEx9Ftz3EwRR
fJ9z9mEvEiieZm+vbOvMvIuimFVPSCmLH+bI649K8eZlVRWsx3EXCVb0nQARAQAB
tDBSZWQgSGF0LCBJbmMuIChiZXRhIGtleSAyKSA8c2VjdXJpdHlAcmVkaGF0LmNv
bT6JAjYEEwECACAFAkpSM+cCGwMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRCT
ioDK8hVB6/9tEAC0+KmzeKceXQ/GTUoU6jy9vtkFCFrmv+c7ol4XpdTt0QhqBOwy
6m2mKWwmm8KfYfy0cADQ4y/EcoXl7FtFBwYmkCuEQGXhTDn9DvVjhooIq59LEMBQ
OW879RwwzRIZ8ebbjMUjDPF5MfPQqP2LBu9N4KvXlZp4voykwuuaJ+cbsKZR6pZ6
0RQKPHKP+NgUFC0fff7XY9cuOZZWFAeKRhLN2K7bnRHKxp+kELWb6R9ZfrYwZjWc
MIPbTd1khE53L4NTfpWfAnJRtkPSDOKEGVlVLtLq4HEAxQt07kbslqISRWyXER3u
QOJj64D1ZiIMz6t6uZ424VE4ry9rBR0Jz55cMMx5O/ni9x3xzFUgH8Su2yM0r3jE
Rf24+tbOaPf7tebyx4OKe+JW95hNVstWUDyGbs6K9qGfI/pICuO1nMMFTo6GqzQ6
DwLZvJ9QdXo7ujEtySZnfu42aycaQ9ZLC2DOCQCUBY350Hx6FLW3O546TAvpTfk0
B6x+DV7mJQH7MGmRXQsE7TLBJKjq28Cn4tVp04PmybQyTxZdGA/8zY6pPl6xyVMH
V68hSBKEVT/rlouOHuxfdmZva1DhVvUC6Xj7+iTMTVJUAq/4Uyn31P1OJmA2a0PT
CAqWkbJSgKFccsjPoTbLyxhuMSNkEZFHvlZrSK9vnPzmfiRH0Orx3wYpMQ==
=21pb
-----END PGP PUBLIC KEY BLOCK-----

View File

@ -5,30 +5,33 @@ containers-auth.json - syntax for the registry authentication file
# DESCRIPTION
A credentials file in JSON format used to authenticate against container image registries.
A file in JSON format controlling authentication against container image registries.
The primary (read/write) file is stored at `${XDG_RUNTIME_DIR}/containers/auth.json` on Linux;
on Windows and macOS, at `$HOME/.config/containers/auth.json`.
When searching for the credential for a registry, the following files will be read in sequence until the valid credential is found:
first reading the primary (read/write) file, or the explicit override using an option of the calling application.
If credentials are not present, search in `${XDG_CONFIG_HOME}/containers/auth.json` (usually `~/.config/containers/auth.json`), `$HOME/.docker/config.json`, `$HOME/.dockercfg`.
If credentials are not present there,
the search continues in `${XDG_CONFIG_HOME}/containers/auth.json` (usually `~/.config/containers/auth.json`), `$HOME/.docker/config.json`, `$HOME/.dockercfg`.
Except the primary (read/write) file, other files are read-only, unless the user use an option of the calling application explicitly points at it as an override.
Except for the primary (read/write) file, other files are read-only unless the user, using an option of the calling application, explicitly points at it as an override.
## FORMAT
The auth.json file stores encrypted authentication information for the
user to container image registries. The file can have zero to many entries and
is created by a `login` command from a container tool such as `podman login`,
`buildah login` or `skopeo login`. Each entry either contains a single
hostname (e.g. `docker.io`) or a namespace (e.g. `quay.io/user/image`) as a key
and an auth token in the form of a base64 encoded string as value of `auth`. The
token is built from the concatenation of the username, a colon, and the
password. The registry name can additionally contain a repository name (an image
name without tag or digest) and namespaces. The path (or namespace) is matched
in its hierarchical order when checking for available authentications. For
example, an image pull for `my-registry.local/namespace/user/image:latest` will
The auth.json file stores, or references, credentials that allow the user to authenticate
to container image registries.
It is primarily managed by a `login` command from a container tool such as `podman login`,
`buildah login`, or `skopeo login`.
Each entry contains a single hostname (e.g., `docker.io`) or a namespace (e.g., `quay.io/user/image`) as a key,
and credentials in the form of a base64-encoded string as value of `auth`. The
base64-encoded string contains a concatenation of the username, a colon, and the
password.
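For illustration, a single-registry entry might look like this (a sketch; the value encodes the placeholder credentials `myuser:mypassword`):
```
{
  "auths": {
    "docker.io": {
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}
```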
When checking for available credentials, the relevant repository is matched
against available keys in its hierarchical order, going from most-specific to least-specific.
For example, an image pull for `my-registry.local/namespace/user/image:latest` will
result in a lookup in `auth.json` in the following order:
- `my-registry.local/namespace/user/image`
@ -77,10 +80,8 @@ preserving a fallback for `my-registry.local`:
An entry can be removed by using a `logout` command from a container
tool such as `podman logout` or `buildah logout`.
In addition, credential helpers can be configured for specific registries and the credentials-helper
software can be used to manage the credentials in a more secure way than depending on the base64 encoded authentication
provided by `login`. If the credential helpers are configured for specific registries, the base64 encoded authentication will not be used
for operations concerning credentials of the specified registries.
In addition, credential helpers can be configured for specific registries, and the credentials-helper
software can be used to manage the credentials more securely than storing only base64-encoded credentials in `auth.json`.
When the credential helper is in use on a Linux platform, the auth.json file would contain keys that specify the registry domain, and values that specify the suffix of the program to use (i.e. everything after docker-credential-). For example:
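A minimal sketch (the registry and the helper suffix are placeholders; the suffix shown would select a helper invoked as `docker-credential-secretservice`):
```
{
  "credHelpers": {
    "registry.example.com": "secretservice"
  }
}
```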

View File

@ -282,7 +282,7 @@ signed by the provided public key.
The `signedIdentity` field has the same semantics as in the `signedBy` requirement described above.
Note that `cosign`-created signatures only contain a repository, so only `matchRepository` and `exactRepository` can be used to accept them (and that does not protect against substitution of a signed image with an unexpected tag).
To use this with images hosted on image registries, the relevant registry or repository must have the `use-sigstore-attachments` option enabled in containers-registries.d(5).
To use this with images hosted on image registries, the `use-sigstore-attachments` option needs to be enabled for the relevant registry or repository in the client's containers-registries.d(5).
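For instance, a repository-scoped `sigstoreSigned` requirement might be written as follows (an illustrative sketch; the repository and key path are placeholders):
```
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.example.com/namespace/image": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/pki/containers/cosign.pub",
          "signedIdentity": {"type": "matchRepository"}
        }
      ]
    }
  }
}
```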
## Examples

View File

@ -68,7 +68,9 @@ the consumer MUST verify at least the following aspects of the signature
(like the `github.com/containers/image/signature` package does):
- The blob MUST be a “Signed Message” as defined in RFC 4880 section 11.3.
(e.g. it MUST NOT be an unsigned “Literal Message”, or any other non-signature format).
(e.g. it MUST NOT be an unsigned “Literal Message”,
a “Cleartext Signature” as defined in RFC 4880 section 7,
or any other non-signature format).
- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image).
- The signature MUST be correctly formed and pass the cryptographic validation.
- The signature MUST correctly authenticate the included JSON payload

View File

@ -55,6 +55,11 @@ $ restorecon -R -v /NEWSTORAGEPATH
A common use case for this field is to provide a local storage directory when user home directories are NFS-mounted (podman does not support container storage over NFS).
**imagestore**=""
Path of an image store that is different from `graphroot`. By default the storage library stores all images in `graphroot`; if `imagestore` is provided, newly pulled images will be stored in the provided `imagestore`, while `graphroot` is still used for everything else. If the user is using the `overlay` driver, images which were already part of `graphroot` will still be accessible (internally the storage library will mount `graphroot` as an `additionalImageStore` to allow this behaviour).
A common use case for this field is to split the filesystem into different parts, i.e. one disk which stores images and another disk used by the containers created from those images.
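For example, to keep newly pulled images on a separate disk (a sketch; the path is illustrative):
```
imagestore = "/mnt/container-images"
```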
**runroot**=""
container storage run dir (default: "/run/containers/storage")
Default directory to store all temporary writable content created by container storage programs. The rootless runroot path supports environment variable substitutions (ie. `$HOME/containers/storage`)
@ -64,6 +69,12 @@ Default directory to store all temporary writable content created by container s
By default, the storage driver is set via the `driver` option. If it is not defined, then the best driver will be picked according to the current platform. This option allows you to override this internal priority list with a custom one to prefer certain drivers.
Setting this option only has an effect if the local storage has not been initialized yet and the driver name is not set.
**transient_store** = "false" | "true"
Transient store mode causes all container metadata to be saved in temporary storage
(i.e. runroot above). This is faster, but doesn't persist across reboots.
Additional garbage collection must be performed at boot time, so this option should remain disabled in most configurations. (default: false)
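For example (a sketch):
```
transient_store = true
```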
### STORAGE OPTIONS TABLE
The `storage.options` table supports the following options:
@ -82,7 +93,7 @@ container registry. These options can deduplicate pulling of content, disk
storage of content and can allow the kernel to use less memory when running
containers.
containers/storage supports four keys
containers/storage supports three keys
* enable_partial_images="true" | "false"
Tells containers/storage to look for files previously pulled in storage
rather than always pulling them from the container registry.
@ -99,14 +110,14 @@ containers/storage supports four keys
Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of a container, to the UIDs/GIDs outside of the container, and the length of the range of UIDs/GIDs. Additional mapped sets can be listed and will be heeded by libraries, but there are limits to the number of mappings which the kernel will allow when you later attempt to run a container.
Example
remap-uids = 0:1668442479:65536
remap-gids = 0:1668442479:65536
remap-uids = "0:1668442479:65536"
remap-gids = "0:1668442479:65536"
These mappings tell the container engines to map UID 0 inside of the container to UID 1668442479 outside. UID 1 will be mapped to 1668442480. UID 2 will be mapped to 1668442481, etc, for the next 65533 UIDs in succession.
**remap-user**=""
**remap-group**=""
Remap-User/Group is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting with an in-container ID of 0 and then a host-level ID taken from the lowest range that matches the specified name, and using the length of that range. Additional ranges are then assigned, using the ranges which specify the lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, until all of the entries have been used for maps.
Remap-User/Group is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting with an in-container ID of 0 and then a host-level ID taken from the lowest range that matches the specified name, and using the length of that range. Additional ranges are then assigned, using the ranges which specify the lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, until all of the entries have been used for maps. This setting overrides the Remap-UIDs/GIDs setting.
Example
remap-user = "containers"

View File

@ -64,12 +64,17 @@ The _algo:digest_ refers to the image ID reported by docker-inspect(1).
### **oci:**_path[:reference]_
An image compliant with the "Open Container Image Layout Specification" at _path_.
Using a _reference_ is optional and allows for storing multiple images at the same _path_.
An image in a directory structure compliant with the "Open Container Image Layout Specification" at _path_.
_Path_ terminates at the first `:` character; any further `:` characters are not separators, but a part of _reference_.
Specify a _reference_ to allow storing multiple images within the same _path_.
### **oci-archive:**_path[:reference]_
An image compliant with the "Open Container Image Layout Specification" stored as a tar(1) archive at _path_.
An image in a tar(1) archive with contents compliant with the "Open Container Image Layout Specification" at _path_.
_Path_ terminates at the first `:` character; any further `:` characters are not separators, but a part of _reference_.
Specify a _reference_ to allow storing multiple images within the same _path_.
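For instance, with skopeo (an illustrative sketch; the image name and paths are placeholders):
```
skopeo copy docker://registry.access.redhat.com/ubi9/ubi:latest oci:/var/tmp/layout:ubi9
skopeo copy oci:/var/tmp/layout:ubi9 oci-archive:/var/tmp/ubi9.tar:ubi9
```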
### **ostree:**_docker-reference[@/absolute/repo/path]_

View File

@ -33,6 +33,11 @@
#
#base_hosts_file = ""
# List of cgroup_conf entries specifying a list of cgroup files to write to and
# their values. For example `memory.high=1073741824` sets the
# memory.high limit to 1GB.
# cgroup_conf = []
# Default way to create a cgroup namespace for the container
# Options are:
# `private` Create private Cgroup Namespace for the container.
@ -63,6 +68,7 @@
# "SETGID",
# "SETPCAP",
# "SETUID",
# "SYS_CHROOT",
#]
# A list of sysctls to be set in containers by default,
@ -167,6 +173,12 @@ default_sysctls = [
#
#label = true
# label_users indicates whether to enforce confined users in containers on
# SELinux systems. This option causes containers to maintain the current user
# and role field of the calling process. By default SELinux containers run with
# the user system_u, and the role system_r.
#label_users = false
# Logging driver for the container. Available options: k8s-file and journald.
#
#log_driver = "k8s-file"
@ -197,6 +209,10 @@ default_sysctls = [
#
#no_hosts = false
# Tune the host's OOM preferences for containers
# (accepts values from -1000 to 1000).
#oom_score_adj = 0
# Default way to create a PID namespace for the container
# Options are:
# `private` Create private PID Namespace for the container.
@ -294,6 +310,15 @@ default_sysctls = [
# "/opt/cni/bin",
#]
# List of directories that will be searched for netavark plugins.
#
#netavark_plugin_dirs = [
# "/usr/local/libexec/netavark",
# "/usr/libexec/netavark",
# "/usr/local/lib/netavark",
# "/usr/lib/netavark",
#]
# The network name of the default network to attach pods to.
#
#default_network = "podman"
@ -319,6 +344,13 @@ default_sysctls = [
# {"base" = "10.128.0.0/9", "size" = 24},
#]
# Configure which rootless network program to use by default. Valid options are
# `slirp4netns` (default) and `pasta`.
#
#default_rootless_network_cmd = "slirp4netns"
# Path to the directory where network configuration files are located.
# For the CNI backend the default is "/etc/cni/net.d" as root
# and "$HOME/.config/cni/net.d" as rootless.
@ -334,16 +366,27 @@ default_sysctls = [
#
#dns_bind_port = 53
# A list of default pasta options that should be used when running pasta.
# It accepts the pasta CLI options; see pasta(1) for the full list of options.
#
#pasta_options = []
[engine]
# Index to the active service
#
#active_service = production
#active_service = "production"
# The compression format to use when pushing an image.
# Valid options are: `gzip`, `zstd` and `zstd:chunked`.
#
#compression_format = "gzip"
# The compression level to use when pushing an image.
# Valid options depend on the compression format used.
# For gzip, valid options are 1-9, with a default of 5.
# For zstd, valid options are 1-20, with a default of 3.
#
#compression_level = 5
# Cgroup management implementation used for the runtime.
# Valid options "systemd" or "cgroupfs"
@ -373,11 +416,16 @@ default_sysctls = [
# short-name aliases defined in containers-registries.conf(5).
#compat_api_enforce_docker_hub = true
# The database backend of Podman. Supported values are "boltdb" (default) and
# "sqlite". Please run `podman-system-reset` prior to changing the database
# backend of an existing deployment, to make sure Podman can operate correctly.
#database_backend="boltdb"
# Specify the keys sequence used to detach a container.
# Format is a single character [a-Z] or a comma separated sequence of
# `ctrl-<value>`, where `<value>` is one of:
# `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
#
# Specifying "" disables this feature.
#detach_keys = "ctrl-p,ctrl-q"
# Determines whether engine will reserve ports on the host when they are
@ -447,7 +495,7 @@ default_sysctls = [
#
#image_parallel_copies = 0
# Tells container engines how to handle the builtin image volumes.
# Tells container engines how to handle the built-in image volumes.
# * bind: An anonymous named volume will be created and mounted
# into the container.
# * tmpfs: The volume is mounted onto the container as a tmpfs,
@ -463,26 +511,30 @@ default_sysctls = [
# Infra (pause) container image name for pod infra containers. When running a
# pod, we start a `pause` process in a container to hold open the namespaces
# associated with the pod. This container does nothing other then sleep,
# reserving the pods resources for the lifetime of the pod. By default container
# engines run a builtin container using the pause executable. If you want override
# associated with the pod. This container does nothing other than sleep,
# reserving the pod's resources for the lifetime of the pod. By default container
# engines run a built-in container using the pause executable. If you want to override
# this, specify an image to pull.
#
#infra_image = ""
# Default Kubernetes kind/specification of the kubernetes yaml generated with the `podman kube generate` command.
# The possible options are `pod` and `deployment`.
#kube_generate_type = "pod"
# Specify the locking mechanism to use; valid values are "shm" and "file".
# Change the default only if you are sure of what you are doing, in general
# "file" is useful only on platforms where cgo is not available for using the
# faster "shm" lock type. You may need to run "podman system renumber" after
# you change the lock type.
#
#lock_type** = "shm"
#lock_type = "shm"
# MultiImageArchive - if true, the container engine allows for storing archives
# (e.g., of the docker-archive transport) with multiple images. By default,
# Podman creates single-image archives.
#
#multi_image_archive = "false"
#multi_image_archive = false
# Default engine namespace
# If engine is joined to a namespace, it will see only containers and pods
@ -588,8 +640,8 @@ runtime = "crun"
# map of service destinations
#
# [service_destinations]
# [service_destinations.production]
# [engine.service_destinations]
# [engine.service_destinations.production]
# URI to access the Podman service
# Examples:
# rootless "unix://run/user/$UID/podman/podman.sock" (Default)
@ -694,7 +746,7 @@ runtime = "crun"
# "https://example.com/linux/amd64/foobar.ami" on a Linux AMD machine.
# The default value is `testing`.
#
# image = "testing"
#image = "testing"
# Memory in MB a machine is created with.
#
@ -710,10 +762,15 @@ runtime = "crun"
# the source and destination. An optional third field `:ro` can be used to
# tell the container engines to mount the volume readonly.
#
# volumes = [
#volumes = [
# "$HOME:$HOME",
#]
# Virtualization provider used to run Podman machine.
# If it is empty or commented out, the default provider will be used.
#
#provider = ""
# The [machine] table MUST be the last entry in this file.
# (Unless another table is added)
# TOML does not provide a way to end a table other than a further table being

View File

@ -9,11 +9,12 @@ Container engines like Podman & Buildah read containers.conf file, if it exists
and modify the defaults for running containers on the host. containers.conf uses
a TOML format that can be easily modified and versioned.
Container engines read the /usr/share/containers/containers.conf and
/etc/containers/containers.conf, and /etc/containers/containers.conf.d/*.conf files
if they exist. When running in rootless mode, they also read
$HOME/.config/containers/containers.conf and
$HOME/.config/containers/containers.conf.d/*.conf files.
Container engines read the __/usr/share/containers/containers.conf__,
__/etc/containers/containers.conf__, and __/etc/containers/containers.conf.d/\*.conf__
files if they exist.
When running in rootless mode, they also read
__$HOME/.config/containers/containers.conf__ and
__$HOME/.config/containers/containers.conf.d/\*.conf__ files.
Fields specified in containers.conf override the default options, as well as
options in previously read containers.conf files.
@ -22,10 +23,10 @@ Config files in the `.d` directories, are added in alpha numeric sorted order an
Not all options are supported in all container engines.
Note container engines also use other configuration files for configuring the environment.
Note, container engines also use other configuration files for configuring the environment.
* `storage.conf` for configuration of container and image storage.
* `registries.conf` for definition of container registires to search while pulling.
* `registries.conf` for definition of container registries to search while pulling
container images.
* `policy.json` for controlling which images can be pulled to the system.
@ -50,6 +51,7 @@ TOML can be simplified to:
The containers table contains settings to configure and manage the OCI runtime.
**annotations** = []
List of annotations. Specified as "key=value" pairs to be added to all containers.
Example: "run.oci.keep_original_groups=1"
@ -66,6 +68,12 @@ file. This must be either an absolute path or as special values "image" which
uses the hosts file from the container image or "none" which means
no base hosts file is used. The default is "" which will use /etc/hosts.
**cgroup_conf**=[]
List of cgroup_conf entries specifying a list of cgroup files to write to and
their values. For example `memory.high=1073741824` sets the
memory.high limit to 1GB.
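For example (a sketch, using the value from the description above):
```
cgroup_conf = [
  "memory.high=1073741824",
]
```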
**cgroups**="enabled"
Determines whether the container will create CGroups.
@ -98,12 +106,12 @@ default_capabilities = [
"SETGID",
"SETPCAP",
"SETUID",
"SYS_CHROOT",
]
```
Note, by default container engines using containers.conf run with fewer
capabilities than Docker. Docker runs additionally with "AUDIT_WRITE", "MKNOD",
"NET_RAW", "CHROOT". If you need to add one of these capabilities for a
capabilities than Docker. Docker runs additionally with "AUDIT_WRITE", "MKNOD" and "NET_RAW". If you need to add one of these capabilities for a
particular container, you can use the --cap-add option or edit your system's containers.conf.
**default_sysctls**=[]
@ -199,6 +207,13 @@ the container.
Indicates whether the container engine uses MAC(SELinux) container separation via labeling. This option is ignored on disabled systems.
**label_users**=false
label_users indicates whether to enforce confined users in containers on
SELinux systems. This option causes containers to maintain the current user
and role field of the calling process. By default SELinux containers run with
the user system_u, and the role system_r.
**log_driver**=""
Logging driver for the container. Currently available options are k8s-file, journald, none and passthrough, with json-file aliased to k8s-file for scripting compatibility. The journald driver is used by default if the systemd journal is readable and writable. Otherwise, the k8s-file driver is used.
@ -227,6 +242,10 @@ Options are:
Create /etc/hosts for the container. By default, container engines manage
/etc/hosts, automatically adding the container's own IP address.
**oom_score_adj**=0
Tune the host's OOM preferences for containers (accepts values from -1000 to 1000).
**pidns**="private"
Default way to create a PID namespace for the container.
@ -324,6 +343,20 @@ cni_plugin_dirs = [
]
```
**netavark_plugin_dirs**=[]
List of directories that will be searched for netavark plugins.
The default list is:
```
netavark_plugin_dirs = [
"/usr/local/libexec/netavark",
"/usr/libexec/netavark",
"/usr/local/lib/netavark",
"/usr/lib/netavark",
]
```
**default_network**="podman"
The network name of the default network to attach pods to.
@ -352,11 +385,16 @@ default_subnet_pools = [
]
```
**default_rootless_network_cmd**="slirp4netns"
Configure which rootless network program to use by default. Valid options are
`slirp4netns` (default) and `pasta`.
**network_config_dir**="/etc/cni/net.d/"
Path to the directory where network configuration files are located.
For the CNI backend the default is "/etc/cni/net.d" as root
and "$HOME/.config/cni/net.d" as rootless.
For the CNI backend the default is __/etc/cni/net.d__ as root
and __$HOME/.config/cni/net.d__ as rootless.
For the netavark backend "/etc/containers/networks" is used as root
and "$graphroot/networks" as rootless.
@ -367,6 +405,11 @@ mode and dns enabled.
Using an alternate port might be useful if other dns services should
run on the machine.
**pasta_options** = []
A list of default pasta options that should be used when running pasta.
It accepts the pasta CLI options; see pasta(1) for the full list of options.
## ENGINE TABLE
The `engine` table contains configuration options used to set up container engines such as Podman and Buildah.
@ -403,6 +446,12 @@ conmon_path=[
]
```
**database_backend**="boltdb"
The database backend of Podman. Supported values are "boltdb" (default) and
"sqlite". Please run `podman-system-reset` prior to changing the database
backend of an existing deployment, to make sure Podman can operate correctly.
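For example, to switch an existing installation to the sqlite backend (a sketch; remember to run `podman-system-reset` first, as noted above):
```
database_backend = "sqlite"
```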
**detach_keys**="ctrl-p,ctrl-q"
Keys sequence used for detaching a container.
@ -410,6 +459,7 @@ Specify the keys sequence used to detach a container.
Format is a single character `[a-Z]` or a comma separated sequence of
`ctrl-<value>`, where `<value>` is one of:
`a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
Specifying "" disables this feature.
**enable_port_reservation**=true
@ -462,12 +512,14 @@ with detailed information about the container. Set to false by default.
A list of directories which are used to search for helper binaries.
The default paths on Linux are:
- `/usr/local/libexec/podman`
- `/usr/local/lib/podman`
- `/usr/libexec/podman`
- `/usr/lib/podman`
The default paths on macOS are:
- `/usr/local/opt/podman/libexec`
- `/opt/homebrew/bin`
- `/opt/homebrew/opt/podman/libexec`
@ -478,6 +530,7 @@ The default paths on macOS are:
- `/usr/lib/podman`
The default path on Windows is:
- `C:\Program Files\RedHat\Podman`
**hooks_dir**=["/etc/containers/oci/hooks.d", ...]
@ -502,7 +555,7 @@ Not setting this field will fall back to containers/image defaults. (6)
**image_volume_mode**="bind"
Tells container engines how to handle the builtin image volumes.
Tells container engines how to handle the built-in image volumes.
* bind: An anonymous named volume will be created and mounted into the container.
* tmpfs: The volume is mounted onto the container as a tmpfs, which allows the users to create content that disappears when the container is stopped.
@ -512,18 +565,22 @@ Tells container engines how to handle the builtin image volumes.
Infra (pause) container image command for pod infra containers. When running a
pod, we start a `/pause` process in a container to hold open the namespaces
associated with the pod. This container does nothing other then sleep,
reserving the pods resources for the lifetime of the pod.
associated with the pod. This container does nothing other than sleep,
reserving the pod's resources for the lifetime of the pod.
**infra_image**=""
Infra (pause) container image for pod infra containers. When running a
pod, we start a `pause` process in a container to hold open the namespaces
associated with the pod. This container does nothing other then sleep,
reserving the pods resources for the lifetime of the pod. By default container
engines run a builtin container using the pause executable. If you want override
associated with the pod. This container does nothing other than sleep,
reserving the pod's resources for the lifetime of the pod. By default container
engines run a built-in container using the pause executable. If you want to override
this, specify an image to pull.
**kube_generate_type**="pod"
Default Kubernetes kind/specification of the kubernetes yaml generated with the `podman kube generate` command. The possible options are `pod` and `deployment`.
**lock_type**="shm"
Specify the locking mechanism to use; valid values are "shm" and "file".
@ -595,6 +652,7 @@ Pull image before running or creating a container. The default is **missing**.
- **never**: do not pull the image from the registry, use only the local version. Raise an error if the image is not present locally.
**remote** = false
Indicates whether the application should be running in remote mode. This flag modifies the
--remote option on container engines. Setting the flag to true will default to `podman --remote=true` for access to the remote Podman service.
@ -661,14 +719,21 @@ not be by other drivers.
Determines whether files copied into a container will have their ownership changed to
the primary uid/gid of the container.
**compression_format**=""
**compression_format**="gzip"
Specifies the compression format to use when pushing an image. Supported values are: `gzip`, `zstd` and `zstd:chunked`.
## SERVICE DESTINATION TABLE
The `service_destinations` table contains configuration options used to set up remote connections to the podman service for the podman API.
**compression_level**="5"
**[service_destinations.{name}]**
The compression level to use when pushing an image. Valid options
depend on the compression format used. For gzip, valid options are
1-9, with a default of 5. For zstd, valid options are 1-20, with a
default of 3.
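For example, to push with zstd at a higher compression level (a sketch; the values are illustrative):
```
compression_format = "zstd"
compression_level = 10
```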
## SERVICE DESTINATION TABLE
The `engine.service_destinations` table contains configuration options used to set up remote connections to the podman service for the podman API.
**[engine.service_destinations.{name}]**
URI to access the Podman service
**uri="ssh://user@production.example.com/run/user/1001/podman/podman.sock"**
@ -745,29 +810,45 @@ Environment variables like $HOME as well as complete paths are supported for
the source and destination. An optional third field `:ro` can be used to
tell the container engines to mount the volume readonly.
On Mac, the default volumes are: `"/Users:/Users", "/private:/private", "/var/folders:/var/folders"`
On Mac, the default volumes are:
[ "/Users:/Users", "/private:/private", "/var/folders:/var/folders" ]
**provider**=""
Virtualization provider to be used for running a podman-machine VM. Empty value
is interpreted as the default provider for the current host OS. On Linux/Mac
default is `QEMU` and on Windows it is `WSL`.
# FILES
**containers.conf**
Distributions often provide a `/usr/share/containers/containers.conf` file to
define default container configuration. Administrators can override fields in
this file by creating `/etc/containers/containers.conf` to specify their own
configuration. Rootless users can further override fields in the config by
creating a config file stored in the `$HOME/.config/containers/containers.conf` file.
Distributions often provide a __/usr/share/containers/containers.conf__ file to
provide a default configuration. Administrators can override fields in this
file by creating __/etc/containers/containers.conf__ to specify their own
configuration. They may also drop `.conf` files in
__/etc/containers/containers.conf.d__ which will be loaded in alphanumeric order.
Rootless users can further override fields in the config by creating a config
file stored in the __$HOME/.config/containers/containers.conf__ file or __.conf__ files in __$HOME/.config/containers/containers.conf.d__.
If the `CONTAINERS_CONF` path environment variable is set, just
this path will be used. This is primarily used for testing.
If the `CONTAINERS_CONF` environment variable is set, all system and user
config files are ignored and only the specified config file will be loaded.
Fields specified in the containers.conf file override the default options, as
well as options in previously read containers.conf files.
If the `CONTAINERS_CONF_OVERRIDE` path environment variable is set, the config
file will be loaded last even when `CONTAINERS_CONF` is set.
The values of both environment variables may be absolute or relative paths, for
instance, `CONTAINERS_CONF=/tmp/my_containers.conf`.
Fields specified in a containers.conf file override the default options, as
well as options in previously loaded containers.conf files.
**storage.conf**
The `/etc/containers/storage.conf` file is the default storage configuration file.
Rootless users can override fields in the storage config by creating
`$HOME/.config/containers/storage.conf`.
__$HOME/.config/containers/storage.conf__.
If the `CONTAINERS_STORAGE_CONF` path environment variable is set, this path
is used for the storage.conf file rather than the default.

View File

@ -1,5 +1,5 @@
#!/bin/bash
#set -e
set -e
rm -f /tmp/pyxis*.json
TOTAL=`curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=1" | jq .total`
if [ "$TOTAL" == "null" ]; then
@ -11,53 +11,37 @@ for P in `seq 0 $PAGES`; do
curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=500&page=$P" > /tmp/pyxis$P.json
done
cat /tmp/pyxis*.json > /tmp/pyx.json
rm -f /tmp/pyx_debug
rm -f /tmp/rhel-shortnames.conf
while read -r LINE; do
if [[ "$LINE" == *\"_id\":* ]] || [[ "$LINE" == *\"total\":* ]]; then
if [ -z $REGISTRY ] ||
[ -z $PUBLISHED ] ||
[ -z $REPOSITORY ] ||
[ $REPOSITORY == \"\" ] ||
[ "$AVAILABLE" != "Generally Available" ] ||
[[ $REPOSITORY == *[@:]* ]] ||
[[ $REPOSITORY == *[* ]] ||
[[ "$REGISTRY" == *non_registry* ]] ||
[[ $REGISTRY != *.* ]]
then
continue
fi
jq '.data[]|.published,.requires_terms,.repository,.registry,.release_categories[0]' < /tmp/pyx.json >/tmp/pyx
readarray -t lines < /tmp/pyx
IDX=0
while [ $IDX -lt ${#lines[@]} ]; do
PUBLISHED=${lines[$IDX]}
REQ_TERMS=${lines[$IDX+1]}
REPOSITORY=`echo ${lines[$IDX+2]} | tr -d '"'`
REGISTRY=`echo ${lines[$IDX+3]} | tr -d '"'`
RELEASE=`echo ${lines[$IDX+4]} | tr -d '"'`
if [ "$PUBLISHED" == "true" ] &&
[ "$RELEASE" == "Generally Available" ] &&
[ ! -z "$REPOSITORY" ] &&
[ "$REPOSITORY" != \"\" ] &&
[[ $REPOSITORY != *[@:]* ]] &&
[[ $REPOSITORY != *[* ]] &&
[[ $REGISTRY == *.* ]] &&
[ "$REGISTRY" != "non_registry" ]; then
if [[ $REGISTRY == *quay.io* ]] ||
[[ $REGISTRY == *redhat.com* ]]; then
if [ "$REQUIRES_TERMS" == "1" ]; then
if [ "$REQ_TERMS" == "true" ]; then
REGISTRY=registry.redhat.io
fi
fi
echo "\"$REPOSITORY\" = \"$REGISTRY/$REPOSITORY\""
echo $PUBLISHED,$REQ_TERMS,$REPOSITORY,$REGISTRY,$RELEASE >> /tmp/pyx_debug
echo "\"$REPOSITORY\" = \"$REGISTRY/$REPOSITORY\"" >> /tmp/rhel-shortnames.conf
fi
REGISTRY=""
PUBLISHED=""
AVAILABLE=""
REPOSITORY=""
REQUIRES_TERMS=""
continue
fi
if [[ "$LINE" == *\"published\":\ true,* ]]; then
PUBLISHED=1
fi
if [[ "$LINE" == *\"requires_terms\":\ true,* ]]; then
REQUIRES_TERMS=1
fi
if [[ "$LINE" == *\"repository\":\ * ]]; then
REPOSITORY=`echo $LINE | sed 's,^.* ",,' | sed 's;",$;;'`
fi
if [[ "$LINE" == *\"registry\":\ * ]]; then
REGISTRY=`echo $LINE | sed -e 's,^.*:\ ",,' -e 's,".*,,'`
fi
if [[ "$LINE" == *\"release_categories\":\ * ]]; then
read -r LINE
AVAILABLE=`echo $LINE | sed 's,",,g'`
fi
done < /tmp/pyx.json
IDX=$(($IDX+5))
done
cp /tmp/rhel-shortnames.conf /tmp/r.conf
for D in `cut -d\ -f1 /tmp/r.conf | sort | uniq -d`; do

View File

@ -2,6 +2,8 @@
# almalinux
"almalinux" = "docker.io/library/almalinux"
"almalinux-minimal" = "docker.io/library/almalinux-minimal"
# Amazon Linux
"amazonlinux" = "public.ecr.aws/amazonlinux/amazonlinux"
# Arch Linux
"archlinux" = "docker.io/library/archlinux"
# centos

View File

@ -34,6 +34,8 @@ graphroot = "/var/lib/containers/storage"
# Transient store mode makes all container metadata be saved in temporary storage
# (i.e. runroot above). This is faster, but doesn't persist across reboots.
# Additional garbage collection must also be performed at boot-time, so this
# option should remain disabled in most configurations.
# transient_store = true
[storage.options]
@ -53,7 +55,7 @@ additionalimagestores = [
# can deduplicate pulling of content, disk storage of content and can allow the
# kernel to use less memory when running containers.
# containers/storage supports four keys
# containers/storage supports three keys
# * enable_partial_images="true" | "false"
# Tells containers/storage to look for files previously pulled in storage
# rather than always pulling them from the container registry.
@ -73,8 +75,8 @@ pull_options = {enable_partial_images = "false", use_hard_links = "false", ostre
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536
# remap-uids = "0:1668442479:65536"
# remap-gids = "0:1668442479:65536"
# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting
@ -82,7 +84,8 @@ pull_options = {enable_partial_images = "false", use_hard_links = "false", ostre
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
# until all of the entries have been used for maps. This setting overrides the
# Remap-UIDs/GIDs setting.
#
# remap-user = "containers"
# remap-group = "containers"
@ -98,7 +101,7 @@ pull_options = {enable_partial_images = "false", use_hard_links = "false", ostre
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the minimum size for a user namespace created automatically.
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536
[storage.options.overlay]

View File

@ -7,20 +7,23 @@ CENTOS=""
pwd | grep /tmp/centos > /dev/null
if [ $? == 0 ]; then
CENTOS=1
PKG=centpkg
else
PKG=rhpkg
fi
set -e
for P in podman skopeo buildah; do
BRN=`pwd | sed 's,^.*/,,'`
rm -rf $P
pkg clone $P
$PKG clone $P
cd $P
[ -z "$CENTOS" ] && pkg switch-branch $BRN
$PKG switch-branch $BRN
if [ $BRN != stream-container-tools-rhel8 ]; then
pkg prep
$PKG prep
else
pkg --release rhel-8 prep
$PKG --release rhel-8 prep
fi
DIR=`ls -d -- */ | grep -v ^tests | head -n1`
DIR=`ls -d -- */ | grep "^$P"`
grep github.com/containers/image $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,'>> /tmp/ver_image
grep github.com/containers/common $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,' >> /tmp/ver_common
grep github.com/containers/storage $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,' >> /tmp/ver_storage

View File

@ -44,6 +44,11 @@ then
sed -i '/^default_capabilities/a \
"NET_RAW",' containers.conf
fi
if ! grep \"SYS_CHROOT\" containers.conf > /dev/null
then
sed -i '/^default_capabilities/a \
"SYS_CHROOT",' containers.conf
fi
else
ensure registries.conf unqualified-search-registries [\"registry.access.redhat.com\",\ \"registry.redhat.io\",\ \"docker.io\"]
ensure registries.conf short-name-mode \"enforcing\"

View File

@ -4,15 +4,15 @@
# pick the oldest version on c/image, c/common, c/storage vendored in
# podman/skopeo/podman.
%global skopeo_branch main
%global image_branch v5.24.0
%global common_branch v0.51.0
%global storage_branch v1.45.3
%global image_branch v5.26.1
%global common_branch v0.55.1
%global storage_branch v1.48.0
%global shortnames_branch main
Epoch: 2
Name: containers-common
Version: 1
Release: 49%{?dist}
Release: 55%{?dist}
Summary: Common configuration and documentation for containers
License: ASL 2.0
ExclusiveArch: %{go_arches}
@ -173,6 +173,30 @@ EOF
%{_datadir}/rhel/secrets/*
%changelog
* Wed Jul 19 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-55
- fix vendoring script
- Related: #2176063
* Mon Jul 10 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-54
- update vendored components
- Related: #2176063
* Tue Jun 20 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-53
- rebuild
- Resolves: #2178263
* Fri Apr 21 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-52
- update vendored components
- Related: #2176063
* Fri Mar 24 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-51
- regenerate shortnames, vendored components + fix pyxis script
- Related: #2176063
* Wed Feb 22 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-50
- improve shortnames generation
- Related: #2124478
* Tue Jan 31 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-49
- add missing systemd directories
- Related: #2124478