mirror of https://github.com/containers/podman.git
Spelling
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
This commit is contained in:
parent 07663f74c4
commit 4fa1fce930
Makefile (2 lines changed)
@@ -176,7 +176,7 @@ gofmt: ## Verify the source code gofmt
 test/checkseccomp/checkseccomp: .gopathok $(wildcard test/checkseccomp/*.go)
 	$(GO) build $(BUILDFLAGS) -ldflags '$(LDFLAGS_PODMAN)' -tags "$(BUILDTAGS)" -o $@ ./test/checkseccomp
 
-.PHONY: test/goecho/goechoe
+.PHONY: test/goecho/goecho
 test/goecho/goecho: .gopathok $(wildcard test/goecho/*.go)
 	$(GO) build $(BUILDFLAGS) -ldflags '$(LDFLAGS_PODMAN)' -o $@ ./test/goecho
@@ -38,7 +38,7 @@ Podman presently only supports running containers on Linux. However, we are buil
 
 ## Communications
 
-If you think you've identified a security issue in the project, please *DO NOT* report the issue publicly via the Github issue tracker, mailing list, or IRC.
+If you think you've identified a security issue in the project, please *DO NOT* report the issue publicly via the GitHub issue tracker, mailing list, or IRC.
 Instead, send an email with as many details as possible to `security@lists.podman.io`. This is a private mailing list for the core maintainers.
 
 For general questions and discussion, please use the
@@ -51,7 +51,7 @@
 - Fixed a bug where rootless Podman could hang when the `newuidmap` binary was not installed ([#7776](https://github.com/containers/podman/issues/7776)).
 - Fixed a bug where the `--pull` option to `podman run`, `podman create`, and `podman build` did not match Docker's behavior.
 - Fixed a bug where sysctl settings from the `containers.conf` configuration file were applied, even if the container did not join the namespace associated with a sysctl.
-- Fixed a bug where Podman would not return the text of errors encounted when trying to run a healthcheck for a container.
+- Fixed a bug where Podman would not return the text of errors encountered when trying to run a healthcheck for a container.
 - Fixed a bug where Podman was accidentally setting the `containers` environment variable in addition to the expected `container` environment variable.
 - Fixed a bug where rootless Podman using CNI networking did not properly clean up DNS entries for removed containers ([#7789](https://github.com/containers/podman/issues/7789)).
 - Fixed a bug where the `podman untag --all` command was not supported with remote Podman.
@@ -181,7 +181,7 @@
 - The `podman run` and `podman create` commands can now specify options to slirp4netns by using the `--network` option as follows: `--net slirp4netns:opt1,opt2`. This allows for, among other things, switching the port forwarder used by slirp4netns away from rootlessport.
 - The `podman ps` command now features a new option, `--storage`, to show containers from Buildah, CRI-O and other applications.
 - The `podman run` and `podman create` commands now feature a `--sdnotify` option to control the behavior of systemd's sdnotify with containers, enabling improved support for Podman in `Type=notify` units.
-- The `podman run` command now features a `--preserve-fds` opton to pass file descriptors from the host into the container ([#6458](https://github.com/containers/podman/issues/6458)).
+- The `podman run` command now features a `--preserve-fds` option to pass file descriptors from the host into the container ([#6458](https://github.com/containers/podman/issues/6458)).
 - The `podman run` and `podman create` commands can now create overlay volume mounts, by adding the `:O` option to a bind mount (e.g. `-v /test:/test:O`). Overlay volume mounts will mount a directory into a container from the host and allow changes to it, but not write those changes back to the directory on the host.
 - The `podman play kube` command now supports the Socket HostPath type ([#7112](https://github.com/containers/podman/issues/7112)).
 - The `podman play kube` command now supports read-only mounts.
@@ -269,7 +269,7 @@
 - Fixed a bug where endpoints that hijacked would do perform the hijack too early, before being ready to send and receive data ([#7195](https://github.com/containers/podman/issues/7195)).
 - Fixed a bug where Pod endpoints that can operate on multiple containers at once (e.g. Kill, Pause, Unpause, Stop) would not forward errors from individual containers that failed.
 - The Compat List endpoint for networks now supports filtering results ([#7462](https://github.com/containers/podman/issues/7462)).
-- Fixed a bug where the Top endpoint for pods would return both a 500 and 404 when run on a non-existant pod.
+- Fixed a bug where the Top endpoint for pods would return both a 500 and 404 when run on a nonexistent pod.
 - Fixed a bug where Pull endpoints did not stream progress back to the client.
 - The Version endpoints (Libpod and Compat) now provide version in a format compatible with Docker.
 - All non-hijacking responses to API requests should not include headers with the version of the server.
@@ -310,7 +310,7 @@
 - Fixed a bug where the `podman generate systemd` command would panic on an invalid restart policy being specified ([#7271](https://github.com/containers/podman/issues/7271)).
 - Fixed a bug where the `podman images` command could take a very long time (several minutes) to complete when a large number of images were present.
 - Fixed a bug where the `podman logs` command with the `--tail` flag would not work properly when a large amount of output would be printed ([#7230](https://github.com/containers/podman/issues/7230)).
-- Fixed a bug where the `podman exec` command with remote Podman would not return a non-zero exit code when the exec session failed to start (e.g. invoking a non-existent command) ([#6893](https://github.com/containers/podman/issues/6893)).
+- Fixed a bug where the `podman exec` command with remote Podman would not return a non-zero exit code when the exec session failed to start (e.g. invoking a nonexistent command) ([#6893](https://github.com/containers/podman/issues/6893)).
 - Fixed a bug where the `podman load` command with remote Podman would did not honor user-specified tags ([#7124](https://github.com/containers/podman/issues/7124)).
 - Fixed a bug where the `podman system service` command, when run as a non-root user by Systemd, did not properly handle the Podman pause process and would not restart properly as a result ([#7180](https://github.com/containers/podman/issues/7180)).
 - Fixed a bug where the `--publish` flag to `podman create`, `podman run`, and `podman pod create` did not properly handle a host IP of 0.0.0.0 (attempting to bind to literal 0.0.0.0, instead of all IPs on the system) ([#7104](https://github.com/containers/podman/issues/7014)).
@@ -411,7 +411,7 @@
 
 ### Bugfixes
 - Fixed a bug where the `podman ps` command would not truncate long container commands, resulting in display issues as the column could become extremely wide (the `--no-trunc` flag can be used to print the full command).
-- Fixed a bug where `podman pod` commands operationg on multiple containers (e.g. `podman pod stop` and `podman pod kill`) would not print errors from individual containers, but only a warning that some containers had failed.
+- Fixed a bug where `podman pod` commands operating on multiple containers (e.g. `podman pod stop` and `podman pod kill`) would not print errors from individual containers, but only a warning that some containers had failed.
 - Fixed a bug where the `podman system service` command would panic if a connection to the Events endpoint hung up early ([#6805](https://github.com/containers/libpod/issues/6805)).
 - Fixed a bug where rootless Podman would create anonymous and named volumes with the wrong owner for containers run with the `--user` directive.
 - Fixed a bug where the `TMPDIR` environment variable (used for storing temporary files while pulling images) was not being defaulted (if unset) to `/var/tmp`.
@@ -425,7 +425,7 @@
 
 ### API
 - Fixed a bug where the timestamp format for Libpod image list endpoint was incorrect - the format has been switched to Unix time.
-- Fixed a bug where the compatability Create endpoint did not handle empty entrypoints properly.
+- Fixed a bug where the compatibility Create endpoint did not handle empty entrypoints properly.
 - Fixed a bug where the compatibility network remove endpoint would improperly handle errors where the network was not found.
 - Fixed a bug where containers would be created with improper permissions because of a umask issue ([#6787](https://github.com/containers/libpod/issues/6787)).
 
@@ -455,7 +455,7 @@
 - Fixed a bug where the `label` option to `--security-opt` would only be shown once in `podman inspect`, even if provided multiple times.
 
 ### API
-- Fixed a bug where network endpoint URLs in the compatability API were mistakenly suffixed with `/json`.
+- Fixed a bug where network endpoint URLs in the compatibility API were mistakenly suffixed with `/json`.
 - Fixed a bug where the Libpod volume creation endpoint returned 200 instead of 201 on success.
 
 ### Misc
@@ -485,7 +485,7 @@
 - Named and anonymous volumes and `tmpfs` filesystems added to containers are no longer mounted `noexec` by default.
 
 ### Bugfixes
-- Fixed a bug where the `podman exec` command would log to journald when run in containers loggined to journald ([#6555](https://github.com/containers/podman/issues/6555)).
+- Fixed a bug where the `podman exec` command would log to journald when run in containers logged to journald ([#6555](https://github.com/containers/podman/issues/6555)).
 - Fixed a bug where the `podman auto-update` command would not preserve the OS and architecture of the original image when pulling a replacement ([#6613](https://github.com/containers/podman/issues/6613)).
 - Fixed a bug where the `podman cp` command could create an extra `merged` directory when copying into an existing directory ([#6596](https://github.com/containers/podman/issues/6596)).
 - Fixed a bug where the `podman pod stats` command would crash on pods run with `--network=host` ([#5652](https://github.com/containers/podman/issues/5652)).
@@ -521,7 +521,7 @@
 
 ### Misc
 - Rootless containers will now automatically set their ulimits to the maximum allowed for the user running the container, to match the behavior of containers run as root
-- Packages managed by the core Podman team will no longer include a default `libpod.conf`, instead defaulting to `containers.conf`. The default libpod.conf will remain available in the Github repository until the release of Podman 2.0
+- Packages managed by the core Podman team will no longer include a default `libpod.conf`, instead defaulting to `containers.conf`. The default libpod.conf will remain available in the GitHub repository until the release of Podman 2.0
 - The default Podman CNI network configuration now sets HairpinMode to allow containers to access other containers via ports published on the host
 - Updated containers/common to v0.8.4
 
@@ -1105,7 +1105,7 @@
 
 ### Bugfixes
 - Fixed a bug where `podman cp` would not copy folders ([#2836](https://github.com/containers/podman/issues/2836))
-- Fixed a bug where Podman would panic when the Varlink API attempted too pull a non-existent image ([#2860](https://github.com/containers/podman/issues/2860))
+- Fixed a bug where Podman would panic when the Varlink API attempted too pull a nonexistent image ([#2860](https://github.com/containers/podman/issues/2860))
 - Fixed a bug where `podman rmi` sometimes did not produce an event when images were deleted
 - Fixed a bug where Podman would panic when the Varlink API passed improperly-formatted options when attempting to build ([#2869](https://github.com/containers/podman/issues/2869))
 - Fixed a bug where `podman images` would not print a header if no images were present ([#2877](https://github.com/containers/podman/pull/2877))
@@ -642,7 +642,7 @@ func DefineCreateFlags(cmd *cobra.Command, cf *ContainerCLIOpts) {
 
 	storageOptFlagName := "storage-opt"
 	createFlags.StringSliceVar(
-		&cf.StoreageOpt,
+		&cf.StorageOpt,
 		storageOptFlagName, []string{},
 		"Storage driver options per container",
 	)
@@ -671,7 +671,7 @@ func DefineCreateFlags(cmd *cobra.Command, cf *ContainerCLIOpts) {
 		sysctlFlagName, []string{},
 		"Sysctl options",
 	)
-	//TODO: Add function for systctl completion.
+	//TODO: Add function for sysctl completion.
 	_ = cmd.RegisterFlagCompletionFunc(sysctlFlagName, completion.AutocompleteNone)
 
 	systemdFlagName := "systemd"
@@ -696,13 +696,13 @@ func DefineCreateFlags(cmd *cobra.Command, cf *ContainerCLIOpts) {
 		"Allocate a pseudo-TTY for container",
 	)
 
-	timezonezFlagName := "tz"
+	timezoneFlagName := "tz"
 	createFlags.StringVar(
 		&cf.Timezone,
-		timezonezFlagName, containerConfig.TZ(),
+		timezoneFlagName, containerConfig.TZ(),
 		"Set timezone in container",
 	)
-	_ = cmd.RegisterFlagCompletionFunc(timezonezFlagName, completion.AutocompleteNone) //TODO: add timezone completion
+	_ = cmd.RegisterFlagCompletionFunc(timezoneFlagName, completion.AutocompleteNone) //TODO: add timezone completion
 
 	umaskFlagName := "umask"
 	createFlags.StringVar(
@@ -98,7 +98,7 @@ type ContainerCLIOpts struct {
 	SignaturePolicy string
 	StopSignal      string
 	StopTimeout     uint
-	StoreageOpt     []string
+	StorageOpt      []string
 	SubUIDName      string
 	SubGIDName      string
 	Sysctl          []string
@@ -310,7 +310,7 @@ func ContainerCreateToContainerCLIOpts(cc handlers.CreateContainerConfig, cgroup
 	// on speculation by Matt and I. We think that these come into play later
 	// like with start. We believe this is just a difference in podman/compat
 	cliOpts := ContainerCLIOpts{
-		// Attach: nil, // dont need?
+		// Attach: nil, // don't need?
 		Authfile: "",
 		CapAdd:   append(capAdd, cc.HostConfig.CapAdd...),
 		CapDrop:  append(cappDrop, cc.HostConfig.CapDrop...),
@@ -321,11 +321,11 @@ func ContainerCreateToContainerCLIOpts(cc handlers.CreateContainerConfig, cgroup
 		CPURTPeriod:  uint64(cc.HostConfig.CPURealtimePeriod),
 		CPURTRuntime: cc.HostConfig.CPURealtimeRuntime,
 		CPUShares:    uint64(cc.HostConfig.CPUShares),
-		// CPUS: 0, // dont need?
+		// CPUS: 0, // don't need?
 		CPUSetCPUs: cc.HostConfig.CpusetCpus,
 		CPUSetMems: cc.HostConfig.CpusetMems,
-		// Detach:   false, // dont need
-		// DetachKeys: "", // dont need
+		// Detach:   false, // don't need
+		// DetachKeys: "", // don't need
 		Devices:          devices,
 		DeviceCGroupRule: nil,
 		DeviceReadBPs:    readBps,
@@ -359,7 +359,7 @@ func ContainerCreateToContainerCLIOpts(cc handlers.CreateContainerConfig, cgroup
 		Rm:          cc.HostConfig.AutoRemove,
 		SecurityOpt: cc.HostConfig.SecurityOpt,
 		StopSignal:  cc.Config.StopSignal,
-		StoreageOpt: stringMaptoArray(cc.HostConfig.StorageOpt),
+		StorageOpt:  stringMaptoArray(cc.HostConfig.StorageOpt),
 		Sysctl:      stringMaptoArray(cc.HostConfig.Sysctls),
 		Systemd:     "true", // podman default
 		TmpFS:       stringMaptoArray(cc.HostConfig.Tmpfs),
@@ -488,9 +488,9 @@ func FillOutSpecGen(s *specgen.SpecGenerator, c *ContainerCLIOpts, args []string
 	s.ConmonPidFile = c.ConmonPIDFile
 
 	// TODO
-	// ouitside of specgen and oci though
+	// outside of specgen and oci though
 	// defaults to true, check spec/storage
-	// s.readon = c.ReadOnlyTmpFS
+	// s.readonly = c.ReadOnlyTmpFS
 	// TODO convert to map?
 	// check if key=value and convert
 	sysmap := make(map[string]string)
@@ -32,7 +32,7 @@ var (
 		Example: `podman completion bash
   podman completion zsh -f _podman
   podman completion fish --no-desc`,
-		//dont show this command to users
+		//don't show this command to users
 		Hidden: true,
 	}
 )
@@ -55,7 +55,7 @@ var (
 func cpFlags(cmd *cobra.Command) {
 	flags := cmd.Flags()
 	flags.BoolVar(&cpOpts.Extract, "extract", false, "Deprecated...")
-	flags.BoolVar(&cpOpts.Pause, "pause", true, "Deorecated")
+	flags.BoolVar(&cpOpts.Pause, "pause", true, "Deprecated")
 	_ = flags.MarkHidden("extract")
 	_ = flags.MarkHidden("pause")
 }
@@ -171,7 +171,7 @@ func createInit(c *cobra.Command) error {
 	}
 	cliVals.UserNS = c.Flag("userns").Value.String()
 	// if user did not modify --userns flag and did turn on
-	// uid/gid mappsings, set userns flag to "private"
+	// uid/gid mappings, set userns flag to "private"
 	if !c.Flag("userns").Changed && cliVals.UserNS == "host" {
 		if len(cliVals.UIDMap) > 0 ||
 			len(cliVals.GIDMap) > 0 ||
@@ -239,7 +239,7 @@ func pullImage(imageName string) (string, error) {
 
 	if cliVals.Platform != "" {
 		if cliVals.OverrideArch != "" || cliVals.OverrideOS != "" {
-			return "", errors.Errorf("--platform option can not be specified with --overide-arch or --override-os")
+			return "", errors.Errorf("--platform option can not be specified with --override-arch or --override-os")
 		}
 		split := strings.SplitN(cliVals.Platform, "/", 2)
 		cliVals.OverrideOS = split[0]
@@ -39,7 +39,7 @@ var (
 		ValidArgsFunction: common.AutocompleteContainers,
 	}
 
-	containerMountCommmand = &cobra.Command{
+	containerMountCommand = &cobra.Command{
 		Use:   mountCommand.Use,
 		Short: mountCommand.Short,
 		Long:  mountCommand.Long,
@@ -76,11 +76,11 @@ func init() {
 
 	registry.Commands = append(registry.Commands, registry.CliCommand{
 		Mode:    []entities.EngineMode{entities.ABIMode},
-		Command: containerMountCommmand,
+		Command: containerMountCommand,
 		Parent:  containerCmd,
 	})
-	mountFlags(containerMountCommmand)
-	validate.AddLatestFlag(containerMountCommmand, &mountOpts.Latest)
+	mountFlags(containerMountCommand)
+	validate.AddLatestFlag(containerMountCommand, &mountOpts.Latest)
 }
 
 func mount(_ *cobra.Command, args []string) error {
@@ -139,7 +139,7 @@ func imagePull(cmd *cobra.Command, args []string) error {
 	}
 	if platform != "" {
 		if pullOptions.OverrideArch != "" || pullOptions.OverrideOS != "" {
-			return errors.Errorf("--platform option can not be specified with --overide-arch or --override-os")
+			return errors.Errorf("--platform option can not be specified with --override-arch or --override-os")
 		}
 		split := strings.SplitN(platform, "/", 2)
 		pullOptions.OverrideOS = split[0]
@@ -51,7 +51,7 @@ func AddInspectFlagSet(cmd *cobra.Command) *entities.InspectOptions {
 	_ = cmd.RegisterFlagCompletionFunc(formatFlagName, completion.AutocompleteNone)
 
 	typeFlagName := "type"
-	flags.StringVarP(&opts.Type, typeFlagName, "t", AllType, fmt.Sprintf("Specify inspect-oject type (%q, %q or %q)", ImageType, ContainerType, AllType))
+	flags.StringVarP(&opts.Type, typeFlagName, "t", AllType, fmt.Sprintf("Specify inspect-object type (%q, %q or %q)", ImageType, ContainerType, AllType))
 	_ = cmd.RegisterFlagCompletionFunc(typeFlagName, common.AutocompleteInspectType)
 
 	validate.AddLatestFlag(cmd, &opts.Latest)
@@ -48,7 +48,7 @@
 | [podman-network(1)](https://podman.readthedocs.io/en/latest/network.html) | Manage Podman CNI networks |
 | [podman-network-create(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-create.1.html) | Create a CNI network |
 | [podman-network-connect(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-connect.1.html) | Connect a container to a CNI network |
-| [podman-network-disconnect(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-dosconnect.1.html) | Disconnect a container from a CNI network |
+| [podman-network-disconnect(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-disconnect.1.html) | Disconnect a container from a CNI network |
 | [podman-network-inspect(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-inspect.1.html) | Displays the raw CNI network configuration for one or more networks |
 | [podman-network-ls(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-ls.1.html) | Display a summary of CNI networks |
 | [podman-network-rm(1)](https://podman.readthedocs.io/en/latest/markdown/podman-network-rm.1.html) | Remove one or more CNI networks |
@@ -6,7 +6,7 @@ set -eo pipefail
 # by connecting to a set of essential external servers and failing
 # if any cannot be reached. It's intended for use early on in the
 # podman CI system, to help prevent wasting time on tests that can't
-# succeede due to some outage or another.
+# succeed due to some outage or another.
 
 # shellcheck source=./contrib/cirrus/lib.sh
 source $(dirname $0)/lib.sh
@@ -42,7 +42,7 @@ fi
 OS_RELEASE_ID="$(source /etc/os-release; echo $ID)"
 # GCE image-name compatible string representation of distribution _major_ version
 OS_RELEASE_VER="$(source /etc/os-release; echo $VERSION_ID | tr -d '.')"
-# Combined to ease soe usage
+# Combined to ease some usage
 OS_REL_VER="${OS_RELEASE_ID}-${OS_RELEASE_VER}"
 # This is normally set from .cirrus.yml but default is necessary when
 # running under hack/get_ci_vm.sh since it cannot infer the value.
@@ -87,7 +87,7 @@ CIRRUS_BUILD_ID=${CIRRUS_BUILD_ID:-$RANDOM$(date +%s)} # must be short and uniq
 # The starting place for linting and code validation
 EPOCH_TEST_COMMIT="$CIRRUS_BASE_SHA"
 
-# Regex defining all CI-releated env. vars. necessary for all possible
+# Regex defining all CI-related env. vars. necessary for all possible
 # testing operations on all platforms and versions. This is necessary
 # to avoid needlessly passing through global/system values across
 # contexts, such as host->container or root->rootless user
@@ -506,7 +506,7 @@ END_SYNOPSIS
 
 # PR 1234 - title of the pr
 my $pr_title = escapeHTML(_env_replace("{CIRRUS_CHANGE_TITLE}"));
-$s .= _tr("Github PR", sprintf("%s - %s",
+$s .= _tr("GitHub PR", sprintf("%s - %s",
 	_a("{CIRRUS_PR}", "https://{CIRRUS_REPO_CLONE_HOST}/{CIRRUS_REPO_FULL_NAME}/pull/{CIRRUS_PR}"),
 	$pr_title));
@@ -9,7 +9,7 @@
 # the RPM name would need to be adjusted before a run as
 # appropriate.
 #
-# To use, first copy an rpm file from bohdi to `/root/tmp`
+# To use, first copy an rpm file from bodhi to `/root/tmp`
 # and then run:
 # 'podman build -f ./Containerfile -t quay.io/podman/stable:v1.7.0 .'
 #
@@ -7,5 +7,5 @@
 # Default Remote URI to access the Podman service.
 # Examples:
 # remote rootless ssh://engineering.lab.company.com/run/user/1000/podman/podman.sock
-# remote rootfull ssh://root@10.10.1.136:22/run/podman/podman.sock
+# remote rootful ssh://root@10.10.1.136:22/run/podman/podman.sock
 # remote_uri= ""
@@ -2,7 +2,7 @@ ARG GOLANG_VERSION=1.15
 ARG ALPINE_VERSION=3.12
 ARG CNI_VERSION=v0.8.0
 ARG CNI_PLUGINS_VERSION=v0.8.7
-ARG DNSNAME_VESION=v1.0.0
+ARG DNSNAME_VERSION=v1.0.0
 
 FROM golang:${GOLANG_VERSION}-alpine${ALPINE_VERSION} AS golang-base
 RUN apk add --no-cache git
@@ -45,7 +45,7 @@ because the client (i.e. your web browser) is fetching content from multiple loc
 do not share a common domain, accessing the API section may show a stack-trace similar to
 the following:
 
-
+
 
 If reloading the page, or clearing your local cache does not fix the problem, it is
 likely caused by broken metadata needed to protect clients from cross-site-scripting
@@ -40,7 +40,7 @@ container images. This `buildah` code creates `buildah` containers for the
 `RUN` options in container storage. In certain situations, when the
 `podman build` crashes or users kill the `podman build` process, these external
 containers can be left in container storage. Use the `podman ps --all --storage`
-command to see these contaienrs. External containers can be removed with the
+command to see these containers. External containers can be removed with the
 `podman rm --storage` command.
 
 ## OPTIONS
@@ -28,7 +28,7 @@ random port is assigned by Podman in the specification.
 Create Kubernetes Pod YAML for a container called `some-mariadb` .
 ```
 $ sudo podman generate kube some-mariadb
-# Generation of Kubenetes YAML is still under development!
+# Generation of Kubernetes YAML is still under development!
 #
 # Save the output of this file and use kubectl create -f to import
 # it into Kubernetes.
@@ -946,7 +946,7 @@ For the IPC namespace, the following sysctls are allowed:
 
 Note: if you use the **--ipc=host** option, the above sysctls will not be allowed.
 
-For the network namespace, the following ysctls areallowed:
+For the network namespace, the following sysctls are allowed:
 
 - Sysctls beginning with net.\*
@@ -9,7 +9,7 @@ podman\-system\-service - Run an API service
 ## DESCRIPTION
 The **podman system service** command creates a listening service that will answer API calls for Podman. You may
 optionally provide an endpoint for the API in URI form. For example, *unix://tmp/foobar.sock* or *tcp:localhost:8080*.
-If no endpoint is provided, defaults will be used. The default endpoint for a rootfull
+If no endpoint is provided, defaults will be used. The default endpoint for a rootful
 service is *unix:/run/podman/podman.sock* and rootless is *unix:/$XDG_RUNTIME_DIR/podman/podman.sock* (for
 example *unix:/run/user/1000/podman/podman.sock*)
@@ -291,7 +291,7 @@ When Podman runs in rootless mode, the file `$HOME/.config/containers/mounts.con
 
 Non root users of Podman can create the `$HOME/.config/containers/registries.conf` file to be used instead of the system defaults.
 
-**storage.conf** (`/etc/containers/storage.conf`, `$HOME/.config/contaners/storage.conf`)
+**storage.conf** (`/etc/containers/storage.conf`, `$HOME/.config/containers/storage.conf`)
 
 storage.conf is the storage configuration file for all tools using containers/storage
@@ -12,6 +12,6 @@ if [ ! -x "$BIN" ]; then
 	echo "Installing golangci-lint v$VERSION into $GOBIN"
 	curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $GOBIN v$VERSION
 else
-	# Prints it's own file name as part of --verison output
+	# Prints its own file name as part of --version output
 	echo "Using existing $(dirname $BIN)/$($BIN --version)"
 fi
@@ -28,7 +28,7 @@ func replaceNetNS(netNSPath string, ctr *Container, newState *ContainerState) er
 		newState.NetNS = ns
 	} else {
 		if ctr.ensureState(define.ContainerStateRunning, define.ContainerStatePaused) {
-			return errors.Wrapf(err, "error joning network namespace of container %s", ctr.ID())
+			return errors.Wrapf(err, "error joining network namespace of container %s", ctr.ID())
 		}
 
 		logrus.Errorf("error joining network namespace for container %s: %v", ctr.ID(), err)
@@ -571,7 +571,7 @@ func (c *Container) Cleanup(ctx context.Context) error {
 
 // Batch starts a batch operation on the given container
 // All commands in the passed function will execute under the same lock and
-// without syncronyzing state after each operation
+// without synchronizing state after each operation
 // This will result in substantial performance benefits when running numerous
 // commands on the same container
 // Note that the container passed into the Batch function cannot be removed
@@ -151,7 +151,7 @@ type ContainerRootFSConfig struct {
 // ContainerSecurityConfig is an embedded sub-config providing security configuration
 // to the container.
 type ContainerSecurityConfig struct {
-	// Pirivileged is whether the container is privileged. Privileged
+	// Privileged is whether the container is privileged. Privileged
 	// containers have lessened security and increased access to the system.
 	// Note that this does NOT directly correspond to Podman's --privileged
 	// flag - most of the work of that flag is done in creating the OCI spec
@@ -884,9 +884,9 @@ func (c *Container) startDependencies(ctx context.Context) error {
 // getAllDependencies is a precursor to starting dependencies.
 // To start a container with all of its dependencies, we need to recursively find all dependencies
 // a container has, as well as each of those containers' dependencies, and so on
-// To do so, keep track of containers already visisted (so there aren't redundant state lookups),
+// To do so, keep track of containers already visited (so there aren't redundant state lookups),
 // and recursively search until we have reached the leafs of every dependency node.
-// Since we need to start all dependencies for our original container to successfully start, we propegate any errors
+// Since we need to start all dependencies for our original container to successfully start, we propagate any errors
 // in looking up dependencies.
 // Note: this function is currently meant as a robust solution to a narrow problem: start an infra-container when
 // a container in the pod is run. It has not been tested for performance past one level, so expansion of recursive start
@@ -1659,7 +1659,7 @@ func (c *Container) getHosts() string {
 
 // generateGroupEntry generates an entry or entries into /etc/group as
 // required by container configuration.
-// Generatlly speaking, we will make an entry under two circumstances:
+// Generally speaking, we will make an entry under two circumstances:
 // 1. The container is started as a specific user:group, and that group is both
 //    numeric, and does not already exist in /etc/group.
 // 2. It is requested that Libpod add the group that launched Podman to
@@ -1937,7 +1937,7 @@ func (c *Container) generatePasswdAndGroup() (string, string, error) {
 		needGroup = false
 	}
 
-	// Next, check if we already made the files. If we didn, don't need to
+	// Next, check if we already made the files. If we didn't, don't need to
 	// do anything more.
 	if needPasswd {
 		passwdPath := filepath.Join(c.config.StaticDir, "passwd")
@@ -23,7 +23,7 @@ func (r *Runtime) Log(ctx context.Context, containers []*Container, options *log
 	return nil
 }
 
-// ReadLog reads a containers log based on the input options and returns loglines over a channel.
+// ReadLog reads a containers log based on the input options and returns log lines over a channel.
 func (c *Container) ReadLog(ctx context.Context, options *logs.LogOptions, logChannel chan *logs.LogLine) error {
 	switch c.LogDriver() {
 	case define.NoLogging:
@@ -14,7 +14,7 @@ func (c *Container) Top(descriptors []string) ([]string, error) {
 // the container. The output data can be controlled via the `descriptors`
 // argument which expects format descriptors and supports all AIXformat
 // descriptors of ps (1) plus some additional ones to for instance inspect the
-// set of effective capabilities. Eeach element in the returned string slice
+// set of effective capabilities. Each element in the returned string slice
 // is a tab-separated string.
 //
 // For more details, please refer to github.com/containers/psgo.
@@ -88,7 +88,7 @@ func (c *Container) validate() error {
 return errors.Wrapf(define.ErrInvalidArg, "cannot add to /etc/hosts if using image's /etc/hosts")
 }
 
-// Check named volume, overlay volume and image volume destination conflits
+// Check named volume, overlay volume and image volume destination conflicts
 destinations := make(map[string]bool)
 for _, vol := range c.config.NamedVolumes {
 // Don't check if they already exist.
@@ -157,7 +157,7 @@ type InspectMount struct {
 // "volume" and "bind".
 Type string `json:"Type"`
 // The name of the volume. Empty for bind mounts.
-Name string `json:"Name,omptempty"`
+Name string `json:"Name,omitempty"`
 // The source directory for the volume.
 Source string `json:"Source"`
 // The destination directory for the volume. Specified as a path within
@@ -552,7 +552,7 @@ type InspectBasicNetworkConfig struct {
 // GlobalIPv6PrefixLen is the length of the subnet mask of this network.
 GlobalIPv6PrefixLen int `json:"GlobalIPv6PrefixLen"`
 // SecondaryIPv6Addresses is a list of extra IPv6 Addresses that the
-// container has been assigned in this networ.
+// container has been assigned in this network.
 SecondaryIPv6Addresses []string `json:"SecondaryIPv6Addresses,omitempty"`
 // MacAddress is the MAC address for the interface in this network.
 MacAddress string `json:"MacAddress"`
@@ -51,7 +51,7 @@ func (ir *Runtime) DiskUsage(ctx context.Context, images []*Image) ([]DiskUsageS
 return stats, nil
 }
 
-// diskUsageForImage returns the disk-usage statistics for the spcified image.
+// diskUsageForImage returns the disk-usage statistics for the specified image.
 func diskUsageForImage(ctx context.Context, image *Image, tree *layerTree) (*DiskUsageStat, error) {
 stat := DiskUsageStat{
 ID: image.ID(),
@@ -50,7 +50,7 @@ func decompose(input string) (imageParts, error) {
 
 // suspiciousRefNameTagValuesForSearch returns a "tag" value used in a previous implementation.
 // This exists only to preserve existing behavior in heuristic code; it’s dubious that that behavior is correct,
-// gespecially for the tag value.
+// especially for the tag value.
 func (ip *imageParts) suspiciousRefNameTagValuesForSearch() (string, string, string) {
 registry := reference.Domain(ip.unnormalizedRef)
 imageName := reference.Path(ip.unnormalizedRef)
@@ -26,7 +26,7 @@ const (
 type SearchResult struct {
 // Index is the image index (e.g., "docker.io" or "quay.io")
 Index string
-// Name is the canoncical name of the image (e.g., "docker.io/library/alpine").
+// Name is the canonical name of the image (e.g., "docker.io/library/alpine").
 Name string
 // Description of the image.
 Description string
@@ -403,7 +403,7 @@ func libpodEnvVarsToKubeEnvVars(envs []string) ([]v1.EnvVar, error) {
 
 // libpodMountsToKubeVolumeMounts converts the containers mounts to a struct kube understands
 func libpodMountsToKubeVolumeMounts(c *Container) ([]v1.VolumeMount, []v1.Volume, error) {
-// TjDO when named volumes are supported in play kube, also parse named volumes here
+// TODO when named volumes are supported in play kube, also parse named volumes here
 _, mounts := c.sortUserVolumes(c.config.Spec)
 vms := make([]v1.VolumeMount, 0, len(mounts))
 vos := make([]v1.Volume, 0, len(mounts))
@@ -524,7 +524,7 @@ func capAddDrop(caps *specs.LinuxCapabilities) (*v1.Capabilities, error) {
 defaultCaps = append(defaultCaps, g.Config.Process.Capabilities.Inheritable...)
 defaultCaps = append(defaultCaps, g.Config.Process.Capabilities.Permitted...)
 
-// Combine all the container's capabilities into a slic
+// Combine all the container's capabilities into a slice
 containerCaps := append(caps.Ambient, caps.Bounding...)
 containerCaps = append(containerCaps, caps.Effective...)
 containerCaps = append(containerCaps, caps.Inheritable...)
@@ -137,7 +137,7 @@ func getTailLog(path string, tail int) ([]*LogLine, error) {
 nllCounter++
 }
 }
-// if we have enough loglines, we can hangup
+// if we have enough log lines, we can hangup
 if nllCounter >= tail {
 break
 }
@@ -161,7 +161,7 @@ func getTailLog(path string, tail int) ([]*LogLine, error) {
 return tailLog, nil
 }
 
-// String converts a logline to a string for output given whether a detail
+// String converts a log line to a string for output given whether a detail
 // bool is specified.
 func (l *LogLine) String(options *LogOptions) string {
 var out string
@@ -61,7 +61,7 @@ func Test_validateBridgeOptions(t *testing.T) {
 isIPv6: true,
 },
 {
-name: "IPv6 subnet, range and gateway without IPv6 option (PODMAN SUPPORTS IT UNLIKE DOCKEr)",
+name: "IPv6 subnet, range and gateway without IPv6 option (PODMAN SUPPORTS IT UNLIKE DOCKER)",
 subnet: net.IPNet{IP: net.ParseIP("2001:DB8::"), Mask: net.IPMask(net.ParseIP("ffff:ffff:ffff::"))},
 ipRange: net.IPNet{IP: net.ParseIP("2001:DB8:0:0:1::"), Mask: net.IPMask(net.ParseIP("ffff:ffff:ffff:ffff::"))},
 gateway: net.ParseIP("2001:DB8::2"),
@@ -60,7 +60,7 @@ func NewHostLocalBridge(name string, isGateWay, isDefaultGW, ipMasq bool, mtu in
 return &hostLocalBridge
 }
 
-// NewIPAMHostLocalConf creates a new IPAMHostLocal configfuration
+// NewIPAMHostLocalConf creates a new IPAMHostLocal configuration
 func NewIPAMHostLocalConf(routes []IPAMRoute, ipamRanges [][]IPAMLocalHostRangeConf) (IPAMHostLocalConf, error) {
 ipamConf := IPAMHostLocalConf{
 PluginType: "host-local",
@@ -1155,7 +1155,7 @@ func (c *Container) NetworkDisconnect(nameOrID, netName string, force bool) erro
 return c.save()
 }
 
-// ConnnectNetwork connects a container to a given network
+// ConnectNetwork connects a container to a given network
 func (c *Container) NetworkConnect(nameOrID, netName string, aliases []string) error {
 networks, err := c.networksByNameIndex()
 if err != nil {
@@ -56,7 +56,7 @@ type OCIRuntime interface {
 // a header prepended as follows: 1-byte STREAM (0, 1, 2 for STDIN,
 // STDOUT, STDERR), 3 null (0x00) bytes, 4-byte big endian length.
 // If a cancel channel is provided, it can be used to asynchronously
-// termninate the attach session. Detach keys, if given, will also cause
+// terminate the attach session. Detach keys, if given, will also cause
 // the attach session to be terminated if provided via the STDIN
 // channel. If they are not provided, the default detach keys will be
 // used instead. Detach keys of "" will disable detaching via keyboard.
@@ -83,7 +83,7 @@ func (c *Container) attach(streams *define.AttachStreams, keys string, resize <-
 // Attach to the given container's exec session
 // attachFd and startFd must be open file descriptors
 // attachFd must be the output side of the fd. attachFd is used for two things:
-// conmon will first send a nonse value across the pipe indicating it has set up its side of the console socket
+// conmon will first send a nonce value across the pipe indicating it has set up its side of the console socket
 // this ensures attachToExec gets all of the output of the called process
 // conmon will then send the exit code of the exec process, or an error in the exec session
 // startFd must be the input side of the fd.
@@ -47,7 +47,7 @@ import (
 
 const (
 // This is Conmon's STDIO_BUF_SIZE. I don't believe we have access to it
-// directly from the Go cose, so const it here
+// directly from the Go code, so const it here
 bufferSize = conmonConfig.BufSize
 )
 
@@ -1413,7 +1413,7 @@ func startCommandGivenSelinux(cmd *exec.Cmd) error {
 }
 
 // moveConmonToCgroupAndSignal gets a container's cgroupParent and moves the conmon process to that cgroup
-// it then signals for conmon to start by sending nonse data down the start fd
+// it then signals for conmon to start by sending nonce data down the start fd
 func (r *ConmonOCIRuntime) moveConmonToCgroupAndSignal(ctr *Container, cmd *exec.Cmd, startFd *os.File) error {
 mustCreateCgroup := true
 
@@ -1572,7 +1572,7 @@ func readConmonPipeData(pipe *os.File, ociLog string) (int, error) {
 return data, nil
 }
 
-// writeConmonPipeData writes nonse data to a pipe
+// writeConmonPipeData writes nonce data to a pipe
 func writeConmonPipeData(pipe *os.File) error {
 someData := []byte{0}
 _, err := pipe.Write(someData)
@@ -751,7 +751,7 @@ func WithStopTimeout(timeout uint) CtrCreateOption {
 }
 }
 
-// WithIDMappings sets the idmappsings for the container
+// WithIDMappings sets the idmappings for the container
 func WithIDMappings(idmappings storage.IDMappingOptions) CtrCreateOption {
 return func(ctr *Container) error {
 if ctr.valid {
@@ -15,7 +15,7 @@ import (
 // the pod. The output data can be controlled via the `descriptors`
 // argument which expects format descriptors and supports all AIXformat
 // descriptors of ps (1) plus some additional ones to for instance inspect the
-// set of effective capabilities. Eeach element in the returned string slice
+// set of effective capabilities. Each element in the returned string slice
 // is a tab-separated string.
 //
 // For more details, please refer to github.com/containers/psgo.
@@ -100,7 +100,7 @@ func DeallocRootlessCNI(ctx context.Context, c *Container) error {
 }
 var errs *multierror.Error
 for _, nw := range networks {
-err := rootlessCNIInfraCallDelloc(infra, c.ID(), nw)
+err := rootlessCNIInfraCallDealloc(infra, c.ID(), nw)
 if err != nil {
 errs = multierror.Append(errs, err)
 }
@@ -154,7 +154,7 @@ func rootlessCNIInfraCallAlloc(infra *Container, id, nw, k8sPodName string) (*cn
 return &cniRes, nil
 }
 
-func rootlessCNIInfraCallDelloc(infra *Container, id, nw string) error {
+func rootlessCNIInfraCallDealloc(infra *Container, id, nw string) error {
 logrus.Debugf("rootless CNI: dealloc %q, %q", id, nw)
 _, err := rootlessCNIInfraExec(infra, "dealloc", id, nw)
 return err
@@ -230,7 +230,7 @@ func (r *Runtime) Import(ctx context.Context, source, reference, signaturePolicy
 return newImage.ID(), nil
 }
 
-// donwloadFromURL downloads an image in the format "https:/example.com/myimage.tar"
+// downloadFromURL downloads an image in the format "https:/example.com/myimage.tar"
 // and temporarily saves in it $TMPDIR/importxyz, which is deleted after the image is imported
 func downloadFromURL(source string) (string, error) {
 fmt.Printf("Downloading from %q\n", source)
@@ -882,7 +882,7 @@ func TestRemoveContainer(t *testing.T) {
 })
 }
 
-func TestRemoveNonexistantContainerFails(t *testing.T) {
+func TestRemoveNonexistentContainerFails(t *testing.T) {
 runForAllStates(t, func(t *testing.T, state State, manager lock.Manager) {
 testCtr, err := getTestCtr1(manager)
 assert.NoError(t, err)
@@ -1513,7 +1513,7 @@ func TestGetNotExistPodWithPods(t *testing.T) {
 err = state.AddPod(testPod2)
 assert.NoError(t, err)
 
-_, err = state.Pod("notexist")
+_, err = state.Pod("nonexistent")
 assert.Error(t, err)
 })
 }
@@ -1748,7 +1748,7 @@ func TestHasPodEmptyIDErrors(t *testing.T) {
 
 func TestHasPodNoSuchPod(t *testing.T) {
 runForAllStates(t, func(t *testing.T, state State, manager lock.Manager) {
-exist, err := state.HasPod("notexist")
+exist, err := state.HasPod("nonexistent")
 assert.NoError(t, err)
 assert.False(t, exist)
 })
@@ -280,7 +280,7 @@ func writeHijackHeader(r *http.Request, conn io.Writer) {
 fmt.Fprintf(conn,
 "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
 } else {
-// Upraded
+// Upgraded
 fmt.Fprintf(conn,
 "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: %s\r\n\r\n",
 proto)
@@ -94,7 +94,7 @@ const (
 // StdinOnce is the stdin_once annotation
 StdinOnce = "io.kubernetes.cri-o.StdinOnce"
 
-// Volumes is the volumes annotatoin
+// Volumes is the volumes annotation
 Volumes = "io.kubernetes.cri-o.Volumes"
 
 // HostNetwork indicates whether the host network namespace is used or not
@@ -75,7 +75,7 @@ func GetEvents(w http.ResponseWriter, r *http.Request) {
 )
 
 // NOTE: the "filters" parameter is extracted separately for backwards
-// compat via `fitlerFromRequest()`.
+// compat via `filterFromRequest()`.
 query := struct {
 Since string `schema:"since"`
 Until string `schema:"until"`
@@ -48,7 +48,7 @@ func GetInfo(w http.ResponseWriter, r *http.Request) {
 stateInfo := getContainersState(runtime)
 sysInfo := sysinfo.New(true)
 
-// FIXME: Need to expose if runtime supports Checkpoint'ing
+// FIXME: Need to expose if runtime supports Checkpointing
 // liveRestoreEnabled := criu.CheckForCriu() && configInfo.RuntimeSupportsCheckpoint()
 
 info := &handlers.Info{Info: docker.Info{
@@ -208,7 +208,7 @@ func RemoveVolume(w http.ResponseWriter, r *http.Request) {
 * using the volume at the same time".
 *
 * With this in mind, we only consider the `force` query parameter when we
-* hunt for specified volume by name, using it to seletively return a 204
+* hunt for specified volume by name, using it to selectively return a 204
 * or blow up depending on `force` being truthy or falsey/unset
 * respectively.
 */
@@ -231,7 +231,7 @@ func RemoveVolume(w http.ResponseWriter, r *http.Request) {
 utils.VolumeNotFound(w, name, err)
 } else {
 // Volume does not exist and `force` is truthy - this emulates what
-// Docker would do when told to `force` removal of a nonextant
+// Docker would do when told to `force` removal of a nonexistent
 // volume
 utils.WriteResponse(w, http.StatusNoContent, nil)
 }
@@ -7,7 +7,7 @@ import (
 "github.com/gorilla/mux"
 )
 
-func (s *APIServer) registerAchiveHandlers(r *mux.Router) error {
+func (s *APIServer) registerArchiveHandlers(r *mux.Router) error {
 // swagger:operation PUT /containers/{name}/archive compat putArchive
 // ---
 // summary: Put files into a container
@@ -666,7 +666,7 @@ func (s *APIServer) registerImagesHandlers(r *mux.Router) error {
 // - in: query
 // name: destination
 // type: string
-// description: Allows for pushing the image to a different destintation than the image refers to.
+// description: Allows for pushing the image to a different destination than the image refers to.
 // - in: query
 // name: tlsVerify
 // description: Require TLS verification.
@@ -108,7 +108,7 @@ func newServer(runtime *libpod.Runtime, duration time.Duration, listener *net.Li
 
 for _, fn := range []func(*mux.Router) error{
 server.registerAuthHandlers,
-server.registerAchiveHandlers,
+server.registerArchiveHandlers,
 server.registerContainersHandlers,
 server.registerDistributionHandlers,
 server.registerEventsHandlers,
@@ -44,7 +44,7 @@ var supportedPolicies = map[string]Policy{
 "image": PolicyNewImage,
 }
 
-// LookupPolicy looksup the corresponding Policy for the specified
+// LookupPolicy looks up the corresponding Policy for the specified
 // string. If none is found, an errors is returned including the list of
 // supported policies.
 //
@@ -8,7 +8,7 @@ type KubeOptions struct {
 }
 
 //go:generate go run ../generator/generator.go SystemdOptions
-// SystemdOptions are optional options for generating ssytemd files
+// SystemdOptions are optional options for generating systemd files
 type SystemdOptions struct {
 // Name - use container/pod name instead of its ID.
 UseName *bool
@@ -136,7 +136,7 @@ type PushOptions struct {
 }
 
 //go:generate go run ../generator/generator.go SearchOptions
-// SearchOptions are optional options for seaching images on registies
+// SearchOptions are optional options for searching images on registries
 type SearchOptions struct {
 // Authfile is the path to the authentication file. Ignored for remote
 // calls.
@@ -193,9 +193,9 @@ func (o *CreateOptions) WithIPRange(value net.IPNet) *CreateOptions {
 
 // GetIPRange
 func (o *CreateOptions) GetIPRange() net.IPNet {
-var iPRange net.IPNet
+var ipRange net.IPNet
 if o.IPRange == nil {
-return iPRange
+return ipRange
 }
 return *o.IPRange
 }
@@ -70,7 +70,7 @@ var _ = Describe("Podman images", func() {
 // Inspect by long name
 _, err = images.GetImage(bt.conn, alpine.name, nil)
 Expect(err).To(BeNil())
-// TODO it looks like the images API alwaays returns size regardless
+// TODO it looks like the images API always returns size regardless
 // of bool or not. What should we do ?
 // Expect(data.Size).To(BeZero())
 
@@ -169,7 +169,7 @@ var _ = Describe("Podman pods", func() {
 
 // This test validates if All running containers within
 // each specified pod are paused and unpaused
-It("pause upause pod", func() {
+It("pause unpause pod", func() {
 // TODO fix this
 Skip("Pod behavior is jacked right now.")
 // Pause invalid container
@@ -22,7 +22,7 @@ import (
 var (
 // ErrCgroupDeleted means the cgroup was deleted
 ErrCgroupDeleted = errors.New("cgroup deleted")
-// ErrCgroupV1Rootless means the cgroup v1 were attempted to be used in rootless environmen
+// ErrCgroupV1Rootless means the cgroup v1 were attempted to be used in rootless environment
 ErrCgroupV1Rootless = errors.New("no support for CGroups V1 in rootless environments")
 )
 
@@ -16,7 +16,7 @@ import (
 // base64 encoded JSON payload of stating a path in a container.
 const XDockerContainerPathStatHeader = "X-Docker-Container-Path-Stat"
 
-// ENOENT mimics the stdlib's ENONENT and can be used to implement custom logic
+// ENOENT mimics the stdlib's ENOENT and can be used to implement custom logic
 // while preserving the user-visible error message.
 var ENOENT = errors.New("No such file or directory")
 
@@ -222,7 +222,7 @@ type ImageSearchOptions struct {
 type ImageSearchReport struct {
 // Index is the image index (e.g., "docker.io" or "quay.io")
 Index string
-// Name is the canoncical name of the image (e.g., "docker.io/library/alpine").
+// Name is the canonical name of the image (e.g., "docker.io/library/alpine").
 Name string
 // Description of the image.
 Description string
@@ -40,7 +40,7 @@ func (ic *ContainerEngine) containerStat(container *libpod.Container, containerP
 // Not all errors from secureStat map to ErrNotExist, so we
 // have to look into the error string. Turning it into an
 // ENOENT let's the API handlers return the correct status code
-// which is crucuial for the remote client.
+// which is crucial for the remote client.
 if os.IsNotExist(err) || strings.Contains(statInfoErr.Error(), "o such file or directory") {
 statInfoErr = copy.ENOENT
 }
@@ -70,7 +70,7 @@ func (ic *ContainerEngine) containerStat(container *libpod.Container, containerP
 absContainerPath = containerPath
 }
 
-// Now we need to make sure to preseve the base path as specified by
+// Now we need to make sure to preserve the base path as specified by
 // the user. The `filepath` packages likes to remove trailing slashes
 // and dots that are crucial to the copy logic.
 absContainerPath = copy.PreserveBasePath(containerPath, absContainerPath)
@@ -21,7 +21,7 @@ func NewContainerEngine(facts *entities.PodmanConfig) (entities.ContainerEngine,
 return r, err
 case entities.TunnelMode:
 ctx, err := bindings.NewConnectionWithIdentity(context.Background(), facts.URI, facts.Identity)
-return &tunnel.ContainerEngine{ClientCxt: ctx}, err
+return &tunnel.ContainerEngine{ClientCtx: ctx}, err
 }
 return nil, fmt.Errorf("runtime mode '%v' is not supported", facts.EngineMode)
 }
@@ -34,7 +34,7 @@ func NewImageEngine(facts *entities.PodmanConfig) (entities.ImageEngine, error)
 return r, err
 case entities.TunnelMode:
 ctx, err := bindings.NewConnectionWithIdentity(context.Background(), facts.URI, facts.Identity)
-return &tunnel.ImageEngine{ClientCxt: ctx}, err
+return &tunnel.ImageEngine{ClientCtx: ctx}, err
 }
 return nil, fmt.Errorf("runtime mode '%v' is not supported", facts.EngineMode)
 }
@@ -37,7 +37,7 @@ func NewContainerEngine(facts *entities.PodmanConfig) (entities.ContainerEngine,
 return nil, fmt.Errorf("direct runtime not supported")
 case entities.TunnelMode:
 ctx, err := newConnection(facts.URI, facts.Identity)
-return &tunnel.ContainerEngine{ClientCxt: ctx}, err
+return &tunnel.ContainerEngine{ClientCtx: ctx}, err
 }
 return nil, fmt.Errorf("runtime mode '%v' is not supported", facts.EngineMode)
 }
@@ -49,7 +49,7 @@ func NewImageEngine(facts *entities.PodmanConfig) (entities.ImageEngine, error)
 return nil, fmt.Errorf("direct image runtime not supported")
 case entities.TunnelMode:
 ctx, err := newConnection(facts.URI, facts.Identity)
-return &tunnel.ImageEngine{ClientCxt: ctx}, err
+return &tunnel.ImageEngine{ClientCtx: ctx}, err
 }
 return nil, fmt.Errorf("runtime mode '%v' is not supported", facts.EngineMode)
 }
@@ -30,12 +30,12 @@ func (ic *ContainerEngine) ContainerRunlabel(ctx context.Context, label string,
 }
 
 func (ic *ContainerEngine) ContainerExists(ctx context.Context, nameOrID string, options entities.ContainerExistsOptions) (*entities.BoolReport, error) {
-exists, err := containers.Exists(ic.ClientCxt, nameOrID, options.External)
+exists, err := containers.Exists(ic.ClientCtx, nameOrID, options.External)
 return &entities.BoolReport{Value: exists}, err
 }
 
 func (ic *ContainerEngine) ContainerWait(ctx context.Context, namesOrIds []string, opts entities.WaitOptions) ([]entities.WaitReport, error) {
-cons, err := getContainersByContext(ic.ClientCxt, false, false, namesOrIds)
+cons, err := getContainersByContext(ic.ClientCtx, false, false, namesOrIds)
 if err != nil {
 return nil, err
 }
@@ -43,7 +43,7 @@ func (ic *ContainerEngine) ContainerWait(ctx context.Context, namesOrIds []strin
 options := new(containers.WaitOptions).WithCondition(opts.Condition)
 for _, c := range cons {
 response := entities.WaitReport{Id: c.ID}
-exitCode, err := containers.Wait(ic.ClientCxt, c.ID, options)
+exitCode, err := containers.Wait(ic.ClientCtx, c.ID, options)
 if err != nil {
 response.Error = err
 } else {
@@ -55,26 +55,26 @@ func (ic *ContainerEngine) ContainerWait(ctx context.Context, namesOrIds []strin
 }
 
 func (ic *ContainerEngine) ContainerPause(ctx context.Context, namesOrIds []string, options entities.PauseUnPauseOptions) ([]*entities.PauseUnpauseReport, error) {
-ctrs, err := getContainersByContext(ic.ClientCxt, options.All, false, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, options.All, false, namesOrIds)
 if err != nil {
 return nil, err
 }
 reports := make([]*entities.PauseUnpauseReport, 0, len(ctrs))
 for _, c := range ctrs {
-err := containers.Pause(ic.ClientCxt, c.ID, nil)
+err := containers.Pause(ic.ClientCtx, c.ID, nil)
 reports = append(reports, &entities.PauseUnpauseReport{Id: c.ID, Err: err})
 }
 return reports, nil
 }
 
 func (ic *ContainerEngine) ContainerUnpause(ctx context.Context, namesOrIds []string, options entities.PauseUnPauseOptions) ([]*entities.PauseUnpauseReport, error) {
-ctrs, err := getContainersByContext(ic.ClientCxt, options.All, false, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, options.All, false, namesOrIds)
 if err != nil {
 return nil, err
 }
 reports := make([]*entities.PauseUnpauseReport, 0, len(ctrs))
 for _, c := range ctrs {
-err := containers.Unpause(ic.ClientCxt, c.ID, nil)
+err := containers.Unpause(ic.ClientCtx, c.ID, nil)
 reports = append(reports, &entities.PauseUnpauseReport{Id: c.ID, Err: err})
 }
 return reports, nil
@@ -90,7 +90,7 @@ func (ic *ContainerEngine) ContainerStop(ctx context.Context, namesOrIds []strin
 id := strings.Split(string(content), "\n")[0]
 namesOrIds = append(namesOrIds, id)
 }
-ctrs, err := getContainersByContext(ic.ClientCxt, opts.All, opts.Ignore, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, opts.All, opts.Ignore, namesOrIds)
 if err != nil {
 return nil, err
 }
@@ -100,7 +100,7 @@ func (ic *ContainerEngine) ContainerStop(ctx context.Context, namesOrIds []strin
 }
 for _, c := range ctrs {
 report := entities.StopReport{Id: c.ID}
-if err = containers.Stop(ic.ClientCxt, c.ID, options); err != nil {
+if err = containers.Stop(ic.ClientCtx, c.ID, options); err != nil {
 // These first two are considered non-fatal under the right conditions
 if errors.Cause(err).Error() == define.ErrCtrStopped.Error() {
 logrus.Debugf("Container %s is already stopped", c.ID)
@@ -125,7 +125,7 @@ func (ic *ContainerEngine) ContainerStop(ctx context.Context, namesOrIds []strin
 }
 
 func (ic *ContainerEngine) ContainerKill(ctx context.Context, namesOrIds []string, opts entities.KillOptions) ([]*entities.KillReport, error) {
-ctrs, err := getContainersByContext(ic.ClientCxt, opts.All, false, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, opts.All, false, namesOrIds)
 if err != nil {
 return nil, err
 }
@@ -133,7 +133,7 @@ func (ic *ContainerEngine) ContainerKill(ctx context.Context, namesOrIds []strin
 for _, c := range ctrs {
 reports = append(reports, &entities.KillReport{
 Id: c.ID,
-Err: containers.Kill(ic.ClientCxt, c.ID, opts.Signal, nil),
+Err: containers.Kill(ic.ClientCtx, c.ID, opts.Signal, nil),
 })
 }
 return reports, nil
@@ -147,7 +147,7 @@ func (ic *ContainerEngine) ContainerRestart(ctx context.Context, namesOrIds []st
 if to := opts.Timeout; to != nil {
 options.WithTimeout(int(*to))
 }
-ctrs, err := getContainersByContext(ic.ClientCxt, opts.All, false, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, opts.All, false, namesOrIds)
 if err != nil {
 return nil, err
 }
@@ -157,7 +157,7 @@ func (ic *ContainerEngine) ContainerRestart(ctx context.Context, namesOrIds []st
 }
 reports = append(reports, &entities.RestartReport{
 Id: c.ID,
-Err: containers.Restart(ic.ClientCxt, c.ID, options),
+Err: containers.Restart(ic.ClientCtx, c.ID, options),
 })
 }
 return reports, nil
@@ -172,7 +172,7 @@ func (ic *ContainerEngine) ContainerRm(ctx context.Context, namesOrIds []string,
 id := strings.Split(string(content), "\n")[0]
 namesOrIds = append(namesOrIds, id)
 }
-ctrs, err := getContainersByContext(ic.ClientCxt, opts.All, opts.Ignore, namesOrIds)
+ctrs, err := getContainersByContext(ic.ClientCtx, opts.All, opts.Ignore, namesOrIds)
 if err != nil {
 return nil, err
 }
@@ -182,7 +182,7 @@ func (ic *ContainerEngine) ContainerRm(ctx context.Context, namesOrIds []string,
 for _, c := range ctrs {
 reports = append(reports, &entities.RmReport{
 Id: c.ID,
-Err: containers.Remove(ic.ClientCxt, c.ID, options),
+Err: containers.Remove(ic.ClientCtx, c.ID, options),
 })
 }
 return reports, nil
@@ -190,7 +190,7 @@ func (ic *ContainerEngine) ContainerRm(ctx context.Context, namesOrIds []string,
 
 func (ic *ContainerEngine) ContainerPrune(ctx context.Context, opts entities.ContainerPruneOptions) (*entities.ContainerPruneReport, error) {
 options := new(containers.PruneOptions).WithFilters(opts.Filters)
-return containers.Prune(ic.ClientCxt, options)
+return containers.Prune(ic.ClientCtx, options)
 }
 
 func (ic *ContainerEngine) ContainerInspect(ctx context.Context, namesOrIds []string, opts entities.InspectOptions) ([]*entities.ContainerInspectReport, []error, error) {
@@ -200,7 +200,7 @@ func (ic *ContainerEngine) ContainerInspect(ctx context.Context, namesOrIds []st
 )
 options := new(containers.InspectOptions).WithSize(opts.Size)
 for _, name := range namesOrIds {
-inspect, err := containers.Inspect(ic.ClientCxt, name, options)
+inspect, err := containers.Inspect(ic.ClientCtx, name, options)
 if err != nil {
 errModel, ok := err.(entities.ErrorModel)
 if !ok {
@@ -225,7 +225,7 @@ func (ic *ContainerEngine) ContainerTop(ctx context.Context, opts entities.TopOp
 return nil, errors.New("NameOrID must be specified")
 }
 options := new(containers.TopOptions).WithDescriptors(opts.Descriptors)
-topOutput, err := containers.Top(ic.ClientCxt, opts.NameOrID, options)
+topOutput, err := containers.Top(ic.ClientCtx, opts.NameOrID, options)
 if err != nil {
 return nil, err
 }
@ -254,7 +254,7 @@ func (ic *ContainerEngine) ContainerCommit(ctx context.Context, nameOrID string,
|
|||
}
|
||||
options := new(containers.CommitOptions).WithAuthor(opts.Author).WithChanges(opts.Changes).WithComment(opts.Message)
|
||||
options.WithFormat(opts.Format).WithPause(opts.Pause).WithRepo(repo).WithTag(tag)
|
||||
response, err := containers.Commit(ic.ClientCxt, nameOrID, options)
|
||||
response, err := containers.Commit(ic.ClientCtx, nameOrID, options)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -272,7 +272,7 @@ func (ic *ContainerEngine) ContainerExport(ctx context.Context, nameOrID string,
|
|||
return err
|
||||
}
|
||||
}
|
||||
return containers.Export(ic.ClientCxt, nameOrID, w, nil)
|
||||
return containers.Export(ic.ClientCtx, nameOrID, w, nil)
|
||||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerCheckpoint(ctx context.Context, namesOrIds []string, opts entities.CheckpointOptions) ([]*entities.CheckpointReport, error) {
|
||||
|
@ -282,7 +282,7 @@ func (ic *ContainerEngine) ContainerCheckpoint(ctx context.Context, namesOrIds [
|
|||
)
|
||||
|
||||
if opts.All {
|
||||
allCtrs, err := getContainersByContext(ic.ClientCxt, true, false, []string{})
|
||||
allCtrs, err := getContainersByContext(ic.ClientCtx, true, false, []string{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -294,7 +294,7 @@ func (ic *ContainerEngine) ContainerCheckpoint(ctx context.Context, namesOrIds [
|
|||
}
|
||||
|
||||
} else {
|
||||
ctrs, err = getContainersByContext(ic.ClientCxt, false, false, namesOrIds)
|
||||
ctrs, err = getContainersByContext(ic.ClientCtx, false, false, namesOrIds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -303,7 +303,7 @@ func (ic *ContainerEngine) ContainerCheckpoint(ctx context.Context, namesOrIds [
|
|||
options := new(containers.CheckpointOptions).WithExport(opts.Export).WithIgnoreRootfs(opts.IgnoreRootFS).WithKeep(opts.Keep)
|
||||
options.WithLeaveRunning(opts.LeaveRunning).WithTCPEstablished(opts.TCPEstablished)
|
||||
for _, c := range ctrs {
|
||||
report, err := containers.Checkpoint(ic.ClientCxt, c.ID, options)
|
||||
report, err := containers.Checkpoint(ic.ClientCtx, c.ID, options)
|
||||
if err != nil {
|
||||
reports = append(reports, &entities.CheckpointReport{Id: c.ID, Err: err})
|
||||
}
|
||||
|
@ -318,7 +318,7 @@ func (ic *ContainerEngine) ContainerRestore(ctx context.Context, namesOrIds []st
|
|||
ctrs = []entities.ListContainer{}
|
||||
)
|
||||
if opts.All {
|
||||
allCtrs, err := getContainersByContext(ic.ClientCxt, true, false, []string{})
|
||||
allCtrs, err := getContainersByContext(ic.ClientCtx, true, false, []string{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -330,7 +330,7 @@ func (ic *ContainerEngine) ContainerRestore(ctx context.Context, namesOrIds []st
|
|||
}
|
||||
|
||||
} else {
|
||||
ctrs, err = getContainersByContext(ic.ClientCxt, false, false, namesOrIds)
|
||||
ctrs, err = getContainersByContext(ic.ClientCtx, false, false, namesOrIds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -338,7 +338,7 @@ func (ic *ContainerEngine) ContainerRestore(ctx context.Context, namesOrIds []st
|
|||
reports := make([]*entities.RestoreReport, 0, len(ctrs))
|
||||
options := new(containers.RestoreOptions)
|
||||
for _, c := range ctrs {
|
||||
report, err := containers.Restore(ic.ClientCxt, c.ID, options)
|
||||
report, err := containers.Restore(ic.ClientCtx, c.ID, options)
|
||||
if err != nil {
|
||||
reports = append(reports, &entities.RestoreReport{Id: c.ID, Err: err})
|
||||
}
|
||||
|
@ -348,7 +348,7 @@ func (ic *ContainerEngine) ContainerRestore(ctx context.Context, namesOrIds []st
|
|||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerCreate(ctx context.Context, s *specgen.SpecGenerator) (*entities.ContainerCreateReport, error) {
|
||||
response, err := containers.CreateWithSpec(ic.ClientCxt, s, nil)
|
||||
response, err := containers.CreateWithSpec(ic.ClientCtx, s, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -371,7 +371,7 @@ func (ic *ContainerEngine) ContainerLogs(_ context.Context, nameOrIDs []string,
|
|||
stderrCh := make(chan string)
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
go func() {
|
||||
err = containers.Logs(ic.ClientCxt, nameOrIDs[0], options, stdoutCh, stderrCh)
|
||||
err = containers.Logs(ic.ClientCtx, nameOrIDs[0], options, stdoutCh, stderrCh)
|
||||
cancel()
|
||||
}()
|
||||
|
||||
|
@ -392,7 +392,7 @@ func (ic *ContainerEngine) ContainerLogs(_ context.Context, nameOrIDs []string,
|
|||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerAttach(ctx context.Context, nameOrID string, opts entities.AttachOptions) error {
|
||||
ctrs, err := getContainersByContext(ic.ClientCxt, false, false, []string{nameOrID})
|
||||
ctrs, err := getContainersByContext(ic.ClientCtx, false, false, []string{nameOrID})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -401,7 +401,7 @@ func (ic *ContainerEngine) ContainerAttach(ctx context.Context, nameOrID string,
|
|||
return errors.Errorf("you can only attach to running containers")
|
||||
}
|
||||
options := new(containers.AttachOptions).WithStream(true).WithDetachKeys(opts.DetachKeys)
|
||||
return containers.Attach(ic.ClientCxt, nameOrID, opts.Stdin, opts.Stdout, opts.Stderr, nil, options)
|
||||
return containers.Attach(ic.ClientCtx, nameOrID, opts.Stdin, opts.Stdout, opts.Stderr, nil, options)
|
||||
}
|
||||
|
||||
func makeExecConfig(options entities.ExecOptions) *handlers.ExecCreateConfig {
|
||||
|
@ -429,7 +429,7 @@ func makeExecConfig(options entities.ExecOptions) *handlers.ExecCreateConfig {
|
|||
func (ic *ContainerEngine) ContainerExec(ctx context.Context, nameOrID string, options entities.ExecOptions, streams define.AttachStreams) (int, error) {
|
||||
createConfig := makeExecConfig(options)
|
||||
|
||||
sessionID, err := containers.ExecCreate(ic.ClientCxt, nameOrID, createConfig)
|
||||
sessionID, err := containers.ExecCreate(ic.ClientCtx, nameOrID, createConfig)
|
||||
if err != nil {
|
||||
return 125, err
|
||||
}
|
||||
|
@ -439,11 +439,11 @@ func (ic *ContainerEngine) ContainerExec(ctx context.Context, nameOrID string, o
|
|||
startAndAttachOptions.WithInputStream(*streams.InputStream)
|
||||
}
|
||||
startAndAttachOptions.WithAttachError(streams.AttachError).WithAttachOutput(streams.AttachOutput).WithAttachInput(streams.AttachInput)
|
||||
if err := containers.ExecStartAndAttach(ic.ClientCxt, sessionID, startAndAttachOptions); err != nil {
|
||||
if err := containers.ExecStartAndAttach(ic.ClientCtx, sessionID, startAndAttachOptions); err != nil {
|
||||
return 125, err
|
||||
}
|
||||
|
||||
inspectOut, err := containers.ExecInspect(ic.ClientCxt, sessionID, nil)
|
||||
inspectOut, err := containers.ExecInspect(ic.ClientCtx, sessionID, nil)
|
||||
if err != nil {
|
||||
return 125, err
|
||||
}
|
||||
|
@ -454,12 +454,12 @@ func (ic *ContainerEngine) ContainerExec(ctx context.Context, nameOrID string, o
|
|||
func (ic *ContainerEngine) ContainerExecDetached(ctx context.Context, nameOrID string, options entities.ExecOptions) (string, error) {
|
||||
createConfig := makeExecConfig(options)
|
||||
|
||||
sessionID, err := containers.ExecCreate(ic.ClientCxt, nameOrID, createConfig)
|
||||
sessionID, err := containers.ExecCreate(ic.ClientCtx, nameOrID, createConfig)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if err := containers.ExecStart(ic.ClientCxt, sessionID, nil); err != nil {
|
||||
if err := containers.ExecStart(ic.ClientCtx, sessionID, nil); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
|
@ -474,7 +474,7 @@ func startAndAttach(ic *ContainerEngine, name string, detachKeys *string, input,
|
|||
options.WithDetachKeys(*dk)
|
||||
}
|
||||
go func() {
|
||||
err := containers.Attach(ic.ClientCxt, name, input, output, errput, attachReady, options)
|
||||
err := containers.Attach(ic.ClientCtx, name, input, output, errput, attachReady, options)
|
||||
attachErr <- err
|
||||
}()
|
||||
// Wait for the attach to actually happen before starting
|
||||
|
@ -485,7 +485,7 @@ func startAndAttach(ic *ContainerEngine, name string, detachKeys *string, input,
|
|||
if dk := detachKeys; dk != nil {
|
||||
startOptions.WithDetachKeys(*dk)
|
||||
}
|
||||
if err := containers.Start(ic.ClientCxt, name, startOptions); err != nil {
|
||||
if err := containers.Start(ic.ClientCtx, name, startOptions); err != nil {
|
||||
return err
|
||||
}
|
||||
case err := <-attachErr:
|
||||
|
@ -498,7 +498,7 @@ func startAndAttach(ic *ContainerEngine, name string, detachKeys *string, input,
|
|||
func (ic *ContainerEngine) ContainerStart(ctx context.Context, namesOrIds []string, options entities.ContainerStartOptions) ([]*entities.ContainerStartReport, error) {
|
||||
reports := []*entities.ContainerStartReport{}
|
||||
var exitCode = define.ExecErrorCodeGeneric
|
||||
ctrs, err := getContainersByContext(ic.ClientCxt, false, false, namesOrIds)
|
||||
ctrs, err := getContainersByContext(ic.ClientCtx, false, false, namesOrIds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -535,14 +535,14 @@ func (ic *ContainerEngine) ContainerStart(ctx context.Context, namesOrIds []stri
|
|||
// Defer the removal, so we can return early if needed and
|
||||
// de-spaghetti the code.
|
||||
defer func() {
|
||||
shouldRestart, err := containers.ShouldRestart(ic.ClientCxt, ctr.ID, nil)
|
||||
shouldRestart, err := containers.ShouldRestart(ic.ClientCtx, ctr.ID, nil)
|
||||
if err != nil {
|
||||
logrus.Errorf("Failed to check if %s should restart: %v", ctr.ID, err)
|
||||
return
|
||||
}
|
||||
|
||||
if !shouldRestart {
|
||||
if err := containers.Remove(ic.ClientCxt, ctr.ID, removeOptions); err != nil {
|
||||
if err := containers.Remove(ic.ClientCtx, ctr.ID, removeOptions); err != nil {
|
||||
if errorhandling.Contains(err, define.ErrNoSuchCtr) ||
|
||||
errorhandling.Contains(err, define.ErrCtrRemoved) {
|
||||
logrus.Warnf("Container %s does not exist: %v", ctr.ID, err)
|
||||
|
@ -554,7 +554,7 @@ func (ic *ContainerEngine) ContainerStart(ctx context.Context, namesOrIds []stri
|
|||
}()
|
||||
}
|
||||
|
||||
exitCode, err := containers.Wait(ic.ClientCxt, name, nil)
|
||||
exitCode, err := containers.Wait(ic.ClientCtx, name, nil)
|
||||
if err == define.ErrNoSuchCtr {
|
||||
// Check events
|
||||
event, err := ic.GetLastContainerEvent(ctx, name, events.Exited)
|
||||
|
@ -573,11 +573,11 @@ func (ic *ContainerEngine) ContainerStart(ctx context.Context, namesOrIds []stri
|
|||
// Start the container if it's not running already.
|
||||
if !ctrRunning {
|
||||
|
||||
err = containers.Start(ic.ClientCxt, name, new(containers.StartOptions).WithDetachKeys(options.DetachKeys))
|
||||
err = containers.Start(ic.ClientCtx, name, new(containers.StartOptions).WithDetachKeys(options.DetachKeys))
|
||||
if err != nil {
|
||||
if ctr.AutoRemove {
|
||||
rmOptions := new(containers.RemoveOptions).WithForce(false).WithVolumes(true)
|
||||
if err := containers.Remove(ic.ClientCxt, ctr.ID, rmOptions); err != nil {
|
||||
if err := containers.Remove(ic.ClientCtx, ctr.ID, rmOptions); err != nil {
|
||||
if errorhandling.Contains(err, define.ErrNoSuchCtr) ||
|
||||
errorhandling.Contains(err, define.ErrCtrRemoved) {
|
||||
logrus.Warnf("Container %s does not exist: %v", ctr.ID, err)
|
||||
|
@ -601,11 +601,11 @@ func (ic *ContainerEngine) ContainerStart(ctx context.Context, namesOrIds []stri
|
|||
func (ic *ContainerEngine) ContainerList(ctx context.Context, opts entities.ContainerListOptions) ([]entities.ListContainer, error) {
|
||||
options := new(containers.ListOptions).WithFilters(opts.Filters).WithAll(opts.All).WithLast(opts.Last)
|
||||
options.WithNamespace(opts.Namespace).WithSize(opts.Size).WithSync(opts.Sync)
|
||||
return containers.List(ic.ClientCxt, options)
|
||||
return containers.List(ic.ClientCtx, options)
|
||||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.ContainerRunOptions) (*entities.ContainerRunReport, error) {
|
||||
con, err := containers.CreateWithSpec(ic.ClientCxt, opts.Spec, nil)
|
||||
con, err := containers.CreateWithSpec(ic.ClientCtx, opts.Spec, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -622,7 +622,7 @@ func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.Conta
|
|||
|
||||
if opts.Detach {
|
||||
// Detach and return early
|
||||
err := containers.Start(ic.ClientCxt, con.ID, nil)
|
||||
err := containers.Start(ic.ClientCtx, con.ID, nil)
|
||||
if err != nil {
|
||||
report.ExitCode = define.ExitCode(err)
|
||||
}
|
||||
|
@ -637,7 +637,7 @@ func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.Conta
|
|||
|
||||
report.ExitCode = define.ExitCode(err)
|
||||
if opts.Rm {
|
||||
if rmErr := containers.Remove(ic.ClientCxt, con.ID, new(containers.RemoveOptions).WithForce(false).WithVolumes(true)); rmErr != nil {
|
||||
if rmErr := containers.Remove(ic.ClientCtx, con.ID, new(containers.RemoveOptions).WithForce(false).WithVolumes(true)); rmErr != nil {
|
||||
logrus.Debugf("unable to remove container %s after failing to start and attach to it", con.ID)
|
||||
}
|
||||
}
|
||||
|
@ -648,14 +648,14 @@ func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.Conta
|
|||
// Defer the removal, so we can return early if needed and
|
||||
// de-spaghetti the code.
|
||||
defer func() {
|
||||
shouldRestart, err := containers.ShouldRestart(ic.ClientCxt, con.ID, nil)
|
||||
shouldRestart, err := containers.ShouldRestart(ic.ClientCtx, con.ID, nil)
|
||||
if err != nil {
|
||||
logrus.Errorf("Failed to check if %s should restart: %v", con.ID, err)
|
||||
return
|
||||
}
|
||||
|
||||
if !shouldRestart {
|
||||
if err := containers.Remove(ic.ClientCxt, con.ID, new(containers.RemoveOptions).WithForce(false).WithVolumes(true)); err != nil {
|
||||
if err := containers.Remove(ic.ClientCtx, con.ID, new(containers.RemoveOptions).WithForce(false).WithVolumes(true)); err != nil {
|
||||
if errorhandling.Contains(err, define.ErrNoSuchCtr) ||
|
||||
errorhandling.Contains(err, define.ErrCtrRemoved) {
|
||||
logrus.Warnf("Container %s does not exist: %v", con.ID, err)
|
||||
|
@ -668,7 +668,7 @@ func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.Conta
|
|||
}
|
||||
|
||||
// Wait
|
||||
exitCode, waitErr := containers.Wait(ic.ClientCxt, con.ID, nil)
|
||||
exitCode, waitErr := containers.Wait(ic.ClientCtx, con.ID, nil)
|
||||
if waitErr == nil {
|
||||
report.ExitCode = int(exitCode)
|
||||
return &report, nil
|
||||
|
@ -717,7 +717,7 @@ func (ic *ContainerEngine) ContainerRun(ctx context.Context, opts entities.Conta
|
|||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerDiff(ctx context.Context, nameOrID string, _ entities.DiffOptions) (*entities.DiffReport, error) {
|
||||
changes, err := containers.Diff(ic.ClientCxt, nameOrID, nil)
|
||||
changes, err := containers.Diff(ic.ClientCtx, nameOrID, nil)
|
||||
return &entities.DiffReport{Changes: changes}, err
|
||||
}
|
||||
|
||||
|
@ -726,13 +726,13 @@ func (ic *ContainerEngine) ContainerCleanup(ctx context.Context, namesOrIds []st
|
|||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerInit(ctx context.Context, namesOrIds []string, options entities.ContainerInitOptions) ([]*entities.ContainerInitReport, error) {
|
||||
ctrs, err := getContainersByContext(ic.ClientCxt, options.All, false, namesOrIds)
|
||||
ctrs, err := getContainersByContext(ic.ClientCtx, options.All, false, namesOrIds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
reports := make([]*entities.ContainerInitReport, 0, len(ctrs))
|
||||
for _, ctr := range ctrs {
|
||||
err := containers.ContainerInit(ic.ClientCxt, ctr.ID, nil)
|
||||
err := containers.ContainerInit(ic.ClientCtx, ctr.ID, nil)
|
||||
// When using all, it is NOT considered an error if a container
|
||||
// has already been init'd.
|
||||
if err != nil && options.All && strings.Contains(errors.Cause(err).Error(), define.ErrCtrStateInvalid.Error()) {
|
||||
|
@ -766,7 +766,7 @@ func (ic *ContainerEngine) ContainerPort(ctx context.Context, nameOrID string, o
|
|||
if len(nameOrID) > 0 {
|
||||
namesOrIds = append(namesOrIds, nameOrID)
|
||||
}
|
||||
ctrs, err := getContainersByContext(ic.ClientCxt, options.All, false, namesOrIds)
|
||||
ctrs, err := getContainersByContext(ic.ClientCtx, options.All, false, namesOrIds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -785,15 +785,15 @@ func (ic *ContainerEngine) ContainerPort(ctx context.Context, nameOrID string, o
|
|||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerCopyFromArchive(ctx context.Context, nameOrID string, path string, reader io.Reader) (entities.ContainerCopyFunc, error) {
|
||||
return containers.CopyFromArchive(ic.ClientCxt, nameOrID, path, reader)
|
||||
return containers.CopyFromArchive(ic.ClientCtx, nameOrID, path, reader)
|
||||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerCopyToArchive(ctx context.Context, nameOrID string, path string, writer io.Writer) (entities.ContainerCopyFunc, error) {
|
||||
return containers.CopyToArchive(ic.ClientCxt, nameOrID, path, writer)
|
||||
return containers.CopyToArchive(ic.ClientCtx, nameOrID, path, writer)
|
||||
}
|
||||
|
||||
func (ic *ContainerEngine) ContainerStat(ctx context.Context, nameOrID string, path string) (*entities.ContainerStatReport, error) {
|
||||
return containers.Stat(ic.ClientCxt, nameOrID, path)
|
||||
return containers.Stat(ic.ClientCtx, nameOrID, path)
|
||||
}
|
||||
|
||||
// Shutdown Libpod engine
|
||||
|
@ -804,10 +804,10 @@ func (ic *ContainerEngine) ContainerStats(ctx context.Context, namesOrIds []stri
|
|||
if options.Latest {
|
||||
return nil, errors.New("latest is not supported for the remote client")
|
||||
}
|
||||
return containers.Stats(ic.ClientCxt, namesOrIds, new(containers.StatsOptions).WithStream(options.Stream))
|
||||
return containers.Stats(ic.ClientCtx, namesOrIds, new(containers.StatsOptions).WithStream(options.Stream))
|
||||
}
|
||||
|
||||
// ShouldRestart reports back whether the containre will restart
|
||||
// ShouldRestart reports back whether the container will restart
|
||||
func (ic *ContainerEngine) ShouldRestart(_ context.Context, id string) (bool, error) {
|
||||
return containers.ShouldRestart(ic.ClientCxt, id, nil)
|
||||
return containers.ShouldRestart(ic.ClientCtx, id, nil)
|
||||
}
|
||||
|
|
|
@@ -29,7 +29,7 @@ func (ic *ContainerEngine) Events(ctx context.Context, opts entities.EventsOptio
 		close(opts.EventChan)
 	}()
 	options := new(system.EventsOptions).WithFilters(filters).WithSince(opts.Since).WithStream(opts.Stream).WithUntil(opts.Until)
-	return system.Events(ic.ClientCxt, binChan, nil, options)
+	return system.Events(ic.ClientCtx, binChan, nil, options)
 }
 
 // GetLastContainerEvent takes a container name or ID and an event status and returns
@@ -13,10 +13,10 @@ func (ic *ContainerEngine) GenerateSystemd(ctx context.Context, nameOrID string,
 	if to := opts.StopTimeout; to != nil {
 		options.WithStopTimeout(*opts.StopTimeout)
 	}
-	return generate.Systemd(ic.ClientCxt, nameOrID, options)
+	return generate.Systemd(ic.ClientCtx, nameOrID, options)
 }
 
 func (ic *ContainerEngine) GenerateKube(ctx context.Context, nameOrIDs []string, opts entities.GenerateKubeOptions) (*entities.GenerateKubeReport, error) {
 	options := new(generate.KubeOptions).WithService(opts.Service)
-	return generate.Kube(ic.ClientCxt, nameOrIDs, options)
+	return generate.Kube(ic.ClientCtx, nameOrIDs, options)
 }
@@ -9,5 +9,5 @@ import (
 )
 
 func (ic *ContainerEngine) HealthCheckRun(ctx context.Context, nameOrID string, options entities.HealthCheckOptions) (*define.HealthCheckResults, error) {
-	return containers.RunHealthCheck(ic.ClientCxt, nameOrID, nil)
+	return containers.RunHealthCheck(ic.ClientCtx, nameOrID, nil)
 }
@@ -22,13 +22,13 @@ import (
 )
 
 func (ir *ImageEngine) Exists(_ context.Context, nameOrID string) (*entities.BoolReport, error) {
-	found, err := images.Exists(ir.ClientCxt, nameOrID)
+	found, err := images.Exists(ir.ClientCtx, nameOrID)
 	return &entities.BoolReport{Value: found}, err
 }
 
 func (ir *ImageEngine) Remove(ctx context.Context, imagesArg []string, opts entities.ImageRemoveOptions) (*entities.ImageRemoveReport, []error) {
 	options := new(images.RemoveOptions).WithForce(opts.Force).WithAll(opts.All)
-	return images.Remove(ir.ClientCxt, imagesArg, options)
+	return images.Remove(ir.ClientCtx, imagesArg, options)
 }
 
 func (ir *ImageEngine) List(ctx context.Context, opts entities.ImageListOptions) ([]*entities.ImageSummary, error) {
@@ -39,7 +39,7 @@ func (ir *ImageEngine) List(ctx context.Context, opts entities.ImageListOptions)
 		filters[f[0]] = f[1:]
 	}
 	options := new(images.ListOptions).WithAll(opts.All).WithFilters(filters)
-	psImages, err := images.List(ir.ClientCxt, options)
+	psImages, err := images.List(ir.ClientCtx, options)
 	if err != nil {
 		return nil, err
 	}
@@ -65,7 +65,7 @@ func (ir *ImageEngine) Unmount(ctx context.Context, images []string, options ent
 
 func (ir *ImageEngine) History(ctx context.Context, nameOrID string, opts entities.ImageHistoryOptions) (*entities.ImageHistoryReport, error) {
 	options := new(images.HistoryOptions)
-	results, err := images.History(ir.ClientCxt, nameOrID, options)
+	results, err := images.History(ir.ClientCtx, nameOrID, options)
 	if err != nil {
 		return nil, err
 	}
@@ -97,7 +97,7 @@ func (ir *ImageEngine) Prune(ctx context.Context, opts entities.ImagePruneOption
 		filters[f[0]] = f[1:]
 	}
 	options := new(images.PruneOptions).WithAll(opts.All).WithFilters(filters)
-	results, err := images.Prune(ir.ClientCxt, options)
+	results, err := images.Prune(ir.ClientCtx, options)
 	if err != nil {
 		return nil, err
 	}
@@ -124,7 +124,7 @@ func (ir *ImageEngine) Pull(ctx context.Context, rawImage string, opts entities.
 		}
 	}
 	options.WithQuiet(opts.Quiet).WithSignaturePolicy(opts.SignaturePolicy).WithUsername(opts.Username)
-	pulledImages, err := images.Pull(ir.ClientCxt, rawImage, options)
+	pulledImages, err := images.Pull(ir.ClientCtx, rawImage, options)
 	if err != nil {
 		return nil, err
 	}
@@ -150,7 +150,7 @@ func (ir *ImageEngine) Tag(ctx context.Context, nameOrID string, tags []string,
 		if len(repo) < 1 {
 			return errors.Errorf("invalid image name %q", nameOrID)
 		}
-		if err := images.Tag(ir.ClientCxt, nameOrID, tag, repo, options); err != nil {
+		if err := images.Tag(ir.ClientCtx, nameOrID, tag, repo, options); err != nil {
 			return err
 		}
 	}
@@ -160,7 +160,7 @@ func (ir *ImageEngine) Tag(ctx context.Context, nameOrID string, tags []string,
 func (ir *ImageEngine) Untag(ctx context.Context, nameOrID string, tags []string, opt entities.ImageUntagOptions) error {
 	options := new(images.UntagOptions)
 	if len(tags) == 0 {
-		return images.Untag(ir.ClientCxt, nameOrID, "", "", options)
+		return images.Untag(ir.ClientCtx, nameOrID, "", "", options)
 	}
 
 	for _, newTag := range tags {
@@ -180,7 +180,7 @@ func (ir *ImageEngine) Untag(ctx context.Context, nameOrID string, tags []string
 		if len(repo) < 1 {
 			return errors.Errorf("invalid image name %q", nameOrID)
 		}
-		if err := images.Untag(ir.ClientCxt, nameOrID, tag, repo, options); err != nil {
+		if err := images.Untag(ir.ClientCtx, nameOrID, tag, repo, options); err != nil {
 			return err
 		}
 	}
@@ -192,7 +192,7 @@ func (ir *ImageEngine) Inspect(ctx context.Context, namesOrIDs []string, opts en
 	reports := []*entities.ImageInspectReport{}
 	errs := []error{}
 	for _, i := range namesOrIDs {
-		r, err := images.GetImage(ir.ClientCxt, i, options)
+		r, err := images.GetImage(ir.ClientCtx, i, options)
 		if err != nil {
 			errModel, ok := err.(entities.ErrorModel)
 			if !ok {
@@ -227,7 +227,7 @@ func (ir *ImageEngine) Load(ctx context.Context, opts entities.ImageLoadOptions)
 		ref += ":" + opts.Tag
 	}
 	options := new(images.LoadOptions).WithReference(ref)
-	return images.Load(ir.ClientCxt, f, options)
+	return images.Load(ir.ClientCtx, f, options)
 }
 
 func (ir *ImageEngine) Import(ctx context.Context, opts entities.ImageImportOptions) (*entities.ImageImportReport, error) {
@@ -244,7 +244,7 @@ func (ir *ImageEngine) Import(ctx context.Context, opts entities.ImageImportOpti
 			return nil, err
 		}
 	}
-	return images.Import(ir.ClientCxt, f, options)
+	return images.Import(ir.ClientCtx, f, options)
 }
 
 func (ir *ImageEngine) Push(ctx context.Context, source string, destination string, opts entities.ImagePushOptions) error {
@@ -261,7 +261,7 @@ func (ir *ImageEngine) Push(ctx context.Context, source string, destination stri
 			options.WithSkipTLSVerify(false)
 		}
 	}
-	return images.Push(ir.ClientCxt, source, destination, options)
+	return images.Push(ir.ClientCtx, source, destination, options)
 }
 
 func (ir *ImageEngine) Save(ctx context.Context, nameOrID string, tags []string, opts entities.ImageSaveOptions) error {
@@ -284,7 +284,7 @@ func (ir *ImageEngine) Save(ctx context.Context, nameOrID string, tags []string,
 		return err
 	}
 
-	exErr := images.Export(ir.ClientCxt, append([]string{nameOrID}, tags...), f, options)
+	exErr := images.Export(ir.ClientCtx, append([]string{nameOrID}, tags...), f, options)
 	if err := f.Close(); err != nil {
 		return err
 	}
@@ -319,7 +319,7 @@ func (ir *ImageEngine) Save(ctx context.Context, nameOrID string, tags []string,
 // Diff reports the changes to the given image
 func (ir *ImageEngine) Diff(ctx context.Context, nameOrID string, _ entities.DiffOptions) (*entities.DiffReport, error) {
 	options := new(images.DiffOptions)
-	changes, err := images.Diff(ir.ClientCxt, nameOrID, options)
+	changes, err := images.Diff(ir.ClientCtx, nameOrID, options)
 	if err != nil {
 		return nil, err
 	}
@@ -354,7 +354,7 @@ func (ir *ImageEngine) Search(ctx context.Context, term string, opts entities.Im
 			options.WithSkipTLSVerify(false)
 		}
 	}
-	return images.Search(ir.ClientCxt, term, options)
+	return images.Search(ir.ClientCtx, term, options)
 }
 
 func (ir *ImageEngine) Config(_ context.Context) (*config.Config, error) {
@@ -362,7 +362,7 @@ func (ir *ImageEngine) Config(_ context.Context) (*config.Config, error) {
 }
 
 func (ir *ImageEngine) Build(_ context.Context, containerFiles []string, opts entities.BuildOptions) (*entities.BuildReport, error) {
-	report, err := images.Build(ir.ClientCxt, containerFiles, opts)
+	report, err := images.Build(ir.ClientCtx, containerFiles, opts)
 	if err != nil {
 		return nil, err
 	}
@@ -382,7 +382,7 @@ func (ir *ImageEngine) Build(_ context.Context, containerFiles []string, opts en
 
 func (ir *ImageEngine) Tree(ctx context.Context, nameOrID string, opts entities.ImageTreeOptions) (*entities.ImageTreeReport, error) {
 	options := new(images.TreeOptions).WithWhatRequires(opts.WhatRequires)
-	return images.Tree(ir.ClientCxt, nameOrID, options)
+	return images.Tree(ir.ClientCtx, nameOrID, options)
 }
 
 // Shutdown Libpod engine
@@ -14,7 +14,7 @@ import (
 // ManifestCreate implements manifest create via ImageEngine
 func (ir *ImageEngine) ManifestCreate(ctx context.Context, names, images []string, opts entities.ManifestCreateOptions) (string, error) {
 	options := new(manifests.CreateOptions).WithAll(opts.All)
-	imageID, err := manifests.Create(ir.ClientCxt, names, images, options)
+	imageID, err := manifests.Create(ir.ClientCtx, names, images, options)
 	if err != nil {
 		return imageID, errors.Wrapf(err, "error creating manifest")
 	}
@@ -23,7 +23,7 @@ func (ir *ImageEngine) ManifestCreate(ctx context.Context, names, images []strin
 
 // ManifestInspect returns contents of manifest list with given name
 func (ir *ImageEngine) ManifestInspect(ctx context.Context, name string) ([]byte, error) {
-	list, err := manifests.Inspect(ir.ClientCxt, name, nil)
+	list, err := manifests.Inspect(ir.ClientCtx, name, nil)
 	if err != nil {
 		return nil, errors.Wrapf(err, "error getting content of manifest list or image %s", name)
 	}
@@ -51,7 +51,7 @@ func (ir *ImageEngine) ManifestAdd(ctx context.Context, opts entities.ManifestAd
 		options.WithAnnotation(annotations)
 	}
 
-	listID, err := manifests.Add(ir.ClientCxt, opts.Images[1], options)
+	listID, err := manifests.Add(ir.ClientCtx, opts.Images[1], options)
 	if err != nil {
 		return listID, errors.Wrapf(err, "error adding to manifest list %s", opts.Images[1])
 	}
@@ -65,7 +65,7 @@ func (ir *ImageEngine) ManifestAnnotate(ctx context.Context, names []string, opt
 
 // ManifestRemove removes the digest from manifest list
 func (ir *ImageEngine) ManifestRemove(ctx context.Context, names []string) (string, error) {
-	updatedListID, err := manifests.Remove(ir.ClientCxt, names[0], names[1], nil)
+	updatedListID, err := manifests.Remove(ir.ClientCtx, names[0], names[1], nil)
 	if err != nil {
 		return updatedListID, errors.Wrapf(err, "error removing from manifest %s", names[0])
 	}
@@ -75,6 +75,6 @@ func (ir *ImageEngine) ManifestRemove(ctx context.Context, names []string) (stri
 // ManifestPush pushes a manifest list or image index to the destination
 func (ir *ImageEngine) ManifestPush(ctx context.Context, name, destination string, opts entities.ManifestPushOptions) error {
 	options := new(manifests.PushOptions).WithAll(opts.All)
-	_, err := manifests.Push(ir.ClientCxt, name, destination, options)
+	_, err := manifests.Push(ir.ClientCtx, name, destination, options)
 	return err
 }
@@ -10,7 +10,7 @@ import (
 
 func (ic *ContainerEngine) NetworkList(ctx context.Context, opts entities.NetworkListOptions) ([]*entities.NetworkListReport, error) {
 	options := new(network.ListOptions).WithFilters(opts.Filters)
-	return network.List(ic.ClientCxt, options)
+	return network.List(ic.ClientCtx, options)
 }
 
 func (ic *ContainerEngine) NetworkInspect(ctx context.Context, namesOrIds []string, opts entities.InspectOptions) ([]entities.NetworkInspectReport, []error, error) {
@@ -20,7 +20,7 @@ func (ic *ContainerEngine) NetworkInspect(ctx context.Context, namesOrIds []stri
 	)
 	options := new(network.InspectOptions)
 	for _, name := range namesOrIds {
-		report, err := network.Inspect(ic.ClientCxt, name, options)
+		report, err := network.Inspect(ic.ClientCtx, name, options)
 		if err != nil {
 			errModel, ok := err.(entities.ErrorModel)
 			if !ok {
@@ -45,7 +45,7 @@ func (ic *ContainerEngine) NetworkRm(ctx context.Context, namesOrIds []string, o
 	reports := make([]*entities.NetworkRmReport, 0, len(namesOrIds))
 	options := new(network.RemoveOptions).WithForce(opts.Force)
 	for _, name := range namesOrIds {
-		response, err := network.Remove(ic.ClientCxt, name, options)
+		response, err := network.Remove(ic.ClientCtx, name, options)
 		if err != nil {
 			report := &entities.NetworkRmReport{
 				Name: name,
@@ -63,17 +63,17 @@ func (ic *ContainerEngine) NetworkCreate(ctx context.Context, name string, opts
 	options := new(network.CreateOptions).WithName(name).WithDisableDNS(opts.DisableDNS).WithDriver(opts.Driver).WithGateway(opts.Gateway)
 	options.WithInternal(opts.Internal).WithIPRange(opts.Range).WithIPv6(opts.IPv6).WithLabels(opts.Labels).WithIPv6(opts.IPv6)
 	options.WithMacVLAN(opts.MacVLAN).WithOptions(opts.Options).WithSubnet(opts.Subnet)
-	return network.Create(ic.ClientCxt, options)
+	return network.Create(ic.ClientCtx, options)
 }
 
 // NetworkDisconnect removes a container from a given network
 func (ic *ContainerEngine) NetworkDisconnect(ctx context.Context, networkname string, opts entities.NetworkDisconnectOptions) error {
 	options := new(network.DisconnectOptions).WithForce(opts.Force)
-	return network.Disconnect(ic.ClientCxt, networkname, opts.Container, options)
+	return network.Disconnect(ic.ClientCtx, networkname, opts.Container, options)
 }
 
 // NetworkConnect removes a container from a given network
 func (ic *ContainerEngine) NetworkConnect(ctx context.Context, networkname string, opts entities.NetworkConnectOptions) error {
 	options := new(network.ConnectOptions).WithAliases(opts.Aliases)
-	return network.Connect(ic.ClientCxt, networkname, opts.Container, options)
+	return network.Connect(ic.ClientCtx, networkname, opts.Container, options)
 }
@@ -19,5 +19,5 @@ func (ic *ContainerEngine) PlayKube(ctx context.Context, path string, opts entit
 	if start := opts.Start; start != types.OptionalBoolUndefined {
 		options.WithStart(start == types.OptionalBoolTrue)
 	}
-	return play.Kube(ic.ClientCxt, path, options)
+	return play.Kube(ic.ClientCtx, path, options)
 }

@@ -12,7 +12,7 @@ import (
 )

 func (ic *ContainerEngine) PodExists(ctx context.Context, nameOrID string) (*entities.BoolReport, error) {
-	exists, err := pods.Exists(ic.ClientCxt, nameOrID)
+	exists, err := pods.Exists(ic.ClientCtx, nameOrID)
 	return &entities.BoolReport{Value: exists}, err
 }

@@ -22,14 +22,14 @@ func (ic *ContainerEngine) PodKill(ctx context.Context, namesOrIds []string, opt
 		return nil, err
 	}

-	foundPods, err := getPodsByContext(ic.ClientCxt, opts.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, opts.All, namesOrIds)
 	if err != nil {
 		return nil, err
 	}
 	reports := make([]*entities.PodKillReport, 0, len(foundPods))
 	options := new(pods.KillOptions).WithSignal(opts.Signal)
 	for _, p := range foundPods {
-		response, err := pods.Kill(ic.ClientCxt, p.Id, options)
+		response, err := pods.Kill(ic.ClientCtx, p.Id, options)
 		if err != nil {
 			report := entities.PodKillReport{
 				Errs: []error{err},
@@ -44,13 +44,13 @@ func (ic *ContainerEngine) PodKill(ctx context.Context, namesOrIds []string, opt
 }

 func (ic *ContainerEngine) PodPause(ctx context.Context, namesOrIds []string, options entities.PodPauseOptions) ([]*entities.PodPauseReport, error) {
-	foundPods, err := getPodsByContext(ic.ClientCxt, options.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, options.All, namesOrIds)
 	if err != nil {
 		return nil, err
 	}
 	reports := make([]*entities.PodPauseReport, 0, len(foundPods))
 	for _, p := range foundPods {
-		response, err := pods.Pause(ic.ClientCxt, p.Id, nil)
+		response, err := pods.Pause(ic.ClientCtx, p.Id, nil)
 		if err != nil {
 			report := entities.PodPauseReport{
 				Errs: []error{err},
@@ -65,13 +65,13 @@ func (ic *ContainerEngine) PodPause(ctx context.Context, namesOrIds []string, op
 }

 func (ic *ContainerEngine) PodUnpause(ctx context.Context, namesOrIds []string, options entities.PodunpauseOptions) ([]*entities.PodUnpauseReport, error) {
-	foundPods, err := getPodsByContext(ic.ClientCxt, options.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, options.All, namesOrIds)
 	if err != nil {
 		return nil, err
 	}
 	reports := make([]*entities.PodUnpauseReport, 0, len(foundPods))
 	for _, p := range foundPods {
-		response, err := pods.Unpause(ic.ClientCxt, p.Id, nil)
+		response, err := pods.Unpause(ic.ClientCtx, p.Id, nil)
 		if err != nil {
 			report := entities.PodUnpauseReport{
 				Errs: []error{err},
@@ -87,7 +87,7 @@ func (ic *ContainerEngine) PodUnpause(ctx context.Context, namesOrIds []string,

 func (ic *ContainerEngine) PodStop(ctx context.Context, namesOrIds []string, opts entities.PodStopOptions) ([]*entities.PodStopReport, error) {
 	timeout := -1
-	foundPods, err := getPodsByContext(ic.ClientCxt, opts.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, opts.All, namesOrIds)
 	if err != nil && !(opts.Ignore && errors.Cause(err) == define.ErrNoSuchPod) {
 		return nil, err
 	}
@@ -97,7 +97,7 @@ func (ic *ContainerEngine) PodStop(ctx context.Context, namesOrIds []string, opt
 	reports := make([]*entities.PodStopReport, 0, len(foundPods))
 	options := new(pods.StopOptions).WithTimeout(timeout)
 	for _, p := range foundPods {
-		response, err := pods.Stop(ic.ClientCxt, p.Id, options)
+		response, err := pods.Stop(ic.ClientCtx, p.Id, options)
 		if err != nil {
 			report := entities.PodStopReport{
 				Errs: []error{err},
@@ -112,13 +112,13 @@ func (ic *ContainerEngine) PodStop(ctx context.Context, namesOrIds []string, opt
 }

 func (ic *ContainerEngine) PodRestart(ctx context.Context, namesOrIds []string, options entities.PodRestartOptions) ([]*entities.PodRestartReport, error) {
-	foundPods, err := getPodsByContext(ic.ClientCxt, options.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, options.All, namesOrIds)
 	if err != nil {
 		return nil, err
 	}
 	reports := make([]*entities.PodRestartReport, 0, len(foundPods))
 	for _, p := range foundPods {
-		response, err := pods.Restart(ic.ClientCxt, p.Id, nil)
+		response, err := pods.Restart(ic.ClientCtx, p.Id, nil)
 		if err != nil {
 			report := entities.PodRestartReport{
 				Errs: []error{err},
@@ -133,13 +133,13 @@ func (ic *ContainerEngine) PodRestart(ctx context.Context, namesOrIds []string,
 }

 func (ic *ContainerEngine) PodStart(ctx context.Context, namesOrIds []string, options entities.PodStartOptions) ([]*entities.PodStartReport, error) {
-	foundPods, err := getPodsByContext(ic.ClientCxt, options.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, options.All, namesOrIds)
 	if err != nil {
 		return nil, err
 	}
 	reports := make([]*entities.PodStartReport, 0, len(foundPods))
 	for _, p := range foundPods {
-		response, err := pods.Start(ic.ClientCxt, p.Id, nil)
+		response, err := pods.Start(ic.ClientCtx, p.Id, nil)
 		if err != nil {
 			report := entities.PodStartReport{
 				Errs: []error{err},
@@ -154,14 +154,14 @@ func (ic *ContainerEngine) PodStart(ctx context.Context, namesOrIds []string, op
 }

 func (ic *ContainerEngine) PodRm(ctx context.Context, namesOrIds []string, opts entities.PodRmOptions) ([]*entities.PodRmReport, error) {
-	foundPods, err := getPodsByContext(ic.ClientCxt, opts.All, namesOrIds)
+	foundPods, err := getPodsByContext(ic.ClientCtx, opts.All, namesOrIds)
 	if err != nil && !(opts.Ignore && errors.Cause(err) == define.ErrNoSuchPod) {
 		return nil, err
 	}
 	reports := make([]*entities.PodRmReport, 0, len(foundPods))
 	options := new(pods.RemoveOptions).WithForce(opts.Force)
 	for _, p := range foundPods {
-		response, err := pods.Remove(ic.ClientCxt, p.Id, options)
+		response, err := pods.Remove(ic.ClientCtx, p.Id, options)
 		if err != nil {
 			report := entities.PodRmReport{
 				Err: err,
@@ -176,13 +176,13 @@ func (ic *ContainerEngine) PodRm(ctx context.Context, namesOrIds []string, opts
 }

 func (ic *ContainerEngine) PodPrune(ctx context.Context, opts entities.PodPruneOptions) ([]*entities.PodPruneReport, error) {
-	return pods.Prune(ic.ClientCxt, nil)
+	return pods.Prune(ic.ClientCtx, nil)
 }

 func (ic *ContainerEngine) PodCreate(ctx context.Context, opts entities.PodCreateOptions) (*entities.PodCreateReport, error) {
 	podSpec := specgen.NewPodSpecGenerator()
 	opts.ToPodSpecGen(podSpec)
-	return pods.CreatePodFromSpec(ic.ClientCxt, podSpec, nil)
+	return pods.CreatePodFromSpec(ic.ClientCtx, podSpec, nil)
 }

 func (ic *ContainerEngine) PodTop(ctx context.Context, opts entities.PodTopOptions) (*entities.StringSliceReport, error) {
@@ -193,7 +193,7 @@ func (ic *ContainerEngine) PodTop(ctx context.Context, opts entities.PodTopOptio
 		return nil, errors.New("NameOrID must be specified")
 	}
 	options := new(pods.TopOptions).WithDescriptors(opts.Descriptors)
-	topOutput, err := pods.Top(ic.ClientCxt, opts.NameOrID, options)
+	topOutput, err := pods.Top(ic.ClientCtx, opts.NameOrID, options)
 	if err != nil {
 		return nil, err
 	}
@@ -202,7 +202,7 @@ func (ic *ContainerEngine) PodTop(ctx context.Context, opts entities.PodTopOptio

 func (ic *ContainerEngine) PodPs(ctx context.Context, opts entities.PodPSOptions) ([]*entities.ListPodsReport, error) {
 	options := new(pods.ListOptions).WithFilters(opts.Filters)
-	return pods.List(ic.ClientCxt, options)
+	return pods.List(ic.ClientCtx, options)
 }

 func (ic *ContainerEngine) PodInspect(ctx context.Context, options entities.PodInspectOptions) (*entities.PodInspectReport, error) {
@@ -212,10 +212,10 @@ func (ic *ContainerEngine) PodInspect(ctx context.Context, options entities.PodI
 	case options.NameOrID == "":
 		return nil, errors.New("NameOrID must be specified")
 	}
-	return pods.Inspect(ic.ClientCxt, options.NameOrID, nil)
+	return pods.Inspect(ic.ClientCtx, options.NameOrID, nil)
 }

 func (ic *ContainerEngine) PodStats(ctx context.Context, namesOrIds []string, opts entities.PodStatsOptions) ([]*entities.PodStatsReport, error) {
 	options := new(pods.StatsOptions).WithAll(opts.All)
-	return pods.Stats(ic.ClientCxt, namesOrIds, options)
+	return pods.Stats(ic.ClientCtx, namesOrIds, options)
 }

@@ -6,15 +6,15 @@ import (

 // Image-related runtime using an ssh-tunnel to utilize Podman service
 type ImageEngine struct {
-	ClientCxt context.Context
+	ClientCtx context.Context
 }

 // Container-related runtime using an ssh-tunnel to utilize Podman service
 type ContainerEngine struct {
-	ClientCxt context.Context
+	ClientCtx context.Context
 }

 // Container-related runtime using an ssh-tunnel to utilize Podman service
 type SystemEngine struct {
-	ClientCxt context.Context
+	ClientCtx context.Context
 }

@@ -11,7 +11,7 @@ import (
 )

 func (ic *ContainerEngine) Info(ctx context.Context) (*define.Info, error) {
-	return system.Info(ic.ClientCxt, nil)
+	return system.Info(ic.ClientCtx, nil)
 }

 func (ic *ContainerEngine) SetupRootless(_ context.Context, cmd *cobra.Command) error {
@@ -21,11 +21,11 @@ func (ic *ContainerEngine) SetupRootless(_ context.Context, cmd *cobra.Command)
 // SystemPrune prunes unused data from the system.
 func (ic *ContainerEngine) SystemPrune(ctx context.Context, opts entities.SystemPruneOptions) (*entities.SystemPruneReport, error) {
 	options := new(system.PruneOptions).WithAll(opts.All).WithVolumes(opts.Volume).WithFilters(opts.Filters)
-	return system.Prune(ic.ClientCxt, options)
+	return system.Prune(ic.ClientCtx, options)
 }

 func (ic *ContainerEngine) SystemDf(ctx context.Context, options entities.SystemDfOptions) (*entities.SystemDfReport, error) {
-	return system.DiskUsage(ic.ClientCxt, nil)
+	return system.DiskUsage(ic.ClientCtx, nil)
 }

 func (ic *ContainerEngine) Unshare(ctx context.Context, args []string) error {
@@ -33,5 +33,5 @@ func (ic *ContainerEngine) Unshare(ctx context.Context, args []string) error {
 }

 func (ic ContainerEngine) Version(ctx context.Context) (*entities.SystemVersionReport, error) {
-	return system.Version(ic.ClientCxt, nil)
+	return system.Version(ic.ClientCtx, nil)
 }

@@ -9,7 +9,7 @@ import (
 )

 func (ic *ContainerEngine) VolumeCreate(ctx context.Context, opts entities.VolumeCreateOptions) (*entities.IDOrNameResponse, error) {
-	response, err := volumes.Create(ic.ClientCxt, opts, nil)
+	response, err := volumes.Create(ic.ClientCtx, opts, nil)
 	if err != nil {
 		return nil, err
 	}
@@ -18,7 +18,7 @@ func (ic *ContainerEngine) VolumeCreate(ctx context.Context, opts entities.Volum

 func (ic *ContainerEngine) VolumeRm(ctx context.Context, namesOrIds []string, opts entities.VolumeRmOptions) ([]*entities.VolumeRmReport, error) {
 	if opts.All {
-		vols, err := volumes.List(ic.ClientCxt, nil)
+		vols, err := volumes.List(ic.ClientCtx, nil)
 		if err != nil {
 			return nil, err
 		}
@@ -30,7 +30,7 @@ func (ic *ContainerEngine) VolumeRm(ctx context.Context, namesOrIds []string, op
 	for _, id := range namesOrIds {
 		options := new(volumes.RemoveOptions).WithForce(opts.Force)
 		reports = append(reports, &entities.VolumeRmReport{
-			Err: volumes.Remove(ic.ClientCxt, id, options),
+			Err: volumes.Remove(ic.ClientCtx, id, options),
 			Id: id,
 		})
 	}
@@ -43,7 +43,7 @@ func (ic *ContainerEngine) VolumeInspect(ctx context.Context, namesOrIds []strin
 		errs = []error{}
 	)
 	if opts.All {
-		vols, err := volumes.List(ic.ClientCxt, nil)
+		vols, err := volumes.List(ic.ClientCtx, nil)
 		if err != nil {
 			return nil, nil, err
 		}
@@ -52,7 +52,7 @@ func (ic *ContainerEngine) VolumeInspect(ctx context.Context, namesOrIds []strin
 		}
 	}
 	for _, id := range namesOrIds {
-		data, err := volumes.Inspect(ic.ClientCxt, id, nil)
+		data, err := volumes.Inspect(ic.ClientCtx, id, nil)
 		if err != nil {
 			errModel, ok := err.(entities.ErrorModel)
 			if !ok {
@@ -71,10 +71,10 @@ func (ic *ContainerEngine) VolumeInspect(ctx context.Context, namesOrIds []strin

 func (ic *ContainerEngine) VolumePrune(ctx context.Context, opts entities.VolumePruneOptions) ([]*entities.VolumePruneReport, error) {
 	options := new(volumes.PruneOptions).WithFilters(opts.Filters)
-	return volumes.Prune(ic.ClientCxt, options)
+	return volumes.Prune(ic.ClientCtx, options)
 }

 func (ic *ContainerEngine) VolumeList(ctx context.Context, opts entities.VolumeListOptions) ([]*entities.VolumeListReport, error) {
 	options := new(volumes.ListOptions).WithFilters(opts.Filter)
-	return volumes.List(ic.ClientCxt, options)
+	return volumes.List(ic.ClientCtx, options)
 }

@@ -9,7 +9,7 @@ This can cause some performance issues.
 Also a lot of hooks just check if certain configuration is set and then exit early, without doing anything.
 For example the [oci-systemd-hook][] only executes if the command is `init` or `systemd`, otherwise it just exits.
 This means if we automatically enabled all hooks, every container would have to execute `oci-systemd-hook`, even if they don't run systemd inside of the container.
-Performance would also suffer if we exectuted each hook at each stage ([pre-start][], [post-start][], and [post-stop][]).
+Performance would also suffer if we executed each hook at each stage ([pre-start][], [post-start][], and [post-stop][]).

 The hooks configuration is documented in [`oci-hooks.5`](docs/oci-hooks.5.md).

@@ -1,4 +1,4 @@
-// Package exec provides utilities for executing Open Container Initative runtime hooks.
+// Package exec provides utilities for executing Open Container Initiative runtime hooks.
 package exec

 import (

@@ -46,7 +46,7 @@ type namedHook struct {
 //
 // extensionStages allows callers to add additional stages beyond
 // those specified in the OCI Runtime Specification and to control
-// OCI-defined stages instead of delagating to the OCI runtime. See
+// OCI-defined stages instead of delegating to the OCI runtime. See
 // Hooks() for more information.
 func New(ctx context.Context, directories []string, extensionStages []string) (manager *Manager, err error) {
 	manager = &Manager{

@@ -162,7 +162,7 @@ func NewNS() (ns.NetNS, error) {
 	// bind mount the netns from the current thread (from /proc) onto the
 	// mount point. This causes the namespace to persist, even when there
 	// are no threads in the ns. Make this a shared mount; it needs to be
-	// back-propogated to the host
+	// back-propagated to the host
 	err = unix.Mount(threadNsPath, nsPath, "none", unix.MS_BIND|unix.MS_SHARED|unix.MS_REC, "")
 	if err != nil {
 		err = fmt.Errorf("failed to bind mount ns at %s: %v", nsPath, err)

@@ -257,7 +257,7 @@ func becomeRootInUserNS(pausePid, fileToRead string, fileOutput *os.File) (_ boo
 		uidsMapped = err == nil
 	}
 	if !uidsMapped {
-		logrus.Warnf("using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids")
+		logrus.Warnf("using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids")
 		setgroups := fmt.Sprintf("/proc/%d/setgroups", pid)
 		err = ioutil.WriteFile(setgroups, []byte("deny\n"), 0666)
 		if err != nil {

@@ -30,7 +30,7 @@ var supportedPolicies = map[string]Policy{
 	"image": PolicyImage,
 }

-// LookupPolicy looksup the corresponding Policy for the specified
+// LookupPolicy looks up the corresponding Policy for the specified
 // string. If none is found, an errors is returned including the list of
 // supported policies.
 //

@@ -163,7 +163,7 @@ func CompleteSpec(ctx context.Context, r *libpod.Runtime, s *specgen.SpecGenerat
 		return nil, err
 	}

-	// labels from the image that dont exist already
+	// labels from the image that don't exist already
 	if len(labels) > 0 && s.Labels == nil {
 		s.Labels = make(map[string]string)
 	}

@@ -104,7 +104,7 @@ func ToSpecGen(ctx context.Context, containerYAML v1.Container, iid string, newI
 		s.ResourceLimits.Memory.Reservation = &memoryRes
 	}

-	// TODO: We dont understand why specgen does not take of this, but
+	// TODO: We don't understand why specgen does not take of this, but
 	// integration tests clearly pointed out that it was required.
 	s.Command = []string{}
 	imageData, err := newImage.Inspect(ctx)

@@ -319,7 +319,7 @@ func SpecGenToOCI(ctx context.Context, s *specgen.SpecGenerator, rt *libpod.Runt
 	}

 	// BIND MOUNTS
-	configSpec.Mounts = SupercedeUserMounts(mounts, configSpec.Mounts)
+	configSpec.Mounts = SupersedeUserMounts(mounts, configSpec.Mounts)
 	// Process mounts to ensure correct options
 	if err := InitFSMounts(configSpec.Mounts); err != nil {
 		return nil, err

@@ -115,7 +115,7 @@ func securityConfigureGenerator(s *specgen.SpecGenerator, g *generate.Generator,
 		if err != nil {
 			return errors.Wrapf(err, "capabilities requested by user or image are not valid: %q", strings.Join(capsRequired, ","))
 		} else {
-			// Verify all capRequiered are in the capList
+			// Verify all capRequired are in the capList
 			for _, cap := range capsRequired {
 				if !util.StringInSlice(cap, caplist) {
 					privCapsRequired = append(privCapsRequired, cap)

@@ -366,7 +366,7 @@ func addContainerInitBinary(s *specgen.SpecGenerator, path string) (spec.Mount,
 // TODO: Should we unmount subtree mounts? E.g., if /tmp/ is mounted by
 // one mount, and we already have /tmp/a and /tmp/b, should we remove
 // the /tmp/a and /tmp/b mounts in favor of the more general /tmp?
-func SupercedeUserMounts(mounts []spec.Mount, configMount []spec.Mount) []spec.Mount {
+func SupersedeUserMounts(mounts []spec.Mount, configMount []spec.Mount) []spec.Mount {
 	if len(mounts) > 0 {
 		// If we have overlappings mounts, remove them from the spec in favor of
 		// the user-added volume mounts

@@ -48,7 +48,7 @@ func (p *PodSpecGenerator) Validate() error {
 	}
 	if p.NoInfra {
 		if p.NetNS.NSMode != Default && p.NetNS.NSMode != "" {
-			return errors.New("NoInfra and network modes cannot be used toegther")
+			return errors.New("NoInfra and network modes cannot be used together")
 		}
 		if p.StaticIP != nil {
 			return exclusivePodOptions("NoInfra", "StaticIP")

@@ -19,7 +19,7 @@ type LogConfig struct {
 	// Only available if LogDriver is set to "json-file" or "k8s-file".
 	// Optional.
 	Path string `json:"path,omitempty"`
-	// Size is the maximimup size of the log file
+	// Size is the maximum size of the log file
 	// Optional.
 	Size int64 `json:"size,omitempty"`
 	// A set of options to accompany the log driver.
@@ -302,7 +302,7 @@ type ContainerSecurityConfig struct {
 	IDMappings *storage.IDMappingOptions `json:"idmappings,omitempty"`
 	// ReadOnlyFilesystem indicates that everything will be mounted
 	// as read-only
-	ReadOnlyFilesystem bool `json:"read_only_filesystem,omittempty"`
+	ReadOnlyFilesystem bool `json:"read_only_filesystem,omitempty"`
 	// Umask is the umask the init process of the container will be run with.
 	Umask string `json:"umask,omitempty"`
 	// ProcOpts are the options used for the proc mount.

@@ -191,7 +191,7 @@ func executeContainerTemplate(info *containerInfo, options entities.GenerateSyst
 		return "", errors.Errorf("container's create command is too short or invalid: %v", info.CreateCommand)
 	}
 	// We're hard-coding the first five arguments and append the
-	// CreateCommand with a stripped command and subcomand.
+	// CreateCommand with a stripped command and subcommand.
 	startCommand := []string{
 		info.Executable,
 		"run",
@@ -241,7 +241,7 @@ func executeContainerTemplate(info *containerInfo, options entities.GenerateSyst
 	}
 	if hasNameParam && !hasReplaceParam {
 		// Enforce --replace for named containers. This will
-		// make systemd units more robuts as it allows them to
+		// make systemd units more robust as it allows them to
 		// start after system crashes (see
 		// github.com/containers/podman/issues/5485).
 		startCommand = append(startCommand, "--replace")

@@ -266,7 +266,7 @@ func executePodTemplate(info *podInfo, options entities.GenerateSystemdOptions)
 		podCreateArgs = filterPodFlags(info.CreateCommand[podCreateIndex+1:])
 	}
 	// We're hard-coding the first five arguments and append the
-	// CreateCommand with a stripped command and subcomand.
+	// CreateCommand with a stripped command and subcommand.
 	startCommand := []string{info.Executable}
 	startCommand = append(startCommand, podRootArgs...)
 	startCommand = append(startCommand,

@@ -114,7 +114,7 @@ func GetRootlessPauseProcessPidPath() (string, error) {
 // files.
 func GetRootlessPauseProcessPidPathGivenDir(libpodTmpDir string) (string, error) {
 	if libpodTmpDir == "" {
-		return "", errors.Errorf("must provide non-empty tmporary directory")
+		return "", errors.Errorf("must provide non-empty temporary directory")
 	}
 	return filepath.Join(libpodTmpDir, "pause.pid"), nil
 }

@@ -18,7 +18,7 @@ can easily fail
 * Some system unit configuration options do not work in the rootless container
 * systemd fails to apply several options and failures are silently ignored (e.g. CPUShares, MemoryLimit). Should work on cgroup V2.
 * Use of certain options will cause service startup failures (e.g. PrivateNetwork). The systemd services requiring `PrivateNetwork` can be made to work by passing `--cap-add SYS_ADMIN`, but the security implications should be carefully evaluated. In most cases, it's better to create an override.conf drop-in that sets `PrivateNetwork=no`. This also applies to containers run by root.
-* Can not share container images with CRI-O or other rootfull users
+* Can not share container images with CRI-O or other rootful users
 * Difficult to use additional stores for sharing content
 * Does not work on NFS or parallel filesystem homedirs (e.g. [GPFS](https://www.ibm.com/support/knowledgecenter/en/SSFKCN/gpfs_welcome.html))
 * NFS and parallel filesystems enforce file creation on different UIDs on the server side and does not understand User Namespace.