Currently, if the user installs runc in an alternative path,
podman run uses it but podman build does not.
This patch passes the default OCI runtime used by podman
down to the image builder.
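For example (the runtime path and image name are illustrative), the same
alternative runtime is now honored by both commands:
$ podman --runtime /usr/local/sbin/runc run --rm docker.io/library/alpine true
$ podman --runtime /usr/local/sbin/runc build -t myimage .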
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We had two problems with /dev/shm. First, if you mounted the
container read-only, then /dev/shm was also mounted read-only.
This is a bug: a tmpfs directory should be read/write within
a read-only container.
The second problem is that we were ignoring a user-mounted /dev/shm
from the host.
If the user specified
podman run -d -v /dev/shm:/dev/shm ...
we were dropping this mount and still using the internal mount.
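For example, a tmpfs /dev/shm should now be writable even in a read-only
container:
$ podman run --rm --read-only docker.io/library/alpine touch /dev/shm/scratch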
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
podman play kube adds the ability for the user to recreate pods and containers
from a Kubernetes YAML file in libpod.
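A usage sketch (the YAML file name is illustrative):
$ podman play kube ./my-pod.yaml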
Signed-off-by: baude <bbaude@redhat.com>
Display the trust policy of the host system. The trust policy is stored in the /etc/containers/policy.json file and defines trust scopes for registries or repositories.
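A hedged usage sketch; the subcommand name is assumed here:
$ podman image trust show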
Signed-off-by: Qi Wang <qiwan@redhat.com>
This will allow container processes to write to the CRIU socket that gets injected
into the container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When sharing a network namespace, containers should also share
resolv.conf and /etc/hosts in case a container process made
changes to either (for example, if I set up a VPN client in
container A and join container B to its network namespace, I
expect container B to use the DNS servers from A to ensure it can
see everything on the VPN).
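A sketch of that scenario (the VPN client image name is illustrative):
$ podman run -d --name ctrA my-vpn-client-image
$ podman run --rm --net container:ctrA docker.io/library/alpine cat /etc/resolv.conf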
Resolves: #1546
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of forcing another user lookup when mounting image
volumes, just use the information we looked up when we started
generating the spec.
This may resolve #1817
Signed-off-by: Matthew Heon <mheon@redhat.com>
Using the default capabilities, we can determine which caps were
added and dropped. They are now added to the security context structure.
Signed-off-by: baude <bbaude@redhat.com>
If one of the storage GraphRoot or RunRoot is specified, but the
other is not, c/storage will not use the default for the missing one,
and will throw an error instead. Ensure that in cases where this would
happen, we populate the fields with the c/storage defaults ourselves.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Like podman stop of containers, we should allow the user to specify
a timeout override when stopping pods; otherwise they have to wait
the full timeout specified during pod/container creation.
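A hedged usage sketch (the flag name is assumed to mirror podman stop):
$ podman pod stop -t 5 mypod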
Signed-off-by: baude <bbaude@redhat.com>
i.e. actually reflect the environment variable and/or rootless mode
instead of always using the default path.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
DockerRegistryOptions.DockerInsecureSkipTLSVerify as a types.OptionalBool
can now represent that value, so forceSecure is redundant.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
DockerRegistryOptions.DockerInsecureSkipTLSVerify as a types.OptionalBool
can now represent that value, so forceSecure is redundant.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Following SystemContext.DockerInsecureSkipTLSVerify, make the
DockerRegistryOptions field an OptionalBool as well, and update callers.
Explicitly document that --tls-verify=true and --tls-verify unset
have different behavior in those commands where the behavior changed
(or where it hasn't changed but the documentation needed updating).
Also make the --tls-verify man page sections a tiny bit more consistent
throughout.
This is a minimal fix that changes neither the existing "--tls-verify=true"
paths nor the existing manual insecure registry lookups.
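A hedged illustration of the distinction (the registry name is hypothetical):
$ podman pull --tls-verify=true myregistry.local:5000/myimage   # explicitly set: always verify TLS
$ podman pull myregistry.local:5000/myimage                     # unset: defer to the registry configuration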
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Containers inside pods need to make sure they get /etc/resolv.conf
and /etc/hosts bind-mounted when a network is expected.
Signed-off-by: baude <bbaude@redhat.com>
Previously this was not needed, as it only worked inside of Batch(),
but now that it can be called anywhere, we need to add mutual
exclusion on its config changes.
Signed-off-by: Matthew Heon <mheon@redhat.com>
With the changes made recently to ensure Podman does not hit the
OCI runtime as often to sync state, we can find ourselves in a
situation where the runtime's state does not match ours.
Add a --sync flag to podman rm to ensure we can still remove
containers when this happens.
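For example (the container name is illustrative):
$ podman rm --sync mycontainer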
Signed-off-by: Matthew Heon <mheon@redhat.com>
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only handles
named volumes, but eventually it will handle all volumes used by podman.
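A usage sketch (the volume name is illustrative):
$ podman volume create myvol
$ podman volume ls
$ podman volume inspect myvol
$ podman volume rm myvol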
Signed-off-by: umohnani8 <umohnani@redhat.com>
Allow the user to prune unused/unnamed images (the layer images left
over from building) via podman rmi --prune.
Allow the user to prune stopped/exited containers via podman rm --prune.
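For example:
$ podman rmi --prune
$ podman rm --prune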
This should resolve #1910
Signed-off-by: baude <bbaude@redhat.com>
When an image config sets config.User [1] to a numeric group (like
1000:1000), but those values do not exist in the container's
/etc/group, libpod is currently breaking:
$ podman run --rm registry.svc.ci.openshift.org/ci-op-zvml7cd6/pipeline:installer --help
error creating temporary passwd file for container 228f6e9943d6f18b93c19644e9b619ec4d459a3e0eb31680e064eeedf6473678: unable to get gid 1000 from group file: no matching entries in group file
However, the OCI spec requires converters to copy numeric uid and gid
to the runtime config verbatim [2].
With this commit, I'm frontloading the "is groupspec an integer?"
check and only bothering with lookup.GetGroup when it is not.
I've also removed a few .Mounted checks, which are originally from
00d38cb3 (podman create/run need to load information from the image,
2017-12-18, #110). We don't need a mounted container filesystem to
translate integers. And when the lookup code needs to fall back to
the mounted root to translate names, it can handle erroring out
internally (and looking it over, it seems to do that already).
[1]: https://github.com/opencontainers/image-spec/blame/v1.0.1/config.md#L118-L123
[2]: https://github.com/opencontainers/image-spec/blame/v1.0.1/conversion.md#L70
Signed-off-by: W. Trevor King <wking@tremily.us>
We don't use CNI to configure networks for rootless containers,
so no need to set it up. It may also cause issues with inotify,
so disabling it resolves some potential problems.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of storing the runtime's file lock dir in the BoltDB
state, refer to the runtime inside the Bolt state instead, and
use the path stored in the runtime.
This is necessary since we moved DB initialization very far up in
runtime init, before the locks dir is properly initialized (and
it must happen before the locks dir can be created, as we use the
DB to retrieve the proper path for the locks dir now).
Signed-off-by: Matthew Heon <mheon@redhat.com>
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:
> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.
But we'd also been disabling hooks completely for rootless users. And
even for root users, the override logic was tricky when folks actually
had content in both directories. For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.
Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:
$ podman version
Version: 0.11.2-dev
Go Version: go1.10.3
Git Commit: "6df7409cb5a41c710164c42ed35e33b28f3f7214"
Built: Sun Dec 2 21:30:06 2018
OS/Arch: linux/amd64
$ ls -l /etc/containers/oci/hooks.d/test.json
-rw-r--r--. 1 root root 184 Dec 2 16:27 /etc/containers/oci/hooks.d/test.json
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"
With this commit:
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""
(I'd set up the hook to error out). You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.
When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release. When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:
$ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
error setting up OCI Hooks: open /does/not/exist: no such file or directory
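For comparison, a hedged sketch of the explicit configuration the
deprecation warning above suggests, using the directory from the
earlier example:
$ podman --hooks-dir /etc/containers/oci/hooks.d run --rm docker.io/library/alpine echo 'successful container'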
I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument". I consider this name change
non-breaking because the old forms were undocumented.
Coming back to rootless users, I've enabled hooks now. I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions. But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access. We error out so they can fix their
libpod.conf.
[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355
Signed-off-by: W. Trevor King <wking@tremily.us>
We don't need this for anything more than rootless work in Libpod
now, but Buildah still uses it as it was originally written, so
leave it intact as part of our API.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When graphroot is set by the user, we should set libpod's static
directory to a subdirectory of that by default, to duplicate
previous behavior.
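A hedged illustration, assuming the graphroot is set via the global
--root flag; the static directory then defaults to a subdirectory of
that root rather than the stock location:
$ podman --root /var/lib/mystorage info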
Signed-off-by: Matthew Heon <mheon@redhat.com>
Ensure that the directory where we will create the Podman db
exists prior to creating the database - otherwise creating the DB
will fail.
Signed-off-by: Matthew Heon <mheon@redhat.com>