mirror of https://github.com/docker/docs.git

commit cb2280c98e

Merge branch 'master' of github.com:docker/docker into joyentinstall

Resolved conflict in the following file: docs/sources/installation/MAINTAINERS
File was deleted upstream and changed in this branch. Deleting the file in this
branch as well.

Signed-off-by: Casey Bisson <casey.bisson@joyent.com>
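The resolution described in the commit message (a modify/delete conflict, resolved by deleting the file and signing off the merge) can be sketched as follows. This is a hypothetical reproduction: the repository layout and file contents are invented for illustration, only the branch name and sign-off identity come from the commit above.

```shell
# Hypothetical reproduction of the modify/delete conflict resolution
# described in the commit message; repo contents are invented.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.name "Casey Bisson"
git config user.email "casey.bisson@joyent.com"

# Base commit with the file that will later conflict.
echo "maintainers" > MAINTAINERS
git add MAINTAINERS
git commit -qm "Add MAINTAINERS"
git branch -M master

# The feature branch modifies the file...
git checkout -qb joyentinstall
echo "edit" >> MAINTAINERS
git commit -qam "Edit MAINTAINERS on joyentinstall"

# ...while master deletes it.
git checkout -q master
git rm -q MAINTAINERS
git commit -qm "Delete MAINTAINERS upstream"

# Merging master now raises a modify/delete conflict.
git checkout -q joyentinstall
git merge master >/dev/null 2>&1 || true

# Resolve by deleting the file (matching upstream) and sign off the merge.
git rm -q MAINTAINERS
git commit -qs -m "Merge branch 'master' into joyentinstall"
git log -1 --format=%B
```

The final `git log` output ends with the `Signed-off-by:` trailer added by `git commit -s`.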
.mailmap (2 changes)

@@ -1,4 +1,4 @@
-# Generate AUTHORS: project/generate-authors.sh
+# Generate AUTHORS: hack/generate-authors.sh
 
 # Tip for finding duplicates (besides scanning the output of AUTHORS for name
 # duplicates that aren't also email duplicates): scan the output of:
AUTHORS (2 changes)

@@ -1,5 +1,5 @@
 # This file lists all individuals having contributed content to the repository.
-# For how it is generated, see `project/generate-authors.sh`.
+# For how it is generated, see `hack/generate-authors.sh`.
 
 Aanand Prasad <aanand.prasad@gmail.com>
 Aaron Feng <aaron.feng@gmail.com>
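The diff above repoints AUTHORS generation at `hack/generate-authors.sh`. As a rough sketch of what such a script does (an assumption — the real script is more elaborate and also honors `.mailmap`), an AUTHORS-style list can be derived from commit history like this; the commits are invented, the names are taken from the diff:

```shell
# Sketch of generating an AUTHORS-style list from git history.
# Assumption: the real hack/generate-authors.sh also applies .mailmap;
# the commits below are invented for illustration.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.name "Aanand Prasad"
git config user.email "aanand.prasad@gmail.com"
echo one > file.txt
git add file.txt
git commit -qm "First change"
git config user.name "Aaron Feng"
git config user.email "aaron.feng@gmail.com"
echo two >> file.txt
git commit -qam "Second change"

# One "Name <email>" line per distinct author, sorted case-insensitively.
git log --format='%aN <%aE>' | sort -uf > AUTHORS
cat AUTHORS
```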
CONTRIBUTING.md (315 changes)

@@ -1,70 +1,60 @@
 # Contributing to Docker
 
-Want to hack on Docker? Awesome! Here are instructions to get you
-started. They are probably not perfect; please let us know if anything
-feels wrong or incomplete.
+Want to hack on Docker? Awesome! We have a contributor's guide that explains
+[setting up a Docker development environment and the contribution
+process](https://docs.docker.com/project/who-written-for/).
 
+This page contains information about reporting issues as well as some tips and
+guidelines useful to experienced open source contributors. Finally, make sure
+you read our [community guidelines](#docker-community-guidelines) before you
+start participating.
 
 ## Topics
 
 * [Reporting Security Issues](#reporting-security-issues)
 * [Design and Cleanup Proposals](#design-and-cleanup-proposals)
-* [Reporting Issues](#reporting-issues)
-* [Build Environment](#build-environment)
-* [Contribution Guidelines](#contribution-guidelines)
+* [Reporting Issues](#reporting-other-issues)
+* [Quick Contribution Tips and Guidelines](#quick-contribution-tips-and-guidelines)
 * [Community Guidelines](#docker-community-guidelines)
 
-## Reporting Security Issues
+## Reporting security issues
 
-The Docker maintainers take security very seriously. If you discover a security issue,
-please bring it to their attention right away!
+The Docker maintainers take security seriously. If you discover a security
+issue, please bring it to their attention right away!
 
-Please send your report privately to [security@docker.com](mailto:security@docker.com),
-please **DO NOT** file a public issue.
+Please **DO NOT** file a public issue, instead send your report privately to
+[security@docker.com](mailto:security@docker.com),
 
-Security reports are greatly appreciated and we will publicly thank you for it. We also
-like to send gifts - if you're into Docker shwag make sure to let us know :)
-We currently do not offer a paid security bounty program, but are not ruling it out in
-the future.
+Security reports are greatly appreciated and we will publicly thank you for it.
+We also like to send gifts—if you're into Docker schwag make sure to let
+us know We currently do not offer a paid security bounty program, but are not
+ruling it out in the future.
 
-## Design and Cleanup Proposals
-
-When considering a design proposal, we are looking for:
-
-* A description of the problem this design proposal solves
-* A pull request, not an issue, that modifies the documentation describing
-  the feature you are proposing, adding new documentation if necessary.
-* Please prefix your issue with `Proposal:` in the title
-* Please review [the existing Proposals](https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%3AProposal)
-  before reporting a new one. You can always pair with someone if you both
-  have the same idea.
-
-When considering a cleanup task, we are looking for:
-
-* A description of the refactors made
-* Please note any logic changes if necessary
-* A pull request with the code
-* Please prefix your PR's title with `Cleanup:` so we can quickly address it.
-* Your pull request must remain up to date with master, so rebase as necessary.
-
-## Reporting Issues
+## Reporting other issues
 
 A great way to contribute to the project is to send a detailed report when you
 encounter an issue. We always appreciate a well-written, thorough bug report,
 and will thank you for it!
 
-When reporting [issues](https://github.com/docker/docker/issues) on
-GitHub please include your host OS (Ubuntu 12.04, Fedora 19, etc).
-Please include:
+Check that [our issue database](https://github.com/docker/docker/issues)
+doesn't already include that problem or suggestion before submitting an issue.
+If you find a match, add a quick "+1" or "I have this problem too." Doing this
+helps prioritize the most common problems and requests.
+
+When reporting issues, please include your host OS (Ubuntu 12.04, Fedora 19,
+etc). Please include:
 
 * The output of `uname -a`.
 * The output of `docker version`.
 * The output of `docker -D info`.
 
-Please also include the steps required to reproduce the problem if
-possible and applicable. This information will help us review and fix
-your issue faster.
+Please also include the steps required to reproduce the problem if possible and
+applicable. This information will help us review and fix your issue faster.
 
-### Template
+**Issue Report Template**:
 
 ```
 Description of problem:
@@ -103,85 +93,120 @@ Additional info:
 
 ```
 
-## Build Environment
 
-For instructions on setting up your development environment, please
-see our dedicated [dev environment setup
-docs](http://docs.docker.com/contributing/devenvironment/).
+##Quick contribution tips and guidelines
 
-## Contribution guidelines
+This section gives the experienced contributor some tips and guidelines.
 
 ###Pull requests are always welcome
 
-We are always thrilled to receive pull requests, and do our best to
-process them as quickly as possible. Not sure if that typo is worth a pull
-request? Do it! We will appreciate it.
+Not sure if that typo is worth a pull request? Found a bug and know how to fix
+it? Do it! We will appreciate it. Any significant improvement should be
+documented as [a GitHub issue](https://github.com/docker/docker/issues) before
+anybody starts working on it.
 
-If your pull request is not accepted on the first try, don't be
-discouraged! If there's a problem with the implementation, hopefully you
-received feedback on what to improve.
+We are always thrilled to receive pull requests. We do our best to process them
+quickly. If your pull request is not accepted on the first try,
+don't get discouraged! Our contributor's guide explains [the review process we
+use for simple changes](https://docs.docker.com/project/make-a-contribution/).
 
-We're trying very hard to keep Docker lean and focused. We don't want it
-to do everything for everybody. This means that we might decide against
-incorporating a new feature. However, there might be a way to implement
-that feature *on top of* Docker.
+### Design and cleanup proposals
 
-### Discuss your design on the mailing list
+You can propose new designs for existing Docker features. You can also design
+entirely new features. We really appreciate contributors who want to refactor or
+otherwise cleanup our project. For information on making these types of
+contributions, see [the advanced contribution
+section](https://docs.docker.com/project/advanced-contributing/) in the
+contributors guide.
 
-We recommend discussing your plans [on the mailing
-list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev)
-before starting to code - especially for more ambitious contributions.
-This gives other contributors a chance to point you in the right
-direction, give feedback on your design, and maybe point out if someone
-else is working on the same thing.
+We try hard to keep Docker lean and focused. Docker can't do everything for
+everybody. This means that we might decide against incorporating a new feature.
+However, there might be a way to implement that feature *on top of* Docker.
 
-### Create issues...
+### Talking to other Docker users and contributors
 
-Any significant improvement should be documented as [a GitHub
-issue](https://github.com/docker/docker/issues) before anybody
-starts working on it.
+<table class="tg">
+  <col width="45%">
+  <col width="65%">
+  <tr>
+    <td>Internet Relay Chat (IRC)</th>
+    <td>
+      <p>
+        IRC a direct line to our most knowledgeable Docker users; we have
+        both the <code>#docker</code> and <code>#docker-dev</code> group on
+        <strong>irc.freenode.net</strong>.
+        IRC is a rich chat protocol but it can overwhelm new users. You can search
+        <a href="https://botbot.me/freenode/docker/#" target="_blank">our chat archives</a>.
+      </p>
+      Read our <a href="https://docs.docker.com/project/get-help/#irc-quickstart" target="_blank">IRC quickstart guide</a> for an easy way to get started.
+    </td>
+  </tr>
+  <tr>
+    <td>Google Groups</td>
+    <td>
+      There are two groups.
+      <a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
+      is for people using Docker containers.
+      The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
+      group is for contributors and other people contributing to the Docker
+      project.
+    </td>
+  </tr>
+  <tr>
+    <td>Twitter</td>
+    <td>
+      You can follow <a href="https://twitter.com/docker/" target="_blank">Docker's Twitter feed</a>
+      to get updates on our products. You can also tweet us questions or just
+      share blogs or stories.
+    </td>
+  </tr>
+  <tr>
+    <td>Stack Overflow</td>
+    <td>
+      Stack Overflow has over 7000K Docker questions listed. We regularly
+      monitor <a href="http://stackoverflow.com/search?tab=newest&q=docker" target="_blank">Docker questions</a>
+      and so do many other knowledgeable Docker users.
+    </td>
+  </tr>
+</table>
 
-### ...but check for existing issues first!
-
-Please take a moment to check that an issue doesn't already exist
-documenting your bug report or improvement proposal. If it does, it
-never hurts to add a quick "+1" or "I have this problem too". This will
-help prioritize the most common problems and requests.
 
 ### Conventions
 
 Fork the repository and make changes on your fork in a feature branch:
 
-- If it's a bug fix branch, name it XXXX-something where XXXX is the number of the
+- If it's a bug fix branch, name it XXXX-something where XXXX is the number of
+  the issue.
+- If it's a feature branch, create an enhancement issue to announce
+  your intentions, and name it XXXX-something where XXXX is the number of the
   issue.
-- If it's a feature branch, create an enhancement issue to announce your
-  intentions, and name it XXXX-something where XXXX is the number of the issue.
 
 Submit unit tests for your changes. Go has a great test framework built in; use
-it! Take a look at existing tests for inspiration. Run the full test suite on
-your branch before submitting a pull request.
+it! Take a look at existing tests for inspiration. [Run the full test
+suite](https://docs.docker.com/project/test-and-docs/) on your branch before
+submitting a pull request.
 
-Update the documentation when creating or modifying features. Test
-your documentation changes for clarity, concision, and correctness, as
-well as a clean documentation build. See `docs/README.md` for more
-information on building the docs and how they get released.
+Update the documentation when creating or modifying features. Test your
+documentation changes for clarity, concision, and correctness, as well as a
+clean documentation build. See our contributors guide for [our style
+guide](https://docs.docker.com/project/doc-style) and instructions on [building
+the documentation](https://docs.docker.com/project/test-and-docs/#build-and-test-the-documentation).
 
 Write clean code. Universally formatted code promotes ease of writing, reading,
 and maintenance. Always run `gofmt -s -w file.go` on each changed file before
 committing your changes. Most editors have plug-ins that do this automatically.
 
-Pull requests descriptions should be as clear as possible and include a
-reference to all the issues that they address.
+Pull request descriptions should be as clear as possible and include a reference
+to all the issues that they address.
 
-Commit messages must start with a capitalized and short summary (max. 50
-chars) written in the imperative, followed by an optional, more detailed
-explanatory text which is separated from the summary by an empty line.
+Commit messages must start with a capitalized and short summary (max. 50 chars)
+written in the imperative, followed by an optional, more detailed explanatory
+text which is separated from the summary by an empty line.
 
 Code review comments may be added to your pull request. Discuss, then make the
-suggested modifications and push additional commits to your feature branch. Be
-sure to post a comment after pushing. The new commits will show up in the pull
-request automatically, but the reviewers will not be notified unless you
-comment.
+suggested modifications and push additional commits to your feature branch. Post
+a comment after pushing. New commits show up in the pull request automatically,
+but the reviewers are notified only when you comment.
 
 Pull requests must be cleanly rebased on top of master without multiple branches
 mixed into the PR.
@@ -189,37 +214,44 @@ mixed into the PR.
 
 **Git tip**: If your PR no longer merges cleanly, use `rebase master` in your
 feature branch to update your pull request rather than `merge master`.
 
-Before the pull request is merged, make sure that you squash your commits into
-logical units of work using `git rebase -i` and `git push -f`. After every
-commit the test suite should be passing. Include documentation changes in the
-same commit so that a revert would remove all traces of the feature or fix.
+Before you make a pull request, squash your commits into logical units of work
+using `git rebase -i` and `git push -f`. A logical unit of work is a consistent
+set of patches that should be reviewed together: for example, upgrading the
+version of a vendored dependency and taking advantage of its now available new
+feature constitute two separate units of work. Implementing a new function and
+calling it in another file constitute a single logical unit of work. The very
+high majory of submissions should have a single commit, so if in doubt: squash
+down to one.
 
-Commits that fix or close an issue should include a reference like
-`Closes #XXXX` or `Fixes #XXXX`, which will automatically close the
-issue when merged.
+After every commit, [make sure the test suite passes]
+((https://docs.docker.com/project/test-and-docs/)). Include documentation
+changes in the same pull request so that a revert would remove all traces of
+the feature or fix.
 
-Please do not add yourself to the `AUTHORS` file, as it is regenerated
-regularly from the Git history.
+Include an issue reference like `Closes #XXXX` or `Fixes #XXXX` in commits that
+close an issue. Including references automatically closes the issue on a merge.
+
+Please do not add yourself to the `AUTHORS` file, as it is regenerated regularly
+from the Git history.
 
 ### Merge approval
 
-Docker maintainers use LGTM (Looks Good To Me) in comments on the code review
-to indicate acceptance.
+Docker maintainers use LGTM (Looks Good To Me) in comments on the code review to
+indicate acceptance.
 
 A change requires LGTMs from an absolute majority of the maintainers of each
 component affected. For example, if a change affects `docs/` and `registry/`, it
 needs an absolute majority from the maintainers of `docs/` AND, separately, an
 absolute majority of the maintainers of `registry/`.
 
-For more details see [MAINTAINERS](MAINTAINERS)
+For more details, see the [MAINTAINERS](MAINTAINERS) page.
 
 ### Sign your work
 
-The sign-off is a simple line at the end of the explanation for the
-patch, which certifies that you wrote it or otherwise have the right to
-pass it on as an open-source patch. The rules are pretty simple: if you
-can certify the below (from
-[developercertificate.org](http://developercertificate.org/)):
+The sign-off is a simple line at the end of the explanation for the patch. Your
+signature certifies that you wrote the patch or otherwise have the right to pass
+it on as an open-source patch. The rules are pretty simple: if you can certify
+the below (from [developercertificate.org](http://developercertificate.org/)):
 
 ```
 Developer Certificate of Origin
@@ -263,7 +295,7 @@ Then you just add a line to every git commit message:
 
     Signed-off-by: Joe Smith <joe.smith@email.com>
 
-Using your real name (sorry, no pseudonyms or anonymous contributions.)
+Use your real name (sorry, no pseudonyms or anonymous contributions.)
 
 If you set your `user.name` and `user.email` git configs, you can sign your
 commit automatically with `git commit -s`.
@@ -283,42 +315,42 @@ Don't forget: being a maintainer is a time investment. Make sure you
 will have time to make yourself available. You don't have to be a
 maintainer to make a difference on the project!
 
-### IRC Meetings
+### IRC meetings
 
-There are two monthly meetings taking place on #docker-dev IRC to accomodate all timezones.
-Anybody can ask for a topic to be discussed prior to the meeting.
+There are two monthly meetings taking place on #docker-dev IRC to accomodate all
+timezones. Anybody can propose a topic for discussion prior to the meeting.
 
 If you feel the conversation is going off-topic, feel free to point it out.
 
-For the exact dates and times, have a look at [the irc-minutes repo](https://github.com/docker/irc-minutes).
-They also contain all the notes from previous meetings.
+For the exact dates and times, have a look at [the irc-minutes
+repo](https://github.com/docker/irc-minutes). The minutes also contain all the
+notes from previous meetings.
 
-## Docker Community Guidelines
+## Docker community guidelines
 
-We want to keep the Docker community awesome, growing and collaborative. We
-need your help to keep it that way. To help with this we've come up with some
-general guidelines for the community as a whole:
+We want to keep the Docker community awesome, growing and collaborative. We need
+your help to keep it that way. To help with this we've come up with some general
+guidelines for the community as a whole:
 
-* Be nice: Be courteous, respectful and polite to fellow community members: no
-  regional, racial, gender, or other abuse will be tolerated. We like nice people
-  way better than mean ones!
+* Be nice: Be courteous, respectful and polite to fellow community members:
+  no regional, racial, gender, or other abuse will be tolerated. We like
+  nice people way better than mean ones!
 
-* Encourage diversity and participation: Make everyone in our community
-  feel welcome, regardless of their background and the extent of their
+* Encourage diversity and participation: Make everyone in our community feel
+  welcome, regardless of their background and the extent of their
   contributions, and do everything possible to encourage participation in
   our community.
 
 * Keep it legal: Basically, don't get us in trouble. Share only content that
-  you own, do not share private or sensitive information, and don't break the
-  law.
+  you own, do not share private or sensitive information, and don't break
+  the law.
 
-* Stay on topic: Make sure that you are posting to the correct channel
-  and avoid off-topic discussions. Remember when you update an issue or
-  respond to an email you are potentially sending to a large number of
-  people. Please consider this before you update. Also remember that
-  nobody likes spam.
+* Stay on topic: Make sure that you are posting to the correct channel and
+  avoid off-topic discussions. Remember when you update an issue or respond
+  to an email you are potentially sending to a large number of people. Please
+  consider this before you update. Also remember that nobody likes spam.
 
-### Guideline Violations — 3 Strikes Method
+### Guideline violations — 3 strikes method
 
 The point of this section is not to find opportunities to punish people, but we
 do need a fair way to deal with people who are making our community suck.
@@ -337,20 +369,19 @@ do need a fair way to deal with people who are making our community suck.
 * Obvious spammers are banned on first occurrence. If we don't do this, we'll
   have spam all over the place.
 
-* Violations are forgiven after 6 months of good behavior, and we won't
-  hold a grudge.
+* Violations are forgiven after 6 months of good behavior, and we won't hold a
+  grudge.
 
-* People who commit minor infractions will get some education,
-  rather than hammering them in the 3 strikes process.
+* People who commit minor infractions will get some education, rather than
+  hammering them in the 3 strikes process.
 
-* The rules apply equally to everyone in the community, no matter how
-  much you've contributed.
+* The rules apply equally to everyone in the community, no matter how much
+  you've contributed.
 
 * Extreme violations of a threatening, abusive, destructive or illegal nature
-  will be addressed immediately and are not subject to 3 strikes or
-  forgiveness.
+  will be addressed immediately and are not subject to 3 strikes or forgiveness.
 
 * Contact abuse@docker.com to report abuse or appeal violations. In the case of
-  appeals, we know that mistakes happen, and we'll work with you to come up with
-  a fair solution if there has been a misunderstanding.
+  appeals, we know that mistakes happen, and we'll work with you to come up with a
+  fair solution if there has been a misunderstanding.
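The CONTRIBUTING.md conventions above ask contributors to squash commits into logical units with `git rebase -i` before making a pull request. That squash can be sketched non-interactively with `GIT_SEQUENCE_EDITOR` (so no editor opens); the repository and commits below are invented for illustration:

```shell
# Sketch of the "squash down to one" guidance from the diff above,
# done non-interactively; repository contents are invented.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.name "Example Contributor"
git config user.email "contributor@example.com"
echo base > feature.txt
git add feature.txt
git commit -qm "Add feature file"
echo step1 >> feature.txt
git commit -qam "Implement feature"
echo step2 >> feature.txt
git commit -qam "Fix typo in feature"

# Rewrite the second todo line from "pick" to "fixup" instead of opening an
# editor; the fixup commit is melded into "Implement feature".
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i HEAD~2
git rev-list --count HEAD   # 2: the base commit plus one squashed feature commit
```

`fixup` (rather than `squash`) discards the follow-up commit's message, so no commit-message editor is opened either.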
Dockerfile (12 changes)

@@ -107,11 +107,8 @@ RUN go get golang.org/x/tools/cmd/cover
 # TODO replace FPM with some very minimal debhelper stuff
 RUN gem install --no-rdoc --no-ri fpm --version 1.3.2
 
-# Get the "busybox" image source so we can build locally instead of pulling
-RUN git clone -b buildroot-2014.02 https://github.com/jpetazzo/docker-busybox.git /docker-busybox
-
 # Install registry
-ENV REGISTRY_COMMIT c448e0416925a9876d5576e412703c9b8b865e19
+ENV REGISTRY_COMMIT d957768537c5af40e4f4cd96871f7b2bde9e2923
 RUN set -x \
 	&& git clone https://github.com/docker/distribution.git /go/src/github.com/docker/distribution \
 	&& (cd /go/src/github.com/docker/distribution && git checkout -q $REGISTRY_COMMIT) \

@@ -145,6 +142,13 @@ ENV DOCKER_BUILDTAGS apparmor selinux btrfs_noversion
 # Let us use a .bashrc file
 RUN ln -sfv $PWD/.bashrc ~/.bashrc
 
+# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
+COPY contrib/download-frozen-image.sh /go/src/github.com/docker/docker/contrib/
+RUN ./contrib/download-frozen-image.sh /docker-frozen-images \
+	busybox:latest@4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125 \
+	hello-world:frozen@e45a5af57b00862e5ef5782a9925979a02ba2b12dff832fd0991335f4a11e5c5
+# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
+
 # Install man page generator
 COPY vendor /go/src/github.com/docker/docker/vendor
 # (copy vendor/ because go-md2man needs golang.org/x/net)
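The Dockerfile change above bumps `REGISTRY_COMMIT` and checks out the distribution repo at that exact commit rather than a branch tip. The pin-to-commit pattern can be sketched like this; both repositories here are local and invented for illustration:

```shell
# Sketch of the pin-to-commit pattern used via REGISTRY_COMMIT above;
# the "upstream" repository is invented.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q upstream
cd upstream
git config user.name "Example"
git config user.email "example@example.com"
echo "v1" > version.txt
git add version.txt
git commit -qm "Release v1"
PINNED_COMMIT=$(git rev-parse HEAD)   # the commit we will pin to
echo "v2" > version.txt
git commit -qam "Release v2"          # the branch tip moves past the pin
cd ..

# Clone, then check out the pinned commit instead of the branch tip.
git clone -q upstream checkout
(cd checkout && git checkout -q "$PINNED_COMMIT")
cat checkout/version.txt   # v1
```

Because the checkout is by commit hash, rebuilding later still yields the same source even after upstream moves on.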
Dockerfile.simple (new file)

@@ -0,0 +1,34 @@
+# docker build -t docker:simple -f Dockerfile.simple .
+# docker run --rm docker:simple hack/make.sh dynbinary
+# docker run --rm --privileged docker:simple hack/dind hack/make.sh test-unit
+# docker run --rm --privileged -v /var/lib/docker docker:simple hack/dind hack/make.sh dynbinary test-integration-cli
+
+# This represents the bare minimum required to build and test Docker.
+
+FROM debian:jessie
+
+# compile and runtime deps
+# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
+# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends \
+	btrfs-tools \
+	curl \
+	gcc \
+	git \
+	golang \
+	libdevmapper-dev \
+	libsqlite3-dev \
+	\
+	ca-certificates \
+	e2fsprogs \
+	iptables \
+	procps \
+	xz-utils \
+	\
+	aufs-tools \
+	lxc \
+	&& rm -rf /var/lib/apt/lists/*
+
+ENV AUTO_GOPATH 1
+WORKDIR /usr/src/docker
+COPY . /usr/src/docker
MAINTAINERS (42 changes)

@@ -193,7 +193,7 @@ for each.
 # They should ask for any editorial change that makes the documentation more
 # consistent and easier to understand.
 #
-# Once documentation is approved, a maintainer should make sure to remove this
+# Once documentation is approved (see below), a maintainer should make sure to remove this
 # label and add the next one.
 
 close = ""
@ -201,6 +201,11 @@ for each.
|
||||||
1-design-review = "raises design concerns"
|
1-design-review = "raises design concerns"
|
||||||
4-merge = "general case"
|
4-merge = "general case"
|
||||||
|
|
||||||
|
# Docs approval
|
||||||
|
[Rules.review.docs-approval]
|
||||||
|
# Changes and additions to docs must be reviewed and approved (LGTM'd) by a minimum of two docs sub-project maintainers.
|
||||||
|
# If the docs change originates with a docs maintainer, only one additional LGTM is required (since we assume a docs maintainer approves of their own PR).
|
||||||
|
|
||||||
# Merge
|
# Merge
|
||||||
[Rules.review.states.4-merge]
|
[Rules.review.states.4-merge]
|
||||||
|
|
||||||
|
@ -424,7 +429,10 @@ made through a pull request.
|
||||||
"dmp42",
|
"dmp42",
|
||||||
"vbatts",
|
"vbatts",
|
||||||
"joffrey",
|
"joffrey",
|
||||||
"samalba"
|
"samalba",
|
||||||
|
"sday",
|
||||||
|
"jlhawn",
|
||||||
|
"dmcg"
|
||||||
]
|
]
|
||||||
|
|
||||||
[Org.Subsystems."build tools"]
|
[Org.Subsystems."build tools"]
|
||||||
|
@ -502,6 +510,16 @@ made through a pull request.
|
||||||
Email = "dug@us.ibm.com"
|
Email = "dug@us.ibm.com"
|
||||||
GitHub = "duglin"
|
GitHub = "duglin"
|
||||||
|
|
||||||
|
[people.dmcg]
|
||||||
|
Name = "Derek McGowan"
|
||||||
|
Email = "derek@docker.com"
|
||||||
|
Github = "dmcgowan"
|
||||||
|
|
||||||
|
[people.dmp42]
|
||||||
|
Name = "Olivier Gambier"
|
||||||
|
Email = "olivier@docker.com"
|
||||||
|
Github = "dmp42"
|
||||||
|
|
||||||
[people.ehazlett]
|
[people.ehazlett]
|
||||||
Name = "Evan Hazlett"
|
Name = "Evan Hazlett"
|
||||||
Email = "ejhazlett@gmail.com"
|
Email = "ejhazlett@gmail.com"
|
||||||
|
@ -522,6 +540,11 @@ made through a pull request.
|
||||||
Email = "estesp@linux.vnet.ibm.com"
|
Email = "estesp@linux.vnet.ibm.com"
|
||||||
GitHub = "estesp"
|
GitHub = "estesp"
|
||||||
|
|
||||||
|
[people.fredlf]
|
||||||
|
Name = "Fred Lifton"
|
||||||
|
Email = "fred.lifton@docker.com"
|
||||||
|
GitHub = "fredlf"
|
||||||
|
|
||||||
[people.icecrime]
|
[people.icecrime]
|
||||||
Name = "Arnaud Porterie"
|
Name = "Arnaud Porterie"
|
||||||
Email = "arnaud@docker.com"
|
Email = "arnaud@docker.com"
|
||||||
|
@ -532,6 +555,16 @@ made through a pull request.
|
||||||
Email = "jess@docker.com"
|
Email = "jess@docker.com"
|
||||||
GitHub = "jfrazelle"
|
GitHub = "jfrazelle"
|
||||||
|
|
||||||
|
[people.jlhawn]
|
||||||
|
Name = "Josh Hawn"
|
||||||
|
Email = "josh.hawn@docker.com"
|
||||||
|
Github = "jlhawn"
|
||||||
|
|
||||||
|
[people.joffrey]
|
||||||
|
Name = "Joffrey Fuhrer"
|
||||||
|
Email = "joffrey@docker.com"
|
||||||
|
Github = "shin-"
|
||||||
|
|
||||||
[people.lk4d4]
|
[people.lk4d4]
|
||||||
Name = "Alexander Morozov"
|
Name = "Alexander Morozov"
|
||||||
Email = "lk4d4@docker.com"
|
Email = "lk4d4@docker.com"
|
||||||
|
@ -542,6 +575,11 @@ made through a pull request.
|
||||||
Email = "mary.anthony@docker.com"
|
Email = "mary.anthony@docker.com"
|
||||||
GitHub = "moxiegirl"
|
GitHub = "moxiegirl"
|
||||||
|
|
||||||
|
[people.sday]
|
||||||
|
Name = "Stephen Day"
|
||||||
|
Email = "stephen.day@docker.com"
|
||||||
|
Github = "stevvooe"
|
||||||
|
|
||||||
[people.shykes]
|
[people.shykes]
|
||||||
Name = "Solomon Hykes"
|
Name = "Solomon Hykes"
|
||||||
Email = "solomon@docker.com"
|
Email = "solomon@docker.com"
|
||||||
|
|
2
Makefile
@@ -86,11 +86,11 @@ build: bundles
 	docker build -t "$(DOCKER_IMAGE)" .

 docs-build:
-	git fetch https://github.com/docker/docker.git docs && git diff --name-status FETCH_HEAD...HEAD -- docs > docs/changed-files
 	cp ./VERSION docs/VERSION
 	echo "$(GIT_BRANCH)" > docs/GIT_BRANCH
 #	echo "$(AWS_S3_BUCKET)" > docs/AWS_S3_BUCKET
 	echo "$(GITCOMMIT)" > docs/GITCOMMIT
+	docker pull docs/base
 	docker build -t "$(DOCKER_DOCS_IMAGE)" docs

 bundles:
10
README.md
@@ -183,12 +183,14 @@ Contributing to Docker
 [![Jenkins Build Status](https://jenkins.dockerproject.com/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.com/job/Docker%20Master/)

 Want to hack on Docker? Awesome! We have [instructions to help you get
-started](CONTRIBUTING.md). If you'd like to contribute to the
-documentation, please take a look at this [README.md](https://github.com/docker/docker/blob/master/docs/README.md).
+started contributing code or documentation.](https://docs.docker.com/project/who-written-for/).

 These instructions are probably not perfect, please let us know if anything
 feels wrong or incomplete. Better yet, submit a PR and improve them yourself.

+Getting the development builds
+==============================
+
 Want to run Docker from a master build? You can download
 master builds at [master.dockerproject.com](https://master.dockerproject.com).
 They are updated with each commit merged into the master branch.
@@ -233,8 +235,8 @@ Docker platform to broaden its application and utility.
 If you know of another project underway that should be listed here, please help
 us keep this list up-to-date by submitting a PR.

-* [Docker Registry](https://github.com/docker/docker-registry): Registry
+* [Docker Registry](https://github.com/docker/distribution): Registry
-  server for Docker (hosting/delivering of repositories and images)
+  server for Docker (hosting/delivery of repositories and images)
 * [Docker Machine](https://github.com/docker/machine): Machine management
   for a container-centric world
 * [Docker Swarm](https://github.com/docker/swarm): A Docker-native clustering
@@ -1,2 +0,0 @@
-Victor Vieux <vieux@docker.com> (@vieux)
-Jessie Frazelle <jess@docker.com> (@jfrazelle)
@@ -93,6 +93,9 @@ func (cli *DockerCli) Subcmd(name, signature, description string, exitOnError bo
 	flags := flag.NewFlagSet(name, errorHandling)
 	flags.Usage = func() {
 		options := ""
+		if signature != "" {
+			signature = " " + signature
+		}
 		if flags.FlagCountUndeprecated() > 0 {
 			options = " [OPTIONS]"
 		}
@@ -37,8 +37,10 @@ import (
 	"github.com/docker/docker/pkg/fileutils"
 	"github.com/docker/docker/pkg/homedir"
 	flag "github.com/docker/docker/pkg/mflag"
+	"github.com/docker/docker/pkg/networkfs/resolvconf"
 	"github.com/docker/docker/pkg/parsers"
 	"github.com/docker/docker/pkg/parsers/filters"
+	"github.com/docker/docker/pkg/progressreader"
 	"github.com/docker/docker/pkg/promise"
 	"github.com/docker/docker/pkg/signal"
 	"github.com/docker/docker/pkg/symlink"
@@ -87,7 +89,11 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
 	rm := cmd.Bool([]string{"#rm", "-rm"}, true, "Remove intermediate containers after a successful build")
 	forceRm := cmd.Bool([]string{"-force-rm"}, false, "Always remove intermediate containers")
 	pull := cmd.Bool([]string{"-pull"}, false, "Always attempt to pull a newer version of the image")
-	dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile(Default is 'Dockerfile')")
+	dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile (Default is 'PATH/Dockerfile')")
+	flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit")
+	flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Total memory (memory + swap), '-1' to disable swap")
+	flCpuShares := cmd.Int64([]string{"c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
+	flCpuSetCpus := cmd.String([]string{"-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)")

 	cmd.Require(flag.Exact, 1)
@@ -231,7 +237,36 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
 	// FIXME: ProgressReader shouldn't be this annoying to use
 	if context != nil {
 		sf := utils.NewStreamFormatter(false)
-		body = utils.ProgressReader(context, 0, cli.out, sf, true, "", "Sending build context to Docker daemon")
+		body = progressreader.New(progressreader.Config{
+			In:        context,
+			Out:       cli.out,
+			Formatter: sf,
+			NewLines:  true,
+			ID:        "",
+			Action:    "Sending build context to Docker daemon",
+		})
+	}
+
+	var memory int64
+	if *flMemoryString != "" {
+		parsedMemory, err := units.RAMInBytes(*flMemoryString)
+		if err != nil {
+			return err
+		}
+		memory = parsedMemory
+	}
+
+	var memorySwap int64
+	if *flMemorySwap != "" {
+		if *flMemorySwap == "-1" {
+			memorySwap = -1
+		} else {
+			parsedMemorySwap, err := units.RAMInBytes(*flMemorySwap)
+			if err != nil {
+				return err
+			}
+			memorySwap = parsedMemorySwap
+		}
 	}
 	// Send the build context
 	v := &url.Values{}
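The hunk above parses the new memory flags with `units.RAMInBytes` and treats `-1` as "disable swap". A minimal self-contained sketch of that logic (`ramInBytes` here is a simplified stand-in for the real `units` package, handling only plain `b`/`k`/`m`/`g` suffixes):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ramInBytes is a simplified stand-in for units.RAMInBytes: it accepts
// values like "512m" or "1g" and returns the size in bytes.
func ramInBytes(s string) (int64, error) {
	mult := map[string]int64{"b": 1, "k": 1 << 10, "m": 1 << 20, "g": 1 << 30}
	s = strings.ToLower(strings.TrimSpace(s))
	for suffix, m := range mult {
		if strings.HasSuffix(s, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * m, nil
		}
	}
	return strconv.ParseInt(s, 10, 64)
}

// parseMemorySwap mirrors the hunk above: "" means unset (0),
// "-1" disables swap, anything else is a human-readable size.
func parseMemorySwap(s string) (int64, error) {
	if s == "" {
		return 0, nil
	}
	if s == "-1" {
		return -1, nil
	}
	return ramInBytes(s)
}

func main() {
	v, _ := parseMemorySwap("1g")
	fmt.Println(v)
	v, _ = parseMemorySwap("-1")
	fmt.Println(v)
}
```

The CLI then serializes both values with `strconv.FormatInt(..., 10)` into the build request, so the daemon only ever sees plain byte counts.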
@@ -274,6 +309,11 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
 		v.Set("pull", "1")
 	}

+	v.Set("cpusetcpus", *flCpuSetCpus)
+	v.Set("cpushares", strconv.FormatInt(*flCpuShares, 10))
+	v.Set("memory", strconv.FormatInt(memory, 10))
+	v.Set("memswap", strconv.FormatInt(memorySwap, 10))
+
 	v.Set("dockerfile", *dockerfileName)

 	cli.LoadConfigFile()
|
||||||
if username == "" {
|
if username == "" {
|
||||||
promptDefault("Username", authconfig.Username)
|
promptDefault("Username", authconfig.Username)
|
||||||
username = readInput(cli.in, cli.out)
|
username = readInput(cli.in, cli.out)
|
||||||
|
username = strings.Trim(username, " ")
|
||||||
if username == "" {
|
if username == "" {
|
||||||
username = authconfig.Username
|
username = authconfig.Username
|
||||||
}
|
}
|
||||||
|
@@ -409,6 +450,8 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
 		return err
 	}
 	registry.SaveConfig(cli.configFile)
+	fmt.Fprintf(cli.out, "WARNING: login credentials saved in %s.\n", path.Join(homedir.Get(), registry.CONFIGFILE))
+
 	if out2.Get("Status") != "" {
 		fmt.Fprintf(cli.out, "%s\n", out2.Get("Status"))
 	}
@@ -577,6 +620,14 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
 	if remoteInfo.Exists("NGoroutines") {
 		fmt.Fprintf(cli.out, "Goroutines: %d\n", remoteInfo.GetInt("NGoroutines"))
 	}
+	if remoteInfo.Exists("SystemTime") {
+		t, err := remoteInfo.GetTime("SystemTime")
+		if err != nil {
+			log.Errorf("Error reading system time: %v", err)
+		} else {
+			fmt.Fprintf(cli.out, "System Time: %s\n", t.Format(time.UnixDate))
+		}
+	}
 	if remoteInfo.Exists("NEventsListener") {
 		fmt.Fprintf(cli.out, "EventsListeners: %d\n", remoteInfo.GetInt("NEventsListener"))
 	}
@@ -590,7 +641,15 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
 			fmt.Fprintf(cli.out, "Docker Root Dir: %s\n", root)
 		}
 	}
+	if remoteInfo.Exists("HttpProxy") {
+		fmt.Fprintf(cli.out, "Http Proxy: %s\n", remoteInfo.Get("HttpProxy"))
+	}
+	if remoteInfo.Exists("HttpsProxy") {
+		fmt.Fprintf(cli.out, "Https Proxy: %s\n", remoteInfo.Get("HttpsProxy"))
+	}
+	if remoteInfo.Exists("NoProxy") {
+		fmt.Fprintf(cli.out, "No Proxy: %s\n", remoteInfo.Get("NoProxy"))
+	}
 	if len(remoteInfo.GetList("IndexServerAddress")) != 0 {
 		cli.LoadConfigFile()
 		u := cli.configFile.Configs[remoteInfo.Get("IndexServerAddress")].Username
@@ -695,7 +754,7 @@ func (cli *DockerCli) CmdStart(args ...string) error {
 		cErr chan error
 		tty  bool

-		cmd = cli.Subcmd("start", "CONTAINER [CONTAINER...]", "Restart a stopped container", true)
+		cmd = cli.Subcmd("start", "CONTAINER [CONTAINER...]", "Start one or more stopped containers", true)
 		attach = cmd.Bool([]string{"a", "-attach"}, false, "Attach STDOUT/STDERR and forward signals")
 		openStdin = cmd.Bool([]string{"i", "-interactive"}, false, "Attach container's STDIN")
 	)
@@ -704,6 +763,16 @@ func (cli *DockerCli) CmdStart(args ...string) error {
 	utils.ParseFlags(cmd, args, true)

 	hijacked := make(chan io.Closer)
+	// Block the return until the chan gets closed
+	defer func() {
+		log.Debugf("CmdStart() returned, defer waiting for hijack to finish.")
+		if _, ok := <-hijacked; ok {
+			log.Errorf("Hijack did not finish (chan still open)")
+		}
+		if *openStdin || *attach {
+			cli.in.Close()
+		}
+	}()

 	if *attach || *openStdin {
 		if cmd.NArg() > 1 {
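The deferred wait above relies on a core Go channel property: receiving from a closed channel returns immediately with `ok == false`, so the defer only complains if the hijack goroutine never closed (or sent on) the channel. A minimal illustration of the pattern (names here are illustrative, not the CLI's actual types):

```go
package main

import "fmt"

func main() {
	hijacked := make(chan struct{})
	done := make(chan struct{})

	// The "hijack" goroutine closes the channel when its work is done,
	// mirroring how CmdStart's helper signals completion.
	go func() {
		close(hijacked)
		close(done)
	}()
	<-done

	// Receiving from the closed channel does not block; ok == false
	// tells the deferred function that the hijack finished cleanly.
	if _, ok := <-hijacked; ok {
		fmt.Println("hijack did not finish (chan still open)")
	} else {
		fmt.Println("hijack finished")
	}
}
```

Because every later receive on a closed channel also returns immediately, the defer is safe to run no matter which code path returned first.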
@@ -760,25 +829,26 @@ func (cli *DockerCli) CmdStart(args ...string) error {
 			return err
 		}
 	}

 	var encounteredError error
 	for _, name := range cmd.Args() {
 		_, _, err := readBody(cli.call("POST", "/containers/"+name+"/start", nil, false))
 		if err != nil {
 			if !*attach && !*openStdin {
+				// attach and openStdin is false means it could be starting multiple containers
+				// when a container start failed, show the error message and start next
 				fmt.Fprintf(cli.err, "%s\n", err)
-			}
-			encounteredError = fmt.Errorf("Error: failed to start one or more containers")
+				encounteredError = fmt.Errorf("Error: failed to start one or more containers")
+			} else {
+				encounteredError = err
+			}
 		} else {
 			if !*attach && !*openStdin {
 				fmt.Fprintf(cli.out, "%s\n", name)
 			}
 		}
 	}

 	if encounteredError != nil {
-		if *openStdin || *attach {
-			cli.in.Close()
-		}
 		return encounteredError
 	}
@@ -881,7 +951,7 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
 		obj, _, err := readBody(cli.call("GET", "/containers/"+name+"/json", nil, false))
 		if err != nil {
 			if strings.Contains(err.Error(), "Too many") {
-				fmt.Fprintf(cli.err, "Error: %s", err.Error())
+				fmt.Fprintf(cli.err, "Error: %v", err)
 				status = 1
 				continue
 			}
@@ -1273,7 +1343,7 @@ func (cli *DockerCli) CmdPush(args ...string) error {
 }

 func (cli *DockerCli) CmdPull(args ...string) error {
-	cmd := cli.Subcmd("pull", "NAME[:TAG]", "Pull an image or a repository from the registry", true)
+	cmd := cli.Subcmd("pull", "NAME[:TAG|@DIGEST]", "Pull an image or a repository from the registry", true)
 	allTags := cmd.Bool([]string{"a", "-all-tags"}, false, "Download all tagged images in the repository")
 	cmd.Require(flag.Exact, 1)
@@ -1286,7 +1356,7 @@ func (cli *DockerCli) CmdPull(args ...string) error {
 	)
 	taglessRemote, tag := parsers.ParseRepositoryTag(remote)
 	if tag == "" && !*allTags {
-		newRemote = taglessRemote + ":" + graph.DEFAULTTAG
+		newRemote = utils.ImageReference(taglessRemote, graph.DEFAULTTAG)
 	}
 	if tag != "" && *allTags {
 		return fmt.Errorf("tag can't be used with --all-tags/-a")
@@ -1339,6 +1409,7 @@ func (cli *DockerCli) CmdImages(args ...string) error {
 	quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
 	all := cmd.Bool([]string{"a", "-all"}, false, "Show all images (default hides intermediate images)")
 	noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
+	showDigests := cmd.Bool([]string{"-digests"}, false, "Show digests")
 	// FIXME: --viz and --tree are deprecated. Remove them in a future version.
 	flViz := cmd.Bool([]string{"#v", "#viz", "#-viz"}, false, "Output graph in graphviz format")
 	flTree := cmd.Bool([]string{"#t", "#tree", "#-tree"}, false, "Output graph in tree format")
@@ -1465,20 +1536,46 @@ func (cli *DockerCli) CmdImages(args ...string) error {

 	w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
 	if !*quiet {
+		if *showDigests {
+			fmt.Fprintln(w, "REPOSITORY\tTAG\tDIGEST\tIMAGE ID\tCREATED\tVIRTUAL SIZE")
+		} else {
 			fmt.Fprintln(w, "REPOSITORY\tTAG\tIMAGE ID\tCREATED\tVIRTUAL SIZE")
 		}
+	}

 	for _, out := range outs.Data {
-		for _, repotag := range out.GetList("RepoTags") {
-
-			repo, tag := parsers.ParseRepositoryTag(repotag)
 		outID := out.Get("Id")
 		if !*noTrunc {
 			outID = common.TruncateID(outID)
 		}

+		repoTags := out.GetList("RepoTags")
+		repoDigests := out.GetList("RepoDigests")
+
+		if len(repoTags) == 1 && repoTags[0] == "<none>:<none>" && len(repoDigests) == 1 && repoDigests[0] == "<none>@<none>" {
+			// dangling image - clear out either repoTags or repoDigsts so we only show it once below
+			repoDigests = []string{}
+		}
+
+		// combine the tags and digests lists
+		tagsAndDigests := append(repoTags, repoDigests...)
+		for _, repoAndRef := range tagsAndDigests {
+			repo, ref := parsers.ParseRepositoryTag(repoAndRef)
+			// default tag and digest to none - if there's a value, it'll be set below
+			tag := "<none>"
+			digest := "<none>"
+			if utils.DigestReference(ref) {
+				digest = ref
+			} else {
+				tag = ref
+			}

 			if !*quiet {
+				if *showDigests {
+					fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s ago\t%s\n", repo, tag, digest, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(float64(out.GetInt64("VirtualSize"))))
+				} else {
 					fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\n", repo, tag, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(float64(out.GetInt64("VirtualSize"))))
+				}
 			} else {
 				fmt.Fprintln(w, outID)
 			}
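The listing logic above hinges on telling a tag apart from a digest after `parsers.ParseRepositoryTag` splits the reference. A rough, self-contained stand-in for that split (simplified: the real parser and `utils.DigestReference` handle more cases, such as registry hosts with ports):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepoRef mimics parsers.ParseRepositoryTag: "repo@digest" splits at
// '@', otherwise "repo:tag" splits at the last ':' (unless that colon is
// part of a path segment containing '/').
func splitRepoRef(ref string) (repo, tagOrDigest string) {
	if i := strings.LastIndex(ref, "@"); i >= 0 {
		return ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i+1:], "/") {
		return ref[:i], ref[i+1:]
	}
	return ref, ""
}

// isDigest is a simplified utils.DigestReference: digests carry an
// algorithm prefix, so they contain a ':' (e.g. "sha256:...").
func isDigest(ref string) bool {
	return strings.Contains(ref, ":")
}

func main() {
	for _, r := range []string{"busybox:latest", "busybox@sha256:abcdef"} {
		repo, ref := splitRepoRef(r)
		tag, digest := "<none>", "<none>"
		if isDigest(ref) {
			digest = ref
		} else {
			tag = ref
		}
		fmt.Println(repo, tag, digest)
	}
}
```

This is why the hunk defaults both `tag` and `digest` to `<none>` and fills in exactly one of them per combined `RepoTags`/`RepoDigests` entry.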
@@ -1833,14 +1930,40 @@ func (cli *DockerCli) CmdEvents(args ...string) error {
 }

 func (cli *DockerCli) CmdExport(args ...string) error {
-	cmd := cli.Subcmd("export", "CONTAINER", "Export the contents of a filesystem as a tar archive to STDOUT", true)
+	cmd := cli.Subcmd("export", "CONTAINER", "Export a filesystem as a tar archive (streamed to STDOUT by default)", true)
+	outfile := cmd.String([]string{"o", "-output"}, "", "Write to a file, instead of STDOUT")
 	cmd.Require(flag.Exact, 1)

 	utils.ParseFlags(cmd, args, true)

-	if err := cli.stream("GET", "/containers/"+cmd.Arg(0)+"/export", nil, cli.out, nil); err != nil {
+	var (
+		output io.Writer = cli.out
+		err    error
+	)
+	if *outfile != "" {
+		output, err = os.Create(*outfile)
+		if err != nil {
 			return err
 		}
+	} else if cli.isTerminalOut {
+		return errors.New("Cowardly refusing to save to a terminal. Use the -o flag or redirect.")
+	}
+
+	if len(cmd.Args()) == 1 {
+		image := cmd.Arg(0)
+		if err := cli.stream("GET", "/containers/"+image+"/export", nil, output, nil); err != nil {
+			return err
+		}
+	} else {
+		v := url.Values{}
+		for _, arg := range cmd.Args() {
+			v.Add("names", arg)
+		}
+		if err := cli.stream("GET", "/containers/get?"+v.Encode(), nil, output, nil); err != nil {
+			return err
+		}
+	}

 	return nil
 }
@@ -1898,6 +2021,10 @@ func (cli *DockerCli) CmdLogs(args ...string) error {
 		return err
 	}

+	if env.GetSubEnv("HostConfig").GetSubEnv("LogConfig").Get("Type") != "json-file" {
+		return fmt.Errorf("\"logs\" command is supported only for \"json-file\" logging driver")
+	}
+
 	v := url.Values{}
 	v.Set("stdout", "1")
 	v.Set("stderr", "1")
@@ -2169,7 +2296,7 @@ func (cli *DockerCli) createContainer(config *runconfig.Config, hostConfig *runc
 		if tag == "" {
 			tag = graph.DEFAULTTAG
 		}
-		fmt.Fprintf(cli.err, "Unable to find image '%s:%s' locally\n", repo, tag)
+		fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", utils.ImageReference(repo, tag))

 		// we don't want to write to stdout anything apart from container.ID
 		if err = cli.pullImageCustomOut(config.Image, cli.err); err != nil {
@@ -2244,6 +2371,18 @@ func (cli *DockerCli) CmdRun(args ...string) error {
 	if err != nil {
 		utils.ReportError(cmd, err.Error(), true)
 	}

+	if len(hostConfig.Dns) > 0 {
+		// check the DNS settings passed via --dns against
+		// localhost regexp to warn if they are trying to
+		// set a DNS to a localhost address
+		for _, dnsIP := range hostConfig.Dns {
+			if resolvconf.IsLocalhost(dnsIP) {
+				fmt.Fprintf(cli.err, "WARNING: Localhost DNS setting (--dns=%s) may fail in containers.\n", dnsIP)
+				break
+			}
+		}
+	}
 	if config.Image == "" {
 		cmd.Usage()
 		return nil
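`resolvconf.IsLocalhost` above matches loopback addresses, which fail inside a container because the container's loopback is not the host's. A hedged approximation of that check (the pattern here is a guess at the spirit of the helper, not its exact regexp):

```go
package main

import (
	"fmt"
	"regexp"
)

// localhostRegexp roughly matches IPv4 loopback (127.0.0.0/8) and the
// IPv6 loopback "::1" - an approximation of resolvconf.IsLocalhost.
var localhostRegexp = regexp.MustCompile(`^(127\.(\d{1,3}\.){2}\d{1,3}|::1)$`)

func isLocalhost(ip string) bool {
	return localhostRegexp.MatchString(ip)
}

func main() {
	for _, ip := range []string{"127.0.0.1", "8.8.8.8"} {
		if isLocalhost(ip) {
			fmt.Printf("WARNING: Localhost DNS setting (--dns=%s) may fail in containers.\n", ip)
		}
	}
}
```

The hunk only warns and `break`s after the first match, since one loopback resolver is enough to make name resolution unreliable in the container.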
@@ -2415,7 +2554,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
 }

 func (cli *DockerCli) CmdCp(args ...string) error {
-	cmd := cli.Subcmd("cp", "CONTAINER:PATH HOSTPATH", "Copy files/folders from the PATH to the HOSTPATH", true)
+	cmd := cli.Subcmd("cp", "CONTAINER:PATH HOSTDIR|-", "Copy files/folders from a PATH on the container to a HOSTDIR on the host\nrunning the command. Use '-' to write the data\nas a tar file to STDOUT.", true)
 	cmd.Require(flag.Exact, 2)

 	utils.ParseFlags(cmd, args, true)
@@ -2442,7 +2581,14 @@ func (cli *DockerCli) CmdCp(args ...string) error {
 	}

 	if statusCode == 200 {
-		if err := archive.Untar(stream, copyData.Get("HostPath"), &archive.TarOptions{NoLchown: true}); err != nil {
+		dest := copyData.Get("HostPath")
+
+		if dest == "-" {
+			_, err = io.Copy(cli.out, stream)
+		} else {
+			err = archive.Untar(stream, dest, &archive.TarOptions{NoLchown: true})
+		}
+		if err != nil {
 			return err
 		}
 	}
@@ -2737,7 +2883,7 @@ func (cli *DockerCli) CmdStats(args ...string) error {
 	for _, c := range cStats {
 		c.mu.Lock()
 		if c.err != nil {
-			errs = append(errs, fmt.Sprintf("%s: %s", c.Name, c.err.Error()))
+			errs = append(errs, fmt.Sprintf("%s: %v", c.Name, c.err))
 		}
 		c.mu.Unlock()
 	}
@@ -104,7 +104,7 @@ func FormGroup(key string, start, last int) string {
 func MatchesContentType(contentType, expectedType string) bool {
 	mimetype, _, err := mime.ParseMediaType(contentType)
 	if err != nil {
-		log.Errorf("Error parsing media type: %s error: %s", contentType, err.Error())
+		log.Errorf("Error parsing media type: %s error: %v", contentType, err)
 	}
 	return err == nil && mimetype == expectedType
 }
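`MatchesContentType` leans on the standard library's `mime.ParseMediaType`, which strips parameters such as `charset` before the comparison, so `application/json; charset=utf-8` still matches `application/json`. A small standalone demonstration:

```go
package main

import (
	"fmt"
	"mime"
)

// matchesContentType mirrors the helper above: parse the media type and
// compare the bare mimetype, ignoring parameters such as charset.
func matchesContentType(contentType, expectedType string) bool {
	mimetype, _, err := mime.ParseMediaType(contentType)
	return err == nil && mimetype == expectedType
}

func main() {
	fmt.Println(matchesContentType("application/json; charset=utf-8", "application/json"))
	fmt.Println(matchesContentType("text/plain", "application/json"))
}
```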
@@ -1,2 +0,0 @@
-Victor Vieux <vieux@docker.com> (@vieux)
-# Johan Euphrosine <proppy@google.com> (@proppy)
@@ -32,7 +32,6 @@ import (
 	"github.com/docker/docker/pkg/listenbuffer"
 	"github.com/docker/docker/pkg/parsers"
 	"github.com/docker/docker/pkg/stdcopy"
-	"github.com/docker/docker/pkg/systemd"
 	"github.com/docker/docker/pkg/version"
 	"github.com/docker/docker/registry"
 	"github.com/docker/docker/utils"
@@ -135,7 +134,7 @@ func httpError(w http.ResponseWriter, err error) {
 	}

 	if err != nil {
-		log.Errorf("HTTP Error: statusCode=%d %s", statusCode, err.Error())
+		log.Errorf("HTTP Error: statusCode=%d %v", statusCode, err)
 		http.Error(w, err.Error(), statusCode)
 	}
 }
@@ -1083,6 +1082,10 @@ func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWrite
 	job.Setenv("forcerm", r.FormValue("forcerm"))
 	job.SetenvJson("authConfig", authConfig)
 	job.SetenvJson("configFile", configFile)
+	job.Setenv("memswap", r.FormValue("memswap"))
+	job.Setenv("memory", r.FormValue("memory"))
+	job.Setenv("cpusetcpus", r.FormValue("cpusetcpus"))
+	job.Setenv("cpushares", r.FormValue("cpushares"))

 	if err := job.Run(); err != nil {
 		if !job.Stdout.Used() {
@@ -1123,7 +1126,7 @@ func postContainersCopy(eng *engine.Engine, version version.Version, w http.Resp
 	job.Stdout.Add(w)
 	w.Header().Set("Content-Type", "application/x-tar")
 	if err := job.Run(); err != nil {
-		log.Errorf("%s", err.Error())
+		log.Errorf("%v", err)
 		if strings.Contains(strings.ToLower(err.Error()), "no such id") {
 			w.WriteHeader(http.StatusNotFound)
 		} else if strings.Contains(err.Error(), "no such file or directory") {
@@ -1406,43 +1409,6 @@ func ServeRequest(eng *engine.Engine, apiversion version.Version, w http.Respons
 	router.ServeHTTP(w, req)
 }

-// serveFd creates an http.Server and sets it up to serve given a socket activated
-// argument.
-func serveFd(addr string, job *engine.Job) error {
-	r := createRouter(job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("CorsHeaders"), job.Getenv("Version"))
-
-	ls, e := systemd.ListenFD(addr)
-	if e != nil {
-		return e
-	}
-
-	chErrors := make(chan error, len(ls))
-
-	// We don't want to start serving on these sockets until the
-	// daemon is initialized and installed. Otherwise required handlers
-	// won't be ready.
-	<-activationLock
-
-	// Since ListenFD will return one or more sockets we have
-	// to create a go func to spawn off multiple serves
-	for i := range ls {
-		listener := ls[i]
-		go func() {
-			httpSrv := http.Server{Handler: r}
-			chErrors <- httpSrv.Serve(listener)
-		}()
-	}
-
-	for i := 0; i < len(ls); i++ {
-		err := <-chErrors
-		if err != nil {
-			return err
-		}
-	}
-
-	return nil
-}
-
 func lookupGidByName(nameOrGid string) (int, error) {
 	groupFile, err := user.GetGroupPath()
 	if err != nil {
@@ -1457,13 +1423,21 @@ func lookupGidByName(nameOrGid string) (int, error) {
 	if groups != nil && len(groups) > 0 {
 		return groups[0].Gid, nil
 	}
+	gid, err := strconv.Atoi(nameOrGid)
+	if err == nil {
+		log.Warnf("Could not find GID %d", gid)
+		return gid, nil
+	}
 	return -1, fmt.Errorf("Group %s not found", nameOrGid)
 }

 func setupTls(cert, key, ca string, l net.Listener) (net.Listener, error) {
 	tlsCert, err := tls.LoadX509KeyPair(cert, key)
 	if err != nil {
-		return nil, fmt.Errorf("Couldn't load X509 key pair (%s, %s): %s. Key encrypted?",
+		if os.IsNotExist(err) {
+			return nil, fmt.Errorf("Could not load X509 key pair (%s, %s): %v", cert, key, err)
+		}
+		return nil, fmt.Errorf("Error reading X509 key pair (%s, %s): %q. Make sure the key is encrypted.",
 			cert, key, err)
 	}
 	tlsConfig := &tls.Config{
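The setupTls change above splits the error path in two: a missing file is reported as a load failure, anything else as a read failure (possibly an encrypted key). A self-contained sketch of the same split, assuming `tls.LoadX509KeyPair` surfaces the underlying file error so `os.IsNotExist` applies (the function name `loadPair` is illustrative):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

// loadPair mirrors the new error handling: a missing file yields a
// "could not load" error; any other failure is reported as a read error.
func loadPair(cert, key string) (tls.Certificate, error) {
	c, err := tls.LoadX509KeyPair(cert, key)
	if err != nil {
		if os.IsNotExist(err) {
			return tls.Certificate{}, fmt.Errorf("could not load X509 key pair (%s, %s): %v", cert, key, err)
		}
		return tls.Certificate{}, fmt.Errorf("error reading X509 key pair (%s, %s): %q", cert, key, err)
	}
	return c, nil
}

func main() {
	// Nonexistent paths exercise the os.IsNotExist branch.
	if _, err := loadPair("/no/such/cert.pem", "/no/such/key.pem"); err != nil {
		fmt.Println(err)
	}
}
```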
@@ -1477,7 +1451,7 @@ func setupTls(cert, key, ca string, l net.Listener) (net.Listener, error) {
 	certPool := x509.NewCertPool()
 	file, err := ioutil.ReadFile(ca)
 	if err != nil {
-		return nil, fmt.Errorf("Couldn't read CA certificate: %s", err)
+		return nil, fmt.Errorf("Could not read CA certificate: %v", err)
 	}
 	certPool.AppendCertsFromPEM(file)
 	tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
@@ -1617,15 +1591,3 @@ func ServeApi(job *engine.Job) engine.Status {

 	return engine.StatusOK
 }

-func AcceptConnections(job *engine.Job) engine.Status {
-	// Tell the init daemon we are accepting requests
-	go systemd.SdNotify("READY=1")
-
-	// close the lock so the listeners start accepting connections
-	if activationLock != nil {
-		close(activationLock)
-	}
-
-	return engine.StatusOK
-}
@@ -9,6 +9,7 @@ import (
 	"syscall"

 	"github.com/docker/docker/engine"
+	"github.com/docker/docker/pkg/systemd"
 )

 // NewServer sets up the required Server and does protocol specific checking.
@@ -50,3 +51,53 @@ func setupUnixHttp(addr string, job *engine.Job) (*HttpServer, error) {

 	return &HttpServer{&http.Server{Addr: addr, Handler: r}, l}, nil
 }

+// serveFd creates an http.Server and sets it up to serve given a socket activated
+// argument.
+func serveFd(addr string, job *engine.Job) error {
+	r := createRouter(job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("CorsHeaders"), job.Getenv("Version"))
+
+	ls, e := systemd.ListenFD(addr)
+	if e != nil {
+		return e
+	}
+
+	chErrors := make(chan error, len(ls))
+
+	// We don't want to start serving on these sockets until the
+	// daemon is initialized and installed. Otherwise required handlers
+	// won't be ready.
+	<-activationLock
+
+	// Since ListenFD will return one or more sockets we have
+	// to create a go func to spawn off multiple serves
+	for i := range ls {
+		listener := ls[i]
+		go func() {
+			httpSrv := http.Server{Handler: r}
+			chErrors <- httpSrv.Serve(listener)
+		}()
+	}
+
+	for i := 0; i < len(ls); i++ {
+		err := <-chErrors
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// Called through eng.Job("acceptconnections")
+func AcceptConnections(job *engine.Job) engine.Status {
+	// Tell the init daemon we are accepting requests
+	go systemd.SdNotify("READY=1")
+
+	// close the lock so the listeners start accepting connections
+	if activationLock != nil {
+		close(activationLock)
+	}
+
+	return engine.StatusOK
+}
@@ -18,3 +18,14 @@ func NewServer(proto, addr string, job *engine.Job) (Server, error) {
 		return nil, errors.New("Invalid protocol format. Windows only supports tcp.")
 	}
 }

+// Called through eng.Job("acceptconnections")
+func AcceptConnections(job *engine.Job) engine.Status {
+
+	// close the lock so the listeners start accepting connections
+	if activationLock != nil {
+		close(activationLock)
+	}
+
+	return engine.StatusOK
+}
@@ -1,3 +0,0 @@
-Tibor Vass <teabee89@gmail.com> (@tiborvass)
-Erik Hollensbe <github@hollensbe.org> (@erikh)
-Doug Davis <dug@us.ibm.com> (@duglin)
@@ -3,6 +3,7 @@ package command

 const (
 	Env        = "env"
+	Label      = "label"
 	Maintainer = "maintainer"
 	Add        = "add"
 	Copy       = "copy"

@@ -21,6 +22,7 @@ const (
 // Commands is list of all Dockerfile commands
 var Commands = map[string]struct{}{
 	Env:        {},
+	Label:      {},
 	Maintainer: {},
 	Add:        {},
 	Copy:       {},
@@ -85,6 +85,37 @@ func maintainer(b *Builder, args []string, attributes map[string]bool, original
 	return b.commit("", b.Config.Cmd, fmt.Sprintf("MAINTAINER %s", b.maintainer))
 }

+// LABEL some json data describing the image
+//
+// Sets the Label variable foo to bar,
+//
+func label(b *Builder, args []string, attributes map[string]bool, original string) error {
+	if len(args) == 0 {
+		return fmt.Errorf("LABEL requires at least one argument")
+	}
+	if len(args)%2 != 0 {
+		// should never get here, but just in case
+		return fmt.Errorf("Bad input to LABEL, too many args")
+	}
+
+	commitStr := "LABEL"
+
+	if b.Config.Labels == nil {
+		b.Config.Labels = map[string]string{}
+	}
+
+	for j := 0; j < len(args); j++ {
+		// name  ==> args[j]
+		// value ==> args[j+1]
+		newVar := args[j] + "=" + args[j+1] + ""
+		commitStr += " " + newVar
+
+		b.Config.Labels[args[j]] = args[j+1]
+		j++
+	}
+	return b.commit("", b.Config.Cmd, commitStr)
+}
+
 // ADD foo /path
 //
 // Add the file 'foo' to '/path'. Tarball and Remote URL (git, http) handling
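The new label() dispatcher consumes its arguments as name/value pairs, filling the image's label map and accumulating a commit string. A runnable distillation of that loop (the function name `applyLabels` and the standalone map are illustrative — the real code writes into `b.Config.Labels`):

```go
package main

import "fmt"

// applyLabels walks args as ["k1", "v1", "k2", "v2", ...], filling the
// label map and building the commit string, as the LABEL dispatcher does.
func applyLabels(args []string, labels map[string]string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("LABEL requires at least one argument")
	}
	if len(args)%2 != 0 {
		return "", fmt.Errorf("Bad input to LABEL, too many args")
	}
	commitStr := "LABEL"
	for j := 0; j < len(args); j += 2 {
		labels[args[j]] = args[j+1]
		commitStr += " " + args[j] + "=" + args[j+1]
	}
	return commitStr, nil
}

func main() {
	labels := map[string]string{}
	commit, _ := applyLabels([]string{"maintainer", "docker", "tier", "web"}, labels)
	fmt.Println(commit) // LABEL maintainer=docker tier=web
}
```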
@@ -213,8 +244,8 @@ func run(b *Builder, args []string, attributes map[string]bool, original string)

 	args = handleJsonArgs(args, attributes)

-	if len(args) == 1 {
-		args = append([]string{"/bin/sh", "-c"}, args[0])
+	if !attributes["json"] {
+		args = append([]string{"/bin/sh", "-c"}, args...)
 	}

 	runCmd := flag.NewFlagSet("run", flag.ContinueOnError)
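The run() fix keys the `/bin/sh -c` wrapping off the `json` attribute rather than the argument count, so a shell-form RUN keeps every word instead of only the first. The behavior in miniature (the helper name `wrapShell` is illustrative):

```go
package main

import "fmt"

// wrapShell mirrors the corrected behavior: shell-form (non-JSON) commands
// are handed to /bin/sh -c with all of their arguments; exec-form (JSON)
// commands pass through untouched.
func wrapShell(args []string, jsonForm bool) []string {
	if !jsonForm {
		return append([]string{"/bin/sh", "-c"}, args...)
	}
	return args
}

func main() {
	fmt.Println(wrapShell([]string{"echo hello world"}, false))
	// [/bin/sh -c echo hello world]
}
```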
@@ -339,11 +370,19 @@ func expose(b *Builder, args []string, attributes map[string]bool, original stri
 		b.Config.ExposedPorts = make(nat.PortSet)
 	}

-	ports, _, err := nat.ParsePortSpecs(append(portsTab, b.Config.PortSpecs...))
+	ports, bindingMap, err := nat.ParsePortSpecs(append(portsTab, b.Config.PortSpecs...))
 	if err != nil {
 		return err
 	}

+	for _, bindings := range bindingMap {
+		if bindings[0].HostIp != "" || bindings[0].HostPort != "" {
+			fmt.Fprintf(b.ErrStream, " ---> Using Dockerfile's EXPOSE instruction"+
+				" to map host ports to container ports (ip:hostPort:containerPort) is deprecated.\n"+
+				" Please use -p to publish the ports.\n")
+		}
+	}
+
 	// instead of using ports directly, we build a list of ports and sort it so
 	// the order is consistent. This prevents cache burst where map ordering
 	// changes between builds
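The deprecation warning above fires when an EXPOSE spec carries host-mapping syntax (`ip:hostPort:containerPort` or `hostPort:containerPort`). A deliberately simplified check of that condition — this is a sketch of the trigger, not the `nat` package's actual parser, which also handles protocols and ranges:

```go
package main

import (
	"fmt"
	"strings"
)

// hasHostBinding reports whether a port spec attempts a host mapping,
// i.e. anything beyond a bare "port[/proto]" form.
func hasHostBinding(spec string) bool {
	return strings.Contains(spec, ":")
}

func main() {
	for _, s := range []string{"8080", "8080/tcp", "80:8080", "127.0.0.1:80:8080"} {
		fmt.Printf("%-20s host binding: %v\n", s, hasHostBinding(s))
	}
}
```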
@@ -49,6 +49,7 @@ var (
 // Environment variable interpolation will happen on these statements only.
 var replaceEnvAllowed = map[string]struct{}{
 	command.Env:     {},
+	command.Label:   {},
 	command.Add:     {},
 	command.Copy:    {},
 	command.Workdir: {},

@@ -62,6 +63,7 @@ var evaluateTable map[string]func(*Builder, []string, map[string]bool, string) e
 func init() {
 	evaluateTable = map[string]func(*Builder, []string, map[string]bool, string) error{
 		command.Env:        env,
+		command.Label:      label,
 		command.Maintainer: maintainer,
 		command.Add:        add,
 		command.Copy:       dispatchCopy, // copy() is a go builtin
@@ -123,6 +125,12 @@ type Builder struct {
 	context     tarsum.TarSum // the context is a tarball that is uploaded by the client
 	contextPath string        // the path of the temporary directory the local context is unpacked to (server side)
 	noBaseImage bool          // indicates that this build does not start from any base image, but is being built from an empty file system.
+
+	// Set resource restrictions for build containers
+	cpuSetCpus string
+	cpuShares  int64
+	memory     int64
+	memorySwap int64
 }

 // Run the builder with the context. This is the lynchpin of this package. This
@@ -154,6 +162,7 @@ func (b *Builder) Run(context io.Reader) (string, error) {

 	// some initializations that would not have been supplied by the caller.
 	b.Config = &runconfig.Config{}

 	b.TmpContainers = map[string]struct{}{}

 	for i, n := range b.dockerfile.Children {
|
@@ -309,7 +318,5 @@ func (b *Builder) dispatch(stepN int, ast *parser.Node) error {
 		return f(b, strList, attrs, original)
 	}

-	fmt.Fprintf(b.ErrStream, "# Skipping unknown instruction %s\n", strings.ToUpper(cmd))
-
-	return nil
+	return fmt.Errorf("Unknown instruction: %s", strings.ToUpper(cmd))
 }
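dispatch() now fails hard on an unknown Dockerfile instruction instead of printing a skip notice and returning nil. The behavioral difference in miniature (the stub and its table contents are illustrative, not the real evaluator):

```go
package main

import (
	"fmt"
	"strings"
)

// dispatchStub returns nil for known commands and a hard error for unknown
// ones, matching the new dispatch() behavior.
func dispatchStub(cmd string, known map[string]bool) error {
	if known[strings.ToLower(cmd)] {
		return nil
	}
	return fmt.Errorf("Unknown instruction: %s", strings.ToUpper(cmd))
}

func main() {
	known := map[string]bool{"env": true, "label": true, "run": true}
	fmt.Println(dispatchStub("LABEL", known)) // <nil>
	fmt.Println(dispatchStub("BOGUS", known)) // Unknown instruction: BOGUS
}
```

Turning the skip into an error means a typoed instruction now fails the build instead of being silently ignored.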
@@ -28,11 +28,13 @@ import (
 	"github.com/docker/docker/pkg/common"
 	"github.com/docker/docker/pkg/ioutils"
 	"github.com/docker/docker/pkg/parsers"
+	"github.com/docker/docker/pkg/progressreader"
 	"github.com/docker/docker/pkg/symlink"
 	"github.com/docker/docker/pkg/system"
 	"github.com/docker/docker/pkg/tarsum"
 	"github.com/docker/docker/pkg/urlutil"
 	"github.com/docker/docker/registry"
+	"github.com/docker/docker/runconfig"
 	"github.com/docker/docker/utils"
 )
@@ -268,7 +270,15 @@ func calcCopyInfo(b *Builder, cmdName string, cInfos *[]*copyInfo, origPath stri
 	}

 	// Download and dump result to tmp file
-	if _, err := io.Copy(tmpFile, utils.ProgressReader(resp.Body, int(resp.ContentLength), b.OutOld, b.StreamFormatter, true, "", "Downloading")); err != nil {
+	if _, err := io.Copy(tmpFile, progressreader.New(progressreader.Config{
+		In:        resp.Body,
+		Out:       b.OutOld,
+		Formatter: b.StreamFormatter,
+		Size:      int(resp.ContentLength),
+		NewLines:  true,
+		ID:        "",
+		Action:    "Downloading",
+	})); err != nil {
 		tmpFile.Close()
 		return err
 	}
@@ -528,10 +538,17 @@ func (b *Builder) create() (*daemon.Container, error) {
 	}
 	b.Config.Image = b.image

+	hostConfig := &runconfig.HostConfig{
+		CpuShares:  b.cpuShares,
+		CpusetCpus: b.cpuSetCpus,
+		Memory:     b.memory,
+		MemorySwap: b.memorySwap,
+	}
+
 	config := *b.Config

 	// Create the container
-	c, warnings, err := b.Daemon.Create(b.Config, nil, "")
+	c, warnings, err := b.Daemon.Create(b.Config, hostConfig, "")
 	if err != nil {
 		return nil, err
 	}
@@ -725,7 +742,7 @@ func (b *Builder) clearTmp() {
 	}

 	if err := b.Daemon.Rm(tmp); err != nil {
-		fmt.Fprintf(b.OutStream, "Error removing intermediate container %s: %s\n", common.TruncateID(c), err.Error())
+		fmt.Fprintf(b.OutStream, "Error removing intermediate container %s: %v\n", common.TruncateID(c), err)
 		return
 	}
 	b.Daemon.DeleteVolumes(tmp.VolumePaths())
@@ -57,6 +57,10 @@ func (b *BuilderJob) CmdBuild(job *engine.Job) engine.Status {
 		rm         = job.GetenvBool("rm")
 		forceRm    = job.GetenvBool("forcerm")
 		pull       = job.GetenvBool("pull")
+		memory     = job.GetenvInt64("memory")
+		memorySwap = job.GetenvInt64("memswap")
+		cpuShares  = job.GetenvInt64("cpushares")
+		cpuSetCpus = job.Getenv("cpusetcpus")
 		authConfig = &registry.AuthConfig{}
 		configFile = &registry.ConfigFile{}
 		tag        string

@@ -145,6 +149,10 @@ func (b *BuilderJob) CmdBuild(job *engine.Job) engine.Status {
 		AuthConfig:     authConfig,
 		AuthConfigFile: configFile,
 		dockerfileName: dockerfileName,
+		cpuShares:      cpuShares,
+		cpuSetCpus:     cpuSetCpus,
+		memory:         memory,
+		memorySwap:     memorySwap,
 	}

 	id, err := builder.Run(context)
@@ -44,10 +44,10 @@ func parseSubCommand(rest string) (*Node, map[string]bool, error) {

 // parse environment like statements. Note that this does *not* handle
 // variable interpolation, which will be handled in the evaluator.
-func parseEnv(rest string) (*Node, map[string]bool, error) {
+func parseNameVal(rest string, key string) (*Node, map[string]bool, error) {
 	// This is kind of tricky because we need to support the old
-	// variant: ENV name value
-	// as well as the new one: ENV name=value ...
+	// variant: KEY name value
+	// as well as the new one: KEY name=value ...
 	// The trigger to know which one is being used will be whether we hit
 	// a space or = first. space ==> old, "=" ==> new
@@ -137,10 +137,10 @@ func parseEnv(rest string) (*Node, map[string]bool, error) {
 	}

 	if len(words) == 0 {
-		return nil, nil, fmt.Errorf("ENV requires at least one argument")
+		return nil, nil, nil
 	}

-	// Old format (ENV name value)
+	// Old format (KEY name value)
 	var rootnode *Node

 	if !strings.Contains(words[0], "=") {

@@ -149,7 +149,7 @@ func parseEnv(rest string) (*Node, map[string]bool, error) {
 		strs := TOKEN_WHITESPACE.Split(rest, 2)

 		if len(strs) < 2 {
-			return nil, nil, fmt.Errorf("ENV must have two arguments")
+			return nil, nil, fmt.Errorf(key + " must have two arguments")
 		}

 		node.Value = strs[0]
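parseNameVal distinguishes the legacy `KEY name value` form from `KEY name=value ...` by whether the first token contains an `=`, as its comment describes. That trigger condition in isolation (the helper name `isOldVariant` is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// isOldVariant reports whether a KEY statement uses the legacy
// space-separated form: the first token carries no '='.
func isOldVariant(rest string) bool {
	fields := strings.Fields(rest)
	if len(fields) == 0 {
		return false
	}
	return !strings.Contains(fields[0], "=")
}

func main() {
	fmt.Println(isOldVariant("name value"))         // true
	fmt.Println(isOldVariant("name=value other=1")) // false
}
```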
@@ -182,6 +182,14 @@ func parseEnv(rest string) (*Node, map[string]bool, error) {
 	return rootnode, nil, nil
 }

+func parseEnv(rest string) (*Node, map[string]bool, error) {
+	return parseNameVal(rest, "ENV")
+}
+
+func parseLabel(rest string) (*Node, map[string]bool, error) {
+	return parseNameVal(rest, "LABEL")
+}
+
 // parses a whitespace-delimited set of arguments. The result is effectively a
 // linked list of string arguments.
 func parseStringsWhitespaceDelimited(rest string) (*Node, map[string]bool, error) {
@@ -50,6 +50,7 @@ func init() {
 		command.Onbuild:    parseSubCommand,
 		command.Workdir:    parseString,
 		command.Env:        parseEnv,
+		command.Label:      parseLabel,
 		command.Maintainer: parseString,
 		command.From:       parseString,
 		command.Add:        parseMaybeJSONToList,
@@ -11,7 +11,7 @@ import (
 const testDir = "testfiles"
 const negativeTestDir = "testfiles-negative"

-func getDirs(t *testing.T, dir string) []os.FileInfo {
+func getDirs(t *testing.T, dir string) []string {
 	f, err := os.Open(dir)
 	if err != nil {
 		t.Fatal(err)

@@ -19,7 +19,7 @@ func getDirs(t *testing.T, dir string) []os.FileInfo {

 	defer f.Close()

-	dirs, err := f.Readdir(0)
+	dirs, err := f.Readdirnames(0)
 	if err != nil {
 		t.Fatal(err)
 	}

@@ -29,16 +29,16 @@ func getDirs(t *testing.T, dir string) []os.FileInfo {

 func TestTestNegative(t *testing.T) {
 	for _, dir := range getDirs(t, negativeTestDir) {
-		dockerfile := filepath.Join(negativeTestDir, dir.Name(), "Dockerfile")
+		dockerfile := filepath.Join(negativeTestDir, dir, "Dockerfile")

 		df, err := os.Open(dockerfile)
 		if err != nil {
-			t.Fatalf("Dockerfile missing for %s: %s", dir.Name(), err.Error())
+			t.Fatalf("Dockerfile missing for %s: %v", dir, err)
 		}

 		_, err = Parse(df)
 		if err == nil {
-			t.Fatalf("No error parsing broken dockerfile for %s", dir.Name())
+			t.Fatalf("No error parsing broken dockerfile for %s", dir)
 		}

 		df.Close()

@@ -47,29 +47,29 @@ func TestTestNegative(t *testing.T) {

 func TestTestData(t *testing.T) {
 	for _, dir := range getDirs(t, testDir) {
-		dockerfile := filepath.Join(testDir, dir.Name(), "Dockerfile")
-		resultfile := filepath.Join(testDir, dir.Name(), "result")
+		dockerfile := filepath.Join(testDir, dir, "Dockerfile")
+		resultfile := filepath.Join(testDir, dir, "result")

 		df, err := os.Open(dockerfile)
 		if err != nil {
-			t.Fatalf("Dockerfile missing for %s: %s", dir.Name(), err.Error())
+			t.Fatalf("Dockerfile missing for %s: %v", dir, err)
 		}
 		defer df.Close()

 		ast, err := Parse(df)
 		if err != nil {
-			t.Fatalf("Error parsing %s's dockerfile: %s", dir.Name(), err.Error())
+			t.Fatalf("Error parsing %s's dockerfile: %v", dir, err)
 		}

 		content, err := ioutil.ReadFile(resultfile)
 		if err != nil {
-			t.Fatalf("Error reading %s's result file: %s", dir.Name(), err.Error())
+			t.Fatalf("Error reading %s's result file: %v", dir, err)
 		}

 		if ast.Dump()+"\n" != string(content) {
 			fmt.Fprintln(os.Stderr, "Result:\n"+ast.Dump())
 			fmt.Fprintln(os.Stderr, "Expected:\n"+string(content))
-			t.Fatalf("%s: AST dump of dockerfile does not match result", dir.Name())
+			t.Fatalf("%s: AST dump of dockerfile does not match result", dir)
 		}
 	}
 }
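The test helper above now returns plain names via Readdirnames rather than `os.FileInfo` values from Readdir, which skips a stat of every entry when only the name is used. The updated call in isolation (the wrapper name `listNames` is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// listNames mirrors the updated getDirs helper: open the directory and read
// entry names only, with no per-entry Stat.
func listNames(dir string) ([]string, error) {
	f, err := os.Open(dir)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// 0 means "read all remaining entries".
	return f.Readdirnames(0)
}

func main() {
	names, err := listNames(".")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(names))
}
```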
@@ -138,7 +138,7 @@ fi
 flags=(
 	NAMESPACES {NET,PID,IPC,UTS}_NS
 	DEVPTS_MULTIPLE_INSTANCES
-	CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED
+	CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED CPUSETS
 	MACVLAN VETH BRIDGE
 	NF_NAT_IPV4 IP_NF_FILTER IP_NF_TARGET_MASQUERADE
 	NETFILTER_XT_MATCH_{ADDRTYPE,CONNTRACK}
@@ -131,6 +131,7 @@ __docker_capabilities() {
 		ALL
 		AUDIT_CONTROL
 		AUDIT_WRITE
+		AUDIT_READ
 		BLOCK_SUSPEND
 		CHOWN
 		DAC_OVERRIDE
@@ -188,7 +189,6 @@ __docker_signals() {

 _docker_docker() {
 	local boolean_options="
-		--api-enable-cors
 		--daemon -d
 		--debug -D
 		--help -h
@@ -238,7 +238,7 @@ _docker_docker() {
 _docker_attach() {
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--no-stdin --sig-proxy" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--help --no-stdin --sig-proxy" -- "$cur" ) )
 			;;
 		*)
 			local counter="$(__docker_pos_first_nonflag)"
@@ -255,11 +255,15 @@ _docker_build() {
 			__docker_image_repos_and_tags
 			return
 			;;
+		--file|-f)
+			_filedir
+			return
+			;;
 	esac

 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--force-rm --no-cache --quiet -q --rm --tag -t" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--file -f --force-rm --help --no-cache --pull --quiet -q --rm --tag -t" -- "$cur" ) )
 			;;
 		*)
 			local counter="$(__docker_pos_first_nonflag '--tag|-t')"
@@ -272,17 +276,17 @@ _docker_build() {

 _docker_commit() {
 	case "$prev" in
-		--author|-a|--message|-m|--run)
+		--author|-a|--change|-c|--message|-m)
 			return
 			;;
 	esac

 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--author -a --message -m --run" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--author -a --change -c --help --message -m --pause -p" -- "$cur" ) )
 			;;
 		*)
-			local counter=$(__docker_pos_first_nonflag '--author|-a|--message|-m|--run')
+			local counter=$(__docker_pos_first_nonflag '--author|-a|--change|-c|--message|-m')

 			if [ $cword -eq $counter ]; then
 				__docker_containers_all
@@ -299,6 +303,11 @@ _docker_commit() {
 }

 _docker_cp() {
+	case "$cur" in
+		-*)
+			COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
+			;;
+		*)
 	local counter=$(__docker_pos_first_nonflag)
 	if [ $cword -eq $counter ]; then
 		case "$cur" in

@@ -319,6 +328,8 @@ _docker_cp() {
 		_filedir
 		return
 	fi
+			;;
+	esac
 }

 _docker_create() {
@@ -326,22 +337,53 @@
 }

 _docker_diff() {
+	case "$cur" in
+		-*)
+			COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
+			;;
+		*)
 	local counter=$(__docker_pos_first_nonflag)
 	if [ $cword -eq $counter ]; then
 		__docker_containers_all
 	fi
+			;;
+	esac
 }

 _docker_events() {
 	case "$prev" in
-		--since)
+		--filter|-f)
+			COMPREPLY=( $( compgen -S = -W "container event image" -- "$cur" ) )
+			compopt -o nospace
+			return
+			;;
+		--since|--until)
+			return
+			;;
+	esac
+
+	# "=" gets parsed to a word and assigned to either $cur or $prev depending on whether
+	# it is the last character or not. So we search for "xxx=" in the last two words.
+	case "${words[$cword-2]}$prev=" in
+		*container=*)
+			cur="${cur#=}"
+			__docker_containers_all
+			return
+			;;
+		*event=*)
+			COMPREPLY=( $( compgen -W "create destroy die export kill pause restart start stop unpause" -- "${cur#=}" ) )
+			return
+			;;
+		*image=*)
+			cur="${cur#=}"
+			__docker_image_repos_and_tags_and_ids
 			return
 			;;
 	esac

 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--since" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--filter -f --help --since --until" -- "$cur" ) )
 			;;
 	esac
 }
@ -349,7 +391,7 @@ _docker_events() {
|
||||||
_docker_exec() {
|
_docker_exec() {
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--detach -d --interactive -i -t --tty" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--detach -d --help --interactive -i -t --tty" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_running
|
__docker_containers_running
|
||||||
|
@ -358,10 +400,17 @@ _docker_exec() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_export() {
|
_docker_export() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_help() {
|
_docker_help() {
|
||||||
|
@ -374,7 +423,7 @@ _docker_help() {
|
||||||
_docker_history() {
|
_docker_history() {
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--no-trunc --quiet -q" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --no-trunc --quiet -q" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
|
@ -386,9 +435,23 @@ _docker_history() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_images() {
|
_docker_images() {
|
||||||
|
case "$prev" in
|
||||||
|
--filter|-f)
|
||||||
|
COMPREPLY=( $( compgen -W "dangling=true" -- "$cur" ) )
|
||||||
|
return
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
|
||||||
|
case "${words[$cword-2]}$prev=" in
|
||||||
|
*dangling=*)
|
||||||
|
COMPREPLY=( $( compgen -W "true false" -- "${cur#=}" ) )
|
||||||
|
return
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--all -a --no-trunc --quiet -q" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--all -a --filter -f --help --no-trunc --quiet -q" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
|
@ -400,6 +463,11 @@ _docker_images() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_import() {
|
_docker_import() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
return
|
return
|
||||||
|
@ -410,10 +478,16 @@ _docker_import() {
|
||||||
__docker_image_repos_and_tags
|
__docker_image_repos_and_tags
|
||||||
return
|
return
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_info() {
|
_docker_info() {
|
||||||
return
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_inspect() {
|
_docker_inspect() {
|
||||||
|
@ -425,7 +499,7 @@ _docker_inspect() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--format -f" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--format -f --help" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_and_images
|
__docker_containers_and_images
|
||||||
|
@ -443,7 +517,7 @@ _docker_kill() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--signal -s" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --signal -s" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_running
|
__docker_containers_running
|
||||||
|
@ -461,7 +535,7 @@ _docker_load() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--input -i" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --input -i" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
}
|
}
|
||||||
|
@ -475,18 +549,32 @@ _docker_login() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--email -e --password -p --username -u" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--email -e --help --password -p --username -u" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
}
|
||||||
|
|
||||||
|
_docker_logout() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_logs() {
|
_docker_logs() {
|
||||||
|
case "$prev" in
|
||||||
|
--tail)
|
||||||
|
return
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--follow -f" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--follow -f --help --tail --timestamps -t" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag '--tail')
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
fi
|
fi
|
||||||
|
@ -495,17 +583,31 @@ _docker_logs() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_pause() {
|
_docker_pause() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_pauseable
|
__docker_containers_pauseable
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_port() {
|
_docker_port() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_ps() {
|
_docker_ps() {
|
||||||
|
@ -513,31 +615,37 @@ _docker_ps() {
|
||||||
--before|--since)
|
--before|--since)
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
;;
|
;;
|
||||||
|
--filter|-f)
|
||||||
|
COMPREPLY=( $( compgen -S = -W "exited status" -- "$cur" ) )
|
||||||
|
compopt -o nospace
|
||||||
|
return
|
||||||
|
;;
|
||||||
-n)
|
-n)
|
||||||
return
|
return
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
|
|
||||||
case "$cur" in
|
case "${words[$cword-2]}$prev=" in
|
||||||
-*)
|
*status=*)
|
||||||
COMPREPLY=( $( compgen -W "--all -a --before --latest -l --no-trunc -n --quiet -q --size -s --since" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "exited paused restarting running" -- "${cur#=}" ) )
|
||||||
;;
|
|
||||||
esac
|
|
||||||
}
|
|
||||||
|
|
||||||
_docker_pull() {
|
|
||||||
case "$prev" in
|
|
||||||
--tag|-t)
|
|
||||||
return
|
return
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--tag -t" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--all -a --before --filter -f --help --latest -l -n --no-trunc --quiet -q --size -s --since" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
}
|
||||||
|
|
||||||
|
_docker_pull() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--all-tags -a --help" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag '--tag|-t')
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_image_repos_and_tags
|
__docker_image_repos_and_tags
|
||||||
fi
|
fi
|
||||||
|
@ -546,17 +654,31 @@ _docker_pull() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_push() {
|
_docker_push() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_image_repos_and_tags
|
__docker_image_repos_and_tags
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_rename() {
|
_docker_rename() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_restart() {
|
_docker_restart() {
|
||||||
|
@ -568,7 +690,7 @@ _docker_restart() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--time -t" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --time -t" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
|
@ -579,8 +701,7 @@ _docker_restart() {
|
||||||
_docker_rm() {
|
_docker_rm() {
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--force -f --link -l --volumes -v" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--force -f --help --link -l --volumes -v" -- "$cur" ) )
|
||||||
return
|
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
for arg in "${COMP_WORDS[@]}"; do
|
for arg in "${COMP_WORDS[@]}"; do
|
||||||
|
@ -592,13 +713,19 @@ _docker_rm() {
|
||||||
esac
|
esac
|
||||||
done
|
done
|
||||||
__docker_containers_stopped
|
__docker_containers_stopped
|
||||||
return
|
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_rmi() {
|
_docker_rmi() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--force -f --help --no-prune" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
__docker_image_repos_and_tags_and_ids
|
__docker_image_repos_and_tags_and_ids
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_run() {
|
_docker_run() {
|
||||||
|
@ -623,21 +750,26 @@ _docker_run() {
|
||||||
--lxc-conf
|
--lxc-conf
|
||||||
--mac-address
|
--mac-address
|
||||||
--memory -m
|
--memory -m
|
||||||
|
--memory-swap
|
||||||
--name
|
--name
|
||||||
--net
|
--net
|
||||||
|
--pid
|
||||||
--publish -p
|
--publish -p
|
||||||
--restart
|
--restart
|
||||||
--security-opt
|
--security-opt
|
||||||
--user -u
|
--user -u
|
||||||
|
--ulimit
|
||||||
--volumes-from
|
--volumes-from
|
||||||
--volume -v
|
--volume -v
|
||||||
--workdir -w
|
--workdir -w
|
||||||
"
|
"
|
||||||
|
|
||||||
local all_options="$options_with_args
|
local all_options="$options_with_args
|
||||||
|
--help
|
||||||
--interactive -i
|
--interactive -i
|
||||||
--privileged
|
--privileged
|
||||||
--publish-all -P
|
--publish-all -P
|
||||||
|
--read-only
|
||||||
--tty -t
|
--tty -t
|
||||||
"
|
"
|
||||||
|
|
||||||
|
@ -794,7 +926,7 @@ _docker_save() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "-o --output" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --output -o" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_image_repos_and_tags_and_ids
|
__docker_image_repos_and_tags_and_ids
|
||||||
|
@ -811,7 +943,7 @@ _docker_search() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--automated --no-trunc --stars -s" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--automated --help --no-trunc --stars -s" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
esac
|
esac
|
||||||
}
|
}
|
||||||
|
@ -819,7 +951,7 @@ _docker_search() {
|
||||||
_docker_start() {
|
_docker_start() {
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--attach -a --interactive -i" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--attach -a --help --interactive -i" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_stopped
|
__docker_containers_stopped
|
||||||
|
@ -828,7 +960,14 @@ _docker_start() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_stats() {
|
_docker_stats() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
__docker_containers_running
|
__docker_containers_running
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_stop() {
|
_docker_stop() {
|
||||||
|
@ -840,7 +979,7 @@ _docker_stop() {
|
||||||
|
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--time -t" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--help --time -t" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
__docker_containers_running
|
__docker_containers_running
|
||||||
|
@ -851,7 +990,7 @@ _docker_stop() {
|
||||||
_docker_tag() {
|
_docker_tag() {
|
||||||
case "$cur" in
|
case "$cur" in
|
||||||
-*)
|
-*)
|
||||||
COMPREPLY=( $( compgen -W "--force -f" -- "$cur" ) )
|
COMPREPLY=( $( compgen -W "--force -f --help" -- "$cur" ) )
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
|
@ -871,25 +1010,50 @@ _docker_tag() {
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_unpause() {
|
_docker_unpause() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_unpauseable
|
__docker_containers_unpauseable
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_top() {
|
_docker_top() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
local counter=$(__docker_pos_first_nonflag)
|
local counter=$(__docker_pos_first_nonflag)
|
||||||
if [ $cword -eq $counter ]; then
|
if [ $cword -eq $counter ]; then
|
||||||
__docker_containers_running
|
__docker_containers_running
|
||||||
fi
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_version() {
|
_docker_version() {
|
||||||
return
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker_wait() {
|
_docker_wait() {
|
||||||
|
case "$cur" in
|
||||||
|
-*)
|
||||||
|
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
|
||||||
|
;;
|
||||||
|
*)
|
||||||
__docker_containers_all
|
__docker_containers_all
|
||||||
|
;;
|
||||||
|
esac
|
||||||
}
|
}
|
||||||
|
|
||||||
_docker() {
|
_docker() {
|
||||||
|
@ -910,11 +1074,11 @@ _docker() {
|
||||||
images
|
images
|
||||||
import
|
import
|
||||||
info
|
info
|
||||||
insert
|
|
||||||
inspect
|
inspect
|
||||||
kill
|
kill
|
||||||
load
|
load
|
||||||
login
|
login
|
||||||
|
logout
|
||||||
logs
|
logs
|
||||||
pause
|
pause
|
||||||
port
|
port
|
||||||
|
@ -939,8 +1103,10 @@ _docker() {
|
||||||
)
|
)
|
||||||
|
|
||||||
local main_options_with_args="
|
local main_options_with_args="
|
||||||
|
--api-cors-header
|
||||||
--bip
|
--bip
|
||||||
--bridge -b
|
--bridge -b
|
||||||
|
--default-ulimit
|
||||||
--dns
|
--dns
|
||||||
--dns-search
|
--dns-search
|
||||||
--exec-driver -e
|
--exec-driver -e
|
||||||
|
|
|
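Nearly every completion function touched in the diff above follows the same two-arm pattern: when the current word starts with `-`, a flag list is filtered through `compgen -W`; otherwise container or image names are completed. The flag arm can be exercised on its own as a minimal sketch (the word list is the one `_docker_rmi` uses; `cur` is normally set by bash's completion machinery, here it is assigned by hand):

```shell
#!/bin/bash
# Sketch of the "-*)" arm: compgen -W prints only the words from the
# list that start with the current word, and COMPREPLY hands the
# candidates to readline.
cur="--he"   # what the user has typed so far
COMPREPLY=( $( compgen -W "--force -f --help --no-prune" -- "$cur" ) )
printf '%s\n' "${COMPREPLY[@]}"
```

With this input, typing `docker rmi --he<TAB>` would offer only `--help`.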
```diff
@@ -16,7 +16,7 @@

 function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand'
     for i in (commandline -opc)
-        if contains -- $i attach build commit cp create diff events exec export history images import info insert inspect kill load login logout logs pause port ps pull push restart rm rmi run save search start stop tag top unpause version wait
+        if contains -- $i attach build commit cp create diff events exec export history images import info inspect kill load login logout logs pause port ps pull push rename restart rm rmi run save search start stop tag top unpause version wait
             return 1
         end
     end
@@ -43,7 +43,7 @@ function __fish_print_docker_repositories --description 'Print a list of docker
 end

 # common options
-complete -c docker -f -n '__fish_docker_no_subcommand' -l api-enable-cors -d 'Enable CORS headers in the remote API'
+complete -c docker -f -n '__fish_docker_no_subcommand' -l api-cors-header -d "Set CORS headers in the remote API. Default is cors disabled"
 complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d 'Attach containers to a pre-existing network bridge'
 complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b"
 complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode'
```
```diff
@@ -270,11 +270,6 @@ __docker_subcommand () {
                 {-q,--quiet}'[Only show numeric IDs]' \
                 ':repository:__docker_repositories'
             ;;
-        (inspect)
-            _arguments \
-                {-f,--format=-}'[Format the output using the given go template]:template: ' \
-                '*:containers:__docker_containers'
-            ;;
         (import)
             _arguments \
                 ':URL:(- http:// file://)' \
@@ -282,15 +277,10 @@ __docker_subcommand () {
             ;;
         (info)
             ;;
-        (import)
+        (inspect)
             _arguments \
-                ':URL:(- http:// file://)' \
-                ':repository:__docker_repositories_with_tags'
-            ;;
-        (insert)
-            _arguments '1:containers:__docker_containers' \
-                '2:URL:(http:// file://)' \
-                '3:file:_files'
+                {-f,--format=-}'[Format the output using the given go template]:template: ' \
+                '*:containers:__docker_containers'
             ;;
         (kill)
             _arguments \
```
```diff
@@ -0,0 +1,104 @@
+#!/bin/bash
+set -e
+
+# hello-world  latest  ef872312fe1b  3 months ago  910 B
+# hello-world  latest  ef872312fe1bbc5e05aae626791a47ee9b032efa8f3bda39cc0be7b56bfe59b9  3 months ago  910 B
+
+# debian  latest  f6fab3b798be  10 weeks ago  85.1 MB
+# debian  latest  f6fab3b798be3174f45aa1eb731f8182705555f89c9026d8c1ef230cbf8301dd  10 weeks ago  85.1 MB
+
+if ! command -v curl &> /dev/null; then
+	echo >&2 'error: "curl" not found!'
+	exit 1
+fi
+
+usage() {
+	echo "usage: $0 dir image[:tag][@image-id] ..."
+	echo "   ie: $0 /tmp/hello-world hello-world"
+	echo "       $0 /tmp/debian-jessie debian:jessie"
+	echo "       $0 /tmp/old-hello-world hello-world@ef872312fe1bbc5e05aae626791a47ee9b032efa8f3bda39cc0be7b56bfe59b9"
+	echo "       $0 /tmp/old-debian debian:latest@f6fab3b798be3174f45aa1eb731f8182705555f89c9026d8c1ef230cbf8301dd"
+	[ -z "$1" ] || exit "$1"
+}
+
+dir="$1" # dir for building tar in
+shift || usage 1 >&2
+
+[ $# -gt 0 -a "$dir" ] || usage 2 >&2
+mkdir -p "$dir"
+
+# hacky workarounds for Bash 3 support (no associative arrays)
+images=()
+rm -f "$dir"/tags-*.tmp
+# repositories[busybox]='"latest": "...", "ubuntu-14.04": "..."'
+
+while [ $# -gt 0 ]; do
+	imageTag="$1"
+	shift
+	image="${imageTag%%[:@]*}"
+	tag="${imageTag#*:}"
+	imageId="${tag##*@}"
+	[ "$imageId" != "$tag" ] || imageId=
+	[ "$tag" != "$imageTag" ] || tag='latest'
+	tag="${tag%@*}"
+
+	token="$(curl -sSL -o /dev/null -D- -H 'X-Docker-Token: true' "https://index.docker.io/v1/repositories/$image/images" | tr -d '\r' | awk -F ': *' '$1 == "X-Docker-Token" { print $2 }')"
+
+	if [ -z "$imageId" ]; then
+		imageId="$(curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/repositories/$image/tags/$tag")"
+		imageId="${imageId//\"/}"
+	fi
+
+	ancestryJson="$(curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/ancestry")"
+	if [ "${ancestryJson:0:1}" != '[' ]; then
+		echo >&2 "error: /v1/images/$imageId/ancestry returned something unexpected:"
+		echo >&2 "  $ancestryJson"
+		exit 1
+	fi
+
+	IFS=','
+	ancestry=( ${ancestryJson//[\[\] \"]/} )
+	unset IFS
+
+	if [ -s "$dir/tags-$image.tmp" ]; then
+		echo -n ', ' >> "$dir/tags-$image.tmp"
+	else
+		images=( "${images[@]}" "$image" )
+	fi
+	echo -n '"'"$tag"'": "'"$imageId"'"' >> "$dir/tags-$image.tmp"
+
+	echo "Downloading '$imageTag' (${#ancestry[@]} layers)..."
+	for imageId in "${ancestry[@]}"; do
+		mkdir -p "$dir/$imageId"
+		echo '1.0' > "$dir/$imageId/VERSION"
+
+		curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/json" -o "$dir/$imageId/json"
+
+		# TODO figure out why "-C -" doesn't work here
+		# "curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume."
+		# "HTTP/1.1 416 Requested Range Not Satisfiable"
+		if [ -f "$dir/$imageId/layer.tar" ]; then
+			# TODO hackpatch for no -C support :'(
+			echo "skipping existing ${imageId:0:12}"
+			continue
+		fi
+		curl -SL --progress -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/layer" -o "$dir/$imageId/layer.tar" # -C -
+	done
+	echo
+done
+
+echo -n '{' > "$dir/repositories"
+firstImage=1
+for image in "${images[@]}"; do
+	[ "$firstImage" ] || echo -n ',' >> "$dir/repositories"
+	firstImage=
+	echo -n $'\n\t' >> "$dir/repositories"
+	echo -n '"'"$image"'": { '"$(cat "$dir/tags-$image.tmp")"' }' >> "$dir/repositories"
+done
+echo -n $'\n}\n' >> "$dir/repositories"
+
+rm -f "$dir"/tags-*.tmp
+
+echo "Download of images into '$dir' complete."
+echo "Use something like the following to load the result into a Docker daemon:"
+echo "  tar -cC '$dir' . | docker load"
```
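The download script above splits each `image[:tag][@image-id]` argument with plain bash parameter expansion rather than external tools. The same expansions, isolated with a sample argument (variable names match the script; the short image id here is just an example value):

```shell
#!/bin/bash
# Same expansions the script uses to split "image[:tag][@image-id]".
imageTag="debian:latest@f6fab3b798be"
image="${imageTag%%[:@]*}"   # cut at the first ':' or '@'
tag="${imageTag#*:}"         # drop everything through the first ':'
imageId="${tag##*@}"         # keep only what follows the last '@'
[ "$imageId" != "$tag" ] || imageId=       # no '@' given: no explicit id
[ "$tag" != "$imageTag" ] || tag='latest'  # no ':' given: default tag
tag="${tag%@*}"              # strip the '@image-id' suffix from the tag
echo "$image $tag $imageId"  # -> debian latest f6fab3b798be
```

The two guard lines matter: for a bare `hello-world` argument, `${imageTag#*:}` leaves the string unchanged, so the tag falls back to `latest` and the image id stays empty.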
```diff
@@ -0,0 +1,4 @@
+FROM busybox
+EXPOSE 80/tcp
+COPY httpserver .
+CMD ["./httpserver"]
```
```diff
@@ -0,0 +1,12 @@
+package main
+
+import (
+	"log"
+	"net/http"
+)
+
+func main() {
+	fs := http.FileServer(http.Dir("/static"))
+	http.Handle("/", fs)
+	log.Panic(http.ListenAndServe(":80", nil))
+}
```
```diff
@@ -128,6 +128,27 @@ if [ -d "$rootfsDir/etc/apt/apt.conf.d" ]; then
 	Acquire::GzipIndexes "true";
 	Acquire::CompressionTypes::Order:: "gz";
 	EOF
+
+	# update "autoremove" configuration to be aggressive about removing suggests deps that weren't manually installed
+	echo >&2 "+ echo Apt::AutoRemove::SuggestsImportant 'false' > '$rootfsDir/etc/apt/apt.conf.d/docker-autoremove-suggests'"
+	cat > "$rootfsDir/etc/apt/apt.conf.d/docker-autoremove-suggests" <<-'EOF'
+	# Since Docker users are looking for the smallest possible final images, the
+	# following emerges as a very common pattern:
+
+	#   RUN apt-get update \
+	#       && apt-get install -y <packages> \
+	#       && <do some compilation work> \
+	#       && apt-get purge -y --auto-remove <packages>
+
+	# By default, APT will actually _keep_ packages installed via Recommends or
+	# Depends if another package Suggests them, even and including if the package
+	# that originally caused them to be installed is removed. Setting this to
+	# "false" ensures that APT is appropriately aggressive about removing the
+	# packages it added.
+
+	# https://aptitude.alioth.debian.org/doc/en/ch02s05s05.html#configApt-AutoRemove-SuggestsImportant
+	Apt::AutoRemove::SuggestsImportant "false";
+	EOF
 fi

 if [ -z "$DONT_TOUCH_SOURCES_LIST" ]; then
```
```diff
@@ -22,6 +22,7 @@
       <item> CMD </item>
       <item> WORKDIR </item>
       <item> USER </item>
+      <item> LABEL </item>
     </list>

     <contexts>
```
```diff
@@ -12,7 +12,7 @@
 	<array>
 		<dict>
 			<key>match</key>
-			<string>^\s*(ONBUILD\s+)?(FROM|MAINTAINER|RUN|EXPOSE|ENV|ADD|VOLUME|USER|WORKDIR|COPY)\s</string>
+			<string>^\s*(ONBUILD\s+)?(FROM|MAINTAINER|RUN|EXPOSE|ENV|ADD|VOLUME|USER|LABEL|WORKDIR|COPY)\s</string>
 			<key>captures</key>
 			<dict>
 				<key>0</key>
```
```diff
@@ -11,7 +11,7 @@ let b:current_syntax = "dockerfile"

 syntax case ignore

-syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|VOLUME|WORKDIR|COPY)\s/
+syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|LABEL|VOLUME|WORKDIR|COPY)\s/
 highlight link dockerfileKeyword Keyword

 syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/
```
```diff
@@ -1,7 +0,0 @@
-Solomon Hykes <solomon@docker.com> (@shykes)
-Victor Vieux <vieux@docker.com> (@vieux)
-Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
-Cristian Staretu <cristian.staretu@gmail.com> (@unclejack)
-Tibor Vass <teabee89@gmail.com> (@tiborvass)
-Vishnu Kannan <vishnuk@google.com> (@vishh)
-volumes.go: Brian Goff <cpuguy83@gmail.com> (@cpuguy83)
```
```diff
@@ -7,6 +7,7 @@ import (
 	"github.com/docker/docker/opts"
 	flag "github.com/docker/docker/pkg/mflag"
 	"github.com/docker/docker/pkg/ulimit"
+	"github.com/docker/docker/runconfig"
 )

 const (
@@ -47,6 +48,7 @@ type Config struct {
 	TrustKeyPath string
 	Labels       []string
 	Ulimits      map[string]*ulimit.Ulimit
+	LogConfig    runconfig.LogConfig
 }

 // InstallFlags adds command-line options to the top-level flag parser for
@@ -81,6 +83,7 @@ func (config *Config) InstallFlags() {
 	opts.LabelListVar(&config.Labels, []string{"-label"}, "Set key=value labels to the daemon")
 	config.Ulimits = make(map[string]*ulimit.Ulimit)
 	opts.UlimitMapVar(config.Ulimits, []string{"-default-ulimit"}, "Set default ulimits for containers")
+	flag.StringVar(&config.LogConfig.Type, []string{"-log-driver"}, "json-file", "Containers logging driver(json-file/none)")
 }

 func getDefaultNetworkMtu() int {
```
@ -14,11 +14,15 @@ import (
|
||||||
"syscall"
|
"syscall"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
"github.com/docker/libcontainer"
|
||||||
|
"github.com/docker/libcontainer/configs"
|
||||||
"github.com/docker/libcontainer/devices"
|
"github.com/docker/libcontainer/devices"
|
||||||
"github.com/docker/libcontainer/label"
|
"github.com/docker/libcontainer/label"
|
||||||
|
|
||||||
log "github.com/Sirupsen/logrus"
|
log "github.com/Sirupsen/logrus"
|
||||||
"github.com/docker/docker/daemon/execdriver"
|
"github.com/docker/docker/daemon/execdriver"
|
||||||
|
"github.com/docker/docker/daemon/logger"
|
||||||
|
"github.com/docker/docker/daemon/logger/jsonfilelog"
|
||||||
"github.com/docker/docker/engine"
|
"github.com/docker/docker/engine"
|
||||||
"github.com/docker/docker/image"
|
"github.com/docker/docker/image"
|
||||||
"github.com/docker/docker/links"
|
"github.com/docker/docker/links"
|
||||||
|
@@ -26,6 +30,7 @@ import (
 	"github.com/docker/docker/pkg/archive"
 	"github.com/docker/docker/pkg/broadcastwriter"
 	"github.com/docker/docker/pkg/common"
+	"github.com/docker/docker/pkg/directory"
 	"github.com/docker/docker/pkg/ioutils"
 	"github.com/docker/docker/pkg/networkfs/etchosts"
 	"github.com/docker/docker/pkg/networkfs/resolvconf"
@@ -98,6 +103,9 @@ type Container struct {
 	activeLinks  map[string]*links.Link
 	monitor      *containerMonitor
 	execCommands *execStore
+	// logDriver for closing
+	logDriver logger.Logger
+	logCopier *logger.Copier
 	AppliedVolumesFrom map[string]struct{}
 }

@@ -258,18 +266,18 @@ func populateCommand(c *Container, env []string) error {
 	pid.HostPid = c.hostConfig.PidMode.IsHost()

 	// Build lists of devices allowed and created within the container.
-	userSpecifiedDevices := make([]*devices.Device, len(c.hostConfig.Devices))
+	userSpecifiedDevices := make([]*configs.Device, len(c.hostConfig.Devices))
 	for i, deviceMapping := range c.hostConfig.Devices {
-		device, err := devices.GetDevice(deviceMapping.PathOnHost, deviceMapping.CgroupPermissions)
+		device, err := devices.DeviceFromPath(deviceMapping.PathOnHost, deviceMapping.CgroupPermissions)
 		if err != nil {
 			return fmt.Errorf("error gathering device information while adding custom device %q: %s", deviceMapping.PathOnHost, err)
 		}
 		device.Path = deviceMapping.PathInContainer
 		userSpecifiedDevices[i] = device
 	}
-	allowedDevices := append(devices.DefaultAllowedDevices, userSpecifiedDevices...)
+	allowedDevices := append(configs.DefaultAllowedDevices, userSpecifiedDevices...)

-	autoCreatedDevices := append(devices.DefaultAutoCreatedDevices, userSpecifiedDevices...)
+	autoCreatedDevices := append(configs.DefaultAutoCreatedDevices, userSpecifiedDevices...)

 	// TODO: this can be removed after lxc-conf is fully deprecated
 	lxcConfig, err := mergeLxcConfIntoOptions(c.hostConfig)
@@ -300,10 +308,10 @@ func populateCommand(c *Container, env []string) error {
 	}

 	resources := &execdriver.Resources{
-		Memory:     c.Config.Memory,
-		MemorySwap: c.Config.MemorySwap,
-		CpuShares:  c.Config.CpuShares,
-		Cpuset:     c.Config.Cpuset,
+		Memory:     c.hostConfig.Memory,
+		MemorySwap: c.hostConfig.MemorySwap,
+		CpuShares:  c.hostConfig.CpuShares,
+		CpusetCpus: c.hostConfig.CpusetCpus,
 		Rlimits:    rlimits,
 	}

@@ -337,6 +345,7 @@ func populateCommand(c *Container, env []string) error {
 		MountLabel:      c.GetMountLabel(),
 		LxcConfig:       lxcConfig,
 		AppArmorProfile: c.AppArmorProfile,
+		CgroupParent:    c.hostConfig.CgroupParent,
 	}

 	return nil
@@ -894,7 +903,7 @@ func (container *Container) GetSize() (int64, int64) {
 	)

 	if err := container.Mount(); err != nil {
-		log.Errorf("Warning: failed to compute size of container rootfs %s: %s", container.ID, err)
+		log.Errorf("Failed to compute size of container rootfs %s: %s", container.ID, err)
 		return sizeRw, sizeRootfs
 	}
 	defer container.Unmount()
@@ -902,14 +911,14 @@ func (container *Container) GetSize() (int64, int64) {
 		initID := fmt.Sprintf("%s-init", container.ID)
 		sizeRw, err = driver.DiffSize(container.ID, initID)
 		if err != nil {
-			log.Errorf("Warning: driver %s couldn't return diff size of container %s: %s", driver, container.ID, err)
+			log.Errorf("Driver %s couldn't return diff size of container %s: %s", driver, container.ID, err)
 			// FIXME: GetSize should return an error. Not changing it now in case
 			// there is a side-effect.
 			sizeRw = -1
 		}

 	if _, err = os.Stat(container.basefs); err != nil {
-		if sizeRootfs, err = utils.TreeSize(container.basefs); err != nil {
+		if sizeRootfs, err = directory.Size(container.basefs); err != nil {
 			sizeRootfs = -1
 		}
 	}
@@ -971,7 +980,7 @@ func (container *Container) Exposes(p nat.Port) bool {
 	return exists
 }

-func (container *Container) GetPtyMaster() (*os.File, error) {
+func (container *Container) GetPtyMaster() (libcontainer.Console, error) {
 	ttyConsole, ok := container.command.ProcessConfig.Terminal.(execdriver.TtyTerminal)
 	if !ok {
 		return nil, ErrNoTTY
@@ -1233,15 +1242,15 @@ func (container *Container) initializeNetworking() error {
 // Make sure the config is compatible with the current kernel
 func (container *Container) verifyDaemonSettings() {
 	if container.Config.Memory > 0 && !container.daemon.sysInfo.MemoryLimit {
-		log.Infof("WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.")
+		log.Warnf("Your kernel does not support memory limit capabilities. Limitation discarded.")
 		container.Config.Memory = 0
 	}
 	if container.Config.Memory > 0 && !container.daemon.sysInfo.SwapLimit {
-		log.Infof("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.")
+		log.Warnf("Your kernel does not support swap limit capabilities. Limitation discarded.")
 		container.Config.MemorySwap = -1
 	}
 	if container.daemon.sysInfo.IPv4ForwardingDisabled {
-		log.Infof("WARNING: IPv4 forwarding is disabled. Networking will not work")
+		log.Warnf("IPv4 forwarding is disabled. Networking will not work")
 	}
 }

@@ -1352,21 +1361,37 @@ func (container *Container) setupWorkingDirectory() error {
 	return nil
 }

-func (container *Container) startLoggingToDisk() error {
-	// Setup logging of stdout and stderr to disk
-	logPath, err := container.logPath("json")
+func (container *Container) startLogging() error {
+	cfg := container.hostConfig.LogConfig
+	if cfg.Type == "" {
+		cfg = container.daemon.defaultLogConfig
+	}
+	var l logger.Logger
+	switch cfg.Type {
+	case "json-file":
+		pth, err := container.logPath("json")
 		if err != nil {
 			return err
 		}
-	container.LogPath = logPath

-	if err := container.daemon.LogToDisk(container.stdout, container.LogPath, "stdout"); err != nil {
+		dl, err := jsonfilelog.New(pth)
+		if err != nil {
 			return err
 		}
+		l = dl
+	case "none":
+		return nil
+	default:
+		return fmt.Errorf("Unknown logging driver: %s", cfg.Type)
+	}

-	if err := container.daemon.LogToDisk(container.stderr, container.LogPath, "stderr"); err != nil {
+	copier, err := logger.NewCopier(container.ID, map[string]io.Reader{"stdout": container.StdoutPipe(), "stderr": container.StderrPipe()}, l)
+	if err != nil {
 		return err
 	}
+	container.logCopier = copier
+	copier.Run()
+	container.logDriver = l

 	return nil
 }
@@ -1467,3 +1492,12 @@ func (container *Container) getNetworkedContainer() (*Container, error) {
 func (container *Container) Stats() (*execdriver.ResourceStats, error) {
 	return container.daemon.Stats(container)
 }
+
+func (c *Container) LogDriverType() string {
+	c.Lock()
+	defer c.Unlock()
+	if c.hostConfig.LogConfig.Type == "" {
+		return c.daemon.defaultLogConfig.Type
+	}
+	return c.hostConfig.LogConfig.Type
+}
@@ -2,6 +2,7 @@ package daemon

 import (
 	"fmt"
+	"strings"

 	"github.com/docker/docker/engine"
 	"github.com/docker/docker/graph"
@@ -18,33 +19,31 @@ func (daemon *Daemon) ContainerCreate(job *engine.Job) engine.Status {
 	} else if len(job.Args) > 1 {
 		return job.Errorf("Usage: %s", job.Name)
 	}

 	config := runconfig.ContainerConfigFromJob(job)
-	if config.Memory != 0 && config.Memory < 4194304 {
+	hostConfig := runconfig.ContainerHostConfigFromJob(job)
+
+	if len(hostConfig.LxcConf) > 0 && !strings.Contains(daemon.ExecutionDriver().Name(), "lxc") {
+		return job.Errorf("Cannot use --lxc-conf with execdriver: %s", daemon.ExecutionDriver().Name())
+	}
+	if hostConfig.Memory != 0 && hostConfig.Memory < 4194304 {
 		return job.Errorf("Minimum memory limit allowed is 4MB")
 	}
-	if config.Memory > 0 && !daemon.SystemConfig().MemoryLimit {
+	if hostConfig.Memory > 0 && !daemon.SystemConfig().MemoryLimit {
 		job.Errorf("Your kernel does not support memory limit capabilities. Limitation discarded.\n")
-		config.Memory = 0
+		hostConfig.Memory = 0
 	}
-	if config.Memory > 0 && !daemon.SystemConfig().SwapLimit {
+	if hostConfig.Memory > 0 && !daemon.SystemConfig().SwapLimit {
 		job.Errorf("Your kernel does not support swap limit capabilities. Limitation discarded.\n")
-		config.MemorySwap = -1
+		hostConfig.MemorySwap = -1
 	}
-	if config.Memory > 0 && config.MemorySwap > 0 && config.MemorySwap < config.Memory {
+	if hostConfig.Memory > 0 && hostConfig.MemorySwap > 0 && hostConfig.MemorySwap < hostConfig.Memory {
 		return job.Errorf("Minimum memoryswap limit should be larger than memory limit, see usage.\n")
 	}
-	if config.Memory == 0 && config.MemorySwap > 0 {
+	if hostConfig.Memory == 0 && hostConfig.MemorySwap > 0 {
 		return job.Errorf("You should always set the Memory limit when using Memoryswap limit, see usage.\n")
 	}

-	var hostConfig *runconfig.HostConfig
-	if job.EnvExists("HostConfig") {
-		hostConfig = runconfig.ContainerHostConfigFromJob(job)
-	} else {
-		// Older versions of the API don't provide a HostConfig.
-		hostConfig = nil
-	}
-
 	container, buildWarnings, err := daemon.Create(config, hostConfig, name)
 	if err != nil {
 		if daemon.Graph().IsNotExist(err) {
@@ -106,6 +106,7 @@ type Daemon struct {
 	execDriver     execdriver.Driver
 	trustStore     *trust.TrustStore
 	statsCollector *statsCollector
+	defaultLogConfig runconfig.LogConfig
 }

 // Install installs daemon capabilities to eng.
@@ -345,7 +346,7 @@ func (daemon *Daemon) restore() error {
 	for _, v := range dir {
 		id := v.Name()
 		container, err := daemon.load(id)
-		if !debug {
+		if !debug && log.GetLevel() == log.InfoLevel {
 			fmt.Print(".")
 		}
 		if err != nil {
@@ -367,7 +368,7 @@ func (daemon *Daemon) restore() error {

 	if entities := daemon.containerGraph.List("/", -1); entities != nil {
 		for _, p := range entities.Paths() {
-			if !debug {
+			if !debug && log.GetLevel() == log.InfoLevel {
 				fmt.Print(".")
 			}

@@ -419,7 +420,9 @@ func (daemon *Daemon) restore() error {
 	}

 	if !debug {
+		if log.GetLevel() == log.InfoLevel {
 			fmt.Println()
+		}
 		log.Infof("Loading containers: done.")
 	}

@@ -774,6 +777,13 @@ func (daemon *Daemon) RegisterLinks(container *Container, hostConfig *runconfig.
 		//An error from daemon.Get() means this name could not be found
 		return fmt.Errorf("Could not get container for %s", parts["name"])
 	}
+	for child.hostConfig.NetworkMode.IsContainer() {
+		parts := strings.SplitN(string(child.hostConfig.NetworkMode), ":", 2)
+		child, err = daemon.Get(parts[1])
+		if err != nil {
+			return fmt.Errorf("Could not get container for %s", parts[1])
+		}
+	}
 	if child.hostConfig.NetworkMode.IsHost() {
 		return runconfig.ErrConflictHostNetworkAndLinks
 	}
@@ -817,6 +827,12 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 	}
 	config.DisableNetwork = config.BridgeIface == disableNetworkBridge

+	// register portallocator release on shutdown
+	eng.OnShutdown(func() {
+		if err := portallocator.ReleaseAll(); err != nil {
+			log.Errorf("portallocator.ReleaseAll(): %s", err)
+		}
+	})
 	// Claim the pidfile first, to avoid any and all unexpected race conditions.
 	// Some of the init doesn't need a pidfile lock - but let's not try to be smart.
 	if config.Pidfile != "" {
@@ -850,9 +866,6 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 		return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
 	}
 	os.Setenv("TMPDIR", realTmp)
-	if !config.EnableSelinuxSupport {
-		selinuxSetDisabled()
-	}

 	// get the canonical path to the Docker root directory
 	var realRoot string
@@ -876,13 +889,28 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 	// Load storage driver
 	driver, err := graphdriver.New(config.Root, config.GraphOptions)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("error intializing graphdriver: %v", err)
 	}
 	log.Debugf("Using graph driver %s", driver)
+	// register cleanup for graph driver
+	eng.OnShutdown(func() {
+		if err := driver.Cleanup(); err != nil {
+			log.Errorf("Error during graph storage driver.Cleanup(): %v", err)
+		}
+	})

+	if config.EnableSelinuxSupport {
+		if selinuxEnabled() {
 	// As Docker on btrfs and SELinux are incompatible at present, error on both being enabled
-	if selinuxEnabled() && config.EnableSelinuxSupport && driver.String() == "btrfs" {
-		return nil, fmt.Errorf("SELinux is not supported with the BTRFS graph driver!")
+			if driver.String() == "btrfs" {
+				return nil, fmt.Errorf("SELinux is not supported with the BTRFS graph driver")
+			}
+			log.Debug("SELinux enabled successfully")
+		} else {
+			log.Warn("Docker could not enable SELinux on the host system")
+		}
+	} else {
+		selinuxSetDisabled()
 	}

 	daemonRepo := path.Join(config.Root, "containers")
@@ -956,6 +984,12 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 	if err != nil {
 		return nil, err
 	}
+	// register graph close on shutdown
+	eng.OnShutdown(func() {
+		if err := graph.Close(); err != nil {
+			log.Errorf("Error during container graph.Close(): %v", err)
+		}
+	})

 	localCopy := path.Join(config.Root, "init", fmt.Sprintf("dockerinit-%s", dockerversion.VERSION))
 	sysInitPath := utils.DockerInitPath(localCopy)
@@ -1001,7 +1035,15 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 		eng:            eng,
 		trustStore:     t,
 		statsCollector: newStatsCollector(1 * time.Second),
+		defaultLogConfig: config.LogConfig,
 	}

+	eng.OnShutdown(func() {
+		if err := daemon.shutdown(); err != nil {
+			log.Errorf("Error during daemon.shutdown(): %v", err)
+		}
+	})
+
 	if err := daemon.restore(); err != nil {
 		return nil, err
 	}
@@ -1011,25 +1053,6 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error)
 		return nil, err
 	}

-	// Setup shutdown handlers
-	// FIXME: can these shutdown handlers be registered closer to their source?
-	eng.OnShutdown(func() {
-		// FIXME: if these cleanup steps can be called concurrently, register
-		// them as separate handlers to speed up total shutdown time
-		if err := daemon.shutdown(); err != nil {
-			log.Errorf("daemon.shutdown(): %s", err)
-		}
-		if err := portallocator.ReleaseAll(); err != nil {
-			log.Errorf("portallocator.ReleaseAll(): %s", err)
-		}
-		if err := daemon.driver.Cleanup(); err != nil {
-			log.Errorf("daemon.driver.Cleanup(): %s", err.Error())
-		}
-		if err := daemon.containerGraph.Close(); err != nil {
-			log.Errorf("daemon.containerGraph.Close(): %s", err.Error())
-		}
-	})
-
 	return daemon, nil
 }

@@ -1230,11 +1253,11 @@ func checkKernel() error {
 	// the circumstances of pre-3.8 crashes are clearer.
 	// For details see http://github.com/docker/docker/issues/407
 	if k, err := kernel.GetKernelVersion(); err != nil {
-		log.Infof("WARNING: %s", err)
+		log.Warnf("%s", err)
 	} else {
 		if kernel.CompareKernelVersion(k, &kernel.KernelVersionInfo{Kernel: 3, Major: 8, Minor: 0}) < 0 {
 			if os.Getenv("DOCKER_NOWARN_KERNEL_VERSION") == "" {
-				log.Infof("WARNING: You are running linux kernel version %s, which might be unstable running docker. Please upgrade your kernel to 3.8.0.", k.String())
+				log.Warnf("You are running linux kernel version %s, which might be unstable running docker. Please upgrade your kernel to 3.8.0.", k.String())
 			}
 		}
 	}
@@ -1,2 +0,0 @@
-Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
-Victor Vieux <vieux@docker.com> (@vieux)
@@ -1,17 +1,22 @@
 package execdriver

 import (
+	"encoding/json"
 	"errors"
 	"io"
+	"io/ioutil"
 	"os"
 	"os/exec"
+	"path/filepath"
+	"strconv"
 	"strings"
 	"time"

 	"github.com/docker/docker/daemon/execdriver/native/template"
 	"github.com/docker/docker/pkg/ulimit"
 	"github.com/docker/libcontainer"
-	"github.com/docker/libcontainer/devices"
+	"github.com/docker/libcontainer/cgroups/fs"
+	"github.com/docker/libcontainer/configs"
 )

 // Context is a generic key value pair that allows
@@ -42,7 +47,7 @@ type Terminal interface {
 }

 type TtyTerminal interface {
-	Master() *os.File
+	Master() libcontainer.Console
 }

 // ExitStatus provides exit reasons for a container.
@@ -104,12 +109,12 @@ type Resources struct {
 	Memory     int64  `json:"memory"`
 	MemorySwap int64  `json:"memory_swap"`
 	CpuShares  int64  `json:"cpu_shares"`
-	Cpuset     string `json:"cpuset"`
+	CpusetCpus string `json:"cpuset_cpus"`
 	Rlimits    []*ulimit.Rlimit `json:"rlimits"`
 }

 type ResourceStats struct {
-	*libcontainer.ContainerStats
+	*libcontainer.Stats
 	Read        time.Time `json:"read"`
 	MemoryLimit int64     `json:"memory_limit"`
 	SystemUsage uint64    `json:"system_usage"`
@@ -149,8 +154,8 @@ type Command struct {
 	Pid       *Pid       `json:"pid"`
 	Resources *Resources `json:"resources"`
 	Mounts    []Mount    `json:"mounts"`
-	AllowedDevices     []*devices.Device `json:"allowed_devices"`
-	AutoCreatedDevices []*devices.Device `json:"autocreated_devices"`
+	AllowedDevices     []*configs.Device `json:"allowed_devices"`
+	AutoCreatedDevices []*configs.Device `json:"autocreated_devices"`
 	CapAdd  []string `json:"cap_add"`
 	CapDrop []string `json:"cap_drop"`
 	ContainerPid int `json:"container_pid"` // the pid for the process inside a container
@@ -159,25 +164,27 @@ type Command struct {
 	MountLabel      string   `json:"mount_label"`
 	LxcConfig       []string `json:"lxc_config"`
 	AppArmorProfile string   `json:"apparmor_profile"`
+	CgroupParent    string   `json:"cgroup_parent"` // The parent cgroup for this command.
 }

-func InitContainer(c *Command) *libcontainer.Config {
+func InitContainer(c *Command) *configs.Config {
 	container := template.New()

 	container.Hostname = getEnv("HOSTNAME", c.ProcessConfig.Env)
-	container.Tty = c.ProcessConfig.Tty
-	container.User = c.ProcessConfig.User
-	container.WorkingDir = c.WorkingDir
-	container.Env = c.ProcessConfig.Env
 	container.Cgroups.Name = c.ID
 	container.Cgroups.AllowedDevices = c.AllowedDevices
-	container.MountConfig.DeviceNodes = c.AutoCreatedDevices
-	container.RootFs = c.Rootfs
-	container.MountConfig.ReadonlyFs = c.ReadonlyRootfs
+	container.Readonlyfs = c.ReadonlyRootfs
+	container.Devices = c.AutoCreatedDevices
+	container.Rootfs = c.Rootfs
+	container.Readonlyfs = c.ReadonlyRootfs

 	// check to see if we are running in ramdisk to disable pivot root
-	container.MountConfig.NoPivotRoot = os.Getenv("DOCKER_RAMDISK") != ""
-	container.RestrictSys = true
+	container.NoPivotRoot = os.Getenv("DOCKER_RAMDISK") != ""
+	// Default parent cgroup is "docker". Override if required.
+	if c.CgroupParent != "" {
+		container.Cgroups.Parent = c.CgroupParent
+	}
 	return container
 }

@ -191,40 +198,110 @@ func getEnv(key string, env []string) string {
|
||||||
return ""
|
return ""
|
||||||
}
|
}
|
||||||
|
|
||||||
func SetupCgroups(container *libcontainer.Config, c *Command) error {
|
func SetupCgroups(container *configs.Config, c *Command) error {
|
||||||
if c.Resources != nil {
|
if c.Resources != nil {
|
||||||
container.Cgroups.CpuShares = c.Resources.CpuShares
|
container.Cgroups.CpuShares = c.Resources.CpuShares
|
||||||
container.Cgroups.Memory = c.Resources.Memory
|
container.Cgroups.Memory = c.Resources.Memory
|
||||||
container.Cgroups.MemoryReservation = c.Resources.Memory
|
container.Cgroups.MemoryReservation = c.Resources.Memory
|
||||||
container.Cgroups.MemorySwap = c.Resources.MemorySwap
|
container.Cgroups.MemorySwap = c.Resources.MemorySwap
|
||||||
container.Cgroups.CpusetCpus = c.Resources.Cpuset
|
container.Cgroups.CpusetCpus = c.Resources.CpusetCpus
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func Stats(stateFile string, containerMemoryLimit int64, machineMemory int64) (*ResourceStats, error) {
|
// Returns the network statistics for the network interfaces represented by the NetworkRuntimeInfo.
|
||||||
state, err := libcontainer.GetState(stateFile)
|
func getNetworkInterfaceStats(interfaceName string) (*libcontainer.NetworkInterface, error) {
|
||||||
if err != nil {
|
 	out := &libcontainer.NetworkInterface{Name: interfaceName}
-	if os.IsNotExist(err) {
-		return nil, ErrNotRunning
+	// This can happen if the network runtime information is missing - possible if the
+	// container was created by an old version of libcontainer.
+	if interfaceName == "" {
+		return out, nil
 	}
+
+	type netStatsPair struct {
+		// Where to write the output.
+		Out *uint64
+		// The network stats file to read.
+		File string
+	}
+	// Ingress for host veth is from the container. Hence tx_bytes stat on the host veth is actually number of bytes received by the container.
+	netStats := []netStatsPair{
+		{Out: &out.RxBytes, File: "tx_bytes"},
+		{Out: &out.RxPackets, File: "tx_packets"},
+		{Out: &out.RxErrors, File: "tx_errors"},
+		{Out: &out.RxDropped, File: "tx_dropped"},
+
+		{Out: &out.TxBytes, File: "rx_bytes"},
+		{Out: &out.TxPackets, File: "rx_packets"},
+		{Out: &out.TxErrors, File: "rx_errors"},
+		{Out: &out.TxDropped, File: "rx_dropped"},
+	}
+	for _, netStat := range netStats {
+		data, err := readSysfsNetworkStats(interfaceName, netStat.File)
+		if err != nil {
+			return nil, err
+		}
+		*(netStat.Out) = data
+	}
+	return out, nil
+}
+
+// Reads the specified statistics available under /sys/class/net/<EthInterface>/statistics
+func readSysfsNetworkStats(ethInterface, statsFile string) (uint64, error) {
+	data, err := ioutil.ReadFile(filepath.Join("/sys/class/net", ethInterface, "statistics", statsFile))
+	if err != nil {
+		return 0, err
+	}
+	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
+}
+
+func Stats(containerDir string, containerMemoryLimit int64, machineMemory int64) (*ResourceStats, error) {
+	f, err := os.Open(filepath.Join(containerDir, "state.json"))
+	if err != nil {
+		return nil, err
+	}
+	defer f.Close()
+
+	type network struct {
+		Type              string
+		HostInterfaceName string
+	}
+
+	state := struct {
+		CgroupPaths map[string]string `json:"cgroup_paths"`
+		Networks    []network
+	}{}
+
+	if err := json.NewDecoder(f).Decode(&state); err != nil {
 		return nil, err
 	}
 	now := time.Now()
-	stats, err := libcontainer.GetStats(nil, state)
+
+	mgr := fs.Manager{Paths: state.CgroupPaths}
+	cstats, err := mgr.GetStats()
 	if err != nil {
 		return nil, err
 	}
+	stats := &libcontainer.Stats{CgroupStats: cstats}
 	// if the container does not have any memory limit specified set the
 	// limit to the machines memory
 	memoryLimit := containerMemoryLimit
 	if memoryLimit == 0 {
 		memoryLimit = machineMemory
 	}
+
+	for _, iface := range state.Networks {
+		switch iface.Type {
+		case "veth":
+			istats, err := getNetworkInterfaceStats(iface.HostInterfaceName)
+			if err != nil {
+				return nil, err
+			}
+			stats.Interfaces = append(stats.Interfaces, istats)
+		}
+	}
 	return &ResourceStats{
+		Stats:       stats,
 		Read:        now,
-		ContainerStats: stats,
 		MemoryLimit: memoryLimit,
 	}, nil
 }
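The per-interface counters above come straight from sysfs text files. A minimal standalone sketch of the same technique (not Docker's code; `readNetStat`/`parseCounter` are illustrative names):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// parseCounter trims the trailing newline sysfs appends and parses the
// decimal counter value.
func parseCounter(s string) (uint64, error) {
	return strconv.ParseUint(strings.TrimSpace(s), 10, 64)
}

// readNetStat reads one statistics file (e.g. "tx_bytes") for an interface
// from /sys/class/net/<iface>/statistics (Linux only).
func readNetStat(iface, stat string) (uint64, error) {
	data, err := os.ReadFile(filepath.Join("/sys/class/net", iface, "statistics", stat))
	if err != nil {
		return 0, err
	}
	return parseCounter(string(data))
}

func main() {
	v, _ := parseCounter("12345\n")
	fmt.Println(v)
	// The loopback device usually exists on Linux; skip silently elsewhere.
	if b, err := readNetStat("lo", "rx_bytes"); err == nil {
		fmt.Println("lo rx_bytes:", b)
	}
}
```

Note the swap in the diff itself: for a veth pair, the host side's `tx_bytes` is the container's receive direction, which is why the code maps `tx_*` files onto `Rx*` fields.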
@@ -1,2 +0,0 @@
-# the LXC exec driver needs more maintainers and contributions
-Dinesh Subhraveti <dineshs@altiscale.com> (@dineshs-altiscale)
@@ -23,7 +23,9 @@ import (
 	"github.com/docker/docker/utils"
 	"github.com/docker/libcontainer"
 	"github.com/docker/libcontainer/cgroups"
-	"github.com/docker/libcontainer/mount/nodes"
+	"github.com/docker/libcontainer/configs"
+	"github.com/docker/libcontainer/system"
+	"github.com/docker/libcontainer/user"
 	"github.com/kr/pty"
 )
@@ -42,7 +44,7 @@ type driver struct {
 }
 
 type activeContainer struct {
-	container *libcontainer.Config
+	container *configs.Config
 	cmd       *exec.Cmd
 }
@@ -190,7 +192,7 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba
 	c.ProcessConfig.Path = aname
 	c.ProcessConfig.Args = append([]string{name}, arg...)
 
-	if err := nodes.CreateDeviceNodes(c.Rootfs, c.AutoCreatedDevices); err != nil {
+	if err := createDeviceNodes(c.Rootfs, c.AutoCreatedDevices); err != nil {
 		return execdriver.ExitStatus{ExitCode: -1}, err
 	}
@@ -231,11 +233,17 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba
 	}
 
 	state := &libcontainer.State{
-		InitPid:     pid,
+		InitProcessPid: pid,
 		CgroupPaths: cgroupPaths,
 	}
 
-	if err := libcontainer.SaveState(dataPath, state); err != nil {
+	f, err := os.Create(filepath.Join(dataPath, "state.json"))
+	if err != nil {
+		return terminate(err)
+	}
+	defer f.Close()
+
+	if err := json.NewEncoder(f).Encode(state); err != nil {
 		return terminate(err)
 	}
@@ -245,18 +253,19 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba
 		log.Debugf("Invoking startCallback")
 		startCallback(&c.ProcessConfig, pid)
 	}
 
 	oomKill := false
-	oomKillNotification, err := libcontainer.NotifyOnOOM(state)
+	oomKillNotification, err := notifyOnOOM(cgroupPaths)
+
+	<-waitLock
 
 	if err == nil {
 		_, oomKill = <-oomKillNotification
 		log.Debugf("oomKill error %s waitErr %s", oomKill, waitErr)
 	} else {
-		log.Warnf("WARNING: Your kernel does not support OOM notifications: %s", err)
+		log.Warnf("Your kernel does not support OOM notifications: %s", err)
 	}
 
-	<-waitLock
-
 	// check oom error
 	exitCode := getExitCode(c)
 	if oomKill {
@@ -265,9 +274,57 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba
 	return execdriver.ExitStatus{ExitCode: exitCode, OOMKilled: oomKill}, waitErr
 }
 
+// copy from libcontainer
+func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) {
+	dir := paths["memory"]
+	if dir == "" {
+		return nil, fmt.Errorf("There is no path for %q in state", "memory")
+	}
+	oomControl, err := os.Open(filepath.Join(dir, "memory.oom_control"))
+	if err != nil {
+		return nil, err
+	}
+	fd, _, syserr := syscall.RawSyscall(syscall.SYS_EVENTFD2, 0, syscall.FD_CLOEXEC, 0)
+	if syserr != 0 {
+		oomControl.Close()
+		return nil, syserr
+	}
+
+	eventfd := os.NewFile(fd, "eventfd")
+
+	eventControlPath := filepath.Join(dir, "cgroup.event_control")
+	data := fmt.Sprintf("%d %d", eventfd.Fd(), oomControl.Fd())
+	if err := ioutil.WriteFile(eventControlPath, []byte(data), 0700); err != nil {
+		eventfd.Close()
+		oomControl.Close()
+		return nil, err
+	}
+	ch := make(chan struct{})
+	go func() {
+		defer func() {
+			close(ch)
+			eventfd.Close()
+			oomControl.Close()
+		}()
+		buf := make([]byte, 8)
+		for {
+			if _, err := eventfd.Read(buf); err != nil {
+				return
+			}
+			// When a cgroup is destroyed, an event is sent to eventfd.
+			// So if the control path is gone, return instead of notifying.
+			if _, err := os.Lstat(eventControlPath); os.IsNotExist(err) {
+				return
+			}
+			ch <- struct{}{}
+		}
+	}()
+	return ch, nil
+}
+
 // createContainer populates and configures the container type with the
 // data provided by the execdriver.Command
-func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, error) {
+func (d *driver) createContainer(c *execdriver.Command) (*configs.Config, error) {
 	container := execdriver.InitContainer(c)
 	if err := execdriver.SetupCgroups(container, c); err != nil {
 		return nil, err
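`notifyOnOOM` hinges on the cgroup v1 eventfd protocol: register `<eventfd> <oom_control fd>` in `cgroup.event_control`, then block reading 8-byte counters from the eventfd. A minimal Linux-only sketch of just the eventfd read/write mechanics, with a userspace write standing in for the kernel's notification (`pingEventfd` is an illustrative name):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
	"syscall"
)

// pingEventfd creates an eventfd, writes a counter of 1 to it (standing in
// for the kernel posting an OOM event), then reads it back the way the
// goroutine in notifyOnOOM does. Counters are in host byte order; this
// assumes a little-endian machine such as amd64/arm64.
func pingEventfd() (uint64, error) {
	fd, _, errno := syscall.Syscall(syscall.SYS_EVENTFD2, 0, syscall.FD_CLOEXEC, 0)
	if errno != 0 {
		return 0, errno
	}
	ef := os.NewFile(fd, "eventfd")
	defer ef.Close()

	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, 1)
	if _, err := ef.Write(buf); err != nil {
		return 0, err
	}
	// A read returns the accumulated counter and resets it to zero.
	if _, err := ef.Read(buf); err != nil {
		return 0, err
	}
	return binary.LittleEndian.Uint64(buf), nil
}

func main() {
	n, err := pingEventfd()
	fmt.Println(n, err)
}
```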
@@ -297,6 +354,90 @@ func cgroupPaths(containerId string) (map[string]string, error) {
 	return paths, nil
 }
 
+// this is copy from old libcontainer nodes.go
+func createDeviceNodes(rootfs string, nodesToCreate []*configs.Device) error {
+	oldMask := syscall.Umask(0000)
+	defer syscall.Umask(oldMask)
+
+	for _, node := range nodesToCreate {
+		if err := createDeviceNode(rootfs, node); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// Creates the device node in the rootfs of the container.
+func createDeviceNode(rootfs string, node *configs.Device) error {
+	var (
+		dest   = filepath.Join(rootfs, node.Path)
+		parent = filepath.Dir(dest)
+	)
+
+	if err := os.MkdirAll(parent, 0755); err != nil {
+		return err
+	}
+
+	fileMode := node.FileMode
+	switch node.Type {
+	case 'c':
+		fileMode |= syscall.S_IFCHR
+	case 'b':
+		fileMode |= syscall.S_IFBLK
+	default:
+		return fmt.Errorf("%c is not a valid device type for device %s", node.Type, node.Path)
+	}
+
+	if err := syscall.Mknod(dest, uint32(fileMode), node.Mkdev()); err != nil && !os.IsExist(err) {
+		return fmt.Errorf("mknod %s %s", node.Path, err)
+	}
+
+	if err := syscall.Chown(dest, int(node.Uid), int(node.Gid)); err != nil {
+		return fmt.Errorf("chown %s to %d:%d", node.Path, node.Uid, node.Gid)
+	}
+
+	return nil
+}
+
+// setupUser changes the groups, gid, and uid for the user inside the container
+// copy from libcontainer, cause not it's private
+func setupUser(userSpec string) error {
+	// Set up defaults.
+	defaultExecUser := user.ExecUser{
+		Uid:  syscall.Getuid(),
+		Gid:  syscall.Getgid(),
+		Home: "/",
+	}
+	passwdPath, err := user.GetPasswdPath()
+	if err != nil {
+		return err
+	}
+	groupPath, err := user.GetGroupPath()
+	if err != nil {
+		return err
+	}
+	execUser, err := user.GetExecUserPath(userSpec, &defaultExecUser, passwdPath, groupPath)
+	if err != nil {
+		return err
+	}
+	if err := syscall.Setgroups(execUser.Sgids); err != nil {
+		return err
+	}
+	if err := system.Setgid(execUser.Gid); err != nil {
+		return err
+	}
+	if err := system.Setuid(execUser.Uid); err != nil {
+		return err
+	}
+	// if we didn't get HOME already, set it based on the user's HOME
+	if envHome := os.Getenv("HOME"); envHome == "" {
+		if err := os.Setenv("HOME", execUser.Home); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
 /// Return the exit code of the process
 // if the process has not exited -1 will be returned
 func getExitCode(c *execdriver.Command) int {
@@ -3,8 +3,6 @@ package lxc
 
 import (
 	"fmt"
 
-	"github.com/docker/libcontainer"
-	"github.com/docker/libcontainer/namespaces"
 	"github.com/docker/libcontainer/utils"
 )
@@ -12,9 +10,7 @@ func finalizeNamespace(args *InitArgs) error {
 	if err := utils.CloseExecFrom(3); err != nil {
 		return err
 	}
-	if err := namespaces.SetupUser(&libcontainer.Config{
-		User: args.User,
-	}); err != nil {
+	if err := setupUser(args.User); err != nil {
 		return fmt.Errorf("setup user %s", err)
 	}
 	if err := setupWorkingDirectory(args); err != nil {
@@ -11,7 +11,6 @@ import (
 	nativeTemplate "github.com/docker/docker/daemon/execdriver/native/template"
 	"github.com/docker/docker/utils"
 	"github.com/docker/libcontainer/label"
-	"github.com/docker/libcontainer/security/capabilities"
 )
 
 const LxcTemplate = `
@@ -52,7 +51,7 @@ lxc.cgroup.devices.allow = a
 lxc.cgroup.devices.deny = a
 #Allow the devices passed to us in the AllowedDevices list.
 {{range $allowedDevice := .AllowedDevices}}
-lxc.cgroup.devices.allow = {{$allowedDevice.GetCgroupAllowString}}
+lxc.cgroup.devices.allow = {{$allowedDevice.CgroupString}}
 {{end}}
 {{end}}
@@ -108,8 +107,8 @@ lxc.cgroup.memory.memsw.limit_in_bytes = {{$memSwap}}
 {{if .Resources.CpuShares}}
 lxc.cgroup.cpu.shares = {{.Resources.CpuShares}}
 {{end}}
-{{if .Resources.Cpuset}}
-lxc.cgroup.cpuset.cpus = {{.Resources.Cpuset}}
+{{if .Resources.CpusetCpus}}
+lxc.cgroup.cpuset.cpus = {{.Resources.CpusetCpus}}
 {{end}}
 {{end}}
@@ -169,7 +168,7 @@ func keepCapabilities(adds []string, drops []string) ([]string, error) {
 	var newCaps []string
 	for _, cap := range caps {
 		log.Debugf("cap %s\n", cap)
-		realCap := capabilities.GetCapability(cap)
+		realCap := execdriver.GetCapability(cap)
 		numCap := fmt.Sprintf("%d", realCap.Value)
 		newCaps = append(newCaps, numCap)
 	}
@@ -180,13 +179,10 @@ func keepCapabilities(adds []string, drops []string) ([]string, error) {
 func dropList(drops []string) ([]string, error) {
 	if utils.StringsContainsNoCase(drops, "all") {
 		var newCaps []string
-		for _, cap := range capabilities.GetAllCapabilities() {
-			log.Debugf("drop cap %s\n", cap)
-			realCap := capabilities.GetCapability(cap)
-			if realCap == nil {
-				return nil, fmt.Errorf("Invalid capability '%s'", cap)
-			}
-			numCap := fmt.Sprintf("%d", realCap.Value)
+		for _, capName := range execdriver.GetAllCapabilities() {
+			cap := execdriver.GetCapability(capName)
+			log.Debugf("drop cap %s\n", cap.Key)
+			numCap := fmt.Sprintf("%d", cap.Value)
 			newCaps = append(newCaps, numCap)
 		}
 		return newCaps, nil
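The `LxcTemplate` hunks rely on Go's `text/template` behavior that `{{if}}` treats a zero value (0, "") as false, so unset resource fields emit no config line at all. A tiny self-contained slice of that pattern (the `resources` struct and `render` helper are illustrative, not Docker's types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

type resources struct {
	CpuShares  int64
	CpusetCpus string
}

// Two lines cut down from the LXC template: each {{if}} guard skips the
// cgroup knob entirely when the field is unset.
const cgroupTmpl = `{{if .CpuShares}}lxc.cgroup.cpu.shares = {{.CpuShares}}
{{end}}{{if .CpusetCpus}}lxc.cgroup.cpuset.cpus = {{.CpusetCpus}}
{{end}}`

func render(r resources) string {
	var buf bytes.Buffer
	template.Must(template.New("lxc").Parse(cgroupTmpl)).Execute(&buf, r)
	return buf.String()
}

func main() {
	fmt.Print(render(resources{CpuShares: 512, CpusetCpus: "0,1"}))
}
```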
@@ -5,11 +5,6 @@
 import (
 	"bufio"
 	"fmt"
-	"github.com/docker/docker/daemon/execdriver"
-	nativeTemplate "github.com/docker/docker/daemon/execdriver/native/template"
-	"github.com/docker/libcontainer/devices"
-	"github.com/docker/libcontainer/security/capabilities"
-	"github.com/syndtr/gocapability/capability"
 	"io/ioutil"
 	"math/rand"
 	"os"
@@ -17,6 +12,11 @@ import (
 	"strings"
 	"testing"
 	"time"
+
+	"github.com/docker/docker/daemon/execdriver"
+	nativeTemplate "github.com/docker/docker/daemon/execdriver/native/template"
+	"github.com/docker/libcontainer/configs"
+	"github.com/syndtr/gocapability/capability"
 )
 
 func TestLXCConfig(t *testing.T) {
@@ -53,7 +53,7 @@ func TestLXCConfig(t *testing.T) {
 			Mtu:       1500,
 			Interface: nil,
 		},
-		AllowedDevices: make([]*devices.Device, 0),
+		AllowedDevices: make([]*configs.Device, 0),
 		ProcessConfig:  execdriver.ProcessConfig{},
 	}
 	p, err := driver.generateLXCConfig(command)
@@ -295,7 +295,7 @@ func TestCustomLxcConfigMisc(t *testing.T) {
 	grepFile(t, p, "lxc.cgroup.cpuset.cpus = 0,1")
 	container := nativeTemplate.New()
 	for _, cap := range container.Capabilities {
-		realCap := capabilities.GetCapability(cap)
+		realCap := execdriver.GetCapability(cap)
 		numCap := fmt.Sprintf("%d", realCap.Value)
 		if cap != "MKNOD" && cap != "KILL" {
 			grepFile(t, p, fmt.Sprintf("lxc.cap.keep = %s", numCap))
@@ -359,7 +359,7 @@ func TestCustomLxcConfigMiscOverride(t *testing.T) {
 	grepFile(t, p, "lxc.cgroup.cpuset.cpus = 0,1")
 	container := nativeTemplate.New()
 	for _, cap := range container.Capabilities {
-		realCap := capabilities.GetCapability(cap)
+		realCap := execdriver.GetCapability(cap)
 		numCap := fmt.Sprintf("%d", realCap.Value)
 		if cap != "MKNOD" && cap != "KILL" {
 			grepFile(t, p, fmt.Sprintf("lxc.cap.keep = %s", numCap))
@@ -3,21 +3,24 @@
 package native
 
 import (
+	"errors"
 	"fmt"
-	"os/exec"
+	"net"
 	"path/filepath"
+	"strings"
+	"syscall"
 
 	"github.com/docker/docker/daemon/execdriver"
-	"github.com/docker/libcontainer"
+	"github.com/docker/docker/pkg/symlink"
 	"github.com/docker/libcontainer/apparmor"
+	"github.com/docker/libcontainer/configs"
 	"github.com/docker/libcontainer/devices"
-	"github.com/docker/libcontainer/mount"
-	"github.com/docker/libcontainer/security/capabilities"
+	"github.com/docker/libcontainer/utils"
 )
 
 // createContainer populates and configures the container type with the
 // data provided by the execdriver.Command
-func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, error) {
+func (d *driver) createContainer(c *execdriver.Command) (*configs.Config, error) {
 	container := execdriver.InitContainer(c)
 
 	if err := d.createIpc(container, c); err != nil {
|
||||||
}
|
}
|
||||||
|
|
||||||
if c.ProcessConfig.Privileged {
|
if c.ProcessConfig.Privileged {
|
||||||
|
// clear readonly for /sys
|
||||||
|
for i := range container.Mounts {
|
||||||
|
if container.Mounts[i].Destination == "/sys" {
|
||||||
|
container.Mounts[i].Flags &= ^syscall.MS_RDONLY
|
||||||
|
}
|
||||||
|
}
|
||||||
|
container.ReadonlyPaths = nil
|
||||||
|
container.MaskPaths = nil
|
||||||
if err := d.setPrivileged(container); err != nil {
|
if err := d.setPrivileged(container); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
@ -57,43 +68,52 @@ func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, e
|
||||||
if err := d.setupLabels(container, c); err != nil {
|
if err := d.setupLabels(container, c); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
d.setupRlimits(container, c)
|
d.setupRlimits(container, c)
|
||||||
|
|
||||||
cmds := make(map[string]*exec.Cmd)
|
|
||||||
d.Lock()
|
|
||||||
for k, v := range d.activeContainers {
|
|
||||||
cmds[k] = v.cmd
|
|
||||||
}
|
|
||||||
d.Unlock()
|
|
||||||
|
|
||||||
return container, nil
|
return container, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *driver) createNetwork(container *libcontainer.Config, c *execdriver.Command) error {
|
func generateIfaceName() (string, error) {
|
||||||
|
for i := 0; i < 10; i++ {
|
||||||
|
name, err := utils.GenerateRandomName("veth", 7)
|
||||||
|
if err != nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if _, err := net.InterfaceByName(name); err != nil {
|
||||||
|
if strings.Contains(err.Error(), "no such") {
|
||||||
|
return name, nil
|
||||||
|
}
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return "", errors.New("Failed to find name for new interface")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *driver) createNetwork(container *configs.Config, c *execdriver.Command) error {
|
||||||
if c.Network.HostNetworking {
|
if c.Network.HostNetworking {
|
||||||
container.Namespaces.Remove(libcontainer.NEWNET)
|
container.Namespaces.Remove(configs.NEWNET)
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
container.Networks = []*libcontainer.Network{
|
container.Networks = []*configs.Network{
|
||||||
{
|
{
|
||||||
Mtu: c.Network.Mtu,
|
|
||||||
Address: fmt.Sprintf("%s/%d", "127.0.0.1", 0),
|
|
||||||
Gateway: "localhost",
|
|
||||||
Type: "loopback",
|
Type: "loopback",
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
iName, err := generateIfaceName()
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
if c.Network.Interface != nil {
|
if c.Network.Interface != nil {
|
||||||
vethNetwork := libcontainer.Network{
|
vethNetwork := configs.Network{
|
||||||
|
Name: "eth0",
|
||||||
|
HostInterfaceName: iName,
|
||||||
Mtu: c.Network.Mtu,
|
Mtu: c.Network.Mtu,
|
||||||
Address: fmt.Sprintf("%s/%d", c.Network.Interface.IPAddress, c.Network.Interface.IPPrefixLen),
|
Address: fmt.Sprintf("%s/%d", c.Network.Interface.IPAddress, c.Network.Interface.IPPrefixLen),
|
||||||
MacAddress: c.Network.Interface.MacAddress,
|
MacAddress: c.Network.Interface.MacAddress,
|
||||||
Gateway: c.Network.Interface.Gateway,
|
Gateway: c.Network.Interface.Gateway,
|
||||||
Type: "veth",
|
Type: "veth",
|
||||||
Bridge: c.Network.Interface.Bridge,
|
Bridge: c.Network.Interface.Bridge,
|
||||||
VethPrefix: "veth",
|
|
||||||
}
|
}
|
||||||
if c.Network.Interface.GlobalIPv6Address != "" {
|
if c.Network.Interface.GlobalIPv6Address != "" {
|
||||||
vethNetwork.IPv6Address = fmt.Sprintf("%s/%d", c.Network.Interface.GlobalIPv6Address, c.Network.Interface.GlobalIPv6PrefixLen)
|
vethNetwork.IPv6Address = fmt.Sprintf("%s/%d", c.Network.Interface.GlobalIPv6Address, c.Network.Interface.GlobalIPv6PrefixLen)
|
||||||
|
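`generateIfaceName` above draws a random `veth`-prefixed name and retries while a real interface with that name already exists. A standalone sketch of the same retry loop using only the standard library (`randomIfaceName` is an illustrative name; the real code uses libcontainer's `utils.GenerateRandomName`):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"net"
)

// randomIfaceName picks prefix + 7 random hex chars and retries while an
// interface with that name exists on the host.
func randomIfaceName(prefix string, tries int) (string, error) {
	for i := 0; i < tries; i++ {
		b := make([]byte, 4)
		if _, err := rand.Read(b); err != nil {
			continue
		}
		name := prefix + hex.EncodeToString(b)[:7]
		if _, err := net.InterfaceByName(name); err != nil {
			// A lookup error normally means "no such interface": the name is free.
			return name, nil
		}
	}
	return "", fmt.Errorf("failed to find a free %q name", prefix)
}

func main() {
	name, err := randomIfaceName("veth", 10)
	fmt.Println(name, err)
}
```

Checking `net.InterfaceByName` before use narrows, but does not eliminate, the race with a concurrently created interface; the bounded retry count keeps the failure mode explicit.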
@@ -107,21 +127,24 @@ func (d *driver) createNetwork(container *libcontainer.Config, c *execdriver.Com
 		active := d.activeContainers[c.Network.ContainerID]
 		d.Unlock()
 
-		if active == nil || active.cmd.Process == nil {
+		if active == nil {
 			return fmt.Errorf("%s is not a valid running container to join", c.Network.ContainerID)
 		}
-		cmd := active.cmd
 
-		nspath := filepath.Join("/proc", fmt.Sprint(cmd.Process.Pid), "ns", "net")
-		container.Namespaces.Add(libcontainer.NEWNET, nspath)
+		state, err := active.State()
+		if err != nil {
+			return err
+		}
+
+		container.Namespaces.Add(configs.NEWNET, state.NamespacePaths[configs.NEWNET])
 	}
 
 	return nil
 }
 
-func (d *driver) createIpc(container *libcontainer.Config, c *execdriver.Command) error {
+func (d *driver) createIpc(container *configs.Config, c *execdriver.Command) error {
 	if c.Ipc.HostIpc {
-		container.Namespaces.Remove(libcontainer.NEWIPC)
+		container.Namespaces.Remove(configs.NEWIPC)
 		return nil
 	}
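The removed code joined another container's network namespace by path: every entry under `/proc/<pid>/ns` is a file handle naming one namespace of that process. A tiny sketch of building and probing such a path (`nsPath` is an illustrative helper; Linux with `/proc` mounted is assumed):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// nsPath builds the proc path the old code passed to Namespaces.Add,
// e.g. /proc/1234/ns/net or /proc/self/ns/ipc.
func nsPath(pid, ns string) string {
	return filepath.Join("/proc", pid, "ns", ns)
}

func main() {
	p := nsPath("self", "net")
	// Lstat rather than Stat: the entries are magic symlinks.
	_, err := os.Lstat(p)
	fmt.Println(p, "exists:", err == nil)
}
```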
@@ -130,37 +153,38 @@ func (d *driver) createIpc(container *libcontainer.Config, c *execdriver.Command
 		active := d.activeContainers[c.Ipc.ContainerID]
 		d.Unlock()
 
-		if active == nil || active.cmd.Process == nil {
+		if active == nil {
 			return fmt.Errorf("%s is not a valid running container to join", c.Ipc.ContainerID)
 		}
-		cmd := active.cmd
 
-		container.Namespaces.Add(libcontainer.NEWIPC, filepath.Join("/proc", fmt.Sprint(cmd.Process.Pid), "ns", "ipc"))
+		state, err := active.State()
+		if err != nil {
+			return err
+		}
+		container.Namespaces.Add(configs.NEWIPC, state.NamespacePaths[configs.NEWIPC])
 	}
 
 	return nil
 }
 
-func (d *driver) createPid(container *libcontainer.Config, c *execdriver.Command) error {
+func (d *driver) createPid(container *configs.Config, c *execdriver.Command) error {
 	if c.Pid.HostPid {
-		container.Namespaces.Remove(libcontainer.NEWPID)
+		container.Namespaces.Remove(configs.NEWPID)
 		return nil
 	}
 
 	return nil
 }
 
-func (d *driver) setPrivileged(container *libcontainer.Config) (err error) {
-	container.Capabilities = capabilities.GetAllCapabilities()
+func (d *driver) setPrivileged(container *configs.Config) (err error) {
+	container.Capabilities = execdriver.GetAllCapabilities()
 	container.Cgroups.AllowAllDevices = true
 
-	hostDeviceNodes, err := devices.GetHostDeviceNodes()
+	hostDevices, err := devices.HostDevices()
 	if err != nil {
 		return err
 	}
-	container.MountConfig.DeviceNodes = hostDeviceNodes
-
-	container.RestrictSys = false
+	container.Devices = hostDevices
 
 	if apparmor.IsEnabled() {
 		container.AppArmorProfile = "unconfined"
@@ -169,39 +193,66 @@ func (d *driver) setPrivileged(container *libcontainer.Config) (err error) {
 	return nil
 }
 
-func (d *driver) setCapabilities(container *libcontainer.Config, c *execdriver.Command) (err error) {
+func (d *driver) setCapabilities(container *configs.Config, c *execdriver.Command) (err error) {
 	container.Capabilities, err = execdriver.TweakCapabilities(container.Capabilities, c.CapAdd, c.CapDrop)
 	return err
 }
 
-func (d *driver) setupRlimits(container *libcontainer.Config, c *execdriver.Command) {
+func (d *driver) setupRlimits(container *configs.Config, c *execdriver.Command) {
 	if c.Resources == nil {
 		return
 	}
 
 	for _, rlimit := range c.Resources.Rlimits {
-		container.Rlimits = append(container.Rlimits, libcontainer.Rlimit((*rlimit)))
+		container.Rlimits = append(container.Rlimits, configs.Rlimit{
+			Type: rlimit.Type,
+			Hard: rlimit.Hard,
+			Soft: rlimit.Soft,
+		})
 	}
 }
 
-func (d *driver) setupMounts(container *libcontainer.Config, c *execdriver.Command) error {
+func (d *driver) setupMounts(container *configs.Config, c *execdriver.Command) error {
+	userMounts := make(map[string]struct{})
 	for _, m := range c.Mounts {
-		container.MountConfig.Mounts = append(container.MountConfig.Mounts, &mount.Mount{
-			Type:        "bind",
-			Source:      m.Source,
-			Destination: m.Destination,
-			Writable:    m.Writable,
-			Private:     m.Private,
-			Slave:       m.Slave,
-		})
+		userMounts[m.Destination] = struct{}{}
+	}
+
+	// Filter out mounts that are overriden by user supplied mounts
+	var defaultMounts []*configs.Mount
+	for _, m := range container.Mounts {
+		if _, ok := userMounts[m.Destination]; !ok {
+			defaultMounts = append(defaultMounts, m)
+		}
+	}
+	container.Mounts = defaultMounts
+
+	for _, m := range c.Mounts {
+		dest, err := symlink.FollowSymlinkInScope(filepath.Join(c.Rootfs, m.Destination), c.Rootfs)
+		if err != nil {
+			return err
+		}
+		flags := syscall.MS_BIND | syscall.MS_REC
+		if !m.Writable {
+			flags |= syscall.MS_RDONLY
+		}
+		if m.Slave {
+			flags |= syscall.MS_SLAVE
+		}
+
+		container.Mounts = append(container.Mounts, &configs.Mount{
+			Source:      m.Source,
+			Destination: dest,
+			Device:      "bind",
+			Flags:       flags,
+		})
 	}
 	return nil
 }
 
-func (d *driver) setupLabels(container *libcontainer.Config, c *execdriver.Command) error {
+func (d *driver) setupLabels(container *configs.Config, c *execdriver.Command) error {
 	container.ProcessLabel = c.ProcessLabel
-	container.MountConfig.MountLabel = c.MountLabel
+	container.MountLabel = c.MountLabel
 
 	return nil
 }
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"errors"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
"io"
|
"io"
|
||||||
"io/ioutil"
|
"io/ioutil"
|
||||||
"os"
|
"os"
|
||||||
"os/exec"
|
"os/exec"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
"syscall"
|
"syscall"
|
||||||
|
"time"
|
||||||
|
|
||||||
log "github.com/Sirupsen/logrus"
|
log "github.com/Sirupsen/logrus"
|
||||||
"github.com/docker/docker/daemon/execdriver"
|
"github.com/docker/docker/daemon/execdriver"
|
||||||
|
"github.com/docker/docker/pkg/reexec"
|
||||||
sysinfo "github.com/docker/docker/pkg/system"
|
sysinfo "github.com/docker/docker/pkg/system"
|
||||||
"github.com/docker/docker/pkg/term"
|
"github.com/docker/docker/pkg/term"
|
||||||
"github.com/docker/libcontainer"
|
"github.com/docker/libcontainer"
|
||||||
"github.com/docker/libcontainer/apparmor"
|
"github.com/docker/libcontainer/apparmor"
|
||||||
"github.com/docker/libcontainer/cgroups/fs"
|
|
||||||
"github.com/docker/libcontainer/cgroups/systemd"
|
"github.com/docker/libcontainer/cgroups/systemd"
|
||||||
consolepkg "github.com/docker/libcontainer/console"
|
"github.com/docker/libcontainer/configs"
|
||||||
"github.com/docker/libcontainer/namespaces"
|
|
||||||
_ "github.com/docker/libcontainer/namespaces/nsenter"
|
|
||||||
"github.com/docker/libcontainer/system"
|
"github.com/docker/libcontainer/system"
|
||||||
|
"github.com/docker/libcontainer/utils"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
|
@@ -33,16 +33,12 @@ const (
 	Version = "0.2"
 )

-type activeContainer struct {
-	container *libcontainer.Config
-	cmd       *exec.Cmd
-}
-
 type driver struct {
 	root     string
 	initPath string
-	activeContainers map[string]*activeContainer
+	activeContainers map[string]libcontainer.Container
 	machineMemory int64
+	factory libcontainer.Factory
 	sync.Mutex
 }

@@ -59,11 +55,27 @@ func NewDriver(root, initPath string) (*driver, error) {
 	if err := apparmor.InstallDefaultProfile(); err != nil {
 		return nil, err
 	}
+	cgm := libcontainer.Cgroupfs
+	if systemd.UseSystemd() {
+		cgm = libcontainer.SystemdCgroups
+	}
+
+	f, err := libcontainer.New(
+		root,
+		cgm,
+		libcontainer.InitPath(reexec.Self(), DriverName),
+		libcontainer.TmpfsRoot,
+	)
+	if err != nil {
+		return nil, err
+	}
+
 	return &driver{
 		root:     root,
 		initPath: initPath,
-		activeContainers: make(map[string]*activeContainer),
+		activeContainers: make(map[string]libcontainer.Container),
 		machineMemory: meminfo.MemTotal,
+		factory: f,
 	}, nil
 }
@@ -81,101 +93,141 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba

 	var term execdriver.Terminal

+	p := &libcontainer.Process{
+		Args: append([]string{c.ProcessConfig.Entrypoint}, c.ProcessConfig.Arguments...),
+		Env:  c.ProcessConfig.Env,
+		Cwd:  c.WorkingDir,
+		User: c.ProcessConfig.User,
+	}
+
 	if c.ProcessConfig.Tty {
-		term, err = NewTtyConsole(&c.ProcessConfig, pipes)
+		rootuid, err := container.HostUID()
+		if err != nil {
+			return execdriver.ExitStatus{ExitCode: -1}, err
+		}
+		cons, err := p.NewConsole(rootuid)
+		if err != nil {
+			return execdriver.ExitStatus{ExitCode: -1}, err
+		}
+		term, err = NewTtyConsole(cons, pipes, rootuid)
 	} else {
-		term, err = execdriver.NewStdConsole(&c.ProcessConfig, pipes)
+		p.Stdout = pipes.Stdout
+		p.Stderr = pipes.Stderr
+		r, w, err := os.Pipe()
+		if err != nil {
+			return execdriver.ExitStatus{ExitCode: -1}, err
+		}
+		if pipes.Stdin != nil {
+			go func() {
+				io.Copy(w, pipes.Stdin)
+				w.Close()
+			}()
+			p.Stdin = r
+		}
+		term = &execdriver.StdConsole{}
 	}
 	if err != nil {
 		return execdriver.ExitStatus{ExitCode: -1}, err
 	}
 	c.ProcessConfig.Terminal = term

+	cont, err := d.factory.Create(c.ID, container)
+	if err != nil {
+		return execdriver.ExitStatus{ExitCode: -1}, err
+	}
 	d.Lock()
-	d.activeContainers[c.ID] = &activeContainer{
-		container: container,
-		cmd:       &c.ProcessConfig.Cmd,
-	}
+	d.activeContainers[c.ID] = cont
 	d.Unlock()
-
-	var (
-		dataPath = filepath.Join(d.root, c.ID)
-		args     = append([]string{c.ProcessConfig.Entrypoint}, c.ProcessConfig.Arguments...)
-	)
-
-	if err := d.createContainerRoot(c.ID); err != nil {
-		return execdriver.ExitStatus{ExitCode: -1}, err
-	}
-	defer d.cleanContainer(c.ID)
-
-	if err := d.writeContainerFile(container, c.ID); err != nil {
-		return execdriver.ExitStatus{ExitCode: -1}, err
-	}
-
-	execOutputChan := make(chan execOutput, 1)
-	waitForStart := make(chan struct{})
-
-	go func() {
-		exitCode, err := namespaces.Exec(container, c.ProcessConfig.Stdin, c.ProcessConfig.Stdout, c.ProcessConfig.Stderr, c.ProcessConfig.Console, dataPath, args, func(container *libcontainer.Config, console, dataPath, init string, child *os.File, args []string) *exec.Cmd {
-			c.ProcessConfig.Path = d.initPath
-			c.ProcessConfig.Args = append([]string{
-				DriverName,
-				"-console", console,
-				"-pipe", "3",
-				"-root", filepath.Join(d.root, c.ID),
-				"--",
-			}, args...)
-
-			// set this to nil so that when we set the clone flags anything else is reset
-			c.ProcessConfig.SysProcAttr = &syscall.SysProcAttr{
-				Cloneflags: uintptr(namespaces.GetNamespaceFlags(container.Namespaces)),
-			}
-			c.ProcessConfig.ExtraFiles = []*os.File{child}
-
-			c.ProcessConfig.Env = container.Env
-			c.ProcessConfig.Dir = container.RootFs
-
-			return &c.ProcessConfig.Cmd
-		}, func() {
-			close(waitForStart)
-			if startCallback != nil {
-				c.ContainerPid = c.ProcessConfig.Process.Pid
-				startCallback(&c.ProcessConfig, c.ContainerPid)
-			}
-		})
-		execOutputChan <- execOutput{exitCode, err}
-	}()
-
-	select {
-	case execOutput := <-execOutputChan:
-		return execdriver.ExitStatus{ExitCode: execOutput.exitCode}, execOutput.err
-	case <-waitForStart:
-		break
-	}
-
-	oomKill := false
-	state, err := libcontainer.GetState(filepath.Join(d.root, c.ID))
-	if err == nil {
-		oomKillNotification, err := libcontainer.NotifyOnOOM(state)
-		if err == nil {
-			_, oomKill = <-oomKillNotification
-		} else {
-			log.Warnf("WARNING: Your kernel does not support OOM notifications: %s", err)
-		}
-	} else {
-		log.Warnf("Failed to get container state, oom notify will not work: %s", err)
-	}
-	// wait for the container to exit.
-	execOutput := <-execOutputChan
-
-	return execdriver.ExitStatus{ExitCode: execOutput.exitCode, OOMKilled: oomKill}, execOutput.err
-}
-
-func (d *driver) Kill(p *execdriver.Command, sig int) error {
-	if p.ProcessConfig.Process == nil {
-		return errors.New("exec: not started")
-	}
-	return syscall.Kill(p.ProcessConfig.Process.Pid, syscall.Signal(sig))
-}
+	defer func() {
+		cont.Destroy()
+		d.cleanContainer(c.ID)
+	}()
+
+	if err := cont.Start(p); err != nil {
+		return execdriver.ExitStatus{ExitCode: -1}, err
+	}
+
+	if startCallback != nil {
+		pid, err := p.Pid()
+		if err != nil {
+			p.Signal(os.Kill)
+			p.Wait()
+			return execdriver.ExitStatus{ExitCode: -1}, err
+		}
+		startCallback(&c.ProcessConfig, pid)
+	}
+
+	oomKillNotification, err := cont.NotifyOOM()
+	if err != nil {
+		oomKillNotification = nil
+		log.Warnf("Your kernel does not support OOM notifications: %s", err)
+	}
+	waitF := p.Wait
+	if nss := cont.Config().Namespaces; nss.Contains(configs.NEWPID) {
+		// we need such hack for tracking processes with inerited fds,
+		// because cmd.Wait() waiting for all streams to be copied
+		waitF = waitInPIDHost(p, cont)
+	}
+	ps, err := waitF()
+	if err != nil {
+		if err, ok := err.(*exec.ExitError); !ok {
+			return execdriver.ExitStatus{ExitCode: -1}, err
+		} else {
+			ps = err.ProcessState
+		}
+	}
+	cont.Destroy()
+
+	_, oomKill := <-oomKillNotification
+
+	return execdriver.ExitStatus{ExitCode: utils.ExitStatus(ps.Sys().(syscall.WaitStatus)), OOMKilled: oomKill}, nil
+}
+
+func waitInPIDHost(p *libcontainer.Process, c libcontainer.Container) func() (*os.ProcessState, error) {
+	return func() (*os.ProcessState, error) {
+		pid, err := p.Pid()
+		if err != nil {
+			return nil, err
+		}
+
+		process, err := os.FindProcess(pid)
+		s, err := process.Wait()
+		if err != nil {
+			if err, ok := err.(*exec.ExitError); !ok {
+				return s, err
+			} else {
+				s = err.ProcessState
+			}
+		}
+		processes, err := c.Processes()
+		if err != nil {
+			return s, err
+		}
+
+		for _, pid := range processes {
+			process, err := os.FindProcess(pid)
+			if err != nil {
+				log.Errorf("Failed to kill process: %d", pid)
+				continue
+			}
+			process.Kill()
+		}
+
+		p.Wait()
+		return s, err
+	}
+}
+
+func (d *driver) Kill(c *execdriver.Command, sig int) error {
+	active := d.activeContainers[c.ID]
+	if active == nil {
+		return fmt.Errorf("active container for %s does not exist", c.ID)
+	}
+	state, err := active.State()
+	if err != nil {
+		return err
+	}
+	return syscall.Kill(state.InitProcessPid, syscall.Signal(sig))
+}

 func (d *driver) Pause(c *execdriver.Command) error {
@@ -183,11 +235,7 @@ func (d *driver) Pause(c *execdriver.Command) error {
 	if active == nil {
 		return fmt.Errorf("active container for %s does not exist", c.ID)
 	}
-	active.container.Cgroups.Freezer = "FROZEN"
-	if systemd.UseSystemd() {
-		return systemd.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer)
-	}
-	return fs.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer)
+	return active.Pause()
 }

 func (d *driver) Unpause(c *execdriver.Command) error {
@@ -195,44 +243,31 @@ func (d *driver) Unpause(c *execdriver.Command) error {
 	if active == nil {
 		return fmt.Errorf("active container for %s does not exist", c.ID)
 	}
-	active.container.Cgroups.Freezer = "THAWED"
-	if systemd.UseSystemd() {
-		return systemd.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer)
-	}
-	return fs.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer)
+	return active.Resume()
 }

-func (d *driver) Terminate(p *execdriver.Command) error {
+func (d *driver) Terminate(c *execdriver.Command) error {
+	defer d.cleanContainer(c.ID)
 	// lets check the start time for the process
-	state, err := libcontainer.GetState(filepath.Join(d.root, p.ID))
+	active := d.activeContainers[c.ID]
+	if active == nil {
+		return fmt.Errorf("active container for %s does not exist", c.ID)
+	}
+	state, err := active.State()
 	if err != nil {
-		if !os.IsNotExist(err) {
-			return err
-		}
-		// TODO: Remove this part for version 1.2.0
-		// This is added only to ensure smooth upgrades from pre 1.1.0 to 1.1.0
-		data, err := ioutil.ReadFile(filepath.Join(d.root, p.ID, "start"))
-		if err != nil {
-			// if we don't have the data on disk then we can assume the process is gone
-			// because this is only removed after we know the process has stopped
-			if os.IsNotExist(err) {
-				return nil
-			}
-			return err
-		}
-		state = &libcontainer.State{InitStartTime: string(data)}
+		return err
 	}
+	pid := state.InitProcessPid

-	currentStartTime, err := system.GetProcessStartTime(p.ProcessConfig.Process.Pid)
+	currentStartTime, err := system.GetProcessStartTime(pid)
 	if err != nil {
 		return err
 	}

-	if state.InitStartTime == currentStartTime {
-		err = syscall.Kill(p.ProcessConfig.Process.Pid, 9)
-		syscall.Wait4(p.ProcessConfig.Process.Pid, nil, 0, nil)
+	if state.InitProcessStartTime == currentStartTime {
+		err = syscall.Kill(pid, 9)
+		syscall.Wait4(pid, nil, 0, nil)
 	}
-	d.cleanContainer(p.ID)

 	return err
@@ -257,15 +292,10 @@ func (d *driver) GetPidsForContainer(id string) ([]int, error) {
 	if active == nil {
 		return nil, fmt.Errorf("active container for %s does not exist", id)
 	}
-	c := active.container.Cgroups
-
-	if systemd.UseSystemd() {
-		return systemd.GetPids(c)
-	}
-	return fs.GetPids(c)
+	return active.Processes()
 }

-func (d *driver) writeContainerFile(container *libcontainer.Config, id string) error {
+func (d *driver) writeContainerFile(container *configs.Config, id string) error {
 	data, err := json.Marshal(container)
 	if err != nil {
 		return err
@@ -277,7 +307,7 @@ func (d *driver) cleanContainer(id string) error {
 	d.Lock()
 	delete(d.activeContainers, id)
 	d.Unlock()
-	return os.RemoveAll(filepath.Join(d.root, id, "container.json"))
+	return os.RemoveAll(filepath.Join(d.root, id))
 }

 func (d *driver) createContainerRoot(id string) error {
@@ -289,42 +319,64 @@ func (d *driver) Clean(id string) error {
 }

 func (d *driver) Stats(id string) (*execdriver.ResourceStats, error) {
-	return execdriver.Stats(filepath.Join(d.root, id), d.activeContainers[id].container.Cgroups.Memory, d.machineMemory)
-}
-
-type TtyConsole struct {
-	MasterPty *os.File
-}
-
-func NewTtyConsole(processConfig *execdriver.ProcessConfig, pipes *execdriver.Pipes) (*TtyConsole, error) {
-	ptyMaster, console, err := consolepkg.CreateMasterAndConsole()
-	if err != nil {
-		return nil, err
-	}
-
-	tty := &TtyConsole{
-		MasterPty: ptyMaster,
-	}
-
-	if err := tty.AttachPipes(&processConfig.Cmd, pipes); err != nil {
-		tty.Close()
-		return nil, err
-	}
-
-	processConfig.Console = console
-
-	return tty, nil
-}
+	c := d.activeContainers[id]
+	if c == nil {
+		return nil, execdriver.ErrNotRunning
+	}
+	now := time.Now()
+	stats, err := c.Stats()
+	if err != nil {
+		return nil, err
+	}
+	memoryLimit := c.Config().Cgroups.Memory
+	// if the container does not have any memory limit specified set the
+	// limit to the machines memory
+	if memoryLimit == 0 {
+		memoryLimit = d.machineMemory
+	}
+	return &execdriver.ResourceStats{
+		Stats:       stats,
+		Read:        now,
+		MemoryLimit: memoryLimit,
+	}, nil
+}
+
+func getEnv(key string, env []string) string {
+	for _, pair := range env {
+		parts := strings.Split(pair, "=")
+		if parts[0] == key {
+			return parts[1]
+		}
+	}
+	return ""
+}
+
+type TtyConsole struct {
+	console libcontainer.Console
+}
+
+func NewTtyConsole(console libcontainer.Console, pipes *execdriver.Pipes, rootuid int) (*TtyConsole, error) {
+	tty := &TtyConsole{
+		console: console,
+	}
+
+	if err := tty.AttachPipes(pipes); err != nil {
+		tty.Close()
+		return nil, err
+	}
+
+	return tty, nil
+}

-func (t *TtyConsole) Master() *os.File {
-	return t.MasterPty
+func (t *TtyConsole) Master() libcontainer.Console {
+	return t.console
 }

 func (t *TtyConsole) Resize(h, w int) error {
-	return term.SetWinsize(t.MasterPty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
+	return term.SetWinsize(t.console.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
 }

-func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) error {
+func (t *TtyConsole) AttachPipes(pipes *execdriver.Pipes) error {
 	go func() {
 		if wb, ok := pipes.Stdout.(interface {
 			CloseWriters() error
@@ -332,12 +384,12 @@ func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) err
 			defer wb.CloseWriters()
 		}

-		io.Copy(pipes.Stdout, t.MasterPty)
+		io.Copy(pipes.Stdout, t.console)
 	}()

 	if pipes.Stdin != nil {
 		go func() {
-			io.Copy(t.MasterPty, pipes.Stdin)
+			io.Copy(t.console, pipes.Stdin)

 			pipes.Stdin.Close()
 		}()
@@ -347,5 +399,5 @@ func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) err
 }

 func (t *TtyConsole) Close() error {
-	return t.MasterPty.Close()
+	return t.console.Close()
 }
@@ -4,67 +4,77 @@ package native

 import (
 	"fmt"
-	"log"
 	"os"
 	"os/exec"
-	"path/filepath"
-	"runtime"
+	"syscall"

 	"github.com/docker/docker/daemon/execdriver"
-	"github.com/docker/docker/pkg/reexec"
 	"github.com/docker/libcontainer"
-	"github.com/docker/libcontainer/namespaces"
+	_ "github.com/docker/libcontainer/nsenter"
+	"github.com/docker/libcontainer/utils"
 )

-const execCommandName = "nsenter-exec"
-
-func init() {
-	reexec.Register(execCommandName, nsenterExec)
-}
-
-func nsenterExec() {
-	runtime.LockOSThread()
-
-	// User args are passed after '--' in the command line.
-	userArgs := findUserArgs()
-
-	config, err := loadConfigFromFd()
-	if err != nil {
-		log.Fatalf("docker-exec: unable to receive config from sync pipe: %s", err)
-	}
-
-	if err := namespaces.FinalizeSetns(config, userArgs); err != nil {
-		log.Fatalf("docker-exec: failed to exec: %s", err)
-	}
-}
-
 // TODO(vishh): Add support for running in priviledged mode and running as a different user.
 func (d *driver) Exec(c *execdriver.Command, processConfig *execdriver.ProcessConfig, pipes *execdriver.Pipes, startCallback execdriver.StartCallback) (int, error) {
 	active := d.activeContainers[c.ID]
 	if active == nil {
 		return -1, fmt.Errorf("No active container exists with ID %s", c.ID)
 	}
-	state, err := libcontainer.GetState(filepath.Join(d.root, c.ID))
-	if err != nil {
-		return -1, fmt.Errorf("State unavailable for container with ID %s. The container may have been cleaned up already. Error: %s", c.ID, err)
-	}

 	var term execdriver.Terminal
+	var err error
+
+	p := &libcontainer.Process{
+		Args: append([]string{processConfig.Entrypoint}, processConfig.Arguments...),
+		Env:  c.ProcessConfig.Env,
+		Cwd:  c.WorkingDir,
+		User: c.ProcessConfig.User,
+	}

 	if processConfig.Tty {
-		term, err = NewTtyConsole(processConfig, pipes)
+		config := active.Config()
+		rootuid, err := config.HostUID()
+		if err != nil {
+			return -1, err
+		}
+		cons, err := p.NewConsole(rootuid)
+		if err != nil {
+			return -1, err
+		}
+		term, err = NewTtyConsole(cons, pipes, rootuid)
 	} else {
-		term, err = execdriver.NewStdConsole(processConfig, pipes)
+		p.Stdout = pipes.Stdout
+		p.Stderr = pipes.Stderr
+		p.Stdin = pipes.Stdin
+		term = &execdriver.StdConsole{}
+	}
+	if err != nil {
+		return -1, err
 	}

 	processConfig.Terminal = term

-	args := append([]string{processConfig.Entrypoint}, processConfig.Arguments...)
-
-	return namespaces.ExecIn(active.container, state, args, os.Args[0], "exec", processConfig.Stdin, processConfig.Stdout, processConfig.Stderr, processConfig.Console,
-		func(cmd *exec.Cmd) {
-			if startCallback != nil {
-				startCallback(&c.ProcessConfig, cmd.Process.Pid)
-			}
-		})
+	if err := active.Start(p); err != nil {
+		return -1, err
+	}
+
+	if startCallback != nil {
+		pid, err := p.Pid()
+		if err != nil {
+			p.Signal(os.Kill)
+			p.Wait()
+			return -1, err
+		}
+		startCallback(&c.ProcessConfig, pid)
+	}
+
+	ps, err := p.Wait()
+	if err != nil {
+		exitErr, ok := err.(*exec.ExitError)
+		if !ok {
+			return -1, err
+		}
+		ps = exitErr.ProcessState
+	}
+	return utils.ExitStatus(ps.Sys().(syscall.WaitStatus)), nil
 }
@@ -2,13 +2,6 @@

 package native

-import (
-	"os"
-	"path/filepath"
-
-	"github.com/docker/libcontainer"
-)
-
 type info struct {
 	ID     string
 	driver *driver
@@ -18,13 +11,6 @@ type info struct {
 // pid file for a container. If the file exists then the
 // container is currently running
 func (i *info) IsRunning() bool {
-	if _, err := libcontainer.GetState(filepath.Join(i.driver.root, i.ID)); err == nil {
-		return true
-	}
-	// TODO: Remove this part for version 1.2.0
-	// This is added only to ensure smooth upgrades from pre 1.1.0 to 1.1.0
-	if _, err := os.Stat(filepath.Join(i.driver.root, i.ID, "pid")); err == nil {
-		return true
-	}
-	return false
+	_, ok := i.driver.activeContainers[i.ID]
+	return ok
 }
@@ -3,55 +3,40 @@
 package native

 import (
-	"encoding/json"
-	"flag"
 	"fmt"
 	"os"
-	"path/filepath"
 	"runtime"

 	"github.com/docker/docker/pkg/reexec"
 	"github.com/docker/libcontainer"
-	"github.com/docker/libcontainer/namespaces"
 )

 func init() {
 	reexec.Register(DriverName, initializer)
 }

+func fatal(err error) {
+	if lerr, ok := err.(libcontainer.Error); ok {
+		lerr.Detail(os.Stderr)
+		os.Exit(1)
+	}
+
+	fmt.Fprintln(os.Stderr, err)
+	os.Exit(1)
+}
+
 func initializer() {
+	runtime.GOMAXPROCS(1)
 	runtime.LockOSThread()
-	var (
-		pipe    = flag.Int("pipe", 0, "sync pipe fd")
-		console = flag.String("console", "", "console (pty slave) path")
-		root    = flag.String("root", ".", "root path for configuration files")
-	)
-
-	flag.Parse()
-
-	var container *libcontainer.Config
-	f, err := os.Open(filepath.Join(*root, "container.json"))
+	factory, err := libcontainer.New("")
 	if err != nil {
-		writeError(err)
+		fatal(err)
+	}
+	if err := factory.StartInitialization(3); err != nil {
+		fatal(err)
 	}

-	if err := json.NewDecoder(f).Decode(&container); err != nil {
-		f.Close()
-		writeError(err)
-	}
-	f.Close()
-
-	rootfs, err := os.Getwd()
-	if err != nil {
-		writeError(err)
-	}
-
-	if err := namespaces.Init(container, rootfs, *console, os.NewFile(uintptr(*pipe), "child"), flag.Args()); err != nil {
-		writeError(err)
-	}
-
-	panic("Unreachable")
+	panic("unreachable")
 }

 func writeError(err error) {
@@ -1,14 +1,17 @@
 package template

 import (
-	"github.com/docker/libcontainer"
+	"syscall"
+
 	"github.com/docker/libcontainer/apparmor"
-	"github.com/docker/libcontainer/cgroups"
+	"github.com/docker/libcontainer/configs"
 )

+const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV
+
 // New returns the docker default configuration for libcontainer
-func New() *libcontainer.Config {
-	container := &libcontainer.Config{
+func New() *configs.Config {
+	container := &configs.Config{
 		Capabilities: []string{
 			"CHOWN",
 			"DAC_OVERRIDE",
@@ -25,18 +28,64 @@ func New() *libcontainer.Config {
 			"KILL",
 			"AUDIT_WRITE",
 		},
-		Namespaces: libcontainer.Namespaces([]libcontainer.Namespace{
+		Namespaces: configs.Namespaces([]configs.Namespace{
 			{Type: "NEWNS"},
 			{Type: "NEWUTS"},
 			{Type: "NEWIPC"},
 			{Type: "NEWPID"},
 			{Type: "NEWNET"},
 		}),
-		Cgroups: &cgroups.Cgroup{
+		Cgroups: &configs.Cgroup{
 			Parent:          "docker",
 			AllowAllDevices: false,
 		},
-		MountConfig: &libcontainer.MountConfig{},
+		Mounts: []*configs.Mount{
+			{
+				Source:      "proc",
+				Destination: "/proc",
+				Device:      "proc",
+				Flags:       defaultMountFlags,
+			},
+			{
+				Source:      "tmpfs",
+				Destination: "/dev",
+				Device:      "tmpfs",
+				Flags:       syscall.MS_NOSUID | syscall.MS_STRICTATIME,
+				Data:        "mode=755",
+			},
+			{
+				Source:      "devpts",
+				Destination: "/dev/pts",
+				Device:      "devpts",
+				Flags:       syscall.MS_NOSUID | syscall.MS_NOEXEC,
+				Data:        "newinstance,ptmxmode=0666,mode=0620,gid=5",
+			},
+			{
+				Device:      "tmpfs",
+				Source:      "shm",
+				Destination: "/dev/shm",
+				Data:        "mode=1777,size=65536k",
+				Flags:       defaultMountFlags,
+			},
+			{
+				Source:      "mqueue",
+				Destination: "/dev/mqueue",
+				Device:      "mqueue",
+				Flags:       defaultMountFlags,
+			},
+			{
+				Source:      "sysfs",
+				Destination: "/sys",
+				Device:      "sysfs",
+				Flags:       defaultMountFlags | syscall.MS_RDONLY,
+			},
+		},
+		MaskPaths: []string{
+			"/proc/kcore",
+		},
+		ReadonlyPaths: []string{
+			"/proc/sys", "/proc/sysrq-trigger", "/proc/irq", "/proc/bus",
+		},
 	}

 	if apparmor.IsEnabled() {
@@ -2,28 +2,21 @@
 package native
 
-import (
-    "encoding/json"
-    "os"
-
-    "github.com/docker/libcontainer"
-)
-
-func findUserArgs() []string {
-    for i, a := range os.Args {
-        if a == "--" {
-            return os.Args[i+1:]
-        }
-    }
-    return []string{}
-}
-
-// loadConfigFromFd loads a container's config from the sync pipe that is provided by
-// fd 3 when running a process
-func loadConfigFromFd() (*libcontainer.Config, error) {
-    var config *libcontainer.Config
-    if err := json.NewDecoder(os.NewFile(3, "child")).Decode(&config); err != nil {
-        return nil, err
-    }
-    return config, nil
-}
+//func findUserArgs() []string {
+//for i, a := range os.Args {
+//if a == "--" {
+//return os.Args[i+1:]
+//}
+//}
+//return []string{}
+//}
+
+//// loadConfigFromFd loads a container's config from the sync pipe that is provided by
+//// fd 3 when running a process
+//func loadConfigFromFd() (*configs.Config, error) {
+//var config *libcontainer.Config
+//if err := json.NewDecoder(os.NewFile(3, "child")).Decode(&config); err != nil {
+//return nil, err
+//}
+//return config, nil
+//}
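The hunk above comments out the helper that splits the process arguments at the `--` separator. As a standalone sketch of what that removed helper did (hypothetical name `userArgsAfterDashDash`, taking the slice as a parameter instead of reading `os.Args` directly):

```go
package main

import "fmt"

// userArgsAfterDashDash returns everything after the first "--" separator,
// mirroring the behavior of the findUserArgs helper removed in the hunk above.
func userArgsAfterDashDash(args []string) []string {
	for i, a := range args {
		if a == "--" {
			return args[i+1:]
		}
	}
	// No separator found: no user args.
	return []string{}
}

func main() {
	fmt.Println(userArgsAfterDashDash([]string{"native", "--", "sh", "-c", "ls"}))
}
```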
@@ -5,13 +5,83 @@ import (
     "strings"
 
     "github.com/docker/docker/utils"
-    "github.com/docker/libcontainer/security/capabilities"
+    "github.com/syndtr/gocapability/capability"
 )
 
+var capabilityList = Capabilities{
+    {Key: "SETPCAP", Value: capability.CAP_SETPCAP},
+    {Key: "SYS_MODULE", Value: capability.CAP_SYS_MODULE},
+    {Key: "SYS_RAWIO", Value: capability.CAP_SYS_RAWIO},
+    {Key: "SYS_PACCT", Value: capability.CAP_SYS_PACCT},
+    {Key: "SYS_ADMIN", Value: capability.CAP_SYS_ADMIN},
+    {Key: "SYS_NICE", Value: capability.CAP_SYS_NICE},
+    {Key: "SYS_RESOURCE", Value: capability.CAP_SYS_RESOURCE},
+    {Key: "SYS_TIME", Value: capability.CAP_SYS_TIME},
+    {Key: "SYS_TTY_CONFIG", Value: capability.CAP_SYS_TTY_CONFIG},
+    {Key: "MKNOD", Value: capability.CAP_MKNOD},
+    {Key: "AUDIT_WRITE", Value: capability.CAP_AUDIT_WRITE},
+    {Key: "AUDIT_CONTROL", Value: capability.CAP_AUDIT_CONTROL},
+    {Key: "MAC_OVERRIDE", Value: capability.CAP_MAC_OVERRIDE},
+    {Key: "MAC_ADMIN", Value: capability.CAP_MAC_ADMIN},
+    {Key: "NET_ADMIN", Value: capability.CAP_NET_ADMIN},
+    {Key: "SYSLOG", Value: capability.CAP_SYSLOG},
+    {Key: "CHOWN", Value: capability.CAP_CHOWN},
+    {Key: "NET_RAW", Value: capability.CAP_NET_RAW},
+    {Key: "DAC_OVERRIDE", Value: capability.CAP_DAC_OVERRIDE},
+    {Key: "FOWNER", Value: capability.CAP_FOWNER},
+    {Key: "DAC_READ_SEARCH", Value: capability.CAP_DAC_READ_SEARCH},
+    {Key: "FSETID", Value: capability.CAP_FSETID},
+    {Key: "KILL", Value: capability.CAP_KILL},
+    {Key: "SETGID", Value: capability.CAP_SETGID},
+    {Key: "SETUID", Value: capability.CAP_SETUID},
+    {Key: "LINUX_IMMUTABLE", Value: capability.CAP_LINUX_IMMUTABLE},
+    {Key: "NET_BIND_SERVICE", Value: capability.CAP_NET_BIND_SERVICE},
+    {Key: "NET_BROADCAST", Value: capability.CAP_NET_BROADCAST},
+    {Key: "IPC_LOCK", Value: capability.CAP_IPC_LOCK},
+    {Key: "IPC_OWNER", Value: capability.CAP_IPC_OWNER},
+    {Key: "SYS_CHROOT", Value: capability.CAP_SYS_CHROOT},
+    {Key: "SYS_PTRACE", Value: capability.CAP_SYS_PTRACE},
+    {Key: "SYS_BOOT", Value: capability.CAP_SYS_BOOT},
+    {Key: "LEASE", Value: capability.CAP_LEASE},
+    {Key: "SETFCAP", Value: capability.CAP_SETFCAP},
+    {Key: "WAKE_ALARM", Value: capability.CAP_WAKE_ALARM},
+    {Key: "BLOCK_SUSPEND", Value: capability.CAP_BLOCK_SUSPEND},
+}
+
+type (
+    CapabilityMapping struct {
+        Key   string         `json:"key,omitempty"`
+        Value capability.Cap `json:"value,omitempty"`
+    }
+    Capabilities []*CapabilityMapping
+)
+
+func (c *CapabilityMapping) String() string {
+    return c.Key
+}
+
+func GetCapability(key string) *CapabilityMapping {
+    for _, capp := range capabilityList {
+        if capp.Key == key {
+            cpy := *capp
+            return &cpy
+        }
+    }
+    return nil
+}
+
+func GetAllCapabilities() []string {
+    output := make([]string, len(capabilityList))
+    for i, capability := range capabilityList {
+        output[i] = capability.String()
+    }
+    return output
+}
+
 func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
     var (
         newCaps []string
-        allCaps = capabilities.GetAllCapabilities()
+        allCaps = GetAllCapabilities()
     )
 
     // look for invalid cap in the drop list
@@ -26,7 +96,7 @@ func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
 
     // handle --cap-add=all
     if utils.StringsContainsNoCase(adds, "all") {
-        basics = capabilities.GetAllCapabilities()
+        basics = allCaps
     }
 
     if !utils.StringsContainsNoCase(drops, "all") {
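The capability table above pairs human-readable names with constants from `github.com/syndtr/gocapability/capability`, and `GetCapability` deliberately returns a copy (`cpy := *capp`) so callers cannot mutate the shared list. A minimal self-contained sketch of that lookup pattern (the `Cap` type and numeric values here are placeholders, not the real kernel constants):

```go
package main

import "fmt"

// Cap stands in for capability.Cap from gocapability; the numeric values
// below are illustrative placeholders, not real kernel capability numbers.
type Cap int

type CapabilityMapping struct {
	Key   string
	Value Cap
}

var capabilityList = []*CapabilityMapping{
	{Key: "CHOWN", Value: 0},
	{Key: "KILL", Value: 5},
	{Key: "NET_ADMIN", Value: 12},
}

// GetCapability returns a copy of the matching entry so callers cannot
// mutate the shared capabilityList, mirroring the cpy := *capp trick above.
func GetCapability(key string) *CapabilityMapping {
	for _, c := range capabilityList {
		if c.Key == key {
			cpy := *c
			return &cpy
		}
	}
	return nil
}

func main() {
	fmt.Println(GetCapability("KILL").Key)
}
```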
@@ -35,8 +35,8 @@ import (
     "github.com/docker/docker/pkg/archive"
     "github.com/docker/docker/pkg/chrootarchive"
     "github.com/docker/docker/pkg/common"
+    "github.com/docker/docker/pkg/directory"
     mountpk "github.com/docker/docker/pkg/mount"
-    "github.com/docker/docker/utils"
     "github.com/docker/libcontainer/label"
 )
 
@@ -216,7 +216,7 @@ func (a *Driver) Remove(id string) error {
     defer a.Unlock()
 
     if a.active[id] != 0 {
-        log.Errorf("Warning: removing active id %s", id)
+        log.Errorf("Removing active id %s", id)
     }
 
     // Make sure the dir is umounted first
@@ -320,7 +320,7 @@ func (a *Driver) applyDiff(id string, diff archive.ArchiveReader) error {
 // relative to its base filesystem directory.
 func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
     // AUFS doesn't need the parent layer to calculate the diff size.
-    return utils.TreeSize(path.Join(a.rootPath(), "diff", id))
+    return directory.Size(path.Join(a.rootPath(), "diff", id))
 }
 
 // ApplyDiff extracts the changeset from the given diff into the
@@ -378,7 +378,7 @@ func (a *Driver) mount(id, mountLabel string) error {
     }
 
     if err := a.aufsMount(layers, rw, target, mountLabel); err != nil {
-        return err
+        return fmt.Errorf("error creating aufs mount to %s: %v", target, err)
     }
     return nil
 }
@@ -9,7 +9,7 @@ import (
 
 func Unmount(target string) error {
     if err := exec.Command("auplink", target, "flush").Run(); err != nil {
-        log.Errorf("[warning]: couldn't run auplink before unmount: %s", err)
+        log.Errorf("Couldn't run auplink before unmount: %s", err)
     }
     if err := syscall.Unmount(target, 0); err != nil {
         return err
@@ -1 +0,0 @@
-Alexander Larsson <alexl@redhat.com> (@alexlarsson)
@@ -1,2 +0,0 @@
-Alexander Larsson <alexl@redhat.com> (@alexlarsson)
-Vincent Batts <vbatts@redhat.com> (@vbatts)
@@ -150,7 +150,7 @@ Here is the list of supported options:
     If using a block device for device mapper storage, ideally lvm2
     would be used to create/manage the thin-pool volume that is then
     handed to docker to exclusively create/manage the thin and thin
-    snapshot volumes needed for it's containers. Managing the thin-pool
+    snapshot volumes needed for its containers. Managing the thin-pool
     outside of docker makes for the most feature-rich method of having
     docker utilize device mapper thin provisioning as the backing
     storage for docker's containers. lvm2-based thin-pool management
@@ -347,7 +347,7 @@ func (devices *DeviceSet) deviceFileWalkFunction(path string, finfo os.FileInfo)
     }
 
     if dinfo.DeviceId > MaxDeviceId {
-        log.Errorf("Warning: Ignoring Invalid DeviceId=%d", dinfo.DeviceId)
+        log.Errorf("Ignoring Invalid DeviceId=%d", dinfo.DeviceId)
         return nil
     }
 
@@ -554,7 +554,7 @@ func (devices *DeviceSet) createRegisterDevice(hash string) (*DevInfo, error) {
         // happen. Now we have a mechianism to find
         // a free device Id. So something is not right.
         // Give a warning and continue.
-        log.Errorf("Warning: Device Id %d exists in pool but it is supposed to be unused", deviceId)
+        log.Errorf("Device Id %d exists in pool but it is supposed to be unused", deviceId)
         deviceId, err = devices.getNextFreeDeviceId()
         if err != nil {
             return nil, err
@@ -606,7 +606,7 @@ func (devices *DeviceSet) createRegisterSnapDevice(hash string, baseInfo *DevInf
         // happen. Now we have a mechianism to find
         // a free device Id. So something is not right.
         // Give a warning and continue.
-        log.Errorf("Warning: Device Id %d exists in pool but it is supposed to be unused", deviceId)
+        log.Errorf("Device Id %d exists in pool but it is supposed to be unused", deviceId)
         deviceId, err = devices.getNextFreeDeviceId()
         if err != nil {
             return err
@@ -852,18 +852,18 @@ func (devices *DeviceSet) rollbackTransaction() error {
     // closed. In that case this call will fail. Just leave a message
     // in case of failure.
     if err := devicemapper.DeleteDevice(devices.getPoolDevName(), devices.DeviceId); err != nil {
-        log.Errorf("Warning: Unable to delete device: %s", err)
+        log.Errorf("Unable to delete device: %s", err)
     }
 
     dinfo := &DevInfo{Hash: devices.DeviceIdHash}
     if err := devices.removeMetadata(dinfo); err != nil {
-        log.Errorf("Warning: Unable to remove metadata: %s", err)
+        log.Errorf("Unable to remove metadata: %s", err)
     } else {
         devices.markDeviceIdFree(devices.DeviceId)
     }
 
     if err := devices.removeTransactionMetaData(); err != nil {
-        log.Errorf("Warning: Unable to remove transaction meta file %s: %s", devices.transactionMetaFile(), err)
+        log.Errorf("Unable to remove transaction meta file %s: %s", devices.transactionMetaFile(), err)
     }
 
     return nil
@@ -883,7 +883,7 @@ func (devices *DeviceSet) processPendingTransaction() error {
     // If open transaction Id is less than pool transaction Id, something
     // is wrong. Bail out.
     if devices.OpenTransactionId < devices.TransactionId {
-        log.Errorf("Warning: Open Transaction id %d is less than pool transaction id %d", devices.OpenTransactionId, devices.TransactionId)
+        log.Errorf("Open Transaction id %d is less than pool transaction id %d", devices.OpenTransactionId, devices.TransactionId)
         return nil
     }
 
@@ -963,7 +963,7 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 
     // https://github.com/docker/docker/issues/4036
     if supported := devicemapper.UdevSetSyncSupport(true); !supported {
-        log.Warnf("WARNING: Udev sync is not supported. This will lead to unexpected behavior, data loss and errors")
+        log.Warnf("Udev sync is not supported. This will lead to unexpected behavior, data loss and errors")
     }
     log.Debugf("devicemapper: udev sync support: %v", devicemapper.UdevSyncSupported())
 
@@ -1221,7 +1221,7 @@ func (devices *DeviceSet) deactivateDevice(info *DevInfo) error {
     // Wait for the unmount to be effective,
     // by watching the value of Info.OpenCount for the device
     if err := devices.waitClose(info); err != nil {
-        log.Errorf("Warning: error waiting for device %s to close: %s", info.Hash, err)
+        log.Errorf("Error waiting for device %s to close: %s", info.Hash, err)
     }
 
     devinfo, err := devicemapper.GetInfo(info.Name())
@@ -1584,7 +1584,7 @@ func (devices *DeviceSet) getUnderlyingAvailableSpace(loopFile string) (uint64,
     buf := new(syscall.Statfs_t)
     err := syscall.Statfs(loopFile, buf)
     if err != nil {
-        log.Warnf("Warning: Couldn't stat loopfile filesystem %v: %v", loopFile, err)
+        log.Warnf("Couldn't stat loopfile filesystem %v: %v", loopFile, err)
         return 0, err
     }
     return buf.Bfree * uint64(buf.Bsize), nil
@@ -1594,7 +1594,7 @@ func (devices *DeviceSet) isRealFile(loopFile string) (bool, error) {
     if loopFile != "" {
         fi, err := os.Stat(loopFile)
         if err != nil {
-            log.Warnf("Warning: Couldn't stat loopfile %v: %v", loopFile, err)
+            log.Warnf("Couldn't stat loopfile %v: %v", loopFile, err)
             return false, err
         }
         return fi.Mode().IsRegular(), nil
@@ -164,7 +164,7 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
 func (d *Driver) Put(id string) error {
     err := d.DeviceSet.UnmountDevice(id)
     if err != nil {
-        log.Errorf("Warning: error unmounting device %s: %s", id, err)
+        log.Errorf("Error unmounting device %s: %s", id, err)
     }
     return err
 }
@@ -184,6 +184,6 @@ func checkPriorDriver(name, root string) {
         }
     }
     if len(priorDrivers) > 0 {
-        log.Warnf("graphdriver %s selected. Warning: your graphdriver directory %s already contains data managed by other graphdrivers: %s", name, root, strings.Join(priorDrivers, ","))
+        log.Warnf("Graphdriver %s selected. Your graphdriver directory %s already contains data managed by other graphdrivers: %s", name, root, strings.Join(priorDrivers, ","))
     }
 }
@@ -73,7 +73,7 @@ func newDriver(t *testing.T, name string) *Driver {
 
     d, err := graphdriver.GetDriver(name, root, nil)
     if err != nil {
-        t.Logf("graphdriver: %s\n", err.Error())
+        t.Logf("graphdriver: %v\n", err)
         if err == graphdriver.ErrNotSupported || err == graphdriver.ErrPrerequisites || err == graphdriver.ErrIncompatibleFS {
             t.Skipf("Driver %s not supported", name)
         }
@@ -301,7 +301,7 @@ func (d *Driver) Get(id string, mountLabel string) (string, error) {
 
     opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lowerDir, upperDir, workDir)
     if err := syscall.Mount("overlay", mergedDir, "overlay", 0, label.FormatMountLabel(opts, mountLabel)); err != nil {
-        return "", err
+        return "", fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err)
     }
     mount.path = mergedDir
     mount.mounted = true
@@ -9,6 +9,7 @@ import (
     "github.com/docker/docker/image"
     "github.com/docker/docker/pkg/common"
     "github.com/docker/docker/pkg/parsers"
+    "github.com/docker/docker/utils"
 )
 
 func (daemon *Daemon) ImageDelete(job *engine.Job) engine.Status {
@@ -48,7 +49,7 @@ func (daemon *Daemon) DeleteImage(eng *engine.Engine, name string, imgs *engine.
     img, err := daemon.Repositories().LookupImage(name)
     if err != nil {
         if r, _ := daemon.Repositories().Get(repoName); r != nil {
-            return fmt.Errorf("No such image: %s:%s", repoName, tag)
+            return fmt.Errorf("No such image: %s", utils.ImageReference(repoName, tag))
         }
         return fmt.Errorf("No such image: %s", name)
     }
@@ -102,7 +103,7 @@ func (daemon *Daemon) DeleteImage(eng *engine.Engine, name string, imgs *engine.
     }
     if tagDeleted {
         out := &engine.Env{}
-        out.Set("Untagged", repoName+":"+tag)
+        out.Set("Untagged", utils.ImageReference(repoName, tag))
         imgs.Add(out)
         eng.Job("log", "untag", img.ID, "").Run()
     }
@@ -3,6 +3,7 @@ package daemon
 import (
     "os"
     "runtime"
+    "time"
 
     log "github.com/Sirupsen/logrus"
     "github.com/docker/docker/autogen/dockerversion"
@@ -76,6 +77,7 @@ func (daemon *Daemon) CmdInfo(job *engine.Job) engine.Status {
     v.SetBool("Debug", os.Getenv("DEBUG") != "")
     v.SetInt("NFd", utils.GetTotalUsedFds())
     v.SetInt("NGoroutines", runtime.NumGoroutine())
+    v.Set("SystemTime", time.Now().Format(time.RFC3339Nano))
     v.Set("ExecutionDriver", daemon.ExecutionDriver().Name())
     v.SetInt("NEventsListener", env.GetInt("count"))
     v.Set("KernelVersion", kernelVersion)
@@ -87,6 +89,16 @@ func (daemon *Daemon) CmdInfo(job *engine.Job) engine.Status {
     v.SetInt("NCPU", runtime.NumCPU())
     v.SetInt64("MemTotal", meminfo.MemTotal)
     v.Set("DockerRootDir", daemon.Config().Root)
+    if http_proxy := os.Getenv("http_proxy"); http_proxy != "" {
+        v.Set("HttpProxy", http_proxy)
+    }
+    if https_proxy := os.Getenv("https_proxy"); https_proxy != "" {
+        v.Set("HttpsProxy", https_proxy)
+    }
+    if no_proxy := os.Getenv("no_proxy"); no_proxy != "" {
+        v.Set("NoProxy", no_proxy)
+    }
 
     if hostname, err := os.Hostname(); err == nil {
         v.SetJson("Name", hostname)
     }
@@ -62,6 +62,14 @@ func (daemon *Daemon) ContainerInspect(job *engine.Job) engine.Status {
             container.hostConfig.Links = append(container.hostConfig.Links, fmt.Sprintf("%s:%s", child.Name, linkAlias))
         }
     }
+    // we need this trick to preserve empty log driver, so
+    // container will use daemon defaults even if daemon change them
+    if container.hostConfig.LogConfig.Type == "" {
+        container.hostConfig.LogConfig = daemon.defaultLogConfig
+        defer func() {
+            container.hostConfig.LogConfig = runconfig.LogConfig{}
+        }()
+    }
 
     out.SetJson("HostConfig", container.hostConfig)
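The hunk above uses a substitute-then-restore trick: the daemon default is swapped in just long enough to serialize the host config, and a `defer` puts the empty value back so the container keeps following future daemon defaults. A minimal sketch of that pattern (names like `resolveLogConfig` are illustrative, not from the daemon):

```go
package main

import "fmt"

type LogConfig struct{ Type string }

// resolveLogConfig substitutes the daemon default for an empty stored
// config, snapshots it, then restores the empty value via defer -- the
// same preserve-empty-driver trick as the inspect hunk above.
func resolveLogConfig(stored *LogConfig, daemonDefault LogConfig) LogConfig {
	if stored.Type == "" {
		stored.Type = daemonDefault.Type
		// The return value is copied before deferred calls run, so the
		// snapshot keeps the default while the stored config is reset.
		defer func() { stored.Type = "" }()
	}
	return *stored
}

func main() {
	c := &LogConfig{}
	snap := resolveLogConfig(c, LogConfig{Type: "json-file"})
	fmt.Println(snap.Type, c.Type == "")
}
```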
@@ -8,6 +8,7 @@ import (
 
     "github.com/docker/docker/graph"
     "github.com/docker/docker/pkg/graphdb"
+    "github.com/docker/docker/utils"
 
     "github.com/docker/docker/engine"
     "github.com/docker/docker/pkg/parsers"
@@ -90,6 +91,10 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status {
             return nil
         }
 
+        if !psFilters.MatchKVList("label", container.Config.Labels) {
+            return nil
+        }
+
         if before != "" && !foundBefore {
             if container.ID == beforeCont.ID {
                 foundBefore = true
@@ -127,7 +132,7 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status {
         img := container.Config.Image
         _, tag := parsers.ParseRepositoryTag(container.Config.Image)
         if tag == "" {
-            img = img + ":" + graph.DEFAULTTAG
+            img = utils.ImageReference(img, graph.DEFAULTTAG)
         }
         out.SetJson("Image", img)
         if len(container.Args) > 0 {
@@ -157,6 +162,7 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status {
             out.SetInt64("SizeRw", sizeRw)
             out.SetInt64("SizeRootFs", sizeRootFs)
         }
+        out.SetJson("Labels", container.Config.Labels)
         outs.Add(out)
         return nil
     }
@@ -0,0 +1,57 @@
+package logger
+
+import (
+    "bufio"
+    "io"
+    "sync"
+    "time"
+
+    "github.com/Sirupsen/logrus"
+)
+
+// Copier can copy logs from specified sources to Logger and attach
+// ContainerID and Timestamp.
+// Writes are concurrent, so you need implement some sync in your logger
+type Copier struct {
+    // cid is container id for which we copying logs
+    cid string
+    // srcs is map of name -> reader pairs, for example "stdout", "stderr"
+    srcs     map[string]io.Reader
+    dst      Logger
+    copyJobs sync.WaitGroup
+}
+
+// NewCopier creates new Copier
+func NewCopier(cid string, srcs map[string]io.Reader, dst Logger) (*Copier, error) {
+    return &Copier{
+        cid:  cid,
+        srcs: srcs,
+        dst:  dst,
+    }, nil
+}
+
+// Run starts logs copying
+func (c *Copier) Run() {
+    for src, w := range c.srcs {
+        c.copyJobs.Add(1)
+        go c.copySrc(src, w)
+    }
+}
+
+func (c *Copier) copySrc(name string, src io.Reader) {
+    defer c.copyJobs.Done()
+    scanner := bufio.NewScanner(src)
+    for scanner.Scan() {
+        if err := c.dst.Log(&Message{ContainerID: c.cid, Line: scanner.Bytes(), Source: name, Timestamp: time.Now().UTC()}); err != nil {
+            logrus.Errorf("Failed to log msg %q for logger %s: %s", scanner.Bytes(), c.dst.Name(), err)
+        }
+    }
+    if err := scanner.Err(); err != nil {
+        logrus.Errorf("Error scanning log stream: %s", err)
+    }
+}
+
+// Wait waits until all copying is done
+func (c *Copier) Wait() {
+    c.copyJobs.Wait()
+}
@@ -0,0 +1,109 @@
+package logger
+
+import (
+    "bytes"
+    "encoding/json"
+    "io"
+    "testing"
+    "time"
+)
+
+type TestLoggerJSON struct {
+    *json.Encoder
+}
+
+func (l *TestLoggerJSON) Log(m *Message) error {
+    return l.Encode(m)
+}
+
+func (l *TestLoggerJSON) Close() error {
+    return nil
+}
+
+func (l *TestLoggerJSON) Name() string {
+    return "json"
+}
+
+type TestLoggerText struct {
+    *bytes.Buffer
+}
+
+func (l *TestLoggerText) Log(m *Message) error {
+    _, err := l.WriteString(m.ContainerID + " " + m.Source + " " + string(m.Line) + "\n")
+    return err
+}
+
+func (l *TestLoggerText) Close() error {
+    return nil
+}
+
+func (l *TestLoggerText) Name() string {
+    return "text"
+}
+
+func TestCopier(t *testing.T) {
+    stdoutLine := "Line that thinks that it is log line from docker stdout"
+    stderrLine := "Line that thinks that it is log line from docker stderr"
+    var stdout bytes.Buffer
+    var stderr bytes.Buffer
+    for i := 0; i < 30; i++ {
+        if _, err := stdout.WriteString(stdoutLine + "\n"); err != nil {
+            t.Fatal(err)
+        }
+        if _, err := stderr.WriteString(stderrLine + "\n"); err != nil {
+            t.Fatal(err)
+        }
+    }
+
+    var jsonBuf bytes.Buffer
+
+    jsonLog := &TestLoggerJSON{Encoder: json.NewEncoder(&jsonBuf)}
+
+    cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657"
+    c, err := NewCopier(cid,
+        map[string]io.Reader{
+            "stdout": &stdout,
+            "stderr": &stderr,
+        },
+        jsonLog)
+    if err != nil {
+        t.Fatal(err)
+    }
+    c.Run()
+    wait := make(chan struct{})
+    go func() {
+        c.Wait()
+        close(wait)
+    }()
+    select {
+    case <-time.After(1 * time.Second):
+        t.Fatal("Copier failed to do its work in 1 second")
+    case <-wait:
+    }
+    dec := json.NewDecoder(&jsonBuf)
+    for {
+        var msg Message
+        if err := dec.Decode(&msg); err != nil {
+            if err == io.EOF {
+                break
+            }
+            t.Fatal(err)
+        }
+        if msg.Source != "stdout" && msg.Source != "stderr" {
+            t.Fatalf("Wrong Source: %q, should be %q or %q", msg.Source, "stdout", "stderr")
+        }
+        if msg.ContainerID != cid {
+            t.Fatalf("Wrong ContainerID: %q, expected %q", msg.ContainerID, cid)
+        }
+        if msg.Source == "stdout" {
+            if string(msg.Line) != stdoutLine {
+                t.Fatalf("Wrong Line: %q, expected %q", msg.Line, stdoutLine)
+            }
+        }
+        if msg.Source == "stderr" {
+            if string(msg.Line) != stderrLine {
+                t.Fatalf("Wrong Line: %q, expected %q", msg.Line, stderrLine)
+            }
+        }
+    }
+}
@@ -0,0 +1,58 @@
+package jsonfilelog
+
+import (
+    "bytes"
+    "os"
+    "sync"
+
+    "github.com/docker/docker/daemon/logger"
+    "github.com/docker/docker/pkg/jsonlog"
+)
+
+// JSONFileLogger is Logger implementation for default docker logging:
+// JSON objects to file
+type JSONFileLogger struct {
+    buf *bytes.Buffer
+    f   *os.File   // store for closing
+    mu  sync.Mutex // protects buffer
+}
+
+// New creates new JSONFileLogger which writes to filename
+func New(filename string) (logger.Logger, error) {
+    log, err := os.OpenFile(filename, os.O_RDWR|os.O_APPEND|os.O_CREATE, 0600)
+    if err != nil {
+        return nil, err
+    }
+    return &JSONFileLogger{
+        f:   log,
+        buf: bytes.NewBuffer(nil),
+    }, nil
+}
+
+// Log converts logger.Message to jsonlog.JSONLog and serializes it to file
+func (l *JSONFileLogger) Log(msg *logger.Message) error {
+    l.mu.Lock()
+    defer l.mu.Unlock()
+    err := (&jsonlog.JSONLog{Log: string(msg.Line) + "\n", Stream: msg.Source, Created: msg.Timestamp}).MarshalJSONBuf(l.buf)
+    if err != nil {
+        return err
+    }
+    l.buf.WriteByte('\n')
+    _, err = l.buf.WriteTo(l.f)
+    if err != nil {
+        // this buffer is screwed, replace it with another to avoid races
+        l.buf = bytes.NewBuffer(nil)
+        return err
+    }
+    return nil
+}
+
+// Close closes underlying file
+func (l *JSONFileLogger) Close() error {
+    return l.f.Close()
+}
+
+// Name returns name of this logger
+func (l *JSONFileLogger) Name() string {
+    return "JSONFile"
+}
@@ -0,0 +1,78 @@
+package jsonfilelog
+
+import (
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"testing"
+	"time"
+
+	"github.com/docker/docker/daemon/logger"
+	"github.com/docker/docker/pkg/jsonlog"
+)
+
+func TestJSONFileLogger(t *testing.T) {
+	tmp, err := ioutil.TempDir("", "docker-logger-")
+	if err != nil {
+		t.Fatal(err)
+	}
+	defer os.RemoveAll(tmp)
+	filename := filepath.Join(tmp, "container.log")
+	l, err := New(filename)
+	if err != nil {
+		t.Fatal(err)
+	}
+	defer l.Close()
+	cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657"
+	if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line1"), Source: "src1"}); err != nil {
+		t.Fatal(err)
+	}
+	if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line2"), Source: "src2"}); err != nil {
+		t.Fatal(err)
+	}
+	if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line3"), Source: "src3"}); err != nil {
+		t.Fatal(err)
+	}
+	res, err := ioutil.ReadFile(filename)
+	if err != nil {
+		t.Fatal(err)
+	}
+	expected := `{"log":"line1\n","stream":"src1","time":"0001-01-01T00:00:00Z"}
+{"log":"line2\n","stream":"src2","time":"0001-01-01T00:00:00Z"}
+{"log":"line3\n","stream":"src3","time":"0001-01-01T00:00:00Z"}
+`
+
+	if string(res) != expected {
+		t.Fatalf("Wrong log content: %q, expected %q", res, expected)
+	}
+}
+
+func BenchmarkJSONFileLogger(b *testing.B) {
+	tmp, err := ioutil.TempDir("", "docker-logger-")
+	if err != nil {
+		b.Fatal(err)
+	}
+	defer os.RemoveAll(tmp)
+	filename := filepath.Join(tmp, "container.log")
+	l, err := New(filename)
+	if err != nil {
+		b.Fatal(err)
+	}
+	defer l.Close()
+	cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657"
+	testLine := "Line that thinks that it is log line from docker\n"
+	msg := &logger.Message{ContainerID: cid, Line: []byte(testLine), Source: "stderr", Timestamp: time.Now().UTC()}
+	jsonlog, err := (&jsonlog.JSONLog{Log: string(msg.Line) + "\n", Stream: msg.Source, Created: msg.Timestamp}).MarshalJSON()
+	if err != nil {
+		b.Fatal(err)
+	}
+	b.SetBytes(int64(len(jsonlog)+1) * 30)
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		for j := 0; j < 30; j++ {
+			if err := l.Log(msg); err != nil {
+				b.Fatal(err)
+			}
+		}
+	}
+}
@@ -0,0 +1,18 @@
+package logger
+
+import "time"
+
+// Message is datastructure that represents record from some container
+type Message struct {
+	ContainerID string
+	Line        []byte
+	Source      string
+	Timestamp   time.Time
+}
+
+// Logger is interface for docker logging drivers
+type Logger interface {
+	Log(*Message) error
+	Name() string
+	Close() error
+}
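The `Logger` interface above is deliberately small: any driver only has to implement `Log`, `Name`, and `Close`. A minimal sketch of what satisfying it takes, using a hypothetical in-memory driver (the `memoryLogger` type is invented for illustration and restates the interface locally so the snippet is self-contained):

```go
package main

import (
	"fmt"
	"time"
)

// Message and Logger restate the daemon/logger definitions above.
type Message struct {
	ContainerID string
	Line        []byte
	Source      string
	Timestamp   time.Time
}

type Logger interface {
	Log(*Message) error
	Name() string
	Close() error
}

// memoryLogger is a hypothetical driver that just collects lines;
// it exists only to show the shape of a Logger implementation.
type memoryLogger struct {
	lines []string
}

func (m *memoryLogger) Log(msg *Message) error {
	m.lines = append(m.lines, string(msg.Line))
	return nil
}

func (m *memoryLogger) Name() string { return "memory" }

func (m *memoryLogger) Close() error { return nil }

func main() {
	var l Logger = &memoryLogger{}
	l.Log(&Message{Line: []byte("hello"), Source: "stdout"})
	fmt.Println(l.Name(), len(l.(*memoryLogger).lines))
}
```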
@@ -44,6 +44,9 @@ func (daemon *Daemon) ContainerLogs(job *engine.Job) engine.Status {
 	if err != nil {
 		return job.Error(err)
 	}
+	if container.LogDriverType() != "json-file" {
+		return job.Errorf("\"logs\" endpoint is supported only for \"json-file\" logging driver")
+	}
 	cLog, err := container.ReadLog("json")
 	if err != nil && os.IsNotExist(err) {
 		// Legacy logs
@@ -123,7 +123,7 @@ func (m *containerMonitor) Start() error {
 	for {
 		m.container.RestartCount++

-		if err := m.container.startLoggingToDisk(); err != nil {
+		if err := m.container.startLogging(); err != nil {
 			m.resetContainer(false)

 			return err
@@ -182,7 +182,7 @@ func (m *containerMonitor) Start() error {
 }

 // resetMonitor resets the stateful fields on the containerMonitor based on the
-// previous runs success or failure. Reguardless of success, if the container had
+// previous runs success or failure. Regardless of success, if the container had
 // an execution time of more than 10s then reset the timer back to the default
 func (m *containerMonitor) resetMonitor(successful bool) {
 	executionTime := time.Now().Sub(m.lastStartTime).Seconds()
@@ -302,6 +302,24 @@ func (m *containerMonitor) resetContainer(lock bool) {
 		container.stdin, container.stdinPipe = io.Pipe()
 	}

+	if container.logDriver != nil {
+		if container.logCopier != nil {
+			exit := make(chan struct{})
+			go func() {
+				container.logCopier.Wait()
+				close(exit)
+			}()
+			select {
+			case <-time.After(1 * time.Second):
+				log.Warnf("Logger didn't exit in time: logs may be truncated")
+			case <-exit:
+			}
+		}
+		container.logDriver.Close()
+		container.logCopier = nil
+		container.logDriver = nil
+	}
+
 	c := container.command.ProcessConfig.Cmd

 	container.command.ProcessConfig.Cmd = exec.Cmd{
@@ -284,10 +284,11 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error {
 	// Enable NAT

 	if ipmasq {
-		natArgs := []string{"POSTROUTING", "-t", "nat", "-s", addr.String(), "!", "-o", bridgeIface, "-j", "MASQUERADE"}
+		natArgs := []string{"-s", addr.String(), "!", "-o", bridgeIface, "-j", "MASQUERADE"}

-		if !iptables.Exists(natArgs...) {
-			if output, err := iptables.Raw(append([]string{"-I"}, natArgs...)...); err != nil {
+		if !iptables.Exists(iptables.Nat, "POSTROUTING", natArgs...) {
+			if output, err := iptables.Raw(append([]string{
+				"-t", string(iptables.Nat), "-I", "POSTROUTING"}, natArgs...)...); err != nil {
 				return fmt.Errorf("Unable to enable network bridge NAT: %s", err)
 			} else if len(output) != 0 {
 				return &iptables.ChainError{Chain: "POSTROUTING", Output: output}
@@ -296,28 +297,28 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error {
 	}

 	var (
-		args = []string{"FORWARD", "-i", bridgeIface, "-o", bridgeIface, "-j"}
+		args       = []string{"-i", bridgeIface, "-o", bridgeIface, "-j"}
 		acceptArgs = append(args, "ACCEPT")
 		dropArgs   = append(args, "DROP")
 	)

 	if !icc {
-		iptables.Raw(append([]string{"-D"}, acceptArgs...)...)
+		iptables.Raw(append([]string{"-D", "FORWARD"}, acceptArgs...)...)

-		if !iptables.Exists(dropArgs...) {
+		if !iptables.Exists(iptables.Filter, "FORWARD", dropArgs...) {
 			log.Debugf("Disable inter-container communication")
-			if output, err := iptables.Raw(append([]string{"-I"}, dropArgs...)...); err != nil {
+			if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, dropArgs...)...); err != nil {
 				return fmt.Errorf("Unable to prevent intercontainer communication: %s", err)
 			} else if len(output) != 0 {
 				return fmt.Errorf("Error disabling intercontainer communication: %s", output)
 			}
 		}
 	} else {
-		iptables.Raw(append([]string{"-D"}, dropArgs...)...)
+		iptables.Raw(append([]string{"-D", "FORWARD"}, dropArgs...)...)

-		if !iptables.Exists(acceptArgs...) {
+		if !iptables.Exists(iptables.Filter, "FORWARD", acceptArgs...) {
 			log.Debugf("Enable inter-container communication")
-			if output, err := iptables.Raw(append([]string{"-I"}, acceptArgs...)...); err != nil {
+			if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, acceptArgs...)...); err != nil {
 				return fmt.Errorf("Unable to allow intercontainer communication: %s", err)
 			} else if len(output) != 0 {
 				return fmt.Errorf("Error enabling intercontainer communication: %s", output)
@@ -326,9 +327,9 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error {
 	}

 	// Accept all non-intercontainer outgoing packets
-	outgoingArgs := []string{"FORWARD", "-i", bridgeIface, "!", "-o", bridgeIface, "-j", "ACCEPT"}
-	if !iptables.Exists(outgoingArgs...) {
-		if output, err := iptables.Raw(append([]string{"-I"}, outgoingArgs...)...); err != nil {
+	outgoingArgs := []string{"-i", bridgeIface, "!", "-o", bridgeIface, "-j", "ACCEPT"}
+	if !iptables.Exists(iptables.Filter, "FORWARD", outgoingArgs...) {
+		if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, outgoingArgs...)...); err != nil {
 			return fmt.Errorf("Unable to allow outgoing packets: %s", err)
 		} else if len(output) != 0 {
 			return &iptables.ChainError{Chain: "FORWARD outgoing", Output: output}
@@ -336,10 +337,10 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error {
 	}

 	// Accept incoming packets for existing connections
-	existingArgs := []string{"FORWARD", "-o", bridgeIface, "-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED", "-j", "ACCEPT"}
+	existingArgs := []string{"-o", bridgeIface, "-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED", "-j", "ACCEPT"}

-	if !iptables.Exists(existingArgs...) {
-		if output, err := iptables.Raw(append([]string{"-I"}, existingArgs...)...); err != nil {
+	if !iptables.Exists(iptables.Filter, "FORWARD", existingArgs...) {
+		if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, existingArgs...)...); err != nil {
 			return fmt.Errorf("Unable to allow incoming packets: %s", err)
 		} else if len(output) != 0 {
 			return &iptables.ChainError{Chain: "FORWARD incoming", Output: output}
@@ -522,7 +523,8 @@ func Allocate(job *engine.Job) engine.Status {
 		// If globalIPv6Network Size is at least a /80 subnet generate IPv6 address from MAC address
 		netmask_ones, _ := globalIPv6Network.Mask.Size()
 		if requestedIPv6 == nil && netmask_ones <= 80 {
-			requestedIPv6 = globalIPv6Network.IP
+			requestedIPv6 = make(net.IP, len(globalIPv6Network.IP))
+			copy(requestedIPv6, globalIPv6Network.IP)
 			for i, h := range mac {
 				requestedIPv6[i+10] = h
 			}
@@ -530,7 +532,7 @@ func Allocate(job *engine.Job) engine.Status {

 		globalIPv6, err = ipallocator.RequestIP(globalIPv6Network, requestedIPv6)
 		if err != nil {
-			log.Errorf("Allocator: RequestIP v6: %s", err.Error())
+			log.Errorf("Allocator: RequestIP v6: %v", err)
 			return job.Error(err)
 		}
 		log.Infof("Allocated IPv6 %s", globalIPv6)
@@ -1,6 +1,7 @@
 package bridge

 import (
+	"fmt"
 	"net"
 	"strconv"
 	"testing"
@@ -104,6 +105,123 @@ func TestHostnameFormatChecking(t *testing.T) {
 	}
 }

+func newInterfaceAllocation(t *testing.T, input engine.Env) (output engine.Env) {
+	eng := engine.New()
+	eng.Logging = false
+
+	done := make(chan bool)
+
+	// set IPv6 global if given
+	if input.Exists("globalIPv6Network") {
+		_, globalIPv6Network, _ = net.ParseCIDR(input.Get("globalIPv6Network"))
+	}
+
+	job := eng.Job("allocate_interface", "container_id")
+	job.Env().Init(&input)
+	reader, _ := job.Stdout.AddPipe()
+	go func() {
+		output.Decode(reader)
+		done <- true
+	}()
+
+	res := Allocate(job)
+	job.Stdout.Close()
+	<-done
+
+	if input.Exists("expectFail") && input.GetBool("expectFail") {
+		if res == engine.StatusOK {
+			t.Fatal("Doesn't fail to allocate network interface")
+		}
+	} else {
+		if res != engine.StatusOK {
+			t.Fatal("Failed to allocate network interface")
+		}
+	}
+
+	if input.Exists("globalIPv6Network") {
+		// check for bug #11427
+		_, subnet, _ := net.ParseCIDR(input.Get("globalIPv6Network"))
+		if globalIPv6Network.IP.String() != subnet.IP.String() {
+			t.Fatal("globalIPv6Network was modified during allocation")
+		}
+		// clean up IPv6 global
+		globalIPv6Network = nil
+	}
+
+	return
+}
+
+func TestIPv6InterfaceAllocationAutoNetmaskGt80(t *testing.T) {
+
+	input := engine.Env{}
+
+	_, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/81")
+
+	// set global ipv6
+	input.Set("globalIPv6Network", subnet.String())
+
+	output := newInterfaceAllocation(t, input)
+
+	// ensure low manually assigend global ip
+	ip := net.ParseIP(output.Get("GlobalIPv6"))
+	_, subnet, _ = net.ParseCIDR(fmt.Sprintf("%s/%d", subnet.IP.String(), 120))
+	if !subnet.Contains(ip) {
+		t.Fatalf("Error ip %s not in subnet %s", ip.String(), subnet.String())
+	}
+}
+
+func TestIPv6InterfaceAllocationAutoNetmaskLe80(t *testing.T) {
+
+	input := engine.Env{}
+
+	_, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/80")
+
+	// set global ipv6
+	input.Set("globalIPv6Network", subnet.String())
+	input.Set("RequestedMac", "ab:cd:ab:cd:ab:cd")
+
+	output := newInterfaceAllocation(t, input)
+
+	// ensure global ip with mac
+	ip := net.ParseIP(output.Get("GlobalIPv6"))
+	expected_ip := net.ParseIP("2001:db8:1234:1234:1234:abcd:abcd:abcd")
+	if ip.String() != expected_ip.String() {
+		t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String())
+	}
+
+	// ensure link local format
+	ip = net.ParseIP(output.Get("LinkLocalIPv6"))
+	expected_ip = net.ParseIP("fe80::a9cd:abff:fecd:abcd")
+	if ip.String() != expected_ip.String() {
+		t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String())
+	}
+
+}
+
+func TestIPv6InterfaceAllocationRequest(t *testing.T) {
+
+	input := engine.Env{}
+
+	_, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/80")
+	expected_ip := net.ParseIP("2001:db8:1234:1234:1234::1328")
+
+	// set global ipv6
+	input.Set("globalIPv6Network", subnet.String())
+	input.Set("RequestedIPv6", expected_ip.String())
+
+	output := newInterfaceAllocation(t, input)
+
+	// ensure global ip with mac
+	ip := net.ParseIP(output.Get("GlobalIPv6"))
+	if ip.String() != expected_ip.String() {
+		t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String())
+	}
+
+	// retry -> fails for duplicated address
+	input.SetBool("expectFail", true)
+	output = newInterfaceAllocation(t, input)
+}
+
 func TestMacAddrGeneration(t *testing.T) {
 	ip := net.ParseIP("192.168.0.1")
 	mac := generateMacAddr(ip).String()
@@ -1,10 +1,24 @@
 package portallocator

 import (
+	"bufio"
 	"errors"
 	"fmt"
 	"net"
+	"os"
 	"sync"
+
+	log "github.com/Sirupsen/logrus"
+)
+
+const (
+	DefaultPortRangeStart = 49153
+	DefaultPortRangeEnd   = 65535
+)
+
+var (
+	beginPortRange = DefaultPortRangeStart
+	endPortRange   = DefaultPortRangeEnd
 )

 type portMap struct {
@@ -15,7 +29,7 @@ type portMap struct {
 func newPortMap() *portMap {
 	return &portMap{
 		p:    map[int]struct{}{},
-		last: EndPortRange,
+		last: endPortRange,
 	}
 }

@@ -30,11 +44,6 @@ func newProtoMap() protoMap {

 type ipMapping map[string]protoMap

-const (
-	BeginPortRange = 49153
-	EndPortRange   = 65535
-)
-
 var (
 	ErrAllPortsAllocated = errors.New("all ports are allocated")
 	ErrUnknownProtocol   = errors.New("unknown protocol")
@@ -59,6 +68,31 @@ func NewErrPortAlreadyAllocated(ip string, port int) ErrPortAlreadyAllocated {
 	}
 }

+func init() {
+	const portRangeKernelParam = "/proc/sys/net/ipv4/ip_local_port_range"
+
+	file, err := os.Open(portRangeKernelParam)
+	if err != nil {
+		log.Warnf("Failed to read %s kernel parameter: %v", portRangeKernelParam, err)
+		return
+	}
+	var start, end int
+	n, err := fmt.Fscanf(bufio.NewReader(file), "%d\t%d", &start, &end)
+	if n != 2 || err != nil {
+		if err == nil {
+			err = fmt.Errorf("unexpected count of parsed numbers (%d)", n)
+		}
+		log.Errorf("Failed to parse port range from %s: %v", portRangeKernelParam, err)
+		return
+	}
+	beginPortRange = start
+	endPortRange = end
+}
+
+func PortRange() (int, int) {
+	return beginPortRange, endPortRange
+}
+
 func (e ErrPortAlreadyAllocated) IP() string {
 	return e.ip
 }
@@ -137,10 +171,10 @@ func ReleaseAll() error {

 func (pm *portMap) findPort() (int, error) {
 	port := pm.last
-	for i := 0; i <= EndPortRange-BeginPortRange; i++ {
+	for i := 0; i <= endPortRange-beginPortRange; i++ {
 		port++
-		if port > EndPortRange {
-			port = BeginPortRange
+		if port > endPortRange {
+			port = beginPortRange
 		}

 		if _, ok := pm.p[port]; !ok {
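The new `init()` above reads the kernel's ephemeral port range, which is two tab-separated integers in `/proc/sys/net/ipv4/ip_local_port_range`. A standalone sketch of that parse step, using `fmt.Sscanf` on a literal string in place of the `/proc` file so it runs anywhere (the `parsePortRange` helper name is invented for illustration):

```go
package main

import "fmt"

// parsePortRange parses the two tab-separated integers found in
// /proc/sys/net/ipv4/ip_local_port_range, mirroring the fmt.Fscanf
// call in the allocator's init(); a string stands in for the file.
func parsePortRange(s string) (int, int, error) {
	var start, end int
	n, err := fmt.Sscanf(s, "%d\t%d", &start, &end)
	if err != nil {
		return 0, 0, err
	}
	if n != 2 {
		return 0, 0, fmt.Errorf("unexpected count of parsed numbers (%d)", n)
	}
	return start, end, nil
}

func main() {
	// "32768\t60999\n" is a typical value of the kernel parameter.
	start, end, err := parsePortRange("32768\t60999\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(start, end)
}
```

As in the real `init()`, a parse failure should leave the compiled-in defaults (`DefaultPortRangeStart`/`DefaultPortRangeEnd`) in effect rather than abort.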
@@ -5,6 +5,11 @@ import (
 	"testing"
 )

+func init() {
+	beginPortRange = DefaultPortRangeStart
+	endPortRange = DefaultPortRangeEnd
+}
+
 func reset() {
 	ReleaseAll()
 }
@@ -17,7 +22,7 @@ func TestRequestNewPort(t *testing.T) {
 		t.Fatal(err)
 	}

-	if expected := BeginPortRange; port != expected {
+	if expected := beginPortRange; port != expected {
 		t.Fatalf("Expected port %d got %d", expected, port)
 	}
 }
@@ -102,13 +107,13 @@ func TestUnknowProtocol(t *testing.T) {
 func TestAllocateAllPorts(t *testing.T) {
 	defer reset()

-	for i := 0; i <= EndPortRange-BeginPortRange; i++ {
+	for i := 0; i <= endPortRange-beginPortRange; i++ {
 		port, err := RequestPort(defaultIP, "tcp", 0)
 		if err != nil {
 			t.Fatal(err)
 		}

-		if expected := BeginPortRange + i; port != expected {
+		if expected := beginPortRange + i; port != expected {
 			t.Fatalf("Expected port %d got %d", expected, port)
 		}
 	}
@@ -123,7 +128,7 @@ func TestAllocateAllPorts(t *testing.T) {
 	}

 	// release a port in the middle and ensure we get another tcp port
-	port := BeginPortRange + 5
+	port := beginPortRange + 5
 	if err := ReleasePort(defaultIP, "tcp", port); err != nil {
 		t.Fatal(err)
 	}
@@ -153,13 +158,13 @@ func BenchmarkAllocatePorts(b *testing.B) {
 	defer reset()

 	for i := 0; i < b.N; i++ {
-		for i := 0; i <= EndPortRange-BeginPortRange; i++ {
+		for i := 0; i <= endPortRange-beginPortRange; i++ {
 			port, err := RequestPort(defaultIP, "tcp", 0)
 			if err != nil {
 				b.Fatal(err)
 			}

-			if expected := BeginPortRange + i; port != expected {
+			if expected := beginPortRange + i; port != expected {
 				b.Fatalf("Expected port %d got %d", expected, port)
 			}
 		}
@@ -231,15 +236,15 @@ func TestPortAllocation(t *testing.T) {
 func TestNoDuplicateBPR(t *testing.T) {
 	defer reset()

-	if port, err := RequestPort(defaultIP, "tcp", BeginPortRange); err != nil {
+	if port, err := RequestPort(defaultIP, "tcp", beginPortRange); err != nil {
 		t.Fatal(err)
-	} else if port != BeginPortRange {
-		t.Fatalf("Expected port %d got %d", BeginPortRange, port)
+	} else if port != beginPortRange {
+		t.Fatalf("Expected port %d got %d", beginPortRange, port)
 	}

 	if port, err := RequestPort(defaultIP, "tcp", 0); err != nil {
 		t.Fatal(err)
-	} else if port == BeginPortRange {
+	} else if port == beginPortRange {
 		t.Fatalf("Acquire(0) allocated the same port twice: %d", port)
 	}
 }
@@ -129,7 +129,8 @@ func TestMapAllPortsSingleInterface(t *testing.T) {
 	}()

 	for i := 0; i < 10; i++ {
-		for i := portallocator.BeginPortRange; i < portallocator.EndPortRange; i++ {
+		start, end := portallocator.PortRange()
+		for i := start; i < end; i++ {
 			if host, err = Map(srcAddr1, dstIp1, 0); err != nil {
 				t.Fatal(err)
 			}
@@ -137,8 +138,8 @@ func TestMapAllPortsSingleInterface(t *testing.T) {
 			hosts = append(hosts, host)
 		}

-		if _, err := Map(srcAddr1, dstIp1, portallocator.BeginPortRange); err == nil {
-			t.Fatalf("Port %d should be bound but is not", portallocator.BeginPortRange)
+		if _, err := Map(srcAddr1, dstIp1, start); err == nil {
+			t.Fatalf("Port %d should be bound but is not", start)
 		}

 		for _, val := range hosts {
@@ -1,8 +1,6 @@
 package daemon

-import (
-	"github.com/docker/docker/engine"
-)
+import "github.com/docker/docker/engine"

 func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status {
 	if len(job.Args) != 2 {
@@ -26,9 +24,21 @@ func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status {

 	container.Name = newName

+	undo := func() {
+		container.Name = oldName
+		daemon.reserveName(container.ID, oldName)
+		daemon.containerGraph.Delete(newName)
+	}
+
 	if err := daemon.containerGraph.Delete(oldName); err != nil {
+		undo()
 		return job.Errorf("Failed to delete container %q: %v", oldName, err)
 	}

+	if err := container.toDisk(); err != nil {
+		undo()
+		return job.Error(err)
+	}
+
 	return engine.StatusOK
 }
@@ -66,7 +66,7 @@ func (daemon *Daemon) setHostConfig(container *Container, hostConfig *runconfig.
 			if err != nil && os.IsNotExist(err) {
 				err = os.MkdirAll(source, 0755)
 				if err != nil {
-					return fmt.Errorf("Could not create local directory '%s' for bind mount: %s!", source, err.Error())
+					return fmt.Errorf("Could not create local directory '%s' for bind mount: %v!", source, err)
 				}
 			}
 		}
@@ -18,7 +18,7 @@ func (daemon *Daemon) ContainerStats(job *engine.Job) engine.Status {
 	enc := json.NewEncoder(job.Stdout)
 	for v := range updates {
 		update := v.(*execdriver.ResourceStats)
-		ss := convertToAPITypes(update.ContainerStats)
+		ss := convertToAPITypes(update.Stats)
 		ss.MemoryStats.Limit = uint64(update.MemoryLimit)
 		ss.Read = update.Read
 		ss.CpuStats.SystemUsage = update.SystemUsage
@@ -31,20 +31,21 @@ func (daemon *Daemon) ContainerStats(job *engine.Job) engine.Status {
 	return engine.StatusOK
 }

-// convertToAPITypes converts the libcontainer.ContainerStats to the api specific
+// convertToAPITypes converts the libcontainer.Stats to the api specific
 // structs. This is done to preserve API compatibility and versioning.
-func convertToAPITypes(ls *libcontainer.ContainerStats) *types.Stats {
+func convertToAPITypes(ls *libcontainer.Stats) *types.Stats {
 	s := &types.Stats{}
-	if ls.NetworkStats != nil {
-		s.Network = types.Network{
-			RxBytes:   ls.NetworkStats.RxBytes,
-			RxPackets: ls.NetworkStats.RxPackets,
-			RxErrors:  ls.NetworkStats.RxErrors,
-			RxDropped: ls.NetworkStats.RxDropped,
-			TxBytes:   ls.NetworkStats.TxBytes,
-			TxPackets: ls.NetworkStats.TxPackets,
-			TxErrors:  ls.NetworkStats.TxErrors,
-			TxDropped: ls.NetworkStats.TxDropped,
+	if ls.Interfaces != nil {
+		s.Network = types.Network{}
+		for _, iface := range ls.Interfaces {
+			s.Network.RxBytes += iface.RxBytes
+			s.Network.RxPackets += iface.RxPackets
+			s.Network.RxErrors += iface.RxErrors
+			s.Network.RxDropped += iface.RxDropped
+			s.Network.TxBytes += iface.TxBytes
+			s.Network.TxPackets += iface.TxPackets
+			s.Network.TxErrors += iface.TxErrors
+			s.Network.TxDropped += iface.TxDropped
 		}
 	}
 	cs := ls.CgroupStats
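The stats change above switches from a single network counter set to per-interface counters that are summed into one aggregate for the API. The fold itself is trivial; a standalone sketch with illustrative stand-in types (`ifaceStats` and `network` are invented here, not libcontainer's or the API's real structs):

```go
package main

import "fmt"

// ifaceStats holds per-interface counters; network holds the single
// aggregate the API reports. Both are illustrative stand-ins.
type ifaceStats struct {
	RxBytes, TxBytes uint64
}

type network struct {
	RxBytes, TxBytes uint64
}

// aggregate sums counters across interfaces, the same fold the
// rewritten convertToAPITypes performs over ls.Interfaces.
func aggregate(ifaces []ifaceStats) network {
	var n network
	for _, i := range ifaces {
		n.RxBytes += i.RxBytes
		n.TxBytes += i.TxBytes
	}
	return n
}

func main() {
	n := aggregate([]ifaceStats{
		{RxBytes: 100, TxBytes: 10},
		{RxBytes: 50, TxBytes: 5},
	})
	fmt.Println(n.RxBytes, n.TxBytes)
}
```

Summing keeps the wire format unchanged for existing API clients, which is the compatibility goal the comment in the diff states.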
@@ -8,12 +8,12 @@ import (
 	"path/filepath"
 	"sort"
 	"strings"
-	"syscall"

 	log "github.com/Sirupsen/logrus"
 	"github.com/docker/docker/daemon/execdriver"
 	"github.com/docker/docker/pkg/chrootarchive"
 	"github.com/docker/docker/pkg/symlink"
+	"github.com/docker/docker/pkg/system"
 	"github.com/docker/docker/volumes"
 )

@@ -385,15 +385,14 @@ func copyExistingContents(source, destination string) error {
 // copyOwnership copies the permissions and uid:gid of the source file
 // into the destination file
 func copyOwnership(source, destination string) error {
-	var stat syscall.Stat_t
-
-	if err := syscall.Stat(source, &stat); err != nil {
+	stat, err := system.Stat(source)
+	if err != nil {
 		return err
 	}

-	if err := os.Chown(destination, int(stat.Uid), int(stat.Gid)); err != nil {
+	if err := os.Chown(destination, int(stat.Uid()), int(stat.Gid())); err != nil {
 		return err
 	}

-	return os.Chmod(destination, os.FileMode(stat.Mode))
+	return os.Chmod(destination, os.FileMode(stat.Mode()))
 }
@@ -13,7 +13,7 @@ func (daemon *Daemon) ContainerWait(job *engine.Job) engine.Status {
 	name := job.Args[0]
 	container, err := daemon.Get(name)
 	if err != nil {
-		return job.Errorf("%s: %s", job.Name, err.Error())
+		return job.Errorf("%s: %v", job.Name, err)
 	}
 	status, _ := container.WaitStop(-1 * time.Second)
 	job.Printf("%d\n", status)
@@ -7,6 +7,7 @@ import (
 	"io"
 	"os"
 	"path/filepath"
+	"strings"
 
 	log "github.com/Sirupsen/logrus"
 	"github.com/docker/docker/autogen/dockerversion"
@@ -101,11 +102,14 @@ func mainDaemon() {
 	// load the daemon in the background so we can immediately start
 	// the http api so that connections don't fail while the daemon
 	// is booting
+	daemonInitWait := make(chan error)
 	go func() {
 		d, err := daemon.NewDaemon(daemonCfg, eng)
 		if err != nil {
-			log.Fatal(err)
+			daemonInitWait <- err
+			return
 		}
 
 		log.Infof("docker daemon: %s %s; execdriver: %s; graphdriver: %s",
 			dockerversion.VERSION,
 			dockerversion.GITCOMMIT,
@@ -114,7 +118,8 @@ func mainDaemon() {
 		)
 
 		if err := d.Install(eng); err != nil {
-			log.Fatal(err)
+			daemonInitWait <- err
+			return
 		}
 
 		b := &builder.BuilderJob{eng, d}
@@ -123,8 +128,10 @@ func mainDaemon() {
 		// after the daemon is done setting up we can tell the api to start
 		// accepting connections
 		if err := eng.Job("acceptconnections").Run(); err != nil {
-			log.Fatal(err)
+			daemonInitWait <- err
+			return
 		}
+		daemonInitWait <- nil
 	}()
 
 	// Serve api
@@ -141,7 +148,46 @@ func mainDaemon() {
 	job.Setenv("TlsCert", *flCert)
 	job.Setenv("TlsKey", *flKey)
 	job.SetenvBool("BufferRequests", true)
 
+	// The serve API job never exits unless an error occurs
+	// We need to start it as a goroutine and wait on it so
+	// daemon doesn't exit
+	serveAPIWait := make(chan error)
+	go func() {
-	if err := job.Run(); err != nil {
-		log.Fatal(err)
-	}
+		if err := job.Run(); err != nil {
+			log.Errorf("ServeAPI error: %v", err)
+			serveAPIWait <- err
+			return
+		}
+		serveAPIWait <- nil
+	}()
+
+	// Wait for the daemon startup goroutine to finish
+	// This makes sure we can actually cleanly shutdown the daemon
+	log.Debug("waiting for daemon to initialize")
+	errDaemon := <-daemonInitWait
+	if errDaemon != nil {
+		eng.Shutdown()
+		outStr := fmt.Sprintf("Shutting down daemon due to errors: %v", errDaemon)
+		if strings.Contains(errDaemon.Error(), "engine is shutdown") {
+			// if the error is "engine is shutdown", we've already reported (or
+			// will report below in API server errors) the error
+			outStr = "Shutting down daemon due to reported errors"
+		}
+		// we must "fatal" exit here as the API server may be happy to
+		// continue listening forever if the error had no impact to API
+		log.Fatal(outStr)
+	} else {
+		log.Info("Daemon has completed initialization")
+	}
+
+	// Daemon is fully initialized and handling API traffic
+	// Wait for serve API job to complete
+	errAPI := <-serveAPIWait
+	// If we have an error here it is unique to API (as daemonErr would have
+	// exited the daemon process above)
+	if errAPI != nil {
+		log.Errorf("Shutting down due to ServeAPI error: %v", errAPI)
+	}
+	eng.Shutdown()
 }
@@ -21,6 +21,7 @@ COPY ./VERSION VERSION
 # TODO: don't do this - look at merging the yml file in build.sh
 COPY ./mkdocs.yml mkdocs.yml
 COPY ./s3_website.json s3_website.json
+COPY ./release.sh release.sh
 
 # Docker Swarm
 #ADD https://raw.githubusercontent.com/docker/swarm/master/docs/mkdocs.yml /docs/mkdocs-swarm.yml
@@ -1,3 +0,0 @@
-Fred Lifton <fred.lifton@docker.com> (@fredlf)
-James Turnbull <james@lovedthanlost.net> (@jamtur01)
-Sven Dowideit <SvenDowideit@fosiki.com> (@SvenDowideit)
@@ -106,7 +106,7 @@ also update the root docs pages by running
 > if you are using Boot2Docker on OSX and the above command returns an error,
 > `Post http:///var/run/docker.sock/build?rm=1&t=docker-docs%3Apost-1.2.0-docs_update-2:
 > dial unix /var/run/docker.sock: no such file or directory', you need to set the Docker
-> host. Run `$(boot2docker shellinit)` to see the correct variable to set. The command
+> host. Run `eval "$(boot2docker shellinit)"` to see the correct variable to set. The command
 > will return the full `export` command, so you can just cut and paste.
 
 ## Cherry-picking documentation changes to update an existing release.
@@ -152,3 +152,32 @@ _if_ the `DISTRIBUTION_ID` is set to the Cloudfront distribution ID (ask the met
 team) - this will take at least 15 minutes to run and you can check its progress
 with the CDN Cloudfront Chrome addin.
+
+## Removing files from the docs.docker.com site
+
+Sometimes it becomes necessary to remove files from the historical published documentation.
+The most reliable way to do this is to do it directly using `aws s3` commands running in a
+docs container:
+
+Start the docs container like `make docs-shell`, but bind mount in your `awsconfig`:
+
+```
+docker run --rm -it -v $(CURDIR)/docs/awsconfig:/docs/awsconfig docker-docs:master bash
+```
+
+and then the following example shows deleting 2 documents from s3, and then requesting the
+CloudFront cache to invalidate them:
+
+```
+export BUCKET=docs.docker.com
+export AWS_CONFIG_FILE=$(pwd)/awsconfig
+aws s3 --profile $BUCKET ls s3://$BUCKET
+aws s3 --profile $BUCKET rm s3://$BUCKET/v1.0/reference/api/docker_io_oauth_api/index.html
+aws s3 --profile $BUCKET rm s3://$BUCKET/v1.1/reference/api/docker_io_oauth_api/index.html
+
+aws configure set preview.cloudfront true
+export DISTRIBUTION_ID=YUTIYUTIUTIUYTIUT
+aws cloudfront create-invalidation --profile docs.docker.com --distribution-id $DISTRIBUTION_ID --invalidation-batch '{"Paths":{"Quantity":1, "Items":["/v1.0/reference/api/docker_io_oauth_api/"]},"CallerReference":"6Mar2015sventest1"}'
+aws cloudfront create-invalidation --profile docs.docker.com --distribution-id $DISTRIBUTION_ID --invalidation-batch '{"Paths":{"Quantity":1, "Items":["/v1.1/reference/api/docker_io_oauth_api/"]},"CallerReference":"6Mar2015sventest1"}'
+```
@@ -97,6 +97,9 @@ A Dockerfile is similar to a Makefile.
 exec form makes it possible to avoid shell string munging. The exec form makes
 it possible to **RUN** commands using a base image that does not contain `/bin/sh`.
 
+Note that the exec form is parsed as a JSON array, which means that you must
+use double-quotes (") around words not single-quotes (').
+
 **CMD**
 -- **CMD** has three forms:
@@ -120,6 +123,9 @@ A Dockerfile is similar to a Makefile.
 be executed when running the image.
 If you use the shell form of the **CMD**, the `<command>` executes in `/bin/sh -c`:
 
+Note that the exec form is parsed as a JSON array, which means that you must
+use double-quotes (") around words not single-quotes (').
+
 ```
 FROM ubuntu
 CMD echo "This is a test." | wc -
|
||||||
**CMD** executes nothing at build time, but specifies the intended command for
|
**CMD** executes nothing at build time, but specifies the intended command for
|
||||||
the image.
|
the image.
|
||||||
|
|
||||||
|
**LABEL**
|
||||||
|
-- `LABEL <key>[=<value>] [<key>[=<value>] ...]`
|
||||||
|
The **LABEL** instruction adds metadata to an image. A **LABEL** is a
|
||||||
|
key-value pair. To include spaces within a **LABEL** value, use quotes and
|
||||||
|
backslashes as you would in command-line parsing.
|
||||||
|
|
||||||
|
```
|
||||||
|
LABEL "com.example.vendor"="ACME Incorporated"
|
||||||
|
```
|
||||||
|
|
||||||
|
An image can have more than one label. To specify multiple labels, separate
|
||||||
|
each key-value pair by a space.
|
||||||
|
|
||||||
|
Labels are additive including `LABEL`s in `FROM` images. As the system
|
||||||
|
encounters and then applies a new label, new `key`s override any previous
|
||||||
|
labels with identical keys.
|
||||||
|
|
||||||
|
To display an image's labels, use the `docker inspect` command.
|
||||||
|
|
||||||
**EXPOSE**
|
**EXPOSE**
|
||||||
-- `EXPOSE <port> [<port>...]`
|
-- `EXPOSE <port> [<port>...]`
|
||||||
The **EXPOSE** instruction informs Docker that the container listens on the
|
The **EXPOSE** instruction informs Docker that the container listens on the
|
||||||
|
@ -269,20 +294,22 @@ A Dockerfile is similar to a Makefile.
|
||||||
|
|
||||||
**ONBUILD**
|
**ONBUILD**
|
||||||
-- `ONBUILD [INSTRUCTION]`
|
-- `ONBUILD [INSTRUCTION]`
|
||||||
The **ONBUILD** instruction adds a trigger instruction to the image, which is
|
The **ONBUILD** instruction adds a trigger instruction to an image. The
|
||||||
executed at a later time, when the image is used as the base for another
|
trigger is executed at a later time, when the image is used as the base for
|
||||||
build. The trigger is executed in the context of the downstream build, as
|
another build. Docker executes the trigger in the context of the downstream
|
||||||
if it had been inserted immediately after the **FROM** instruction in the
|
build, as if the trigger existed immediately after the **FROM** instruction in
|
||||||
downstream Dockerfile. Any build instruction can be registered as a
|
the downstream Dockerfile.
|
||||||
trigger. This is useful if you are building an image to be
|
|
||||||
used as a base for building other images, for example an application build
|
You can register any build instruction as a trigger. A trigger is useful if
|
||||||
environment or a daemon to be customized with a user-specific
|
you are defining an image to use as a base for building other images. For
|
||||||
configuration. For example, if your image is a reusable python
|
example, if you are defining an application build environment or a daemon that
|
||||||
application builder, it requires application source code to be
|
is customized with a user-specific configuration.
|
||||||
added in a particular directory, and might require a build script
|
|
||||||
to be called after that. You can't just call **ADD** and **RUN** now, because
|
Consider an image intended as a reusable python application builder. It must
|
||||||
you don't yet have access to the application source code, and it
|
add application source code to a particular directory, and might need a build
|
||||||
is different for each application build.
|
script called after that. You can't just call **ADD** and **RUN** now, because
|
||||||
|
you don't yet have access to the application source code, and it is different
|
||||||
|
for each application build.
|
||||||
|
|
||||||
-- Providing application developers with a boilerplate Dockerfile to copy-paste
|
-- Providing application developers with a boilerplate Dockerfile to copy-paste
|
||||||
into their application is inefficient, error-prone, and
|
into their application is inefficient, error-prone, and
|
||||||
|
|