Move notary docs to notary subdirectory

This commit is contained in:
Misty Stanley-Jones 2016-09-28 16:35:14 -07:00
parent d522d7c6b3
commit 9c69f388c5
1863 changed files with 0 additions and 667551 deletions


@@ -1,53 +0,0 @@
# Changelog
## [v0.3.0](https://github.com/docker/notary/releases/tag/v0.3.0) 5/11/2016
+ Root rotations
+ RethinkDB support as a storage backend for Server and Signer
+ A new TUF repo builder that merges server and client validation
+ Trust Pinning: configure known good key IDs and CAs to replace TOFU.
+ Add --input, --output, and --quiet flags to notary verify command
+ Remove local certificate store. It was redundant as all certs were also stored in the cached root.json
+ Cleanup of dead code in client side key storage logic
+ Update project to Go 1.6.1
+ Reorganize vendoring to meet Go 1.6+ standard. Still using Godeps to manage vendored packages
+ Add targets by hash, no longer necessary to have the original target data available
+ Active Key ID verification during signature verification
+ Switch all testing from assert to require, reduces noise in test runs
+ Use alpine based images for smaller downloads and faster setup times
+ Clean up out-of-date signatures when re-signing content
+ Set cache control headers on HTTP responses from Notary Server
+ Add sha512 support for targets
+ Add environment variable for delegation key passphrase
+ Reduce permissions requested by client from token server
+ Update formatting for delegation list output
+ Move SQLite dependency to tests only so it doesn't get built into official images
+ Fix unnecessary password prompts when listing private repositories
+ Enable using notary client with username/password in a scripted fashion
+ Fix static compilation of client
+ Enforce TUF version to be >= 1; previously 0 was accepted although unused
+ json.RawMessage should always be used as *json.RawMessage, due to addressability in Go and its effect on encoding
## [v0.2](https://github.com/docker/notary/releases/tag/v0.2.0) 2/24/2016
+ Add support for delegation roles in `notary` server and client
+ Add `notary CLI` commands for managing delegation roles: `notary delegation` (see the example after this list)
+ `add`, `list` and `remove` subcommands
+ Enhance `notary CLI` commands for adding targets to delegation roles
+ `notary add --roles` and `notary remove --roles` to manipulate targets for delegations
+ Support for rotating the snapshot key to one managed by the `notary` server
+ Add consistent download functionality to download metadata and content by checksum
+ Update `docker-compose` configuration to use official mariadb image
+ deprecate `notarymysql`
+ default to using a volume for `data` directory
+ use separate databases for `notary-server` and `notary-signer` with separate users
+ Add `notary CLI` command for changing private key passphrases: `notary key passwd`
+ Enhance `notary CLI` commands for importing and exporting keys
+ Change default `notary CLI` log level to fatal, introduce new verbose (error-level) and debug-level settings
+ Store roles as PEM headers in private keys, incompatible with previous notary v0.1 key format
+ No longer store keys as `<KEY_ID>_role.key`, instead store as `<KEY_ID>.key`; new private keys from new notary clients will crash old notary clients
+ Support logging as JSON format on server and signer
+ Support mutual TLS between notary client and notary server
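
To illustrate the delegation and key-management commands above, here is a minimal sketch; the collection name (GUN), role, certificate, and target file names are placeholders, and exact flags may vary between releases:

```sh
# Delegate part of a collection to another signer, then inspect and remove the delegation
$ notary delegation add example.com/collection targets/releases delegate.crt
$ notary delegation list example.com/collection
$ notary delegation remove example.com/collection targets/releases

# Add a target under a delegation role and change a private key's passphrase
$ notary add example.com/collection v1 ./app.tar.gz --roles targets/releases
$ notary key passwd <KEY_ID>

# Changes to the collection take effect once published to the server
$ notary publish example.com/collection
```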
## [v0.1](https://github.com/docker/notary/releases/tag/v0.1) 11/15/2015
+ Initial non-alpha `notary` version
+ Implement TUF (the update framework) with support for root, targets, snapshot, and timestamp roles
+ Add PKCS11 interface to store and sign with keys in HSMs (i.e. Yubikey)


@@ -1,85 +0,0 @@
# Contributing to notary
## Before reporting an issue...
### If your problem is with...
- automated builds
- your account on the [Docker Hub](https://hub.docker.com/)
- any other [Docker Hub](https://hub.docker.com/) issue
Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)
### If you...
- need help setting up notary
- can't figure out something
- are not sure what's going on or what your problem is
Then please do not open an issue here yet - you should first try one of the following support forums:
- irc: #docker-trust on freenode
## Reporting an issue properly
By following these simple rules you will get better and faster feedback on your issue.
- search the bugtracker for an already reported issue
### If you found an issue that describes your problem:
- please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
- please refrain from adding "same thing here" or "+1" comments
- you don't need to comment on an issue to get notified of updates: just hit the "subscribe" button
- comment if you have some new, technical and relevant information to add to the case
### If you have not found an existing issue that describes your problem:
1. create a new issue, with a succinct title that describes your issue:
- bad title: "It doesn't work with my docker"
- good title: "Publish fail: 400 error with E_INVALID_DIGEST"
2. copy the output of:
- `docker version`
- `docker info`
- `docker exec <registry-container> registry -version`
3. copy the command line you used to run `notary` or launch `notaryserver`
4. if relevant, copy your `notaryserver` logs that show the error
## Contributing a patch for a known bug, or a small correction
You should follow the basic GitHub workflow:
1. fork
2. commit a change
3. make sure the tests pass
4. PR
Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:
- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
- sign your commits using `-s`: `git commit -s -m "My commit"`
Some simple rules to ensure quick merge:
- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
- if you need to amend your PR following comments, please squash instead of adding more commits (see the sketch below)
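
For reference, a minimal sketch of the sign-off and squash workflow described above (the issue number, branch name, and base branch are placeholders):

```sh
# Configure your identity once, then sign each commit with -s
$ git config user.name "Real Name"
$ git config user.email mail@example.com
$ git commit -s -m "Fix publish failure, closes #12345"

# After review feedback, squash follow-up commits into the original one
$ git rebase -i origin/master        # mark follow-up commits as "squash" or "fixup"
$ git push --force-with-lease origin my-fix-branch
```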
## Contributing new features
You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.
If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning.
If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.
Then you should submit your implementation, clearly linking to the issue (and possible proposal).
Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.
It's mandatory to:
- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
- address maintainers' comments and modify your submission accordingly
- write tests for any new code
Complying with these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience in contributing code to Notary.


@@ -1,4 +0,0 @@
David Williamson <david.williamson@docker.com> (github: davidwilliamson)
Aaron Lehmann <aaron.lehmann@docker.com> (github: aaronlehmann)
Lewis Marshall <lewis@flynn.io> (github: lmars)
Jonathan Rudenberg <jonathan@flynn.io> (github: titanous)


@@ -1,38 +0,0 @@
FROM golang:1.6.2
RUN apt-get update && apt-get install -y \
curl \
clang \
libltdl-dev \
libsqlite3-dev \
patch \
tar \
xz-utils \
python \
python-pip \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -ms /bin/bash notary \
&& pip install codecov \
&& go get golang.org/x/tools/cmd/cover github.com/golang/lint/golint github.com/client9/misspell/cmd/misspell
# Configure the container for OSX cross compilation
ENV OSX_SDK MacOSX10.11.sdk
ENV OSX_CROSS_COMMIT 8aa9b71a394905e6c5f4b59e2b97b87a004658a4
RUN set -x \
&& export OSXCROSS_PATH="/osxcross" \
&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh > /dev/null
ENV PATH /osxcross/target/bin:$PATH
ENV NOTARYDIR /go/src/github.com/docker/notary
COPY . ${NOTARYDIR}
RUN chmod -R a+rw /go
WORKDIR ${NOTARYDIR}
# Note this cannot use alpine because of the MacOSX Cross SDK: the cctools there uses sys/cdefs.h and that cannot be used in alpine: http://wiki.musl-libc.org/wiki/FAQ#Q:_I.27m_trying_to_compile_something_against_musl_and_I_get_error_messages_about_sys.2Fcdefs.h
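
A minimal sketch of how this image is typically used, based on the `notary-dockerfile`, `shell`, and `cross` targets in the Makefile elsewhere in this commit:

```sh
# Build the development image defined by this Dockerfile
$ docker build --rm --force-rm -t notary .

# Drop into a shell inside the container, mounting bin/ so build output lands on the host
$ docker run --rm -it -v $(pwd)/bin:/go/src/github.com/docker/notary/bin notary bash

# Or run the cross-compilation script for the supported platforms
$ docker run --rm -v $(pwd)/cross:/go/src/github.com/docker/notary/cross \
    -e NOTARY_BUILDTAGS=pkcs11 notary buildscripts/cross.sh darwin linux
```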

Godeps/Godeps.json (generated)

@@ -1,907 +0,0 @@
{
"ImportPath": "github.com/docker/notary",
"GoVersion": "go1.6",
"GodepVersion": "v71",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "github.com/Azure/go-ansiterm",
"Rev": "388960b655244e76e24c75f48631564eaefade62"
},
{
"ImportPath": "github.com/Azure/go-ansiterm/winterm",
"Rev": "388960b655244e76e24c75f48631564eaefade62"
},
{
"ImportPath": "github.com/BurntSushi/toml",
"Rev": "bd2bdf7f18f849530ef7a1c29a4290217cab32a1"
},
{
"ImportPath": "github.com/BurntSushi/toml/cmd/toml-test-decoder",
"Rev": "bd2bdf7f18f849530ef7a1c29a4290217cab32a1"
},
{
"ImportPath": "github.com/BurntSushi/toml/cmd/toml-test-encoder",
"Rev": "bd2bdf7f18f849530ef7a1c29a4290217cab32a1"
},
{
"ImportPath": "github.com/BurntSushi/toml/cmd/tomlv",
"Rev": "bd2bdf7f18f849530ef7a1c29a4290217cab32a1"
},
{
"ImportPath": "github.com/Shopify/logrus-bugsnag",
"Rev": "5a46080c635f13e8b60c24765c19d62e1ca8d0fb"
},
{
"ImportPath": "github.com/Sirupsen/logrus",
"Comment": "v0.10.0-18-g6d9ae30",
"Rev": "6d9ae300aaf85d6acd2e5424081c7fcddb21dab8"
},
{
"ImportPath": "github.com/agl/ed25519",
"Rev": "278e1ec8e8a6e017cd07577924d6766039146ced"
},
{
"ImportPath": "github.com/agl/ed25519/edwards25519",
"Rev": "278e1ec8e8a6e017cd07577924d6766039146ced"
},
{
"ImportPath": "github.com/agtorre/gocolorize",
"Comment": "v1.0.0",
"Rev": "f42b554bf7f006936130c9bb4f971afd2d87f671"
},
{
"ImportPath": "github.com/armon/consul-api",
"Rev": "dcfedd50ed5334f96adee43fc88518a4f095e15c"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
},
{
"ImportPath": "github.com/bitly/go-simplejson",
"Comment": "v0.5.0",
"Rev": "aabad6e819789e569bd6aabf444c935aa9ba1e44"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/errors",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/examples/appengine",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/examples/http",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/examples/revelapp/app",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/examples/revelapp/app/controllers",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/examples/revelapp/tests",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/bugsnag-go/revel",
"Comment": "v1.0.4-2-g13fd6b8",
"Rev": "13fd6b8acda029830ef9904df6b63be0a83369d0"
},
{
"ImportPath": "github.com/bugsnag/osext",
"Rev": "0dd3f918b21bec95ace9dc86c7e70266cfc5c702"
},
{
"ImportPath": "github.com/bugsnag/panicwrap",
"Rev": "e2c28503fcd0675329da73bf48b33404db873782"
},
{
"ImportPath": "github.com/cenkalti/backoff",
"Rev": "4dc77674aceaabba2c7e3da25d4c823edfb73f99"
},
{
"ImportPath": "github.com/coreos/etcd/client",
"Comment": "v3.0.0-beta.0-217-g6acb3d6",
"Rev": "6acb3d67fbe131b3b2d5d010e00ec80182be4628"
},
{
"ImportPath": "github.com/coreos/etcd/pkg/pathutil",
"Comment": "v3.0.0-beta.0-217-g6acb3d6",
"Rev": "6acb3d67fbe131b3b2d5d010e00ec80182be4628"
},
{
"ImportPath": "github.com/coreos/etcd/pkg/types",
"Comment": "v3.0.0-beta.0-217-g6acb3d6",
"Rev": "6acb3d67fbe131b3b2d5d010e00ec80182be4628"
},
{
"ImportPath": "github.com/coreos/go-etcd/etcd",
"Comment": "v2.0.0-38-g003851b",
"Rev": "003851be7bb0694fe3cc457a49529a19388ee7cf"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.4",
"Rev": "71acacd42f85e5e82f70a55327789582a5200a90"
},
{
"ImportPath": "github.com/denisenkom/go-mssqldb",
"Rev": "6e7f3d73dade2e5566f87d18c3a1d00d2ce33421"
},
{
"ImportPath": "github.com/docker/distribution",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/context",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/digest",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/health",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/reference",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/api/errcode",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/api/v2",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/auth",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/auth/htpasswd",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/auth/silly",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/auth/token",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/client",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/client/auth",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/client/transport",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/storage/cache",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/registry/storage/cache/memory",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/distribution/uuid",
"Comment": "v2.2.1-20-gc56d49b",
"Rev": "c56d49b111aea675a81d411c2db1acfac6179de9"
},
{
"ImportPath": "github.com/docker/docker/pkg/system",
"Comment": "v1.11.0",
"Rev": "4dc5990d7565a4a15d641bc6a0bc50a02cfcf302"
},
{
"ImportPath": "github.com/docker/docker/pkg/term",
"Comment": "v1.11.0",
"Rev": "4dc5990d7565a4a15d641bc6a0bc50a02cfcf302"
},
{
"ImportPath": "github.com/docker/docker/pkg/term/windows",
"Comment": "v1.11.0",
"Rev": "4dc5990d7565a4a15d641bc6a0bc50a02cfcf302"
},
{
"ImportPath": "github.com/docker/go-connections/tlsconfig",
"Comment": "v0.1.2-16-gf549a93",
"Rev": "f549a9393d05688dff0992ef3efd8bbe6c628aeb"
},
{
"ImportPath": "github.com/docker/go-units",
"Comment": "v0.1.0-21-g0bbddae",
"Rev": "0bbddae09c5a5419a8c6dcdd7ff90da3d450393b"
},
{
"ImportPath": "github.com/docker/go/canonical/json",
"Comment": "v1.5.1-1-6-gd30aec9",
"Rev": "d30aec9fd63c35133f8f79c3412ad91a3b08be06"
},
{
"ImportPath": "github.com/docker/libtrust",
"Rev": "9cbd2a1374f46905c68a4eb3694a130610adc62a"
},
{
"ImportPath": "github.com/docker/libtrust/testutil",
"Rev": "9cbd2a1374f46905c68a4eb3694a130610adc62a"
},
{
"ImportPath": "github.com/docker/libtrust/tlsdemo",
"Rev": "9cbd2a1374f46905c68a4eb3694a130610adc62a"
},
{
"ImportPath": "github.com/docker/libtrust/trustgraph",
"Rev": "9cbd2a1374f46905c68a4eb3694a130610adc62a"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/aes",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/arrays",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/base64url",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/compact",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/kdf",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/keys/ecc",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/keys/rsa",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/dvsekhvalnov/jose2go/padding",
"Comment": "v1.2",
"Rev": "6387d3c1f5abd8443b223577d5a7e0f4e0e5731f"
},
{
"ImportPath": "github.com/erikstmartin/go-testdb",
"Rev": "8d10e4a1bae52cd8b81ffdec3445890d6dccab3d"
},
{
"ImportPath": "github.com/getsentry/raven-go",
"Rev": "1cc47a9463b90f246a0503d4c2e9a55c9459ced3"
},
{
"ImportPath": "github.com/go-sql-driver/mysql",
"Comment": "v1.2-97-g0cc29e9",
"Rev": "0cc29e9fe8e25c2c58cf47bcab566e029bbaa88b"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "23def4e6c14b4da8ac2ed8007337bc5eb5007998"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "3d2510a4dd961caffa2ae781669c628d82db700a"
},
{
"ImportPath": "github.com/golang/protobuf/proto/proto3_proto",
"Rev": "3d2510a4dd961caffa2ae781669c628d82db700a"
},
{
"ImportPath": "github.com/google/gofuzz",
"Rev": "bbcb9da2d746f8bdbd6a936686a0a6067ada0ec5"
},
{
"ImportPath": "github.com/gorilla/context",
"Rev": "14f550f51af52180c2eefed15e5fd18d63c0a64a"
},
{
"ImportPath": "github.com/gorilla/mux",
"Rev": "e444e69cbd2e2e3e0749a2f3c717cec491552bbf"
},
{
"ImportPath": "github.com/hailocab/go-hostpool",
"Rev": "e80d13ce29ede4452c43dea11e79b9bc8a15b478"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/jinzhu/gorm",
"Rev": "82d726bbfd8cefbe2dcdc7f7f0484551c0d40433"
},
{
"ImportPath": "github.com/jinzhu/now",
"Rev": "ce80572eb55aa0ac839330041ca9db1afa5f1f6c"
},
{
"ImportPath": "github.com/juju/loggo",
"Rev": "8477fc936adf0e382d680310047ca27e128a309a"
},
{
"ImportPath": "github.com/kr/pretty",
"Comment": "go.weekly.2011-12-22-18-gbc9499c",
"Rev": "bc9499caa0f45ee5edb2f0209fbd61fbf3d9018f"
},
{
"ImportPath": "github.com/kr/pty",
"Comment": "release.r56-29-gf7ee69f",
"Rev": "f7ee69f31298ecbe5d2b349c711e2547a617d398"
},
{
"ImportPath": "github.com/kr/text",
"Rev": "6807e777504f54ad073ecef66747de158294b639"
},
{
"ImportPath": "github.com/kr/text/colwriter",
"Rev": "6807e777504f54ad073ecef66747de158294b639"
},
{
"ImportPath": "github.com/kr/text/mc",
"Rev": "6807e777504f54ad073ecef66747de158294b639"
},
{
"ImportPath": "github.com/lib/pq",
"Comment": "go1.0-cutoff-58-g0dad96c",
"Rev": "0dad96c0b94f8dee039aa40467f767467392a0af"
},
{
"ImportPath": "github.com/lib/pq/hstore",
"Comment": "go1.0-cutoff-58-g0dad96c",
"Rev": "0dad96c0b94f8dee039aa40467f767467392a0af"
},
{
"ImportPath": "github.com/lib/pq/oid",
"Comment": "go1.0-cutoff-58-g0dad96c",
"Rev": "0dad96c0b94f8dee039aa40467f767467392a0af"
},
{
"ImportPath": "github.com/magiconair/properties",
"Comment": "v1.5.3",
"Rev": "624009598839a9432bd97bb75552389422357723"
},
{
"ImportPath": "github.com/mattn/go-sqlite3",
"Comment": "v1.0.0",
"Rev": "b4142c444a8941d0d92b0b7103a24df9cd815e42"
},
{
"ImportPath": "github.com/mattn/go-sqlite3/sqlite3_test",
"Comment": "v1.0.0",
"Rev": "b4142c444a8941d0d92b0b7103a24df9cd815e42"
},
{
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "d0c3fe89de86839aecf2e0579c40ba3bb336a453"
},
{
"ImportPath": "github.com/miekg/pkcs11",
"Rev": "ba39b9c6300b7e0be41b115330145ef8afdff7d6"
},
{
"ImportPath": "github.com/mitchellh/go-homedir",
"Rev": "df55a15e5ce646808815381b3db47a8c66ea62f4"
},
{
"ImportPath": "github.com/mitchellh/mapstructure",
"Rev": "2caf8efc93669b6c43e0441cdc6aed17546c96f3"
},
{
"ImportPath": "github.com/olekukonko/tablewriter",
"Rev": "a5eefc286b03d5560735698ef36c83728a6ae560"
},
{
"ImportPath": "github.com/olekukonko/tablewriter/csv2table",
"Rev": "a5eefc286b03d5560735698ef36c83728a6ae560"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.7.0-53-g449ccef",
"Rev": "449ccefff16c8e2b7229f6be1921ba22f62461fe"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
"Comment": "model-0.0.2-12-gfa8ad6f",
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "4fdc91a58c9d3696b982e8a680f4997403132d44"
},
{
"ImportPath": "github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg",
"Rev": "4fdc91a58c9d3696b982e8a680f4997403132d44"
},
{
"ImportPath": "github.com/prometheus/common/model",
"Rev": "4fdc91a58c9d3696b982e8a680f4997403132d44"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "b1afdc266f54247f5dc725544f5d351a8661f502"
},
{
"ImportPath": "github.com/revel/revel",
"Comment": "v0.12.0-3-ga9a2ff4",
"Rev": "a9a2ff45fae4330ef4116b257bcf9c82e53350c2"
},
{
"ImportPath": "github.com/robfig/config",
"Rev": "0f78529c8c7e3e9a25f15876532ecbc07c7d99e6"
},
{
"ImportPath": "github.com/robfig/pathtree",
"Rev": "41257a1839e945fce74afd070e02bab2ea2c776a"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.3",
"Rev": "8cec3a854e68dba10faabbe31c089abf4a3e57a6"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "244f5ac324cb97e1987ef901a0081a77bfd8e845"
},
{
"ImportPath": "github.com/spf13/cast",
"Rev": "4d07383ffe94b5e5a6fa3af9211374a4507a0184"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "f368244301305f414206f889b1735a54cfc8bde8"
},
{
"ImportPath": "github.com/spf13/cobra/cobra",
"Rev": "8e91712f174ced10270cf66615e0a9127e7c4de5"
},
{
"ImportPath": "github.com/spf13/cobra/cobra/cmd",
"Rev": "8e91712f174ced10270cf66615e0a9127e7c4de5"
},
{
"ImportPath": "github.com/spf13/cobra/doc",
"Rev": "f368244301305f414206f889b1735a54cfc8bde8"
},
{
"ImportPath": "github.com/spf13/jwalterweatherman",
"Rev": "3d60171a64319ef63c78bd45bd60e6eab1e75f8b"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "cb88ea77998c3f024757528e3305022ab50b43be"
},
{
"ImportPath": "github.com/spf13/viper",
"Rev": "be5ff3e4840cf692388bde7a057595a474ef379e"
},
{
"ImportPath": "github.com/spf13/viper/remote",
"Rev": "be5ff3e4840cf692388bde7a057595a474ef379e"
},
{
"ImportPath": "github.com/stevvooe/resumable",
"Rev": "eb352b28d119500cb0382a8379f639c1c8d65831"
},
{
"ImportPath": "github.com/stevvooe/resumable/sha256",
"Rev": "eb352b28d119500cb0382a8379f639c1c8d65831"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Comment": "v1.0-17-g089c718",
"Rev": "089c7181b8c728499929ff09b62d3fdd8df8adff"
},
{
"ImportPath": "github.com/stretchr/testify/require",
"Comment": "v1.0-17-g089c718",
"Rev": "089c7181b8c728499929ff09b62d3fdd8df8adff"
},
{
"ImportPath": "github.com/stvp/go-udp-testing",
"Rev": "06eb4f886d9f8242b0c176cf0d3ce5ec2cedda05"
},
{
"ImportPath": "github.com/tobi/airbrake-go",
"Rev": "a3cdd910a3ffef88a20fbecc10363a520ad61a0a"
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "c062049c1793b01a3cc3fe786108edabbaf7756b"
},
{
"ImportPath": "github.com/xordataexchange/crypt/backend",
"Comment": "v0.0.2-17-g749e360",
"Rev": "749e360c8f236773f28fc6d3ddfce4a470795227"
},
{
"ImportPath": "github.com/xordataexchange/crypt/backend/consul",
"Comment": "v0.0.2-17-g749e360",
"Rev": "749e360c8f236773f28fc6d3ddfce4a470795227"
},
{
"ImportPath": "github.com/xordataexchange/crypt/backend/etcd",
"Comment": "v0.0.2-17-g749e360",
"Rev": "749e360c8f236773f28fc6d3ddfce4a470795227"
},
{
"ImportPath": "github.com/xordataexchange/crypt/config",
"Comment": "v0.0.2-17-g749e360",
"Rev": "749e360c8f236773f28fc6d3ddfce4a470795227"
},
{
"ImportPath": "github.com/xordataexchange/crypt/encoding/secconf",
"Comment": "v0.0.2-17-g749e360",
"Rev": "749e360c8f236773f28fc6d3ddfce4a470795227"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/cast5",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/md4",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/nacl/secretbox",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp/armor",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp/elgamal",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp/errors",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp/packet",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/openpgp/s2k",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/pbkdf2",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/poly1305",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/salsa20/salsa",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/scrypt",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/crypto/ssh/terminal",
"Rev": "5bcd134fee4dd1475da17714aac19c0aa0142e2f"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/context/ctxhttp",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/http2/h2i",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/http2/hpack",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/net/websocket",
"Rev": "47990a1ba55743e6ef1affd3a14e5bac8553615d"
},
{
"ImportPath": "golang.org/x/oauth2",
"Rev": "93758b5cba8ca0dbceaf339f864a96445d343c29"
},
{
"ImportPath": "golang.org/x/oauth2/google",
"Rev": "93758b5cba8ca0dbceaf339f864a96445d343c29"
},
{
"ImportPath": "golang.org/x/oauth2/internal",
"Rev": "93758b5cba8ca0dbceaf339f864a96445d343c29"
},
{
"ImportPath": "golang.org/x/oauth2/jws",
"Rev": "93758b5cba8ca0dbceaf339f864a96445d343c29"
},
{
"ImportPath": "golang.org/x/oauth2/jwt",
"Rev": "93758b5cba8ca0dbceaf339f864a96445d343c29"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "442cd600860ce722f6615730eb008a37a87b13ee"
},
{
"ImportPath": "google.golang.org/appengine",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/app_identity",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/base",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/datastore",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/log",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/modules",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/appengine/internal/remote_api",
"Rev": "41265fb44deca5c3b05a946d5db1f54ae54fe67e"
},
{
"ImportPath": "google.golang.org/cloud/compute/metadata",
"Rev": "5530fc8457464580b66ade0ebe62b8c63e4530f3"
},
{
"ImportPath": "google.golang.org/cloud/internal",
"Rev": "5530fc8457464580b66ade0ebe62b8c63e4530f3"
},
{
"ImportPath": "google.golang.org/grpc",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/benchmark",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/benchmark/client",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/benchmark/grpc_testing",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/benchmark/server",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/benchmark/stats",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/codes",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/credentials",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/credentials/oauth",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/helloworld/greeter_client",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/helloworld/greeter_server",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/helloworld/helloworld",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/route_guide/client",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/route_guide/routeguide",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/examples/route_guide/server",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/grpclog",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/grpclog/glogger",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/health",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/health/grpc_health_v1alpha",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/interop/client",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/interop/grpc_testing",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/interop/server",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/metadata",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/naming",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/naming/etcd",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/test/codec_perf",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/test/grpc_testing",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "google.golang.org/grpc/transport",
"Rev": "3e7b7e58f491074e9577050058fb95d2351a60b0"
},
{
"ImportPath": "gopkg.in/check.v1",
"Rev": "4f90aeace3a26ad7021961c297b22c42160c7b25"
},
{
"ImportPath": "gopkg.in/dancannon/gorethink.v2",
"Comment": "v2.0.2",
"Rev": "3742792da4bc279ccd6d807f24687009cbeda860"
},
{
"ImportPath": "gopkg.in/dancannon/gorethink.v2/encoding",
"Comment": "v2.0.2",
"Rev": "3742792da4bc279ccd6d807f24687009cbeda860"
},
{
"ImportPath": "gopkg.in/dancannon/gorethink.v2/ql2",
"Comment": "v2.0.2",
"Rev": "3742792da4bc279ccd6d807f24687009cbeda860"
},
{
"ImportPath": "gopkg.in/dancannon/gorethink.v2/types",
"Comment": "v2.0.2",
"Rev": "3742792da4bc279ccd6d807f24687009cbeda860"
},
{
"ImportPath": "gopkg.in/fatih/pool.v2",
"Rev": "cba550ebf9bce999a02e963296d4bc7a486cb715"
},
{
"ImportPath": "gopkg.in/fsnotify.v1",
"Comment": "v1.2.10",
"Rev": "875cf421b32f8f1b31bd43776297876d01542279"
},
{
"ImportPath": "gopkg.in/yaml.v2",
"Rev": "bef53efd0c76e49e6de55ead051f886bea7e9420"
}
]
}

Godeps/Readme (generated)

@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,58 +0,0 @@
# Notary maintainers file
#
# This file describes who runs the docker/notary project and how.
# This is a living document - if you see something out of date or missing, speak up!
#
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant parser.
#
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
[Org."Core maintainers"]
people = [
"cyli",
"diogomonica",
"dmcgowan",
"endophage",
"nathanmccauley",
"riyazdf",
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.cyli]
Name = "Ying Li"
Email = "ying.li@docker.com"
GitHub = "cyli"
[people.diogomonica]
Name = "Diogo Monica"
Email = "diogo@docker.com"
GitHub = "diogomonica"
[people.dmcgowan]
Name = "Derek McGowan"
Email = "derek@docker.com"
GitHub = "dmcgowan"
[people.endophage]
Name = "David Lawrence"
Email = "david.lawrence@docker.com"
GitHub = "endophage"
[people.nathanmccauley]
Name = "Nathan McCauley"
Email = "nathan.mccauley@docker.com"
GitHub = "nathanmccauley"
[people.riyazdf]
Name = "Riyaz Faizullabhoy"
Email = "riyaz@docker.com"
GitHub = "riyazdf"

Makefile

@@ -1,215 +0,0 @@
# Set an output prefix, which is the local directory if not specified
PREFIX?=$(shell pwd)
# Populate version variables
# Add to compile time flags
NOTARY_PKG := github.com/docker/notary
NOTARY_VERSION := $(shell cat NOTARY_VERSION)
GITCOMMIT := $(shell git rev-parse --short HEAD)
GITUNTRACKEDCHANGES := $(shell git status --porcelain --untracked-files=no)
ifneq ($(GITUNTRACKEDCHANGES),)
GITCOMMIT := $(GITCOMMIT)-dirty
endif
CTIMEVAR=-X $(NOTARY_PKG)/version.GitCommit=$(GITCOMMIT) -X $(NOTARY_PKG)/version.NotaryVersion=$(NOTARY_VERSION)
GO_LDFLAGS=-ldflags "-w $(CTIMEVAR)"
GO_LDFLAGS_STATIC=-ldflags "-w $(CTIMEVAR) -extldflags -static"
GOOSES = darwin linux
NOTARY_BUILDTAGS ?= pkcs11
NOTARYDIR := /go/src/github.com/docker/notary
GO_VERSION := $(shell go version | grep "1\.[6-9]\(\.[0-9]+\)*\|devel")
# check to make sure we have the right version. development versions of Go are
# not officially supported, but allowed for building
ifeq ($(strip $(GO_VERSION))$(SKIPENVCHECK),)
$(error Bad Go version - please install Go >= 1.6)
endif
# check to be sure pkcs11 lib is always imported with a build tag
GO_LIST_PKCS11 := $(shell go list -tags "${NOTARY_BUILDTAGS}" -e -f '{{join .Deps "\n"}}' ./... | grep -v /vendor/ | xargs go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' | grep -q pkcs11)
ifeq ($(GO_LIST_PKCS11),)
$(info pkcs11 import was not found anywhere without a build tag, yay)
else
$(error You are importing pkcs11 somewhere and not using a build tag)
endif
_empty :=
_space := $(_empty) $(_empty)
# go cover test variables
COVERDIR=.cover
COVERPROFILE?=$(COVERDIR)/cover.out
COVERMODE=count
PKGS ?= $(shell go list -tags "${NOTARY_BUILDTAGS}" ./... | grep -v /vendor/ | tr '\n' ' ')
.PHONY: clean all fmt vet lint build test binaries cross cover docker-images notary-dockerfile
.DELETE_ON_ERROR: cover
.DEFAULT: default
all: AUTHORS clean fmt vet lint build test binaries
AUTHORS: .git/HEAD
git log --format='%aN <%aE>' | sort -fu > $@
# This only needs to be generated by hand when cutting full releases.
version/version.go:
./version/version.sh > $@
${PREFIX}/bin/notary-server: NOTARY_VERSION $(shell find . -type f -name '*.go')
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary-server
${PREFIX}/bin/notary: NOTARY_VERSION $(shell find . -type f -name '*.go')
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary
${PREFIX}/bin/notary-signer: NOTARY_VERSION $(shell find . -type f -name '*.go')
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary-signer
ifeq ($(shell uname -s),Darwin)
${PREFIX}/bin/static/notary-server:
@echo "notary-server: static builds not supported on OS X"
${PREFIX}/bin/static/notary-signer:
@echo "notary-signer: static builds not supported on OS X"
${PREFIX}/bin/static/notary:
@echo "notary: static builds not supported on OS X"
else
${PREFIX}/bin/static/notary-server: NOTARY_VERSION $(shell find . -type f -name '*.go')
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS_STATIC} ./cmd/notary-server
${PREFIX}/bin/static/notary-signer: NOTARY_VERSION $(shell find . -type f -name '*.go')
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS_STATIC} ./cmd/notary-signer
${PREFIX}/bin/static/notary:
@echo "+ $@"
@go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS_STATIC} ./cmd/notary
endif
vet:
@echo "+ $@"
ifeq ($(shell uname -s), Darwin)
@test -z "$(shell find . -iname *test*.go | grep -v _test.go | grep -v vendor | xargs echo "This file should end with '_test':" | tee /dev/stderr)"
else
@test -z "$(shell find . -iname *test*.go | grep -v _test.go | grep -v vendor | xargs -r echo "This file should end with '_test':" | tee /dev/stderr)"
endif
@test -z "$$(go tool vet -printf=false . 2>&1 | grep -v vendor/ | tee /dev/stderr)"
fmt:
@echo "+ $@"
@test -z "$$(gofmt -s -l .| grep -v .pb. | grep -v vendor/ | tee /dev/stderr)"
lint:
@echo "+ $@"
@test -z "$(shell find . -type f -name "*.go" -not -path "./vendor/*" -not -name "*.pb.*" -exec golint {} \; | tee /dev/stderr)"
# Requires that the following:
# go get -u github.com/client9/misspell/cmd/misspell
#
# be run first
# misspell target, don't include Godeps, binaries, python tests, or git files
misspell:
@echo "+ $@"
@test -z "$$(find . -name '*' | grep -v vendor/ | grep -v bin/ | grep -v misc/ | grep -v .git/ | xargs misspell | tee /dev/stderr)"
build:
@echo "+ $@"
@go build -tags "${NOTARY_BUILDTAGS}" -v ${GO_LDFLAGS} $(PKGS)
# When running `go test ./...`, it runs all the suites in parallel, which causes
# problems when running with a yubikey
test: TESTOPTS =
test:
@echo Note: when testing with a yubikey plugged in, make sure to include 'TESTOPTS="-p 1"'
@echo "+ $@ $(TESTOPTS)"
@echo
go test -tags "${NOTARY_BUILDTAGS}" $(TESTOPTS) $(PKGS)
test-full: TESTOPTS =
test-full: vet lint
@echo Note: when testing with a yubikey plugged in, make sure to include 'TESTOPTS="-p 1"'
@echo "+ $@"
@echo
go test -tags "${NOTARY_BUILDTAGS}" $(TESTOPTS) -v $(PKGS)
integration: TESTDB = mysql
integration:
buildscripts/integrationtest.sh development.$(TESTDB).yml
protos:
@protoc --go_out=plugins=grpc:. proto/*.proto
# This allows coverage for a package to come from tests in different package.
# Requires that the following:
# go get github.com/wadey/gocovmerge; go install github.com/wadey/gocovmerge
#
# be run first
define gocover
go test $(OPTS) $(TESTOPTS) -covermode="$(COVERMODE)" -coverprofile="$(COVERDIR)/$(subst /,-,$(1)).$(subst $(_space),.,$(NOTARY_BUILDTAGS)).coverage.txt" "$(1)" || exit 1;
endef
gen-cover:
@mkdir -p "$(COVERDIR)"
$(foreach PKG,$(PKGS),$(call gocover,$(PKG)))
rm -f "$(COVERDIR)"/*testutils*.coverage.txt
# Generates the cover binaries and runs them all in serial, so this can be used to
# run all tests with a yubikey without any problems
cover: OPTS = -tags "${NOTARY_BUILDTAGS}" -coverpkg "$(shell ./coverpkg.sh $(1) $(NOTARY_PKG))"
cover: gen-cover covmerge
@go tool cover -html="$(COVERPROFILE)"
# Generates the cover binaries and runs them all in serial, so this can be used to
# run all tests with a yubikey without any problems
ci: OPTS = -tags "${NOTARY_BUILDTAGS}" -race -coverpkg "$(shell ./coverpkg.sh $(1) $(NOTARY_PKG))"
# Codecov knows how to merge multiple coverage files, so covmerge is not needed
ci: gen-cover
yubikey-tests: override PKGS = github.com/docker/notary/cmd/notary github.com/docker/notary/trustmanager/yubikey
yubikey-tests: ci
covmerge:
@gocovmerge $(shell ls -1 $(COVERDIR)/* | tr "\n" " ") > $(COVERPROFILE)
@go tool cover -func="$(COVERPROFILE)"
clean-protos:
@rm proto/*.pb.go
client: ${PREFIX}/bin/notary
@echo "+ $@"
binaries: ${PREFIX}/bin/notary-server ${PREFIX}/bin/notary ${PREFIX}/bin/notary-signer
@echo "+ $@"
static: ${PREFIX}/bin/static/notary-server ${PREFIX}/bin/static/notary-signer ${PREFIX}/bin/static/notary
@echo "+ $@"
notary-dockerfile:
@docker build --rm --force-rm -t notary .
server-dockerfile:
@docker build --rm --force-rm -f server.Dockerfile -t notary-server .
signer-dockerfile:
@docker build --rm --force-rm -f signer.Dockerfile -t notary-signer .
docker-images: notary-dockerfile server-dockerfile signer-dockerfile
shell: notary-dockerfile
docker run --rm -it -v $(CURDIR)/cross:$(NOTARYDIR)/cross -v $(CURDIR)/bin:$(NOTARYDIR)/bin notary bash
cross: notary-dockerfile
@rm -rf $(CURDIR)/cross
docker run --rm -v $(CURDIR)/cross:$(NOTARYDIR)/cross -e NOTARY_BUILDTAGS=$(NOTARY_BUILDTAGS) notary buildscripts/cross.sh $(GOOSES)
clean:
@echo "+ $@"
@rm -rf "$(COVERDIR)"
@rm -rf "${PREFIX}/bin/notary-server" "${PREFIX}/bin/notary" "${PREFIX}/bin/notary-signer"


@@ -1 +0,0 @@
0.3


@@ -1,99 +0,0 @@
# Notary
[![Circle CI](https://circleci.com/gh/docker/notary/tree/master.svg?style=shield)](https://circleci.com/gh/docker/notary/tree/master) [![CodeCov](https://codecov.io/github/docker/notary/coverage.svg?branch=master)](https://codecov.io/github/docker/notary)
The Notary project comprises a [server](cmd/notary-server) and a [client](cmd/notary) for running and interacting
with trusted collections. Please see the [service architecture](docs/service_architecture.md) documentation
for more information.
Notary aims to make the internet more secure by making it easy for people to
publish and verify content. We often rely on TLS to secure our communications
with a web server, but that approach is inherently flawed: any compromise of the
server allows malicious content to be substituted for the legitimate content.
With Notary, publishers can sign their content offline using keys kept highly
secure. Once the publisher is ready to make the content available, they can
push their signed trusted collection to a Notary Server.
Consumers, having acquired the publisher's public key through a secure channel,
can then communicate with any notary server or (insecure) mirror, relying
only on the publisher's key to determine the validity and integrity of the
received content.
## Goals
Notary is based on [The Update Framework](https://www.theupdateframework.com/), a secure general design for the problem of software distribution and updates. By using TUF, notary achieves a number of key advantages:
* **Survivable Key Compromise**: Content publishers must manage keys in order to sign their content. Signing keys may be compromised or lost so systems must be designed in order to be flexible and recoverable in the case of key compromise. TUF's notion of key roles is utilized to separate responsibilities across a hierarchy of keys such that loss of any particular key (except the root role) by itself is not fatal to the security of the system.
* **Freshness Guarantees**: Replay attacks are a common problem in designing secure systems, where previously valid payloads are replayed to trick another system. The same problem exists in software update systems, where old signed content can be presented as the most recent. notary makes use of timestamping on publishing so that consumers can know that they are receiving the most up-to-date content. This is particularly important for software updates, where old vulnerable versions could be used to attack users.
* **Configurable Trust Thresholds**: Oftentimes there are a large number of publishers that are allowed to publish a particular piece of content. For example, open source projects where there are a number of core maintainers. Trust thresholds can be used so that content consumers require a configurable number of signatures on a piece of content in order to trust it. Using thresholds increases security so that loss of individual signing keys doesn't allow publishing of malicious content.
* **Signing Delegation**: To allow for flexible publishing of trusted collections, a content publisher can delegate part of their collection to another signer. This delegation is represented as signed metadata so that a consumer of the content can verify both the content and the delegation.
* **Use of Existing Distribution**: Notary's trust guarantees are not tied at all to particular distribution channels from which content is delivered. Therefore, trust can be added to any existing content delivery mechanism.
* **Untrusted Mirrors and Transport**: All of the notary metadata can be mirrored and distributed via arbitrary channels.
## Security
Please see our [service architecture docs](docs/service_architecture.md#threat-model) for more information about our threat model, which details the varying survivability and severities for key compromise as well as mitigations.
Our last security audit was on July 31, 2015 by NCC ([results](docs/resources/ncc_docker_notary_audit_2015_07_31.pdf)).
Any security vulnerabilities can be reported to security@docker.com.
# Getting started with the Notary CLI
Please get the Notary Client CLI binary from [the official releases page](https://github.com/docker/notary/releases) or you can [build one yourself](#building-notary).
The versions of the Notary server and signer should be greater than or equal to the Notary CLI's version to ensure feature compatibility (for example, with CLI version 0.2, use server/signer version >= 0.2); all official releases are associated with GitHub tags.
To use the Notary CLI with Docker hub images, please have a look at our
[getting started docs](docs/getting_started.md).
For more advanced usage, please see the
[advanced usage docs](docs/advanced_usage.md).
To use the CLI against a local Notary server rather than against Docker Hub:
1. Please ensure that you have [docker and docker-compose](http://docs.docker.com/compose/install/) installed.
1. `git clone https://github.com/docker/notary.git` and from the cloned repository path,
start up a local Notary server and signer and copy the config file and testing certs to your
local notary config directory:
```sh
$ docker-compose build
$ docker-compose up -d
$ mkdir -p ~/.notary && cp cmd/notary/config.json cmd/notary/root-ca.crt ~/.notary
```
1. Add `127.0.0.1 notary-server` to your `/etc/hosts`, or if using docker-machine,
add `$(docker-machine ip) notary-server`.
You can run through the examples in the
[getting started docs](docs/getting_started.md) and
[advanced usage docs](docs/advanced_usage.md), but
without the `-s` (server URL) argument to the `notary` command, since the server
URL is already specified in the configuration file you copied.
You can also leave off the `-d ~/.docker/trust` argument if you do not care
to use `notary` with Docker images.
## Building Notary
Prerequisites:
- Go >= 1.6.1
- [godep](https://github.com/tools/godep) installed
- libtool development headers installed
- Ubuntu: `apt-get install libltdl-dev`
- CentOS/RedHat: `yum install libtool-ltdl-devel`
- Mac OS ([Homebrew](http://brew.sh/)): `brew install libtool`
Run `make binaries`, which creates the Notary Client CLI binary at `bin/notary`.
Note that `make binaries` assumes a standard Go directory structure, in which
Notary is checked out to the `src` directory in your `GOPATH`. For example:
```
$GOPATH/
src/
github.com/
docker/
notary/
```

View File

@ -1,7 +0,0 @@
# Roadmap
The Trust project consists of a number of moving parts, of which Notary Server is one. Notary Server is the front-line metadata service
that clients interact with. It manages TUF metadata and interacts with a pluggable signing service to issue new TUF timestamp
files.
The Notary-signer is provided as our reference implementation of a signing service. It supports HSMs along with Ed25519 software signing.

View File

@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -e
case $CIRCLE_NODE_INDEX in
0) docker run --rm -e NOTARY_BUILDTAGS=pkcs11 --env-file buildscripts/env.list --user notary notary_client bash -c "make ci && codecov"
;;
1) docker run --rm -e NOTARY_BUILDTAGS=none --env-file buildscripts/env.list --user notary notary_client bash -c "make ci && codecov"
;;
2) SKIPENVCHECK=1 make TESTDB=mysql integration
;;
3) SKIPENVCHECK=1 make TESTDB=rethink integration
;;
4) docker run --rm -e NOTARY_BUILDTAGS=pkcs11 notary_client make vet lint fmt misspell
;;
esac

View File

@ -1,33 +0,0 @@
#!/usr/bin/env bash
GOARCH="amd64"
if [[ "${NOTARY_BUILDTAGS}" == *pkcs11* ]]; then
export CGO_ENABLED=1
else
export CGO_ENABLED=0
fi
for os in "$@"; do
export GOOS="${os}"
if [[ "${GOOS}" == "darwin" ]]; then
export CC="o64-clang"
export CXX="o64-clang++"
# -ldflags=-s: see https://github.com/golang/go/issues/11994
export LDFLAGS="${GO_LDFLAGS} -ldflags=-s"
else
unset CC
unset CXX
LDFLAGS="${GO_LDFLAGS}"
fi
mkdir -p "${NOTARYDIR}/cross/${GOOS}/${GOARCH}";
go build \
-o "${NOTARYDIR}/cross/${GOOS}/${GOARCH}/notary" \
-a \
-tags "${NOTARY_BUILDTAGS}" \
${LDFLAGS} \
./cmd/notary;
done

View File

@ -1,40 +0,0 @@
# These are codecov environment variables to pass through
CODECOV_TOKEN
CODECOV_ENV
CI
# These are the CircleCI environment variables to pass through for codecov
CIRCLECI
CIRCLE_BRANCH
CIRCLE_BUILD_NUM
CIRCLE_NODE_INDEX
CIRCLE_PR_NUMBER
CIRCLE_PROJECT_USERNAME
CIRCLE_PROJECT_REPONAME
CIRCLE_SHA1
# These are the Jenkins environment variables to pass through for codecov
JENKINS_URL
ghprbSourceBranch
GIT_BRANCH
ghprbActualCommit
GIT_COMMIT
ghprbPullId
BUILD_NUMBER
BUILD_URL
WORKSPACE
# These are the Travis environment variables to pass through for codecov
# http://docs.travis-ci.com/user/environment-variables/#Default-Environment-Variables
# TRAVIS
# TRAVIS_BRANCH
# TRAVIS_JOB_NUMBER
# TRAVIS_PULL_REQUEST
# TRAVIS_JOB_ID
# TRAVIS_TAG
# TRAVIS_REPO_SLUG
# TRAVIS_COMMIT
# TRAVIS_BUILD_DIR

View File

@ -1,39 +0,0 @@
#!/usr/bin/env bash
composeFile="$1"
function cleanup {
rm -f bin/notary
docker-compose -f $composeFile kill
# if we're in CircleCI, we cannot remove any containers
if [[ -z "${CIRCLECI}" ]]; then
docker-compose -f $composeFile down -v --remove-orphans
fi
}
function cleanupAndExit {
cleanup
# Check for existence of SUCCESS
ls test_output/SUCCESS
exitCode=$?
# Clean up test_output dir (if not in CircleCI) and exit
if [[ -z "${CIRCLECI}" ]]; then
rm -rf test_output
fi
exit $exitCode
}
if [[ -z "${CIRCLECI}" ]]; then
BUILDOPTS="--force-rm"
fi
set -e
set -x
cleanup
docker-compose -f $composeFile config
docker-compose -f $composeFile build ${BUILDOPTS} --pull | tee
docker-compose -f $composeFile up --abort-on-container-exit
trap cleanupAndExit SIGINT SIGTERM EXIT

View File

@ -1,51 +0,0 @@
#!/usr/bin/env bash
set -e
make clean
make client
set +e
RANDOMSTRING="$(cat /dev/urandom | env LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 1)"
HOST="${REMOTE_SERVER_URL:-https://notary-server:4443}"
REPONAME="docker.com/notary/${RANDOMSTRING}"
OPTS="-c cmd/notary/config.json -d /tmp/${RANDOMSTRING}"
export NOTARY_ROOT_PASSPHRASE=ponies
export NOTARY_TARGETS_PASSPHRASE=ponies
export NOTARY_SNAPSHOT_PASSPHRASE=ponies
echo "Notary Host: ${HOST}"
echo "Repo Name: ${REPONAME}"
echo
rm -rf "/tmp/${RANDOMSTRING}"
iter=0
until (curl -s -S -k "${HOST}")
do
((iter++))
if (( iter > 30 )); then
echo "notary service failed to come up within 30 seconds"
exit 1;
fi
echo "waiting for notary service to come up."
sleep 1
done
set -e
set -x
bin/notary ${OPTS} init ${REPONAME}
bin/notary ${OPTS} delegation add ${REPONAME} targets/releases fixtures/secure.example.com.crt --all-paths
bin/notary ${OPTS} add ${REPONAME} readmetarget README.md
bin/notary ${OPTS} publish ${REPONAME}
bin/notary ${OPTS} delegation list ${REPONAME} | grep targets/releases
cat README.md | bin/notary ${OPTS} verify $REPONAME readmetarget > /test_output/SUCCESS
# Make this file accessible for CI
chmod -R 777 /test_output

View File

@ -1,23 +0,0 @@
machine:
pre:
# Upgrade docker
- curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
# upgrade compose
- sudo pip install --upgrade docker-compose
services:
- docker
dependencies:
override:
- docker build -t notary_client .
test:
override:
# circleci only supports manual parallelism
- buildscripts/circle_parallelism.sh:
parallel: true
timeout: 600
post:
- docker-compose -f docker-compose.yml down -v
- docker-compose -f docker-compose.rethink.yml down -v

View File

@ -1,157 +0,0 @@
// The client can read and operate on older repository formats
package client
import (
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustpinning"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/store"
"github.com/stretchr/testify/require"
)
// Once a fixture is read in, ensure that it's valid by making sure the expiry
// times of all the metadata and certificates are more than 10 years in the future
func requireValidFixture(t *testing.T, notaryRepo *NotaryRepository) {
tenYearsInFuture := time.Now().AddDate(10, 0, 0)
require.True(t, notaryRepo.tufRepo.Root.Signed.Expires.After(tenYearsInFuture))
require.True(t, notaryRepo.tufRepo.Snapshot.Signed.Expires.After(tenYearsInFuture))
require.True(t, notaryRepo.tufRepo.Timestamp.Signed.Expires.After(tenYearsInFuture))
for _, targetObj := range notaryRepo.tufRepo.Targets {
require.True(t, targetObj.Signed.Expires.After(tenYearsInFuture))
}
}
// recursively copies the contents of one directory into another - ignores
// symlinks
func recursiveCopy(sourceDir, targetDir string) error {
return filepath.Walk(sourceDir, func(fp string, fi os.FileInfo, err error) error {
if err != nil {
return err
}
targetFP := filepath.Join(targetDir, strings.TrimPrefix(fp, sourceDir+"/"))
if fi.IsDir() {
return os.MkdirAll(targetFP, fi.Mode())
}
// Ignore symlinks
if fi.Mode()&os.ModeSymlink == os.ModeSymlink {
return nil
}
// copy the file
in, err := os.Open(fp)
if err != nil {
return err
}
defer in.Close()
out, err := os.Create(targetFP)
if err != nil {
return err
}
defer out.Close()
_, err = io.Copy(out, in)
if err != nil {
return err
}
return nil
})
}
// We can read and publish from notary0.1 repos
func Test0Dot1RepoFormat(t *testing.T) {
// make a temporary directory and copy the fixture into it, since updating
// and publishing will modify the files
tmpDir, err := ioutil.TempDir("", "notary-backwards-compat-test")
defer os.RemoveAll(tmpDir)
require.NoError(t, err)
require.NoError(t, recursiveCopy("../fixtures/compatibility/notary0.1", tmpDir))
gun := "docker.com/notary0.1/samplerepo"
passwd := "randompass"
ts := fullTestServer(t)
defer ts.Close()
repo, err := NewNotaryRepository(tmpDir, gun, ts.URL, http.DefaultTransport,
passphrase.ConstantRetriever(passwd), trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
// targets should have 1 target, and it should be readable offline
targets, err := repo.ListTargets()
require.NoError(t, err)
require.Len(t, targets, 1)
require.Equal(t, "LICENSE", targets[0].Name)
// ok, now that everything has been loaded, verify that the fixture is valid
requireValidFixture(t, repo)
// delete the timestamp metadata, since the server will ignore the uploaded
// one and try to create a new one from scratch, which will be the wrong version
require.NoError(t, repo.fileStore.RemoveMeta(data.CanonicalTimestampRole))
// rotate the timestamp key, since the server doesn't have that one
err = repo.RotateKey(data.CanonicalTimestampRole, true)
require.NoError(t, err)
require.NoError(t, repo.Publish())
targets, err = repo.ListTargets()
require.NoError(t, err)
require.Len(t, targets, 2)
// Also check that we can add/remove keys by rotating keys
oldTargetsKeys := repo.CryptoService.ListKeys(data.CanonicalTargetsRole)
require.NoError(t, repo.RotateKey(data.CanonicalTargetsRole, false))
require.NoError(t, repo.Publish())
newTargetsKeys := repo.CryptoService.ListKeys(data.CanonicalTargetsRole)
require.Len(t, oldTargetsKeys, 1)
require.Len(t, newTargetsKeys, 1)
require.NotEqual(t, oldTargetsKeys[0], newTargetsKeys[0])
// rotate the snapshot key to the server and ensure that the server can re-generate the snapshot
// and we can download the snapshot
require.NoError(t, repo.RotateKey(data.CanonicalSnapshotRole, true))
require.NoError(t, repo.Publish())
err = repo.Update(false)
require.NoError(t, err)
}
// Ensures that the current client can download metadata that is published from notary 0.1 repos
func TestDownloading0Dot1RepoFormat(t *testing.T) {
gun := "docker.com/notary0.1/samplerepo"
passwd := "randompass"
metaCache, err := store.NewFilesystemStore(
filepath.Join("../fixtures/compatibility/notary0.1/tuf", filepath.FromSlash(gun)),
"metadata", "json")
require.NoError(t, err)
ts := readOnlyServer(t, metaCache, http.StatusNotFound, gun)
defer ts.Close()
repoDir, err := ioutil.TempDir("", "notary-backwards-compat-test")
require.NoError(t, err)
defer os.RemoveAll(repoDir)
repo, err := NewNotaryRepository(repoDir, gun, ts.URL, http.DefaultTransport,
passphrase.ConstantRetriever(passwd), trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
err = repo.Update(true)
require.NoError(t, err, "error updating repo: %s", err)
}

View File

@ -1,101 +0,0 @@
package changelist
import (
"github.com/docker/notary/tuf/data"
)
// Scopes for TUFChanges are simply the TUF roles.
// Unfortunately because of targets delegations, we can only
// cover the base roles.
const (
ScopeRoot = "root"
ScopeTargets = "targets"
ScopeSnapshot = "snapshot"
ScopeTimestamp = "timestamp"
)
// Types for TUFChanges are namespaced by the Role they
// are relevant for. The Root and Targets roles are the
// only ones for which user action can cause a change, as
// all changes in Snapshot and Timestamp are programmatically
// generated based on Root and Targets changes.
const (
TypeRootRole = "role"
TypeTargetsTarget = "target"
TypeTargetsDelegation = "delegation"
)
// TUFChange represents a change to a TUF repo
type TUFChange struct {
// Abbreviated because Go doesn't permit a field and method of the same name
Actn string `json:"action"`
Role string `json:"role"`
ChangeType string `json:"type"`
ChangePath string `json:"path"`
Data []byte `json:"data"`
}
// TUFRootData represents a modification of the keys associated
// with a role that appears in the root.json
type TUFRootData struct {
Keys data.KeyList `json:"keys"`
RoleName string `json:"role"`
}
// NewTUFChange initializes a TUFChange object
func NewTUFChange(action string, role, changeType, changePath string, content []byte) *TUFChange {
return &TUFChange{
Actn: action,
Role: role,
ChangeType: changeType,
ChangePath: changePath,
Data: content,
}
}
// Action returns c.Actn
func (c TUFChange) Action() string {
return c.Actn
}
// Scope returns c.Role
func (c TUFChange) Scope() string {
return c.Role
}
// Type returns c.ChangeType
func (c TUFChange) Type() string {
return c.ChangeType
}
// Path returns c.ChangePath
func (c TUFChange) Path() string {
return c.ChangePath
}
// Content returns c.Data
func (c TUFChange) Content() []byte {
return c.Data
}
// TUFDelegation represents a modification to a target delegation;
// this includes creating a new delegation. This format is used to avoid
// unexpected race conditions between humans modifying the same delegation
type TUFDelegation struct {
NewName string `json:"new_name,omitempty"`
NewThreshold int `json:"threshold,omitempty"`
AddKeys data.KeyList `json:"add_keys,omitempty"`
RemoveKeys []string `json:"remove_keys,omitempty"`
AddPaths []string `json:"add_paths,omitempty"`
RemovePaths []string `json:"remove_paths,omitempty"`
ClearAllPaths bool `json:"clear_paths,omitempty"`
}
// ToNewRole creates a fresh role object from the TUFDelegation data
func (td TUFDelegation) ToNewRole(scope string) (*data.Role, error) {
name := scope
if td.NewName != "" {
name = td.NewName
}
return data.NewRole(name, td.NewThreshold, td.AddKeys.IDs(), td.AddPaths)
}
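
The sketch below is not part of the original sources; it is a minimal illustration of how a `TUFChange` might be constructed and what its JSON form looks like, since `FileChangelist` persists each change as a JSON file. The target path and metadata bytes are placeholders, not values used by notary itself.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/notary/client/changelist"
)

func main() {
	// The content would normally be marshaled data.FileMeta; a placeholder is used here.
	c := changelist.NewTUFChange(
		changelist.ActionCreate,      // action
		"targets",                    // role (scope)
		changelist.TypeTargetsTarget, // change type
		"current/app.tar.gz",         // target path (placeholder)
		[]byte(`{"length":1024}`),    // serialized target metadata (placeholder)
	)

	raw, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	// Prints something like:
	// {"action":"create","role":"targets","type":"target","path":"current/app.tar.gz","data":"eyJsZW5ndGgiOjEwMjR9"}
	fmt.Println(string(raw))
}
```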

View File

@ -1,29 +0,0 @@
package changelist
import (
"testing"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/stretchr/testify/require"
)
func TestTUFDelegation(t *testing.T) {
cs := signed.NewEd25519()
key, err := cs.Create("targets/new_name", "gun", data.ED25519Key)
require.NoError(t, err)
kl := data.KeyList{key}
td := TUFDelegation{
NewName: "targets/new_name",
NewThreshold: 1,
AddKeys: kl,
AddPaths: []string{""},
}
r, err := td.ToNewRole("targets/old_name")
require.NoError(t, err)
require.Equal(t, td.NewName, r.Name)
require.Len(t, r.KeyIDs, 1)
require.Equal(t, kl[0].ID(), r.KeyIDs[0])
require.Len(t, r.Paths, 1)
}

View File

@ -1,59 +0,0 @@
package changelist
// memChangeList implements a simple in memory change list.
type memChangelist struct {
changes []Change
}
// NewMemChangelist instantiates a new in-memory changelist
func NewMemChangelist() Changelist {
return &memChangelist{}
}
// List returns a list of Changes
func (cl memChangelist) List() []Change {
return cl.changes
}
// Add adds a change to the in-memory change list
func (cl *memChangelist) Add(c Change) error {
cl.changes = append(cl.changes, c)
return nil
}
// Clear empties the in-memory change list.
func (cl *memChangelist) Clear(archive string) error {
// appending to a nil list initializes it.
cl.changes = nil
return nil
}
// Close is a no-op in this in-memory change-list
func (cl *memChangelist) Close() error {
return nil
}
func (cl *memChangelist) NewIterator() (ChangeIterator, error) {
return &MemChangeListIterator{index: 0, collection: cl.changes}, nil
}
// MemChangeListIterator is a concrete instance of ChangeIterator
type MemChangeListIterator struct {
index int
collection []Change // Same type as memChangeList.changes
}
// Next returns the next Change
func (m *MemChangeListIterator) Next() (item Change, err error) {
if m.index >= len(m.collection) {
return nil, IteratorBoundsError(m.index)
}
item = m.collection[m.index]
m.index++
return item, err
}
// HasNext indicates whether there are more items left to iterate over
func (m *MemChangeListIterator) HasNext() bool {
return m.index < len(m.collection)
}
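
As a rough usage sketch (not part of the original file), the in-memory changelist can be exercised as follows; the staged changes below are placeholder values.

```go
package main

import (
	"fmt"

	"github.com/docker/notary/client/changelist"
)

func main() {
	cl := changelist.NewMemChangelist()
	defer cl.Close()

	// Stage a couple of placeholder changes.
	if err := cl.Add(changelist.NewTUFChange(
		changelist.ActionCreate, "targets", changelist.TypeTargetsTarget, "foo", []byte(`{}`))); err != nil {
		panic(err)
	}
	if err := cl.Add(changelist.NewTUFChange(
		changelist.ActionDelete, "targets", changelist.TypeTargetsTarget, "bar", nil)); err != nil {
		panic(err)
	}

	// Walk the staged changes in order.
	it, err := cl.NewIterator()
	if err != nil {
		panic(err)
	}
	for it.HasNext() {
		c, err := it.Next()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s %s\n", c.Action(), c.Scope(), c.Path())
	}

	// Discard everything that was staged.
	if err := cl.Clear(""); err != nil {
		panic(err)
	}
}
```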

View File

@ -1,66 +0,0 @@
package changelist
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestMemChangelist(t *testing.T) {
cl := memChangelist{}
c := NewTUFChange(ActionCreate, "targets", "target", "test/targ", []byte{1})
err := cl.Add(c)
require.Nil(t, err, "Non-nil error while adding change")
cs := cl.List()
require.Equal(t, 1, len(cs), "List should have returned exactly one item")
require.Equal(t, c.Action(), cs[0].Action(), "Action mismatch")
require.Equal(t, c.Scope(), cs[0].Scope(), "Scope mismatch")
require.Equal(t, c.Type(), cs[0].Type(), "Type mismatch")
require.Equal(t, c.Path(), cs[0].Path(), "Path mismatch")
require.Equal(t, c.Content(), cs[0].Content(), "Content mismatch")
err = cl.Clear("")
require.Nil(t, err, "Non-nil error while clearing")
cs = cl.List()
require.Equal(t, 0, len(cs), "List should be empty")
}
func TestMemChangeIterator(t *testing.T) {
cl := memChangelist{}
it, err := cl.NewIterator()
require.Nil(t, err, "Non-nil error from NewIterator")
require.False(t, it.HasNext(), "HasNext returns false for empty ChangeList")
c1 := NewTUFChange(ActionCreate, "t1", "target1", "test/targ1", []byte{1})
cl.Add(c1)
c2 := NewTUFChange(ActionUpdate, "t2", "target2", "test/targ2", []byte{2})
cl.Add(c2)
c3 := NewTUFChange(ActionUpdate, "t3", "target3", "test/targ3", []byte{3})
cl.Add(c3)
cs := cl.List()
index := 0
it, _ = cl.NewIterator()
for it.HasNext() {
c, err := it.Next()
require.Nil(t, err, "Next err should be false")
require.Equal(t, c.Action(), cs[index].Action(), "Action mismatch")
require.Equal(t, c.Scope(), cs[index].Scope(), "Scope mismatch")
require.Equal(t, c.Type(), cs[index].Type(), "Type mismatch")
require.Equal(t, c.Path(), cs[index].Path(), "Path mismatch")
require.Equal(t, c.Content(), cs[index].Content(), "Content mismatch")
index++
}
require.Equal(t, index, len(cs), "Iterator produced all data in ChangeList")
_, err = it.Next()
require.NotNil(t, err, "Next errors gracefully when exhausted")
var iterError IteratorBoundsError
require.IsType(t, iterError, err, "IteratorBoundsError type")
}

View File

@ -1,176 +0,0 @@
package changelist
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"sort"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/uuid"
)
// FileChangelist stores all the changes as files
type FileChangelist struct {
dir string
}
// NewFileChangelist is a convenience method for returning FileChangeLists
func NewFileChangelist(dir string) (*FileChangelist, error) {
logrus.Debug("Making dir path: ", dir)
err := os.MkdirAll(dir, 0700)
if err != nil {
return nil, err
}
return &FileChangelist{dir: dir}, nil
}
// getFileNames reads directory, filtering out child directories
func getFileNames(dirName string) ([]os.FileInfo, error) {
var dirListing, fileInfos []os.FileInfo
dir, err := os.Open(dirName)
if err != nil {
return fileInfos, err
}
defer dir.Close()
dirListing, err = dir.Readdir(0)
if err != nil {
return fileInfos, err
}
for _, f := range dirListing {
if f.IsDir() {
continue
}
fileInfos = append(fileInfos, f)
}
return fileInfos, nil
}
// Read a JSON formatted file from disk; convert to TUFChange struct
func unmarshalFile(dirname string, f os.FileInfo) (*TUFChange, error) {
c := &TUFChange{}
raw, err := ioutil.ReadFile(path.Join(dirname, f.Name()))
if err != nil {
return c, err
}
err = json.Unmarshal(raw, c)
if err != nil {
return c, err
}
return c, nil
}
// List returns a list of sorted changes
func (cl FileChangelist) List() []Change {
var changes []Change
fileInfos, err := getFileNames(cl.dir)
if err != nil {
return changes
}
sort.Sort(fileChanges(fileInfos))
for _, f := range fileInfos {
c, err := unmarshalFile(cl.dir, f)
if err != nil {
logrus.Warn(err.Error())
continue
}
changes = append(changes, c)
}
return changes
}
// Add adds a change to the file change list
func (cl FileChangelist) Add(c Change) error {
cJSON, err := json.Marshal(c)
if err != nil {
return err
}
filename := fmt.Sprintf("%020d_%s.change", time.Now().UnixNano(), uuid.Generate())
return ioutil.WriteFile(path.Join(cl.dir, filename), cJSON, 0644)
}
// Clear clears the change list
func (cl FileChangelist) Clear(archive string) error {
dir, err := os.Open(cl.dir)
if err != nil {
return err
}
defer dir.Close()
files, err := dir.Readdir(0)
if err != nil {
return err
}
for _, f := range files {
os.Remove(path.Join(cl.dir, f.Name()))
}
return nil
}
// Close is a no-op
func (cl FileChangelist) Close() error {
// Nothing to do here
return nil
}
// NewIterator creates an iterator from FileChangelist
func (cl FileChangelist) NewIterator() (ChangeIterator, error) {
fileInfos, err := getFileNames(cl.dir)
if err != nil {
return &FileChangeListIterator{}, err
}
sort.Sort(fileChanges(fileInfos))
return &FileChangeListIterator{dirname: cl.dir, collection: fileInfos}, nil
}
// IteratorBoundsError is an Error type used by Next()
type IteratorBoundsError int
// Error implements the Error interface
func (e IteratorBoundsError) Error() string {
return fmt.Sprintf("Iterator index (%d) out of bounds", e)
}
// FileChangeListIterator is a concrete instance of ChangeIterator
type FileChangeListIterator struct {
index int
dirname string
collection []os.FileInfo
}
// Next returns the next Change in the FileChangeList
func (m *FileChangeListIterator) Next() (item Change, err error) {
if m.index >= len(m.collection) {
return nil, IteratorBoundsError(m.index)
}
f := m.collection[m.index]
m.index++
item, err = unmarshalFile(m.dirname, f)
return
}
// HasNext indicates whether there are more items left to iterate over
func (m *FileChangeListIterator) HasNext() bool {
return m.index < len(m.collection)
}
type fileChanges []os.FileInfo
// Len returns the length of a file change list
func (cs fileChanges) Len() int {
return len(cs)
}
// Less compares the names of two different file changes
func (cs fileChanges) Less(i, j int) bool {
return cs[i].Name() < cs[j].Name()
}
// Swap swaps the position of two file changes
func (cs fileChanges) Swap(i, j int) {
tmp := cs[i]
cs[i] = cs[j]
cs[j] = tmp
}

View File

@ -1,169 +0,0 @@
package changelist
import (
"io/ioutil"
"os"
"path"
"testing"
"github.com/stretchr/testify/require"
)
func TestAdd(t *testing.T) {
tmpDir, err := ioutil.TempDir("/tmp", "test")
if err != nil {
t.Fatal(err.Error())
}
defer os.RemoveAll(tmpDir)
cl, err := NewFileChangelist(tmpDir)
require.Nil(t, err, "Error initializing fileChangelist")
c := NewTUFChange(ActionCreate, "targets", "target", "test/targ", []byte{1})
err = cl.Add(c)
require.Nil(t, err, "Non-nil error while adding change")
cs := cl.List()
require.Equal(t, 1, len(cs), "List should have returned exactly one item")
require.Equal(t, c.Action(), cs[0].Action(), "Action mismatch")
require.Equal(t, c.Scope(), cs[0].Scope(), "Scope mismatch")
require.Equal(t, c.Type(), cs[0].Type(), "Type mismatch")
require.Equal(t, c.Path(), cs[0].Path(), "Path mismatch")
require.Equal(t, c.Content(), cs[0].Content(), "Content mismatch")
err = cl.Clear("")
require.Nil(t, err, "Non-nil error while clearing")
cs = cl.List()
require.Equal(t, 0, len(cs), "List should be empty")
err = os.Remove(tmpDir) // will error if anything left in dir
require.Nil(t, err, "Clear should have left the tmpDir empty")
}
func TestErrorConditions(t *testing.T) {
tmpDir, err := ioutil.TempDir("/tmp", "test")
if err != nil {
t.Fatal(err.Error())
}
defer os.RemoveAll(tmpDir)
cl, err := NewFileChangelist(tmpDir)
// Attempt to unmarshal a bad JSON file. Note: causes a WARN on the console.
ioutil.WriteFile(path.Join(tmpDir, "broken_file.change"), []byte{5}, 0644)
noItems := cl.List()
require.Len(t, noItems, 0, "List returns zero items on bad JSON file error")
os.RemoveAll(tmpDir)
err = cl.Clear("")
require.Error(t, err, "Clear on missing change list should return err")
noItems = cl.List()
require.Len(t, noItems, 0, "List returns zero items on directory read error")
}
func TestListOrder(t *testing.T) {
tmpDir, err := ioutil.TempDir("/tmp", "test")
if err != nil {
t.Fatal(err.Error())
}
defer os.RemoveAll(tmpDir)
cl, err := NewFileChangelist(tmpDir)
require.Nil(t, err, "Error initializing fileChangelist")
c1 := NewTUFChange(ActionCreate, "targets", "target", "test/targ1", []byte{1})
err = cl.Add(c1)
require.Nil(t, err, "Non-nil error while adding change")
c2 := NewTUFChange(ActionCreate, "targets", "target", "test/targ2", []byte{1})
err = cl.Add(c2)
require.Nil(t, err, "Non-nil error while adding change")
cs := cl.List()
require.Equal(t, 2, len(cs), "List should have returned exactly two items")
require.Equal(t, c1.Action(), cs[0].Action(), "Action mismatch")
require.Equal(t, c1.Scope(), cs[0].Scope(), "Scope mismatch")
require.Equal(t, c1.Type(), cs[0].Type(), "Type mismatch")
require.Equal(t, c1.Path(), cs[0].Path(), "Path mismatch")
require.Equal(t, c1.Content(), cs[0].Content(), "Content mismatch")
require.Equal(t, c2.Action(), cs[1].Action(), "Action 2 mismatch")
require.Equal(t, c2.Scope(), cs[1].Scope(), "Scope 2 mismatch")
require.Equal(t, c2.Type(), cs[1].Type(), "Type 2 mismatch")
require.Equal(t, c2.Path(), cs[1].Path(), "Path 2 mismatch")
require.Equal(t, c2.Content(), cs[1].Content(), "Content 2 mismatch")
}
func TestFileChangeIterator(t *testing.T) {
tmpDir, err := ioutil.TempDir("/tmp", "test")
if err != nil {
t.Fatal(err.Error())
}
defer os.RemoveAll(tmpDir)
cl, err := NewFileChangelist(tmpDir)
require.Nil(t, err, "Error initializing fileChangelist")
it, err := cl.NewIterator()
require.Nil(t, err, "Error initializing iterator")
require.False(t, it.HasNext(), "HasNext returns false for empty ChangeList")
c1 := NewTUFChange(ActionCreate, "t1", "target1", "test/targ1", []byte{1})
cl.Add(c1)
c2 := NewTUFChange(ActionUpdate, "t2", "target2", "test/targ2", []byte{2})
cl.Add(c2)
c3 := NewTUFChange(ActionUpdate, "t3", "target3", "test/targ3", []byte{3})
cl.Add(c3)
cs := cl.List()
index := 0
it, err = cl.NewIterator()
require.Nil(t, err, "Error initializing iterator")
for it.HasNext() {
c, err := it.Next()
require.Nil(t, err, "Next err should be false")
require.Equal(t, c.Action(), cs[index].Action(), "Action mismatch")
require.Equal(t, c.Scope(), cs[index].Scope(), "Scope mismatch")
require.Equal(t, c.Type(), cs[index].Type(), "Type mismatch")
require.Equal(t, c.Path(), cs[index].Path(), "Path mismatch")
require.Equal(t, c.Content(), cs[index].Content(), "Content mismatch")
index++
}
require.Equal(t, index, len(cs), "Iterator produced all data in ChangeList")
// negative test case: index out of range
_, err = it.Next()
require.Error(t, err, "Next errors gracefully when exhausted")
var iterError IteratorBoundsError
require.IsType(t, iterError, err, "IteratorBoundsError type")
require.Regexp(t, "out of bounds", err, "Message for iterator bounds error")
// negative test case: changelist files missing
it, err = cl.NewIterator()
require.Nil(t, err, "Error initializing iterator")
for it.HasNext() {
cl.Clear("")
_, err := it.Next()
require.Error(t, err, "Next() error for missing changelist files")
}
// negative test case: bad JSON file to unmarshal via Next()
cl.Clear("")
ioutil.WriteFile(path.Join(tmpDir, "broken_file.change"), []byte{5}, 0644)
it, err = cl.NewIterator()
require.Nil(t, err, "Error initializing iterator")
for it.HasNext() {
_, err := it.Next()
require.Error(t, err, "Next should indicate error for bad JSON file")
}
// negative test case: changelist directory does not exist
os.RemoveAll(tmpDir)
it, err = cl.NewIterator()
require.Error(t, err, "Initializing iterator without underlying file store")
}

View File

@ -1,70 +0,0 @@
package changelist
// Changelist is the interface for all TUF change lists
type Changelist interface {
// List returns the ordered list of changes
// currently stored
List() []Change
// Add change appends the provided change to
// the list of changes
Add(Change) error
// Clear empties the current change list.
// Archive may be provided as a directory path
// to save a copy of the changelist in that location
Clear(archive string) error
// Close synchronizes any pending writes to the underlying
// storage and closes the file/connection
Close() error
// NewIterator returns an iterator for walking through the list
// of changes currently stored
NewIterator() (ChangeIterator, error)
}
const (
// ActionCreate represents a Create action
ActionCreate = "create"
// ActionUpdate represents an Update action
ActionUpdate = "update"
// ActionDelete represents a Delete action
ActionDelete = "delete"
)
// Change is the interface for a TUF Change
type Change interface {
// "create","update", or "delete"
Action() string
// Where the change should be made.
// For TUF this will be the role
Scope() string
// The content type being affected.
// For TUF this will be "target", or "delegation".
// If the type is "delegation", the Scope will be
// used to determine if a root role is being updated
// or a target delegation.
Type() string
// Path indicates the entry within a role to be affected by the
// change. For targets, this is simply the target's path,
// for delegations it's the delegated role name.
Path() string
// Serialized content that the interpreter of a changelist
// can use to apply the change.
// For TUF this will be the serialized JSON that needs
// to be inserted or merged. In the case of a "delete"
// action, it will be nil.
Content() []byte
}
// ChangeIterator is the interface for iterating across collections of
// TUF Change items
type ChangeIterator interface {
Next() (Change, error)
HasNext() bool
}
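
Because consumers only see the `Changelist`, `Change`, and `ChangeIterator` interfaces, code that processes staged changes can remain agnostic of whether the backing store is in-memory or file-based. The sketch below is only an illustration of programming against these interfaces; it is not the logic notary itself uses to apply changes.

```go
package example

import (
	"fmt"

	"github.com/docker/notary/client/changelist"
)

// summarize walks any Changelist implementation and counts the staged
// changes by action, using only the interfaces defined above.
func summarize(cl changelist.Changelist) (map[string]int, error) {
	counts := make(map[string]int)
	it, err := cl.NewIterator()
	if err != nil {
		return nil, err
	}
	for it.HasNext() {
		c, err := it.Next()
		if err != nil {
			return nil, err
		}
		switch c.Action() {
		case changelist.ActionCreate, changelist.ActionUpdate, changelist.ActionDelete:
			counts[c.Action()]++
		default:
			return nil, fmt.Errorf("unknown action %q for %s", c.Action(), c.Path())
		}
	}
	return counts, nil
}
```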

View File

@ -1,963 +0,0 @@
package client
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/notary"
"github.com/docker/notary/client/changelist"
"github.com/docker/notary/cryptoservice"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/trustpinning"
"github.com/docker/notary/tuf"
tufclient "github.com/docker/notary/tuf/client"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/docker/notary/tuf/store"
"github.com/docker/notary/tuf/utils"
)
func init() {
data.SetDefaultExpiryTimes(notary.NotaryDefaultExpiries)
}
// ErrRepoNotInitialized is returned when trying to publish an uninitialized
// notary repository
type ErrRepoNotInitialized struct{}
func (err ErrRepoNotInitialized) Error() string {
return "repository has not been initialized"
}
// ErrInvalidRemoteRole is returned when the server is requested to manage
// a key type that is not permitted
type ErrInvalidRemoteRole struct {
Role string
}
func (err ErrInvalidRemoteRole) Error() string {
return fmt.Sprintf(
"notary does not permit the server managing the %s key", err.Role)
}
// ErrInvalidLocalRole is returned when the client wants to manage
// a key type that is not permitted
type ErrInvalidLocalRole struct {
Role string
}
func (err ErrInvalidLocalRole) Error() string {
return fmt.Sprintf(
"notary does not permit the client managing the %s key", err.Role)
}
// ErrRepositoryNotExist is returned when an action is taken on a remote
// repository that doesn't exist
type ErrRepositoryNotExist struct {
remote string
gun string
}
func (err ErrRepositoryNotExist) Error() string {
return fmt.Sprintf("%s does not have trust data for %s", err.remote, err.gun)
}
const (
tufDir = "tuf"
)
// NotaryRepository stores all the information needed to operate on a notary
// repository.
type NotaryRepository struct {
baseDir string
gun string
baseURL string
tufRepoPath string
fileStore store.MetadataStore
CryptoService signed.CryptoService
tufRepo *tuf.Repo
roundTrip http.RoundTripper
trustPinning trustpinning.TrustPinConfig
}
// repositoryFromKeystores is a helper function for NewNotaryRepository that
// takes some basic NotaryRepository parameters as well as keystores (in order
// of usage preference), and returns a NotaryRepository.
func repositoryFromKeystores(baseDir, gun, baseURL string, rt http.RoundTripper,
keyStores []trustmanager.KeyStore, trustPin trustpinning.TrustPinConfig) (*NotaryRepository, error) {
cryptoService := cryptoservice.NewCryptoService(keyStores...)
nRepo := &NotaryRepository{
gun: gun,
baseDir: baseDir,
baseURL: baseURL,
tufRepoPath: filepath.Join(baseDir, tufDir, filepath.FromSlash(gun)),
CryptoService: cryptoService,
roundTrip: rt,
trustPinning: trustPin,
}
fileStore, err := store.NewFilesystemStore(
nRepo.tufRepoPath,
"metadata",
"json",
)
if err != nil {
return nil, err
}
nRepo.fileStore = fileStore
return nRepo, nil
}
// Target represents a simplified version of the data TUF operates on, so external
// applications don't have to depend on TUF data types.
type Target struct {
Name string // the name of the target
Hashes data.Hashes // the hashes of the target
Length int64 // the size in bytes of the target
}
// TargetWithRole represents a Target that exists in a particular role - this is
// produced by ListTargets and GetTargetByName
type TargetWithRole struct {
Target
Role string
}
// NewTarget is a helper method that returns a Target
func NewTarget(targetName string, targetPath string) (*Target, error) {
b, err := ioutil.ReadFile(targetPath)
if err != nil {
return nil, err
}
meta, err := data.NewFileMeta(bytes.NewBuffer(b), data.NotaryDefaultHashes...)
if err != nil {
return nil, err
}
return &Target{Name: targetName, Hashes: meta.Hashes, Length: meta.Length}, nil
}
func rootCertKey(gun string, privKey data.PrivateKey) (data.PublicKey, error) {
// Hard-coded policy: the generated certificate expires in 10 years.
startTime := time.Now()
cert, err := cryptoservice.GenerateCertificate(
privKey, gun, startTime, startTime.Add(notary.Year*10))
if err != nil {
return nil, err
}
x509PublicKey := trustmanager.CertToKey(cert)
if x509PublicKey == nil {
return nil, fmt.Errorf(
"cannot use regenerated certificate: format %s", cert.PublicKeyAlgorithm)
}
return x509PublicKey, nil
}
// Initialize creates a new repository by using rootKey as the root Key for the
// TUF repository. The server must be reachable (and is asked to generate a
// timestamp key and possibly other serverManagedRoles), but the created repository
// result is only stored on local disk, not published to the server. To do that,
// use r.Publish() eventually.
func (r *NotaryRepository) Initialize(rootKeyID string, serverManagedRoles ...string) error {
privKey, _, err := r.CryptoService.GetPrivateKey(rootKeyID)
if err != nil {
return err
}
// currently we only support server managing timestamps and snapshots, and
// nothing else - timestamps are always managed by the server, and implicit
// (do not have to be passed in as part of `serverManagedRoles`, so that
// the API of Initialize doesn't change).
var serverManagesSnapshot bool
locallyManagedKeys := []string{
data.CanonicalTargetsRole,
data.CanonicalSnapshotRole,
// root is also locally managed, but that should have been created
// already
}
remotelyManagedKeys := []string{data.CanonicalTimestampRole}
for _, role := range serverManagedRoles {
switch role {
case data.CanonicalTimestampRole:
continue // timestamp is already in the right place
case data.CanonicalSnapshotRole:
// because we put Snapshot last
locallyManagedKeys = []string{data.CanonicalTargetsRole}
remotelyManagedKeys = append(
remotelyManagedKeys, data.CanonicalSnapshotRole)
serverManagesSnapshot = true
default:
return ErrInvalidRemoteRole{Role: role}
}
}
rootKey, err := rootCertKey(r.gun, privKey)
if err != nil {
return err
}
var (
rootRole = data.NewBaseRole(
data.CanonicalRootRole,
notary.MinThreshold,
rootKey,
)
timestampRole data.BaseRole
snapshotRole data.BaseRole
targetsRole data.BaseRole
)
// we want to create all the local keys first so we don't have to
// make unnecessary network calls
for _, role := range locallyManagedKeys {
// This is currently hardcoding the keys to ECDSA.
key, err := r.CryptoService.Create(role, r.gun, data.ECDSAKey)
if err != nil {
return err
}
switch role {
case data.CanonicalSnapshotRole:
snapshotRole = data.NewBaseRole(
role,
notary.MinThreshold,
key,
)
case data.CanonicalTargetsRole:
targetsRole = data.NewBaseRole(
role,
notary.MinThreshold,
key,
)
}
}
for _, role := range remotelyManagedKeys {
// This key is generated by the remote server.
key, err := getRemoteKey(r.baseURL, r.gun, role, r.roundTrip)
if err != nil {
return err
}
logrus.Debugf("got remote %s %s key with keyID: %s",
role, key.Algorithm(), key.ID())
switch role {
case data.CanonicalSnapshotRole:
snapshotRole = data.NewBaseRole(
role,
notary.MinThreshold,
key,
)
case data.CanonicalTimestampRole:
timestampRole = data.NewBaseRole(
role,
notary.MinThreshold,
key,
)
}
}
r.tufRepo = tuf.NewRepo(r.CryptoService)
err = r.tufRepo.InitRoot(
rootRole,
timestampRole,
snapshotRole,
targetsRole,
false,
)
if err != nil {
logrus.Debug("Error on InitRoot: ", err.Error())
return err
}
_, err = r.tufRepo.InitTargets(data.CanonicalTargetsRole)
if err != nil {
logrus.Debug("Error on InitTargets: ", err.Error())
return err
}
err = r.tufRepo.InitSnapshot()
if err != nil {
logrus.Debug("Error on InitSnapshot: ", err.Error())
return err
}
return r.saveMetadata(serverManagesSnapshot)
}
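
Putting the pieces above together, a caller might initialize a brand-new trusted collection roughly as in the sketch below. The base directory, GUN, server URL, and passphrase are placeholder values; the server is assumed to be reachable and is asked to manage the snapshot key in addition to the always server-managed timestamp key.

```go
package main

import (
	"net/http"

	"github.com/docker/notary/client"
	"github.com/docker/notary/passphrase"
	"github.com/docker/notary/trustpinning"
	"github.com/docker/notary/tuf/data"
)

func main() {
	gun := "example.com/myrepo" // placeholder GUN

	// Placeholder trust directory, server URL, and passphrase retriever.
	repo, err := client.NewNotaryRepository(
		"/tmp/notary-example", gun, "https://notary-server:4443",
		http.DefaultTransport, passphrase.ConstantRetriever("example-passphrase"),
		trustpinning.TrustPinConfig{})
	if err != nil {
		panic(err)
	}

	// Create a root key locally, then initialize the repo, asking the server
	// to manage the snapshot key (the timestamp key is always server-managed).
	rootPub, err := repo.CryptoService.Create(data.CanonicalRootRole, gun, data.ECDSAKey)
	if err != nil {
		panic(err)
	}
	if err := repo.Initialize(rootPub.ID(), data.CanonicalSnapshotRole); err != nil {
		panic(err)
	}
	// The new metadata now lives only on local disk; call repo.Publish() to push it.
}
```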
// adds a TUF Change template to the given roles
func addChange(cl *changelist.FileChangelist, c changelist.Change, roles ...string) error {
if len(roles) == 0 {
roles = []string{data.CanonicalTargetsRole}
}
var changes []changelist.Change
for _, role := range roles {
// Ensure we can only add targets to the CanonicalTargetsRole,
// or a Delegation role (which is <CanonicalTargetsRole>/something else)
if role != data.CanonicalTargetsRole && !data.IsDelegation(role) {
return data.ErrInvalidRole{
Role: role,
Reason: "cannot add targets to this role",
}
}
changes = append(changes, changelist.NewTUFChange(
c.Action(),
role,
c.Type(),
c.Path(),
c.Content(),
))
}
for _, c := range changes {
if err := cl.Add(c); err != nil {
return err
}
}
return nil
}
// AddTarget creates new changelist entries to add a target to the given roles
// in the repository when the changelist gets applied at publish time.
// If roles are unspecified, the default role is "targets"
func (r *NotaryRepository) AddTarget(target *Target, roles ...string) error {
if len(target.Hashes) == 0 {
return fmt.Errorf("no hashes specified for target \"%s\"", target.Name)
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf("Adding target \"%s\" with sha256 \"%x\" and size %d bytes.\n", target.Name, target.Hashes["sha256"], target.Length)
meta := data.FileMeta{Length: target.Length, Hashes: target.Hashes}
metaJSON, err := json.Marshal(meta)
if err != nil {
return err
}
template := changelist.NewTUFChange(
changelist.ActionCreate, "", changelist.TypeTargetsTarget,
target.Name, metaJSON)
return addChange(cl, template, roles...)
}
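
For example, staging and publishing a new target for the base targets role might look like the sketch below; the repository is assumed to have been set up as in the initialization sketch earlier, and the target name and file path are caller-supplied placeholders.

```go
package example

import (
	"github.com/docker/notary/client"
	"github.com/docker/notary/tuf/data"
)

// stageAndPublish hashes a local file, stages it as a target in the base
// "targets" role, and pushes the resulting changes to the notary server.
func stageAndPublish(repo *client.NotaryRepository, name, path string) error {
	target, err := client.NewTarget(name, path) // reads and hashes the file
	if err != nil {
		return err
	}
	// Stage the addition in the changelist; nothing is pushed yet.
	if err := repo.AddTarget(target, data.CanonicalTargetsRole); err != nil {
		return err
	}
	// Sign and push all staged changes to the server.
	return repo.Publish()
}
```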
// RemoveTarget creates new changelist entries to remove a target from the given
// roles in the repository when the changelist gets applied at publish time.
// If roles are unspecified, the default role is "targets".
func (r *NotaryRepository) RemoveTarget(targetName string, roles ...string) error {
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
logrus.Debugf("Removing target \"%s\"", targetName)
template := changelist.NewTUFChange(changelist.ActionDelete, "",
changelist.TypeTargetsTarget, targetName, nil)
return addChange(cl, template, roles...)
}
// ListTargets lists all targets for the current repository. The list of
// roles should be passed in order from highest to lowest priority.
// IMPORTANT: if you pass a set of roles such as [ "targets/a", "targets/x",
// "targets/a/b" ], even though "targets/a/b" is part of the "targets/a" subtree,
// its entries will be strictly shadowed by those in other parts of the "targets/a"
// subtree and also the "targets/x" subtree, as we will defer parsing it until
// we explicitly reach it in our iteration of the provided list of roles.
func (r *NotaryRepository) ListTargets(roles ...string) ([]*TargetWithRole, error) {
if err := r.Update(false); err != nil {
return nil, err
}
if len(roles) == 0 {
roles = []string{data.CanonicalTargetsRole}
}
targets := make(map[string]*TargetWithRole)
for _, role := range roles {
// Define an array of roles to skip for this walk (see IMPORTANT comment above)
skipRoles := utils.StrSliceRemove(roles, role)
// Define a visitor function to populate the targets map in priority order
listVisitorFunc := func(tgt *data.SignedTargets, validRole data.DelegationRole) interface{} {
// We found targets so we should try to add them to our targets map
for targetName, targetMeta := range tgt.Signed.Targets {
// Follow the priority by not overriding previously set targets
// and check that this path is valid with this role
if _, ok := targets[targetName]; ok || !validRole.CheckPaths(targetName) {
continue
}
targets[targetName] =
&TargetWithRole{Target: Target{Name: targetName, Hashes: targetMeta.Hashes, Length: targetMeta.Length}, Role: validRole.Name}
}
return nil
}
r.tufRepo.WalkTargets("", role, listVisitorFunc, skipRoles...)
}
var targetList []*TargetWithRole
for _, v := range targets {
targetList = append(targetList, v)
}
return targetList, nil
}
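
As a sketch of the priority behaviour described above (with a placeholder delegation name, and `repo` assumed to be a configured `*client.NotaryRepository`), listing with a delegation ahead of the base role means the delegation's entries win for any overlapping target names:

```go
package example

import (
	"fmt"

	"github.com/docker/notary/client"
	"github.com/docker/notary/tuf/data"
)

// printTargets lists targets with the "targets/releases" delegation taking
// priority over the base targets role, then prints each target and the role
// it was resolved from. "targets/releases" is just an example delegation name.
func printTargets(repo *client.NotaryRepository) error {
	targets, err := repo.ListTargets("targets/releases", data.CanonicalTargetsRole)
	if err != nil {
		return err
	}
	for _, t := range targets {
		fmt.Printf("%s (%d bytes) from role %s\n", t.Name, t.Length, t.Role)
	}
	return nil
}
```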
// GetTargetByName returns a target by the given name. If no roles are passed
// it uses the targets role and does a search of the entire delegation
// graph, finding the first entry in a breadth first search of the delegations.
// If roles are passed, they should be passed in descending priority and
// the target entry found in the subtree of the highest priority role
// will be returned.
// See the IMPORTANT section on ListTargets above. Those roles also apply here.
func (r *NotaryRepository) GetTargetByName(name string, roles ...string) (*TargetWithRole, error) {
if err := r.Update(false); err != nil {
return nil, err
}
if len(roles) == 0 {
roles = append(roles, data.CanonicalTargetsRole)
}
var resultMeta data.FileMeta
var resultRoleName string
var foundTarget bool
for _, role := range roles {
// Define an array of roles to skip for this walk (see IMPORTANT comment above)
skipRoles := utils.StrSliceRemove(roles, role)
// Define a visitor function to find the specified target
getTargetVisitorFunc := func(tgt *data.SignedTargets, validRole data.DelegationRole) interface{} {
if tgt == nil {
return nil
}
// We found the target and validated path compatibility in our walk,
// so we should stop our walk and set the resultMeta and resultRoleName variables
if resultMeta, foundTarget = tgt.Signed.Targets[name]; foundTarget {
resultRoleName = validRole.Name
return tuf.StopWalk{}
}
return nil
}
// Check that we didn't error, and that we assigned to our target
if err := r.tufRepo.WalkTargets(name, role, getTargetVisitorFunc, skipRoles...); err == nil && foundTarget {
return &TargetWithRole{Target: Target{Name: name, Hashes: resultMeta.Hashes, Length: resultMeta.Length}, Role: resultRoleName}, nil
}
}
return nil, fmt.Errorf("No trust data for %s", name)
}
// GetAllTargetMetadataByName searches the entire delegation role tree to find the specified target by name for all
// roles, and returns a map of role strings to Target structs for each time it finds the specified target.
func (r *NotaryRepository) GetAllTargetMetadataByName(name string) (map[string]Target, error) {
if err := r.Update(false); err != nil {
return nil, err
}
targetInfoMap := make(map[string]Target)
// Define a visitor function to find the specified target
getAllTargetInfoByNameVisitorFunc := func(tgt *data.SignedTargets, validRole data.DelegationRole) interface{} {
if tgt == nil {
return nil
}
// We found the target and validated path compatibility in our walk,
// so add it to our list
if resultMeta, foundTarget := tgt.Signed.Targets[name]; foundTarget {
targetInfoMap[validRole.Name] = Target{Name: name, Hashes: resultMeta.Hashes, Length: resultMeta.Length}
}
// continue walking to all child roles
return nil
}
// Check that we didn't error, and that we found the target at least once
if err := r.tufRepo.WalkTargets(name, "", getAllTargetInfoByNameVisitorFunc); err != nil {
return nil, err
}
if len(targetInfoMap) == 0 {
return nil, fmt.Errorf("No trust data for %s", name)
}
return targetInfoMap, nil
}
// GetChangelist returns the list of the repository's unpublished changes
func (r *NotaryRepository) GetChangelist() (changelist.Changelist, error) {
changelistDir := filepath.Join(r.tufRepoPath, "changelist")
cl, err := changelist.NewFileChangelist(changelistDir)
if err != nil {
logrus.Debug("Error initializing changelist")
return nil, err
}
return cl, nil
}
// RoleWithSignatures is a Role with its associated signatures
type RoleWithSignatures struct {
Signatures []data.Signature
data.Role
}
// ListRoles returns a list of RoleWithSignatures objects for this repo
// This represents the latest metadata for each role in this repo
func (r *NotaryRepository) ListRoles() ([]RoleWithSignatures, error) {
// Update to latest repo state
if err := r.Update(false); err != nil {
return nil, err
}
// Get all role info from our updated keysDB, can be empty
roles := r.tufRepo.GetAllLoadedRoles()
var roleWithSigs []RoleWithSignatures
// Populate RoleWithSignatures with Role from keysDB and signatures from TUF metadata
for _, role := range roles {
roleWithSig := RoleWithSignatures{Role: *role, Signatures: nil}
switch role.Name {
case data.CanonicalRootRole:
roleWithSig.Signatures = r.tufRepo.Root.Signatures
case data.CanonicalTargetsRole:
roleWithSig.Signatures = r.tufRepo.Targets[data.CanonicalTargetsRole].Signatures
case data.CanonicalSnapshotRole:
roleWithSig.Signatures = r.tufRepo.Snapshot.Signatures
case data.CanonicalTimestampRole:
roleWithSig.Signatures = r.tufRepo.Timestamp.Signatures
default:
if !data.IsDelegation(role.Name) {
continue
}
if _, ok := r.tufRepo.Targets[role.Name]; ok {
// We'll only find a signature if we've published any targets with this delegation
roleWithSig.Signatures = r.tufRepo.Targets[role.Name].Signatures
}
}
roleWithSigs = append(roleWithSigs, roleWithSig)
}
return roleWithSigs, nil
}
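
A small sketch of inspecting the loaded roles and how many signatures each currently carries; `repo` is again assumed to be a configured `*client.NotaryRepository`.

```go
package example

import (
	"fmt"

	"github.com/docker/notary/client"
)

// printRoles lists every role loaded for the repository along with the number
// of signatures present on its latest metadata (zero for delegations that
// have never been published).
func printRoles(repo *client.NotaryRepository) error {
	roles, err := repo.ListRoles()
	if err != nil {
		return err
	}
	for _, r := range roles {
		fmt.Printf("%s: %d signature(s)\n", r.Name, len(r.Signatures))
	}
	return nil
}
```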
// Publish pushes the local changes in signed material to the remote notary-server
// Conceptually it performs an operation similar to a `git rebase`
func (r *NotaryRepository) Publish() error {
cl, err := r.GetChangelist()
if err != nil {
return err
}
if err = r.publish(cl); err != nil {
return err
}
if err = cl.Clear(""); err != nil {
// This is not a critical problem when only a single host is pushing
// but will cause weird behaviour if changelist cleanup is failing
// and there are multiple hosts writing to the repo.
logrus.Warn("Unable to clear changelist. You may want to manually delete the folder ", filepath.Join(r.tufRepoPath, "changelist"))
}
return nil
}
// publish pushes the changes in the given changelist to the remote notary-server
// Conceptually it performs an operation similar to a `git rebase`
func (r *NotaryRepository) publish(cl changelist.Changelist) error {
var initialPublish bool
// update first before publishing
if err := r.Update(true); err != nil {
// If the remote is not aware of the repo, then this is being published
// for the first time. Try to load from disk instead for publishing.
if _, ok := err.(ErrRepositoryNotExist); ok {
err := r.bootstrapRepo()
if err != nil {
logrus.Debugf("Unable to load repository from local files: %s",
err.Error())
if _, ok := err.(store.ErrMetaNotFound); ok {
return ErrRepoNotInitialized{}
}
return err
}
// Ensure we will push the initial root and targets file. Either or
// both of the root and targets may not be marked as Dirty, since
// there may not be any changes that update them, so use a
// different boolean.
initialPublish = true
} else {
// We could not update, so we cannot publish.
logrus.Error("Could not publish Repository since we could not update: ", err.Error())
return err
}
}
// apply the changelist to the repo
if err := applyChangelist(r.tufRepo, cl); err != nil {
logrus.Debug("Error applying changelist")
return err
}
// these are the TUF files we will need to update, serialized as JSON before
// we send anything to remote
updatedFiles := make(map[string][]byte)
// check if our root file is nearing expiry or dirty. Resign if it is. If
// root is not dirty but we are publishing for the first time, then just
// publish the existing root we have.
if nearExpiry(r.tufRepo.Root.Signed.SignedCommon) || r.tufRepo.Root.Dirty {
rootJSON, err := serializeCanonicalRole(r.tufRepo, data.CanonicalRootRole)
if err != nil {
return err
}
updatedFiles[data.CanonicalRootRole] = rootJSON
} else if initialPublish {
rootJSON, err := r.tufRepo.Root.MarshalJSON()
if err != nil {
return err
}
updatedFiles[data.CanonicalRootRole] = rootJSON
}
// iterate through all the targets files - if they are dirty, sign and update
for roleName, roleObj := range r.tufRepo.Targets {
if roleObj.Dirty || (roleName == data.CanonicalTargetsRole && initialPublish) {
targetsJSON, err := serializeCanonicalRole(r.tufRepo, roleName)
if err != nil {
return err
}
updatedFiles[roleName] = targetsJSON
}
}
// if we initialized the repo while designating the server as the snapshot
// signer, then there won't be a snapshots file. However, we might now
// have a local key (if there was a rotation), so initialize one.
if r.tufRepo.Snapshot == nil {
if err := r.tufRepo.InitSnapshot(); err != nil {
return err
}
}
snapshotJSON, err := serializeCanonicalRole(
r.tufRepo, data.CanonicalSnapshotRole)
if err == nil {
// Only update the snapshot if we've successfully signed it.
updatedFiles[data.CanonicalSnapshotRole] = snapshotJSON
} else if signErr, ok := err.(signed.ErrInsufficientSignatures); ok && signErr.FoundKeys == 0 {
// If signing fails due to us not having the snapshot key, then
// assume the server is going to sign, and do not include any snapshot
// data.
logrus.Debugf("Client does not have the key to sign snapshot. " +
"Assuming that server should sign the snapshot.")
} else {
logrus.Debugf("Client was unable to sign the snapshot: %s", err.Error())
return err
}
remote, err := getRemoteStore(r.baseURL, r.gun, r.roundTrip)
if err != nil {
return err
}
return remote.SetMultiMeta(updatedFiles)
}
// bootstrapRepo loads the repository from the local file system (i.e.
// a not yet published repo or a possibly obsolete local copy) into
// r.tufRepo. This attempts to load metadata for all roles. Since server
// snapshots are supported, if the snapshot metadata fails to load, that's ok.
// This assumes that bootstrapRepo is only used by Publish() or RotateKey()
func (r *NotaryRepository) bootstrapRepo() error {
b := tuf.NewRepoBuilder(r.gun, r.CryptoService, r.trustPinning)
logrus.Debugf("Loading trusted collection.")
for _, role := range data.BaseRoles {
jsonBytes, err := r.fileStore.GetMeta(role, store.NoSizeLimit)
if err != nil {
if _, ok := err.(store.ErrMetaNotFound); ok &&
// server snapshots are supported, and server timestamp management
// is required, so if either of these fail to load that's ok - especially
// if the repo is new
(role == data.CanonicalSnapshotRole || role == data.CanonicalTimestampRole) {
continue
}
return err
}
if err := b.Load(role, jsonBytes, 1, true); err != nil {
return err
}
}
tufRepo, err := b.Finish()
if err == nil {
r.tufRepo = tufRepo
}
return nil
}
// saveMetadata saves contents of r.tufRepo onto the local disk, creating
// signatures as necessary, possibly prompting for passphrases.
func (r *NotaryRepository) saveMetadata(ignoreSnapshot bool) error {
logrus.Debugf("Saving changes to Trusted Collection.")
rootJSON, err := serializeCanonicalRole(r.tufRepo, data.CanonicalRootRole)
if err != nil {
return err
}
err = r.fileStore.SetMeta(data.CanonicalRootRole, rootJSON)
if err != nil {
return err
}
targetsToSave := make(map[string][]byte)
for t := range r.tufRepo.Targets {
signedTargets, err := r.tufRepo.SignTargets(t, data.DefaultExpires(data.CanonicalTargetsRole))
if err != nil {
return err
}
targetsJSON, err := json.Marshal(signedTargets)
if err != nil {
return err
}
targetsToSave[t] = targetsJSON
}
for role, blob := range targetsToSave {
parentDir := filepath.Dir(role)
os.MkdirAll(parentDir, 0755)
r.fileStore.SetMeta(role, blob)
}
if ignoreSnapshot {
return nil
}
snapshotJSON, err := serializeCanonicalRole(r.tufRepo, data.CanonicalSnapshotRole)
if err != nil {
return err
}
return r.fileStore.SetMeta(data.CanonicalSnapshotRole, snapshotJSON)
}
// returns a properly constructed ErrRepositoryNotExist error based on this
// repo's information
func (r *NotaryRepository) errRepositoryNotExist() error {
host := r.baseURL
parsed, err := url.Parse(r.baseURL)
if err == nil {
host = parsed.Host // try to exclude the scheme and any paths
}
return ErrRepositoryNotExist{remote: host, gun: r.gun}
}
// Update bootstraps a trust anchor (root.json) before updating all the
// metadata from the repo.
func (r *NotaryRepository) Update(forWrite bool) error {
c, err := r.bootstrapClient(forWrite)
if err != nil {
if _, ok := err.(store.ErrMetaNotFound); ok {
return r.errRepositoryNotExist()
}
return err
}
repo, err := c.Update()
if err != nil {
// notFound.Resource may include a checksum so when the role is root,
// it will be root or root.<checksum>. Therefore the best we can
// do is match a "root." prefix
if notFound, ok := err.(store.ErrMetaNotFound); ok && strings.HasPrefix(notFound.Resource, data.CanonicalRootRole+".") {
return r.errRepositoryNotExist()
}
return err
}
// if we have reached this stage, the repo we built is known to be good, so there is
// no need to check the following call for an error: it will always succeed for a good repo
r.tufRepo = repo
warnRolesNearExpiry(repo)
return nil
}
// bootstrapClient attempts to bootstrap a root.json to be used as the trust
// anchor for a repository. The checkInitialized argument indicates whether
// we should always attempt to contact the server to determine if the repository
// is initialized or not. If set to true, we will always attempt to download
// and return an error if the remote repository errors.
//
// Populates a tuf.RepoBuilder with this root metadata (only use
// tufclient.Client.Update to load the rest).
//
// Fails if the remote server is reachable and does not know the repo
// (i.e. before the first r.Publish()), in which case the error is
// store.ErrMetaNotFound, or if the root metadata (from whichever source is used)
// is not trusted.
//
// Returns a tufclient.Client for the remote server, which may not be actually
// operational (if the URL is invalid but a root.json is cached).
func (r *NotaryRepository) bootstrapClient(checkInitialized bool) (*tufclient.Client, error) {
minVersion := 1
// the old root on disk should not be validated against any trust pinning configuration
// because if we have an old root, it itself is the thing that pins trust
oldBuilder := tuf.NewRepoBuilder(r.gun, r.CryptoService, trustpinning.TrustPinConfig{})
// by default, we want to use the trust pinning configuration on any new root that we download
newBuilder := tuf.NewRepoBuilder(r.gun, r.CryptoService, r.trustPinning)
// Try to read root from cache first. We will trust this root until we detect a problem
// during update which will cause us to download a new root and perform a rotation.
// If we have an old root, and it's valid, then we overwrite the newBuilder to be one
// preloaded with the old root or one which uses the old root for trust bootstrapping.
if rootJSON, err := r.fileStore.GetMeta(data.CanonicalRootRole, store.NoSizeLimit); err == nil {
// if we can't load the cached root, fail hard because that is how we pin trust
if err := oldBuilder.Load(data.CanonicalRootRole, rootJSON, minVersion, true); err != nil {
return nil, err
}
// again, the root on disk is the source of trust pinning, so use an empty trust
// pinning configuration
newBuilder = tuf.NewRepoBuilder(r.gun, r.CryptoService, trustpinning.TrustPinConfig{})
if err := newBuilder.Load(data.CanonicalRootRole, rootJSON, minVersion, false); err != nil {
// Ok, the old root is expired - we want to download a new one. But we want to use the
// old root to verify the new root, so bootstrap a new builder with the old builder
minVersion = oldBuilder.GetLoadedVersion(data.CanonicalRootRole)
newBuilder = oldBuilder.BootstrapNewBuilder()
}
}
remote, remoteErr := getRemoteStore(r.baseURL, r.gun, r.roundTrip)
if remoteErr != nil {
logrus.Error(remoteErr)
} else if !newBuilder.IsLoaded(data.CanonicalRootRole) || checkInitialized {
// remoteErr was nil and we were not able to load a root from cache or
// are specifically checking for initialization of the repo.
// if remote store successfully set up, try and get root from remote
// We don't have any local data to determine the size of root, so try the maximum (though it is restricted to 100MB)
tmpJSON, err := remote.GetMeta(data.CanonicalRootRole, store.NoSizeLimit)
if err != nil {
// we didn't have a root in cache and were unable to load one from
// the server. Nothing we can do but error.
return nil, err
}
if !newBuilder.IsLoaded(data.CanonicalRootRole) {
// we always want to use the downloaded root if we couldn't load from cache
if err := newBuilder.Load(data.CanonicalRootRole, tmpJSON, minVersion, false); err != nil {
return nil, err
}
err = r.fileStore.SetMeta(data.CanonicalRootRole, tmpJSON)
if err != nil {
// if we can't write cache we should still continue, just log error
logrus.Errorf("could not save root to cache: %s", err.Error())
}
}
}
// We can only get here if remoteErr != nil (hence we don't download any new root),
// and there was no root on disk
if !newBuilder.IsLoaded(data.CanonicalRootRole) {
return nil, ErrRepoNotInitialized{}
}
return tufclient.NewClient(oldBuilder, newBuilder, remote, r.fileStore), nil
}
// RotateKey removes all existing keys associated with the role, and either
// creates and adds one new key or delegates managing the key to the server.
// These changes are staged in a changelist until publish is called.
func (r *NotaryRepository) RotateKey(role string, serverManagesKey bool) error {
// We currently support remotely managing timestamp and snapshot keys
canBeRemoteKey := role == data.CanonicalTimestampRole || role == data.CanonicalSnapshotRole
// And locally managing root, targets, and snapshot keys
canBeLocalKey := (role == data.CanonicalSnapshotRole || role == data.CanonicalTargetsRole ||
role == data.CanonicalRootRole)
switch {
case !data.ValidRole(role) || data.IsDelegation(role):
return fmt.Errorf("notary does not currently permit rotating the %s key", role)
case serverManagesKey && !canBeRemoteKey:
return ErrInvalidRemoteRole{Role: role}
case !serverManagesKey && !canBeLocalKey:
return ErrInvalidLocalRole{Role: role}
}
var (
pubKey data.PublicKey
err error
errFmtMsg string
)
switch serverManagesKey {
case true:
pubKey, err = getRemoteKey(r.baseURL, r.gun, role, r.roundTrip)
errFmtMsg = "unable to rotate remote key: %s"
default:
pubKey, err = r.CryptoService.Create(role, r.gun, data.ECDSAKey)
errFmtMsg = "unable to generate key: %s"
}
if err != nil {
return fmt.Errorf(errFmtMsg, err)
}
// if this is a root role, generate a root cert for the public key
if role == data.CanonicalRootRole {
privKey, _, err := r.CryptoService.GetPrivateKey(pubKey.ID())
if err != nil {
return err
}
pubKey, err = rootCertKey(r.gun, privKey)
if err != nil {
return err
}
}
cl := changelist.NewMemChangelist()
if err := r.rootFileKeyChange(cl, role, changelist.ActionCreate, pubKey); err != nil {
return err
}
return r.publish(cl)
}
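// Example (not part of the original source): a hypothetical sketch of rotating
// the snapshot key so that the notary server manages it, assuming "repo" is an
// initialized *NotaryRepository. RotateKey stages the rotation in a changelist
// and publishes it immediately.
func exampleRotateSnapshotKeyToServer(repo *NotaryRepository) error {
	// true = let the server manage the new key; per the checks above, only the
	// snapshot and timestamp roles may be server-managed.
	return repo.RotateKey(data.CanonicalSnapshotRole, true)
}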
func (r *NotaryRepository) rootFileKeyChange(cl changelist.Changelist, role, action string, key data.PublicKey) error {
kl := make(data.KeyList, 0, 1)
kl = append(kl, key)
meta := changelist.TUFRootData{
RoleName: role,
Keys: kl,
}
metaJSON, err := json.Marshal(meta)
if err != nil {
return err
}
c := changelist.NewTUFChange(
action,
changelist.ScopeRoot,
changelist.TypeRootRole,
role,
metaJSON,
)
return cl.Add(c)
}
// DeleteTrustData removes the trust data stored for this repo in the TUF cache on the client side
func (r *NotaryRepository) DeleteTrustData() error {
// Clear TUF files and cache
if err := r.fileStore.RemoveAll(); err != nil {
return fmt.Errorf("error clearing TUF repo data: %v", err)
}
r.tufRepo = tuf.NewRepo(nil)
return nil
}


@ -1,19 +0,0 @@
// +build pkcs11
package client
import "github.com/docker/notary/trustmanager/yubikey"
// clear out all keys
func init() {
yubikey.SetYubikeyKeyMode(0)
if !yubikey.IsAccessible() {
return
}
store, err := yubikey.NewYubiStore(nil, nil)
if err == nil {
for k := range store.ListKeys() {
store.RemoveKey(k)
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -1,294 +0,0 @@
package client
import (
"encoding/json"
"fmt"
"path/filepath"
"github.com/Sirupsen/logrus"
"github.com/docker/notary"
"github.com/docker/notary/client/changelist"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/store"
"github.com/docker/notary/tuf/utils"
)
// AddDelegation creates changelist entries to add provided delegation public keys and paths.
// This method composes AddDelegationRoleAndKeys and AddDelegationPaths (each creates one changelist if called).
func (r *NotaryRepository) AddDelegation(name string, delegationKeys []data.PublicKey, paths []string) error {
if len(delegationKeys) > 0 {
err := r.AddDelegationRoleAndKeys(name, delegationKeys)
if err != nil {
return err
}
}
if len(paths) > 0 {
err := r.AddDelegationPaths(name, paths)
if err != nil {
return err
}
}
return nil
}
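// Example (not part of the original source): a hypothetical sketch of delegating
// a path prefix to a collaborator's key. "targets/releases" and "releases/" are
// illustrative values; "repo" is assumed to be an initialized *NotaryRepository
// and "collabKey" a public key obtained out of band. The change is only staged
// locally until repo.Publish() is called.
func exampleDelegateReleases(repo *NotaryRepository, collabKey data.PublicKey) error {
	return repo.AddDelegation("targets/releases", []data.PublicKey{collabKey}, []string{"releases/"})
}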
// AddDelegationRoleAndKeys creates a changelist entry to add provided delegation public keys.
// This method is the simplest way to create a new delegation, because the delegation must have at least
// one key upon creation to be valid since we will reject the changelist while validating the threshold.
func (r *NotaryRepository) AddDelegationRoleAndKeys(name string, delegationKeys []data.PublicKey) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Adding delegation "%s" with threshold %d, and %d keys\n`,
name, notary.MinThreshold, len(delegationKeys))
// Defaulting to threshold of 1, since we don't allow for larger thresholds at the moment.
tdJSON, err := json.Marshal(&changelist.TUFDelegation{
NewThreshold: notary.MinThreshold,
AddKeys: data.KeyList(delegationKeys),
})
if err != nil {
return err
}
template := newCreateDelegationChange(name, tdJSON)
return addChange(cl, template, name)
}
// AddDelegationPaths creates a changelist entry to add provided paths to an existing delegation.
// This method cannot create a new delegation itself because the role must meet the key threshold upon creation.
func (r *NotaryRepository) AddDelegationPaths(name string, paths []string) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Adding %s paths to delegation %s\n`, paths, name)
tdJSON, err := json.Marshal(&changelist.TUFDelegation{
AddPaths: paths,
})
if err != nil {
return err
}
template := newCreateDelegationChange(name, tdJSON)
return addChange(cl, template, name)
}
// RemoveDelegationKeysAndPaths creates changelist entries to remove provided delegation key IDs and paths.
// This method composes RemoveDelegationPaths and RemoveDelegationKeys (each creates one changelist if called).
func (r *NotaryRepository) RemoveDelegationKeysAndPaths(name string, keyIDs, paths []string) error {
if len(paths) > 0 {
err := r.RemoveDelegationPaths(name, paths)
if err != nil {
return err
}
}
if len(keyIDs) > 0 {
err := r.RemoveDelegationKeys(name, keyIDs)
if err != nil {
return err
}
}
return nil
}
// RemoveDelegationRole creates a changelist to remove all paths and keys from a role, and delete the role in its entirety.
func (r *NotaryRepository) RemoveDelegationRole(name string) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Removing delegation "%s"\n`, name)
template := newDeleteDelegationChange(name, nil)
return addChange(cl, template, name)
}
// RemoveDelegationPaths creates a changelist entry to remove provided paths from an existing delegation.
func (r *NotaryRepository) RemoveDelegationPaths(name string, paths []string) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Removing %s paths from delegation "%s"\n`, paths, name)
tdJSON, err := json.Marshal(&changelist.TUFDelegation{
RemovePaths: paths,
})
if err != nil {
return err
}
template := newUpdateDelegationChange(name, tdJSON)
return addChange(cl, template, name)
}
// RemoveDelegationKeys creates a changelist entry to remove provided keys from an existing delegation.
// When this changelist is applied, if the specified keys are the only keys left in the role,
// the role itself will be deleted in its entirety.
func (r *NotaryRepository) RemoveDelegationKeys(name string, keyIDs []string) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Removing %s keys from delegation "%s"\n`, keyIDs, name)
tdJSON, err := json.Marshal(&changelist.TUFDelegation{
RemoveKeys: keyIDs,
})
if err != nil {
return err
}
template := newUpdateDelegationChange(name, tdJSON)
return addChange(cl, template, name)
}
// ClearDelegationPaths creates a changelist entry to remove all paths from an existing delegation.
func (r *NotaryRepository) ClearDelegationPaths(name string) error {
if !data.IsDelegation(name) {
return data.ErrInvalidRole{Role: name, Reason: "invalid delegation role name"}
}
cl, err := changelist.NewFileChangelist(filepath.Join(r.tufRepoPath, "changelist"))
if err != nil {
return err
}
defer cl.Close()
logrus.Debugf(`Removing all paths from delegation "%s"\n`, name)
tdJSON, err := json.Marshal(&changelist.TUFDelegation{
ClearAllPaths: true,
})
if err != nil {
return err
}
template := newUpdateDelegationChange(name, tdJSON)
return addChange(cl, template, name)
}
func newUpdateDelegationChange(name string, content []byte) *changelist.TUFChange {
return changelist.NewTUFChange(
changelist.ActionUpdate,
name,
changelist.TypeTargetsDelegation,
"", // no path for delegations
content,
)
}
func newCreateDelegationChange(name string, content []byte) *changelist.TUFChange {
return changelist.NewTUFChange(
changelist.ActionCreate,
name,
changelist.TypeTargetsDelegation,
"", // no path for delegations
content,
)
}
func newDeleteDelegationChange(name string, content []byte) *changelist.TUFChange {
return changelist.NewTUFChange(
changelist.ActionDelete,
name,
changelist.TypeTargetsDelegation,
"", // no path for delegations
content,
)
}
// GetDelegationRoles returns the keys and roles of the repository's delegations
// Also converts key IDs to canonical key IDs to keep consistent with signing prompts
func (r *NotaryRepository) GetDelegationRoles() ([]*data.Role, error) {
// Update state of the repo to latest
if err := r.Update(false); err != nil {
return nil, err
}
// All top level delegations (ex: targets/level1) are stored exclusively in targets.json
_, ok := r.tufRepo.Targets[data.CanonicalTargetsRole]
if !ok {
return nil, store.ErrMetaNotFound{Resource: data.CanonicalTargetsRole}
}
// make a copy for traversing nested delegations
allDelegations := []*data.Role{}
// Define a visitor function to populate the delegations list and translate their key IDs to canonical IDs
delegationCanonicalListVisitor := func(tgt *data.SignedTargets, validRole data.DelegationRole) interface{} {
// For the return list, update with a copy that includes canonicalKeyIDs
// These aren't validated by the validRole
canonicalDelegations, err := translateDelegationsToCanonicalIDs(tgt.Signed.Delegations)
if err != nil {
return err
}
allDelegations = append(allDelegations, canonicalDelegations...)
return nil
}
err := r.tufRepo.WalkTargets("", "", delegationCanonicalListVisitor)
if err != nil {
return nil, err
}
return allDelegations, nil
}
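// Example (not part of the original source): a hypothetical sketch that prints
// each delegation role with its canonical key IDs and paths, assuming "repo" is
// an initialized *NotaryRepository for a published repository.
func examplePrintDelegations(repo *NotaryRepository) error {
	roles, err := repo.GetDelegationRoles()
	if err != nil {
		return err
	}
	for _, role := range roles {
		fmt.Printf("%s\tkeys: %v\tpaths: %v\n", role.Name, role.KeyIDs, role.Paths)
	}
	return nil
}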
func translateDelegationsToCanonicalIDs(delegationInfo data.Delegations) ([]*data.Role, error) {
canonicalDelegations := make([]*data.Role, len(delegationInfo.Roles))
copy(canonicalDelegations, delegationInfo.Roles)
delegationKeys := delegationInfo.Keys
for i, delegation := range canonicalDelegations {
canonicalKeyIDs := []string{}
for _, keyID := range delegation.KeyIDs {
pubKey, ok := delegationKeys[keyID]
if !ok {
return nil, fmt.Errorf("Could not translate canonical key IDs for %s", delegation.Name)
}
canonicalKeyID, err := utils.CanonicalKeyID(pubKey)
if err != nil {
return nil, fmt.Errorf("Could not translate canonical key IDs for %s: %v", delegation.Name, err)
}
canonicalKeyIDs = append(canonicalKeyIDs, canonicalKeyID)
}
canonicalDelegations[i].KeyIDs = canonicalKeyIDs
}
return canonicalDelegations, nil
}


@ -1,257 +0,0 @@
package client
import (
"encoding/json"
"fmt"
"net/http"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/notary/client/changelist"
tuf "github.com/docker/notary/tuf"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/store"
"github.com/docker/notary/tuf/utils"
)
// Use this to initialize remote HTTPStores from the config settings
func getRemoteStore(baseURL, gun string, rt http.RoundTripper) (store.RemoteStore, error) {
s, err := store.NewHTTPStore(
baseURL+"/v2/"+gun+"/_trust/tuf/",
"",
"json",
"key",
rt,
)
if err != nil {
return store.OfflineStore{}, err
}
return s, err
}
func applyChangelist(repo *tuf.Repo, cl changelist.Changelist) error {
it, err := cl.NewIterator()
if err != nil {
return err
}
index := 0
for it.HasNext() {
c, err := it.Next()
if err != nil {
return err
}
isDel := data.IsDelegation(c.Scope())
switch {
case c.Scope() == changelist.ScopeTargets || isDel:
err = applyTargetsChange(repo, c)
case c.Scope() == changelist.ScopeRoot:
err = applyRootChange(repo, c)
default:
logrus.Debug("scope not supported: ", c.Scope())
}
index++
if err != nil {
return err
}
}
logrus.Debugf("applied %d change(s)", index)
return nil
}
func applyTargetsChange(repo *tuf.Repo, c changelist.Change) error {
switch c.Type() {
case changelist.TypeTargetsTarget:
return changeTargetMeta(repo, c)
case changelist.TypeTargetsDelegation:
return changeTargetsDelegation(repo, c)
default:
return fmt.Errorf("only target meta and delegations changes supported")
}
}
func changeTargetsDelegation(repo *tuf.Repo, c changelist.Change) error {
switch c.Action() {
case changelist.ActionCreate:
td := changelist.TUFDelegation{}
err := json.Unmarshal(c.Content(), &td)
if err != nil {
return err
}
// Try to create brand new role or update one
// First add the keys, then the paths. We can only add keys and paths in this scenario
err = repo.UpdateDelegationKeys(c.Scope(), td.AddKeys, []string{}, td.NewThreshold)
if err != nil {
return err
}
return repo.UpdateDelegationPaths(c.Scope(), td.AddPaths, []string{}, false)
case changelist.ActionUpdate:
td := changelist.TUFDelegation{}
err := json.Unmarshal(c.Content(), &td)
if err != nil {
return err
}
delgRole, err := repo.GetDelegationRole(c.Scope())
if err != nil {
return err
}
// We need to translate the keys from canonical ID to TUF ID for compatibility
canonicalToTUFID := make(map[string]string)
for tufID, pubKey := range delgRole.Keys {
canonicalID, err := utils.CanonicalKeyID(pubKey)
if err != nil {
return err
}
canonicalToTUFID[canonicalID] = tufID
}
removeTUFKeyIDs := []string{}
for _, canonID := range td.RemoveKeys {
removeTUFKeyIDs = append(removeTUFKeyIDs, canonicalToTUFID[canonID])
}
// If the keys being removed are the only keys left in the role, delete the role entirely; otherwise just remove the specified keys
if strings.Join(delgRole.ListKeyIDs(), ";") == strings.Join(removeTUFKeyIDs, ";") && len(td.AddKeys) == 0 {
return repo.DeleteDelegation(c.Scope())
}
err = repo.UpdateDelegationKeys(c.Scope(), td.AddKeys, removeTUFKeyIDs, td.NewThreshold)
if err != nil {
return err
}
return repo.UpdateDelegationPaths(c.Scope(), td.AddPaths, td.RemovePaths, td.ClearAllPaths)
case changelist.ActionDelete:
return repo.DeleteDelegation(c.Scope())
default:
return fmt.Errorf("unsupported action against delegations: %s", c.Action())
}
}
func changeTargetMeta(repo *tuf.Repo, c changelist.Change) error {
var err error
switch c.Action() {
case changelist.ActionCreate:
logrus.Debug("changelist add: ", c.Path())
meta := &data.FileMeta{}
err = json.Unmarshal(c.Content(), meta)
if err != nil {
return err
}
files := data.Files{c.Path(): *meta}
// Attempt to add the target to this role
if _, err = repo.AddTargets(c.Scope(), files); err != nil {
logrus.Errorf("couldn't add target to %s: %s", c.Scope(), err.Error())
}
case changelist.ActionDelete:
logrus.Debug("changelist remove: ", c.Path())
// Attempt to remove the target from this role
if err = repo.RemoveTargets(c.Scope(), c.Path()); err != nil {
logrus.Errorf("couldn't remove target from %s: %s", c.Scope(), err.Error())
}
default:
logrus.Debug("action not yet supported: ", c.Action())
}
return err
}
func applyRootChange(repo *tuf.Repo, c changelist.Change) error {
var err error
switch c.Type() {
case changelist.TypeRootRole:
err = applyRootRoleChange(repo, c)
default:
logrus.Debug("type of root change not yet supported: ", c.Type())
}
return err // might be nil
}
func applyRootRoleChange(repo *tuf.Repo, c changelist.Change) error {
switch c.Action() {
case changelist.ActionCreate:
// replaces all keys for a role
d := &changelist.TUFRootData{}
err := json.Unmarshal(c.Content(), d)
if err != nil {
return err
}
err = repo.ReplaceBaseKeys(d.RoleName, d.Keys...)
if err != nil {
return err
}
default:
logrus.Debug("action not yet supported for root: ", c.Action())
}
return nil
}
func nearExpiry(r data.SignedCommon) bool {
plus6mo := time.Now().AddDate(0, 6, 0)
return r.Expires.Before(plus6mo)
}
func warnRolesNearExpiry(r *tuf.Repo) {
//get every role and its respective signed common and call nearExpiry on it
//Root check
if nearExpiry(r.Root.Signed.SignedCommon) {
logrus.Warn("root is nearing expiry, you should re-sign the role metadata")
}
//Targets and delegations check
for role, signedTOrD := range r.Targets {
//signedTOrD is of type *data.SignedTargets
if nearExpiry(signedTOrD.Signed.SignedCommon) {
logrus.Warn(role, " metadata is nearing expiry, you should re-sign the role metadata")
}
}
//Snapshot check
if nearExpiry(r.Snapshot.Signed.SignedCommon) {
logrus.Warn("snapshot is nearing expiry, you should re-sign the role metadata")
}
//do not need to worry about Timestamp, notary signer will re-sign with the timestamp key
}
// Fetches a public key from a remote store, given a gun and role
func getRemoteKey(url, gun, role string, rt http.RoundTripper) (data.PublicKey, error) {
remote, err := getRemoteStore(url, gun, rt)
if err != nil {
return nil, err
}
rawPubKey, err := remote.GetKey(role)
if err != nil {
return nil, err
}
pubKey, err := data.UnmarshalPublicKey(rawPubKey)
if err != nil {
return nil, err
}
return pubKey, nil
}
// signs and serializes the metadata for a canonical role in a TUF repo to JSON
func serializeCanonicalRole(tufRepo *tuf.Repo, role string) (out []byte, err error) {
var s *data.Signed
switch {
case role == data.CanonicalRootRole:
s, err = tufRepo.SignRoot(data.DefaultExpires(role))
case role == data.CanonicalSnapshotRole:
s, err = tufRepo.SignSnapshot(data.DefaultExpires(role))
case tufRepo.Targets[role] != nil:
s, err = tufRepo.SignTargets(
role, data.DefaultExpires(data.CanonicalTargetsRole))
default:
err = fmt.Errorf("%s not supported role to sign on the client", role)
}
if err != nil {
return
}
return json.Marshal(s)
}

File diff suppressed because it is too large

@ -1,29 +0,0 @@
// +build !pkcs11
package client
import (
"fmt"
"net/http"
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/trustpinning"
)
// NewNotaryRepository is a helper method that returns a new notary repository.
// It takes the base directory under which all the trust files will be stored
// (this normally defaults to "~/.notary" or "~/.docker/trust" when enabling
// Docker Content Trust).
func NewNotaryRepository(baseDir, gun, baseURL string, rt http.RoundTripper,
retriever notary.PassRetriever, trustPinning trustpinning.TrustPinConfig) (
*NotaryRepository, error) {
fileKeyStore, err := trustmanager.NewKeyFileStore(baseDir, retriever)
if err != nil {
return nil, fmt.Errorf("failed to create private key store in directory: %s", baseDir)
}
return repositoryFromKeystores(baseDir, gun, baseURL, rt,
[]trustmanager.KeyStore{fileKeyStore}, trustPinning)
}
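// Example (not part of the original source): a hypothetical sketch of
// constructing a repository handle. The GUN, server URL, and passphrase are
// illustrative values only; an empty TrustPinConfig falls back to TOFU, and a
// real client would supply an authenticated http.RoundTripper.
func exampleNewRepository() (*NotaryRepository, error) {
	retriever := func(keyName, alias string, createNew bool, attempts int) (string, bool, error) {
		// constant passphrase, for this sketch only
		return "example-passphrase", false, nil
	}
	return NewNotaryRepository(
		"/tmp/.notary",               // base directory for trust data and keys
		"docker.io/library/alpine",   // GUN (globally unique name)
		"https://notary.example.com", // notary server base URL
		http.DefaultTransport,
		retriever,
		trustpinning.TrustPinConfig{},
	)
}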


@ -1,34 +0,0 @@
// +build pkcs11
package client
import (
"fmt"
"net/http"
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/trustmanager/yubikey"
"github.com/docker/notary/trustpinning"
)
// NewNotaryRepository is a helper method that returns a new notary repository.
// It takes the base directory under which all the trust files will be stored
// (usually ~/.docker/trust/).
func NewNotaryRepository(baseDir, gun, baseURL string, rt http.RoundTripper,
retriever notary.PassRetriever, trustPinning trustpinning.TrustPinConfig) (
*NotaryRepository, error) {
fileKeyStore, err := trustmanager.NewKeyFileStore(baseDir, retriever)
if err != nil {
return nil, fmt.Errorf("failed to create private key store in directory: %s", baseDir)
}
keyStores := []trustmanager.KeyStore{fileKeyStore}
yubiKeyStore, _ := yubikey.NewYubiStore(fileKeyStore, retriever)
if yubiKeyStore != nil {
keyStores = []trustmanager.KeyStore{yubiKeyStore, fileKeyStore}
}
return repositoryFromKeystores(baseDir, gun, baseURL, rt, keyStores, trustPinning)
}


@ -1,20 +0,0 @@
package main
import (
"fmt"
"github.com/docker/notary/storage"
"golang.org/x/net/context"
)
func bootstrap(ctx context.Context) error {
s := ctx.Value("metaStore")
if s == nil {
return fmt.Errorf("no store set during bootstrapping")
}
store, ok := s.(storage.Bootstrapper)
if !ok {
return fmt.Errorf("Store does not support bootstrapping.")
}
return store.Bootstrap()
}


@ -1,24 +0,0 @@
package main
import (
"testing"
"github.com/docker/notary/tuf/testutils"
"github.com/stretchr/testify/require"
"golang.org/x/net/context"
)
func TestBootstrap(t *testing.T) {
ctx := context.Background()
err := bootstrap(ctx)
require.Error(t, err)
ctx = context.WithValue(ctx, "metaStore", 1)
err = bootstrap(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "does not support bootstrapping")
bs := &testutils.TestBootstrapper{}
ctx = context.WithValue(ctx, "metaStore", bs)
err = bootstrap(ctx)
require.NoError(t, err)
require.True(t, bs.Booted)
}


@ -1,311 +0,0 @@
package main
import (
"crypto/tls"
"fmt"
"os"
"os/signal"
"path"
"strconv"
"strings"
"syscall"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/health"
_ "github.com/docker/distribution/registry/auth/htpasswd"
_ "github.com/docker/distribution/registry/auth/token"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/notary"
"github.com/docker/notary/server"
"github.com/docker/notary/server/storage"
"github.com/docker/notary/signer/client"
"github.com/docker/notary/storage/rethinkdb"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/docker/notary/utils"
_ "github.com/go-sql-driver/mysql"
"github.com/spf13/viper"
"golang.org/x/net/context"
gorethink "gopkg.in/dancannon/gorethink.v2"
)
// gets the required gun prefixes accepted by this server
func getRequiredGunPrefixes(configuration *viper.Viper) ([]string, error) {
prefixes := configuration.GetStringSlice("repositories.gun_prefixes")
for _, prefix := range prefixes {
p := path.Clean(strings.TrimSpace(prefix))
if p+"/" != prefix || strings.HasPrefix(p, "/") || strings.HasPrefix(p, "..") {
return nil, fmt.Errorf("invalid GUN prefix %s", prefix)
}
}
return prefixes, nil
}
// gets the address for the HTTP server, and parses the optional TLS
// configuration for the server - if no TLS configuration is specified,
// TLS is not enabled.
func getAddrAndTLSConfig(configuration *viper.Viper) (string, *tls.Config, error) {
httpAddr := configuration.GetString("server.http_addr")
if httpAddr == "" {
return "", nil, fmt.Errorf("http listen address required for server")
}
tlsConfig, err := utils.ParseServerTLS(configuration, false)
if err != nil {
return "", nil, fmt.Errorf(err.Error())
}
return httpAddr, tlsConfig, nil
}
// sets up TLS for the GRPC connection to notary-signer
func grpcTLS(configuration *viper.Viper) (*tls.Config, error) {
rootCA := utils.GetPathRelativeToConfig(configuration, "trust_service.tls_ca_file")
clientCert := utils.GetPathRelativeToConfig(configuration, "trust_service.tls_client_cert")
clientKey := utils.GetPathRelativeToConfig(configuration, "trust_service.tls_client_key")
if clientCert == "" && clientKey != "" || clientCert != "" && clientKey == "" {
return nil, fmt.Errorf("either pass both client key and cert, or neither")
}
tlsConfig, err := tlsconfig.Client(tlsconfig.Options{
CAFile: rootCA,
CertFile: clientCert,
KeyFile: clientKey,
})
if err != nil {
return nil, fmt.Errorf(
"Unable to configure TLS to the trust service: %s", err.Error())
}
return tlsConfig, nil
}
// parses the configuration and returns a backing store for the TUF files
func getStore(configuration *viper.Viper, hRegister healthRegister) (
storage.MetaStore, error) {
var store storage.MetaStore
backend := configuration.GetString("storage.backend")
logrus.Infof("Using %s backend", backend)
switch backend {
case notary.MemoryBackend:
return storage.NewMemStorage(), nil
case notary.MySQLBackend, notary.SQLiteBackend:
storeConfig, err := utils.ParseSQLStorage(configuration)
if err != nil {
return nil, err
}
s, err := storage.NewSQLStorage(storeConfig.Backend, storeConfig.Source)
if err != nil {
return nil, fmt.Errorf("Error starting %s driver: %s", backend, err.Error())
}
store = *storage.NewTUFMetaStorage(s)
hRegister("DB operational", time.Minute, s.CheckHealth)
case notary.RethinkDBBackend:
var sess *gorethink.Session
storeConfig, err := utils.ParseRethinkDBStorage(configuration)
if err != nil {
return nil, err
}
tlsOpts := tlsconfig.Options{
CAFile: storeConfig.CA,
CertFile: storeConfig.Cert,
KeyFile: storeConfig.Key,
}
if doBootstrap {
sess, err = rethinkdb.AdminConnection(tlsOpts, storeConfig.Source)
} else {
sess, err = rethinkdb.UserConnection(tlsOpts, storeConfig.Source, storeConfig.Username, storeConfig.Password)
}
if err != nil {
return nil, fmt.Errorf("Error starting %s driver: %s", backend, err.Error())
}
s := storage.NewRethinkDBStorage(storeConfig.DBName, storeConfig.Username, storeConfig.Password, sess)
store = *storage.NewTUFMetaStorage(s)
hRegister("DB operational", time.Minute, s.CheckHealth)
default:
return nil, fmt.Errorf("%s is not a supported storage backend", backend)
}
return store, nil
}
type signerFactory func(hostname, port string, tlsConfig *tls.Config) *client.NotarySigner
type healthRegister func(name string, duration time.Duration, check health.CheckFunc)
// parses the configuration and determines which trust service and key algorithm
// to return
func getTrustService(configuration *viper.Viper, sFactory signerFactory,
hRegister healthRegister) (signed.CryptoService, string, error) {
switch configuration.GetString("trust_service.type") {
case "local":
logrus.Info("Using local signing service, which requires ED25519. " +
"Ignoring all other trust_service parameters, including keyAlgorithm")
return signed.NewEd25519(), data.ED25519Key, nil
case "remote":
default:
return nil, "", fmt.Errorf(
"must specify either a \"local\" or \"remote\" type for trust_service")
}
keyAlgo := configuration.GetString("trust_service.key_algorithm")
if keyAlgo != data.ED25519Key && keyAlgo != data.ECDSAKey && keyAlgo != data.RSAKey {
return nil, "", fmt.Errorf("invalid key algorithm configured: %s", keyAlgo)
}
clientTLS, err := grpcTLS(configuration)
if err != nil {
return nil, "", err
}
logrus.Info("Using remote signing service")
notarySigner := sFactory(
configuration.GetString("trust_service.hostname"),
configuration.GetString("trust_service.port"),
clientTLS,
)
minute := 1 * time.Minute
hRegister(
"Trust operational",
// If the trust service fails, the server is degraded but not
// exactly unhealthy, so always return healthy and just log an
// error.
minute,
func() error {
err := notarySigner.CheckHealth(minute)
if err != nil {
logrus.Error("Trust not fully operational: ", err.Error())
}
return nil
},
)
return notarySigner, keyAlgo, nil
}
// Parse the cache configurations for GET-ting current and checksummed metadata,
// returning the configuration for current (non-content-addressed) metadata
// first, then the configuration for consistent (content-addressed) metadata
// second. The configuration consists mainly of the max-age (an integer in seconds,
// just like in the Cache-Control header) for each type of metadata.
// The max-age must be between 0 and 31536000 (one year in seconds, which is
// the recommended maximum time data is cached), else parsing will return an error.
// A max-age of 0 will disable caching for that type of download (consistent or current).
func getCacheConfig(configuration *viper.Viper) (current, consistent utils.CacheControlConfig, err error) {
cccs := make(map[string]utils.CacheControlConfig)
currentOpt, consistentOpt := "current_metadata", "consistent_metadata"
defaults := map[string]int{
currentOpt: int(notary.CurrentMetadataCacheMaxAge.Seconds()),
consistentOpt: int(notary.ConsistentMetadataCacheMaxAge.Seconds()),
}
maxMaxAge := int(notary.CacheMaxAgeLimit.Seconds())
for optionName, seconds := range defaults {
m := configuration.GetString(fmt.Sprintf("caching.max_age.%s", optionName))
if m != "" {
seconds, err = strconv.Atoi(m)
if err != nil || seconds < 0 || seconds > maxMaxAge {
return nil, nil, fmt.Errorf(
"must specify a cache-control max-age between 0 and %v", maxMaxAge)
}
}
cccs[optionName] = utils.NewCacheControlConfig(seconds, optionName == currentOpt)
}
current = cccs[currentOpt]
consistent = cccs[consistentOpt]
return
}
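// Example (not part of the original source): a hypothetical sketch of the
// caching settings described above, parsed the same way the server would parse
// them. The JSON literal is illustrative: it disables caching for current
// metadata and caches consistent (checksummed) metadata for the one-year maximum.
func exampleCacheConfig() (current, consistent utils.CacheControlConfig, err error) {
	v := viper.New()
	v.SetConfigType("json")
	if err = v.ReadConfig(strings.NewReader(
		`{"caching": {"max_age": {"current_metadata": 0, "consistent_metadata": 31536000}}}`)); err != nil {
		return nil, nil, err
	}
	return getCacheConfig(v)
}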
func parseServerConfig(configFilePath string, hRegister healthRegister) (context.Context, server.Config, error) {
config := viper.New()
utils.SetupViper(config, envPrefix)
// parse viper config
if err := utils.ParseViper(config, configFilePath); err != nil {
return nil, server.Config{}, err
}
ctx := context.Background()
// default is error level
lvl, err := utils.ParseLogLevel(config, logrus.ErrorLevel)
if err != nil {
return nil, server.Config{}, err
}
logrus.SetLevel(lvl)
prefixes, err := getRequiredGunPrefixes(config)
if err != nil {
return nil, server.Config{}, err
}
// parse bugsnag config
bugsnagConf, err := utils.ParseBugsnag(config)
if err != nil {
return ctx, server.Config{}, err
}
utils.SetUpBugsnag(bugsnagConf)
trust, keyAlgo, err := getTrustService(config, client.NewNotarySigner, hRegister)
if err != nil {
return nil, server.Config{}, err
}
ctx = context.WithValue(ctx, "keyAlgorithm", keyAlgo)
store, err := getStore(config, hRegister)
if err != nil {
return nil, server.Config{}, err
}
ctx = context.WithValue(ctx, "metaStore", store)
currentCache, consistentCache, err := getCacheConfig(config)
if err != nil {
return nil, server.Config{}, err
}
httpAddr, tlsConfig, err := getAddrAndTLSConfig(config)
if err != nil {
return nil, server.Config{}, err
}
return ctx, server.Config{
Addr: httpAddr,
TLSConfig: tlsConfig,
Trust: trust,
AuthMethod: config.GetString("auth.type"),
AuthOpts: config.Get("auth.options"),
RepoPrefixes: prefixes,
CurrentCacheControlConfig: currentCache,
ConsistentCacheControlConfig: consistentCache,
}, nil
}
func setupSignalTrap() {
c := make(chan os.Signal, 1)
signal.Notify(c, notary.NotarySupportedSignals...)
go func() {
for {
signalHandle(<-c)
}
}()
}
// signalHandle will increase/decrease the logging level via the signal we get.
func signalHandle(sig os.Signal) {
switch sig {
case syscall.SIGUSR1:
if err := utils.AdjustLogLevel(true); err != nil {
fmt.Printf("Attempt to increase log level failed, will remain at %s level, error: %s\n", logrus.GetLevel(), err)
return
}
case syscall.SIGUSR2:
if err := utils.AdjustLogLevel(false); err != nil {
fmt.Printf("Attempt to decrease log level failed, will remain at %s level, error: %s\n", logrus.GetLevel(), err)
return
}
}
fmt.Println("Successfully setting log level to ", logrus.GetLevel())
}


@ -1,88 +0,0 @@
package main
import (
_ "expvar"
"flag"
"fmt"
"net/http"
_ "net/http/pprof"
"os"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/health"
"github.com/docker/notary/server"
"github.com/docker/notary/version"
)
// DebugAddress is the debug server address to listen on
const (
jsonLogFormat = "json"
DebugAddress = "localhost:8080"
)
var (
debug bool
logFormat string
configFile string
envPrefix = "NOTARY_SERVER"
doBootstrap bool
)
func init() {
// Setup flags
flag.StringVar(&configFile, "config", "", "Path to configuration file")
flag.BoolVar(&debug, "debug", false, "Enable the debugging server on localhost:8080")
flag.StringVar(&logFormat, "logf", "json", "Set the format of the logs. Only 'json' and 'logfmt' are supported at the moment.")
flag.BoolVar(&doBootstrap, "bootstrap", false, "Do any necessary setup of configured backend storage services")
// this needs to be in init so that _ALL_ logs are in the correct format
if logFormat == jsonLogFormat {
logrus.SetFormatter(new(logrus.JSONFormatter))
}
}
func main() {
flag.Usage = usage
flag.Parse()
if debug {
go debugServer(DebugAddress)
}
// when the server starts print the version for debugging and issue logs later
logrus.Infof("Version: %s, Git commit: %s", version.NotaryVersion, version.GitCommit)
ctx, serverConfig, err := parseServerConfig(configFile, health.RegisterPeriodicFunc)
if err != nil {
logrus.Fatal(err.Error())
}
setupSignalTrap()
if doBootstrap {
err = bootstrap(ctx)
} else {
logrus.Info("Starting Server")
err = server.Run(ctx, serverConfig)
}
if err != nil {
logrus.Fatal(err.Error())
}
return
}
func usage() {
fmt.Println("usage:", os.Args[0])
flag.PrintDefaults()
}
// debugServer starts the debug server with pprof, expvar among other
// endpoints. The addr should not be exposed externally. For most of these to
// work, tls cannot be enabled on the endpoint, so it is generally separate.
func debugServer(addr string) {
logrus.Infof("Debug server listening on %s", addr)
if err := http.ListenAndServe(addr, nil); err != nil {
logrus.Fatalf("error listening on debug interface: %v", err)
}
}


@ -1,442 +0,0 @@
package main
import (
"bytes"
"crypto/tls"
"fmt"
"io/ioutil"
"os"
"reflect"
"strings"
"syscall"
"testing"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/health"
"github.com/docker/notary"
"github.com/docker/notary/server/storage"
"github.com/docker/notary/signer/client"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/docker/notary/utils"
_ "github.com/mattn/go-sqlite3"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
const (
Cert = "../../fixtures/notary-server.crt"
Key = "../../fixtures/notary-server.key"
Root = "../../fixtures/root-ca.crt"
)
// initializes a viper object with test configuration
func configure(jsonConfig string) *viper.Viper {
config := viper.New()
config.SetConfigType("json")
config.ReadConfig(bytes.NewBuffer([]byte(jsonConfig)))
return config
}
func TestGetAddrAndTLSConfigInvalidTLS(t *testing.T) {
invalids := []string{
`{"server": {
"http_addr": ":1234",
"tls_key_file": "nope"
}}`,
}
for _, configJSON := range invalids {
_, _, err := getAddrAndTLSConfig(configure(configJSON))
require.Error(t, err)
}
}
func TestGetAddrAndTLSConfigNoHTTPAddr(t *testing.T) {
_, _, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"tls_cert_file": "%s",
"tls_key_file": "%s"
}
}`, Cert, Key)))
require.Error(t, err)
require.Contains(t, err.Error(), "http listen address required for server")
}
func TestGetAddrAndTLSConfigSuccessWithTLS(t *testing.T) {
httpAddr, tlsConf, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"http_addr": ":2345",
"tls_cert_file": "%s",
"tls_key_file": "%s"
}
}`, Cert, Key)))
require.NoError(t, err)
require.Equal(t, ":2345", httpAddr)
require.NotNil(t, tlsConf)
}
func TestGetAddrAndTLSConfigSuccessWithoutTLS(t *testing.T) {
httpAddr, tlsConf, err := getAddrAndTLSConfig(configure(
`{"server": {"http_addr": ":2345"}}`))
require.NoError(t, err)
require.Equal(t, ":2345", httpAddr)
require.Nil(t, tlsConf)
}
func TestGetAddrAndTLSConfigWithClientTLS(t *testing.T) {
httpAddr, tlsConf, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"http_addr": ":2345",
"tls_cert_file": "%s",
"tls_key_file": "%s",
"client_ca_file": "%s"
}
}`, Cert, Key, Root)))
require.NoError(t, err)
require.Equal(t, ":2345", httpAddr)
require.NotNil(t, tlsConf.ClientCAs)
}
func fakeRegisterer(callCount *int) healthRegister {
return func(_ string, _ time.Duration, _ health.CheckFunc) {
(*callCount)++
}
}
// If neither "remote" nor "local" is passed for "trust_service.type", an
// error is returned.
func TestGetInvalidTrustService(t *testing.T) {
invalids := []string{
`{"trust_service": {"type": "bruhaha", "key_algorithm": "rsa"}}`,
`{}`,
}
var registerCalled = 0
for _, config := range invalids {
_, _, err := getTrustService(configure(config),
client.NewNotarySigner, fakeRegisterer(&registerCalled))
require.Error(t, err)
require.Contains(t, err.Error(),
"must specify either a \"local\" or \"remote\" type for trust_service")
}
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
// If a local trust service is specified, a local trust service will be used
// with an ED25519 algorithm no matter what algorithm was specified. No health
// function is configured.
func TestGetLocalTrustService(t *testing.T) {
localConfig := `{"trust_service": {"type": "local", "key_algorithm": "meh"}}`
var registerCalled = 0
trust, algo, err := getTrustService(configure(localConfig),
client.NewNotarySigner, fakeRegisterer(&registerCalled))
require.NoError(t, err)
require.IsType(t, &signed.Ed25519{}, trust)
require.Equal(t, data.ED25519Key, algo)
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
// Invalid key algorithms result in an error if a remote trust service was
// specified.
func TestGetTrustServiceInvalidKeyAlgorithm(t *testing.T) {
configTemplate := `
{
"trust_service": {
"type": "remote",
"hostname": "blah",
"port": "1234",
"key_algorithm": "%s"
}
}`
badKeyAlgos := []string{
fmt.Sprintf(configTemplate, ""),
fmt.Sprintf(configTemplate, data.ECDSAx509Key),
fmt.Sprintf(configTemplate, "random"),
}
var registerCalled = 0
for _, config := range badKeyAlgos {
_, _, err := getTrustService(configure(config),
client.NewNotarySigner, fakeRegisterer(&registerCalled))
require.Error(t, err)
require.Contains(t, err.Error(), "invalid key algorithm")
}
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
// template to be used for testing TLS parsing with the trust service
var trustTLSConfigTemplate = `
{
"trust_service": {
"type": "remote",
"hostname": "notary-signer",
"port": "1234",
"key_algorithm": "ecdsa",
%s
}
}`
// Client cert and Key either both have to be empty or both have to be
// provided.
func TestGetTrustServiceTLSMissingCertOrKey(t *testing.T) {
configs := []string{
fmt.Sprintf(`"tls_client_cert": "%s"`, Cert),
fmt.Sprintf(`"tls_client_key": "%s"`, Key),
}
var registerCalled = 0
for _, clientTLSConfig := range configs {
jsonConfig := fmt.Sprintf(trustTLSConfigTemplate, clientTLSConfig)
config := configure(jsonConfig)
_, _, err := getTrustService(config, client.NewNotarySigner,
fakeRegisterer(&registerCalled))
require.Error(t, err)
require.True(t,
strings.Contains(err.Error(), "either pass both client key and cert, or neither"))
}
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
// If no TLS configuration is provided for the host server, no TLS config will
// be set for the trust service.
func TestGetTrustServiceNoTLSConfig(t *testing.T) {
config := `{
"trust_service": {
"type": "remote",
"hostname": "notary-signer",
"port": "1234",
"key_algorithm": "ecdsa"
}
}`
var registerCalled = 0
var tlsConfig *tls.Config
var fakeNewSigner = func(_, _ string, c *tls.Config) *client.NotarySigner {
tlsConfig = c
return &client.NotarySigner{}
}
trust, algo, err := getTrustService(configure(config),
fakeNewSigner, fakeRegisterer(&registerCalled))
require.NoError(t, err)
require.IsType(t, &client.NotarySigner{}, trust)
require.Equal(t, "ecdsa", algo)
require.Nil(t, tlsConfig.RootCAs)
require.Nil(t, tlsConfig.Certificates)
// health function registered
require.Equal(t, 1, registerCalled)
}
// The rest of the functionality of getTrustService depends upon
// utils.ConfigureClientTLS, so this test just asserts that if successful,
// the correct tls.Config is returned based on all the configuration parameters
func TestGetTrustServiceTLSSuccess(t *testing.T) {
keypair, err := tls.LoadX509KeyPair(Cert, Key)
require.NoError(t, err, "Unable to load cert and key for testing")
tlspart := fmt.Sprintf(`"tls_client_cert": "%s", "tls_client_key": "%s"`,
Cert, Key)
var registerCalled = 0
var tlsConfig *tls.Config
var fakeNewSigner = func(_, _ string, c *tls.Config) *client.NotarySigner {
tlsConfig = c
return &client.NotarySigner{}
}
trust, algo, err := getTrustService(
configure(fmt.Sprintf(trustTLSConfigTemplate, tlspart)),
fakeNewSigner, fakeRegisterer(&registerCalled))
require.NoError(t, err)
require.IsType(t, &client.NotarySigner{}, trust)
require.Equal(t, "ecdsa", algo)
require.Len(t, tlsConfig.Certificates, 1)
require.True(t, reflect.DeepEqual(keypair, tlsConfig.Certificates[0]))
// health function registered
require.Equal(t, 1, registerCalled)
}
// The rest of the functionality of getTrustService depends upon
// utils.ConfigureServerTLS, so this test just asserts that if it fails,
// the error is propagated.
func TestGetTrustServiceTLSFailure(t *testing.T) {
tlspart := fmt.Sprintf(`"tls_client_cert": "none", "tls_client_key": "%s"`,
Key)
var registerCalled = 0
_, _, err := getTrustService(
configure(fmt.Sprintf(trustTLSConfigTemplate, tlspart)),
client.NewNotarySigner, fakeRegisterer(&registerCalled))
require.Error(t, err)
require.True(t, strings.Contains(err.Error(),
"Unable to configure TLS to the trust service"))
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
// Just to ensure that errors are propagated
func TestGetStoreInvalid(t *testing.T) {
config := `{"storage": {"backend": "asdf", "db_url": "/tmp/1234"}}`
var registerCalled = 0
_, err := getStore(configure(config), fakeRegisterer(&registerCalled))
require.Error(t, err)
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
func TestGetStoreDBStore(t *testing.T) {
tmpFile, err := ioutil.TempFile("/tmp", "sqlite3")
require.NoError(t, err)
tmpFile.Close()
defer os.Remove(tmpFile.Name())
config := fmt.Sprintf(`{"storage": {"backend": "%s", "db_url": "%s"}}`,
notary.SQLiteBackend, tmpFile.Name())
var registerCalled = 0
store, err := getStore(configure(config), fakeRegisterer(&registerCalled))
require.NoError(t, err)
_, ok := store.(storage.TUFMetaStorage)
require.True(t, ok)
// health function registered
require.Equal(t, 1, registerCalled)
}
func TestGetStoreRethinkDBStoreConnectionFails(t *testing.T) {
config := fmt.Sprintf(
`{"storage": {
"backend": "%s",
"db_url": "host:port",
"tls_ca_file": "/tls/ca.pem",
"client_cert_file": "/tls/cert.pem",
"client_key_file": "/tls/key.pem",
"database": "rethinkdbtest"
}
}`,
notary.RethinkDBBackend)
var registerCalled = 0
_, err := getStore(configure(config), fakeRegisterer(&registerCalled))
require.Error(t, err)
}
func TestGetMemoryStore(t *testing.T) {
var registerCalled = 0
config := fmt.Sprintf(`{"storage": {"backend": "%s"}}`, notary.MemoryBackend)
store, err := getStore(configure(config), fakeRegisterer(&registerCalled))
require.NoError(t, err)
_, ok := store.(*storage.MemStorage)
require.True(t, ok)
// no health function ever registered
require.Equal(t, 0, registerCalled)
}
func TestGetCacheConfig(t *testing.T) {
defaults := `{}`
valid := `{"caching": {"max_age": {"current_metadata": 0, "consistent_metadata": 31536000}}}`
invalids := []string{
`{"caching": {"max_age": {"current_metadata": 0, "consistent_metadata": 31539000}}}`,
`{"caching": {"max_age": {"current_metadata": -1, "consistent_metadata": 300}}}`,
`{"caching": {"max_age": {"current_metadata": "hello", "consistent_metadata": 300}}}`,
}
current, consistent, err := getCacheConfig(configure(defaults))
require.NoError(t, err)
require.Equal(t,
utils.PublicCacheControl{MaxAgeInSeconds: int(notary.CurrentMetadataCacheMaxAge.Seconds()),
MustReValidate: true}, current)
require.Equal(t,
utils.PublicCacheControl{MaxAgeInSeconds: int(notary.ConsistentMetadataCacheMaxAge.Seconds())}, consistent)
current, consistent, err = getCacheConfig(configure(valid))
require.NoError(t, err)
require.Equal(t, utils.NoCacheControl{}, current)
require.Equal(t, utils.PublicCacheControl{MaxAgeInSeconds: 31536000}, consistent)
for _, invalid := range invalids {
_, _, err := getCacheConfig(configure(invalid))
require.Error(t, err)
}
}
func TestGetGUNPRefixes(t *testing.T) {
valids := map[string][]string{
`{}`: nil,
`{"repositories": {"gun_prefixes": []}}`: nil,
`{"repositories": {}}`: nil,
`{"repositories": {"gun_prefixes": ["hello/"]}}`: {"hello/"},
}
invalids := []string{
`{"repositories": {"gun_prefixes": " / "}}`,
`{"repositories": {"gun_prefixes": "nope"}}`,
`{"repositories": {"gun_prefixes": ["nope"]}}`,
`{"repositories": {"gun_prefixes": ["/nope/"]}}`,
`{"repositories": {"gun_prefixes": ["../nope/"]}}`,
}
for valid, expected := range valids {
prefixes, err := getRequiredGunPrefixes(configure(valid))
require.NoError(t, err)
require.Equal(t, expected, prefixes)
}
for _, invalid := range invalids {
_, err := getRequiredGunPrefixes(configure(invalid))
require.Error(t, err, "expected error with %s", invalid)
}
}
// For sanity, make sure we can always parse the sample config
func TestSampleConfig(t *testing.T) {
var registerCalled = 0
_, _, err := parseServerConfig("../../fixtures/server-config.json", fakeRegisterer(&registerCalled))
require.NoError(t, err)
// once for the DB, once for the trust service
require.Equal(t, registerCalled, 2)
}
func TestSignalHandle(t *testing.T) {
f, err := os.Create("/tmp/testSignalHandle.json")
defer os.Remove(f.Name())
require.NoError(t, err)
f.WriteString(`{"logging": {"level": "info"}}`)
v := viper.New()
utils.SetupViper(v, "envPrefix")
err = utils.ParseViper(v, f.Name())
require.NoError(t, err)
// Info + SIGUSR1 -> Debug
signalHandle(syscall.SIGUSR1)
require.Equal(t, logrus.GetLevel(), logrus.DebugLevel)
// Debug + SIGUSR1 -> Debug
signalHandle(syscall.SIGUSR1)
require.Equal(t, logrus.GetLevel(), logrus.DebugLevel)
// Debug + SIGUSR2-> Info
signalHandle(syscall.SIGUSR2)
require.Equal(t, logrus.GetLevel(), logrus.InfoLevel)
}


@ -1,251 +0,0 @@
package main
import (
"crypto/tls"
"errors"
"fmt"
"net"
"net/http"
"os"
"strings"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/health"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/notary"
"github.com/docker/notary/cryptoservice"
"github.com/docker/notary/passphrase"
pb "github.com/docker/notary/proto"
"github.com/docker/notary/signer"
"github.com/docker/notary/signer/api"
"github.com/docker/notary/signer/keydbstore"
"github.com/docker/notary/storage"
"github.com/docker/notary/storage/rethinkdb"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
tufutils "github.com/docker/notary/tuf/utils"
"github.com/docker/notary/utils"
"github.com/spf13/viper"
"gopkg.in/dancannon/gorethink.v2"
)
const (
envPrefix = "NOTARY_SIGNER"
defaultAliasEnv = "DEFAULT_ALIAS"
)
func parseSignerConfig(configFilePath string) (signer.Config, error) {
config := viper.New()
utils.SetupViper(config, envPrefix)
// parse viper config
if err := utils.ParseViper(config, configFilePath); err != nil {
return signer.Config{}, err
}
// default is error level
lvl, err := utils.ParseLogLevel(config, logrus.ErrorLevel)
if err != nil {
return signer.Config{}, err
}
logrus.SetLevel(lvl)
// parse bugsnag config
bugsnagConf, err := utils.ParseBugsnag(config)
if err != nil {
return signer.Config{}, err
}
utils.SetUpBugsnag(bugsnagConf)
// parse server config
httpAddr, grpcAddr, tlsConfig, err := getAddrAndTLSConfig(config)
if err != nil {
return signer.Config{}, err
}
// setup the cryptoservices
cryptoServices, err := setUpCryptoservices(config, []string{notary.MySQLBackend, notary.MemoryBackend, notary.RethinkDBBackend})
if err != nil {
return signer.Config{}, err
}
return signer.Config{
HTTPAddr: httpAddr,
GRPCAddr: grpcAddr,
TLSConfig: tlsConfig,
CryptoServices: cryptoServices,
}, nil
}
func getEnv(env string) string {
v := viper.New()
utils.SetupViper(v, envPrefix)
return v.GetString(strings.ToUpper(env))
}
func passphraseRetriever(keyName, alias string, createNew bool, attempts int) (passphrase string, giveup bool, err error) {
passphrase = getEnv(alias)
if passphrase == "" {
return "", false, errors.New("expected env variable to not be empty: " + alias)
}
return passphrase, false, nil
}
// Reads the configuration file for storage setup, and sets up the cryptoservice
// mapping
func setUpCryptoservices(configuration *viper.Viper, allowedBackends []string) (
signer.CryptoServiceIndex, error) {
backend := configuration.GetString("storage.backend")
if !tufutils.StrSliceContains(allowedBackends, backend) {
return nil, fmt.Errorf("%s is not an allowed backend, must be one of: %s", backend, allowedBackends)
}
var keyStore trustmanager.KeyStore
switch backend {
case notary.MemoryBackend:
keyStore = trustmanager.NewKeyMemoryStore(
passphrase.ConstantRetriever("memory-db-ignore"))
case notary.RethinkDBBackend:
var sess *gorethink.Session
storeConfig, err := utils.ParseRethinkDBStorage(configuration)
if err != nil {
return nil, err
}
defaultAlias, err := getDefaultAlias(configuration)
if err != nil {
return nil, err
}
tlsOpts := tlsconfig.Options{
CAFile: storeConfig.CA,
CertFile: storeConfig.Cert,
KeyFile: storeConfig.Key,
}
if doBootstrap {
sess, err = rethinkdb.AdminConnection(tlsOpts, storeConfig.Source)
} else {
sess, err = rethinkdb.UserConnection(tlsOpts, storeConfig.Source, storeConfig.Username, storeConfig.Password)
}
if err != nil {
return nil, fmt.Errorf("Error starting %s driver: %s", backend, err.Error())
}
s := keydbstore.NewRethinkDBKeyStore(storeConfig.DBName, storeConfig.Username, storeConfig.Password, passphraseRetriever, defaultAlias, sess)
health.RegisterPeriodicFunc("DB operational", time.Minute, s.CheckHealth)
keyStore = s
case notary.MySQLBackend, notary.SQLiteBackend:
storeConfig, err := utils.ParseSQLStorage(configuration)
if err != nil {
return nil, err
}
defaultAlias, err := getDefaultAlias(configuration)
if err != nil {
return nil, err
}
dbStore, err := keydbstore.NewKeyDBStore(
passphraseRetriever, defaultAlias, storeConfig.Backend, storeConfig.Source)
if err != nil {
return nil, fmt.Errorf("failed to create a new keydbstore: %v", err)
}
health.RegisterPeriodicFunc(
"DB operational", time.Minute, dbStore.HealthCheck)
keyStore = dbStore
}
if doBootstrap {
err := bootstrap(keyStore)
if err != nil {
logrus.Fatal(err.Error())
}
os.Exit(0)
}
cryptoService := cryptoservice.NewCryptoService(keyStore)
cryptoServices := make(signer.CryptoServiceIndex)
cryptoServices[data.ED25519Key] = cryptoService
cryptoServices[data.ECDSAKey] = cryptoService
return cryptoServices, nil
}
func getDefaultAlias(configuration *viper.Viper) (string, error) {
defaultAlias := configuration.GetString("storage.default_alias")
if defaultAlias == "" {
// backwards compatibility - support this environment variable
defaultAlias = configuration.GetString(defaultAliasEnv)
}
if defaultAlias == "" {
return "", fmt.Errorf("must provide a default alias for the key DB")
}
logrus.Debug("Default Alias: ", defaultAlias)
return defaultAlias, nil
}
// set up the GRPC server
func setupGRPCServer(grpcAddr string, tlsConfig *tls.Config,
cryptoServices signer.CryptoServiceIndex) (*grpc.Server, net.Listener, error) {
//RPC server setup
kms := &api.KeyManagementServer{CryptoServices: cryptoServices,
HealthChecker: health.CheckStatus}
ss := &api.SignerServer{CryptoServices: cryptoServices,
HealthChecker: health.CheckStatus}
lis, err := net.Listen("tcp", grpcAddr)
if err != nil {
return nil, nil, fmt.Errorf("grpc server failed to listen on %s: %v",
grpcAddr, err)
}
creds := credentials.NewTLS(tlsConfig)
opts := []grpc.ServerOption{grpc.Creds(creds)}
grpcServer := grpc.NewServer(opts...)
pb.RegisterKeyManagementServer(grpcServer, kms)
pb.RegisterSignerServer(grpcServer, ss)
return grpcServer, lis, nil
}
func setupHTTPServer(httpAddr string, tlsConfig *tls.Config,
cryptoServices signer.CryptoServiceIndex) *http.Server {
return &http.Server{
Addr: httpAddr,
Handler: api.Handlers(cryptoServices),
TLSConfig: tlsConfig,
}
}
func getAddrAndTLSConfig(configuration *viper.Viper) (string, string, *tls.Config, error) {
tlsConfig, err := utils.ParseServerTLS(configuration, true)
if err != nil {
return "", "", nil, fmt.Errorf("unable to set up TLS: %s", err.Error())
}
grpcAddr := configuration.GetString("server.grpc_addr")
if grpcAddr == "" {
return "", "", nil, fmt.Errorf("grpc listen address required for server")
}
httpAddr := configuration.GetString("server.http_addr")
if httpAddr == "" {
return "", "", nil, fmt.Errorf("http listen address required for server")
}
return httpAddr, grpcAddr, tlsConfig, nil
}
func bootstrap(s interface{}) error {
store, ok := s.(storage.Bootstrapper)
if !ok {
return fmt.Errorf("Store does not support bootstrapping.")
}
return store.Bootstrap()
}


@ -1,88 +0,0 @@
package main
import (
_ "expvar"
"flag"
"log"
"net/http"
"os"
"github.com/Sirupsen/logrus"
"github.com/docker/notary/version"
_ "github.com/go-sql-driver/mysql"
)
const (
jsonLogFormat = "json"
debugAddr = "localhost:8080"
)
var (
debug bool
logFormat string
configFile string
doBootstrap bool
)
func init() {
// Setup flags
flag.StringVar(&configFile, "config", "", "Path to configuration file")
flag.BoolVar(&debug, "debug", false, "Show the version and exit")
flag.StringVar(&logFormat, "logf", "json", "Set the format of the logs. Only 'json' and 'logfmt' are supported at the moment.")
flag.BoolVar(&doBootstrap, "bootstrap", false, "Do any necessary setup of configured backend storage services")
// this needs to be in init so that _ALL_ logs are in the correct format
if logFormat == jsonLogFormat {
logrus.SetFormatter(new(logrus.JSONFormatter))
}
}
func main() {
flag.Usage = usage
flag.Parse()
if debug {
go debugServer(debugAddr)
}
// when the signer starts print the version for debugging and issue logs later
logrus.Infof("Version: %s, Git commit: %s", version.NotaryVersion, version.GitCommit)
signerConfig, err := parseSignerConfig(configFile)
if err != nil {
logrus.Fatal(err.Error())
}
grpcServer, lis, err := setupGRPCServer(signerConfig.GRPCAddr, signerConfig.TLSConfig, signerConfig.CryptoServices)
if err != nil {
logrus.Fatal(err.Error())
}
httpServer := setupHTTPServer(signerConfig.HTTPAddr, signerConfig.TLSConfig, signerConfig.CryptoServices)
if debug {
log.Println("RPC server listening on", signerConfig.GRPCAddr)
log.Println("HTTP server listening on", signerConfig.HTTPAddr)
}
go grpcServer.Serve(lis)
err = httpServer.ListenAndServeTLS("", "")
if err != nil {
log.Fatal("HTTPS server failed to start:", err)
}
}
func usage() {
log.Println("usage:", os.Args[0], "<config>")
flag.PrintDefaults()
}
// debugServer starts the debug server with pprof, expvar among other
// endpoints. The addr should not be exposed externally. For most of these to
// work, tls cannot be enabled on the endpoint, so it is generally separate.
func debugServer(addr string) {
logrus.Infof("Debug server listening on %s", addr)
if err := http.ListenAndServe(addr, nil); err != nil {
logrus.Fatalf("error listening on debug interface: %v", err)
}
}

View File

@ -1,299 +0,0 @@
package main
import (
"bytes"
"crypto/tls"
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/docker/notary"
"github.com/docker/notary/signer"
"github.com/docker/notary/signer/keydbstore"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/testutils"
"github.com/jinzhu/gorm"
_ "github.com/mattn/go-sqlite3"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
const (
Cert = "../../fixtures/notary-signer.crt"
Key = "../../fixtures/notary-signer.key"
Root = "../../fixtures/root-ca.crt"
)
// initializes a viper object with test configuration
func configure(jsonConfig string) *viper.Viper {
config := viper.New()
config.SetConfigType("json")
config.ReadConfig(bytes.NewBuffer([]byte(jsonConfig)))
return config
}
// If the TLS configuration is invalid, an error is returned. This doesn't test
// all the cases of the TLS configuration being invalid, since it's just
// calling configuration.ParseTLSConfig - this test just makes sure the
// error is propagated.
func TestGetAddrAndTLSConfigInvalidTLS(t *testing.T) {
invalids := []string{
`{"server": {"http_addr": ":1234", "grpc_addr": ":2345"}}`,
`{"server": {
"http_addr": ":1234",
"grpc_addr": ":2345",
"tls_cert_file": "nope",
"tls_key_file": "nope"
}}`,
}
for _, configJSON := range invalids {
_, _, _, err := getAddrAndTLSConfig(configure(configJSON))
require.Error(t, err)
require.Contains(t, err.Error(), "unable to set up TLS")
}
}
// If a GRPC address is not provided, an error is returned.
func TestGetAddrAndTLSConfigNoGRPCAddr(t *testing.T) {
_, _, _, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"http_addr": ":1234",
"tls_cert_file": "%s",
"tls_key_file": "%s"
}
}`, Cert, Key)))
require.Error(t, err)
require.Contains(t, err.Error(), "grpc listen address required for server")
}
// If an HTTP address is not provided, an error is returned.
func TestGetAddrAndTLSConfigNoHTTPAddr(t *testing.T) {
_, _, _, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"grpc_addr": ":1234",
"tls_cert_file": "%s",
"tls_key_file": "%s"
}
}`, Cert, Key)))
require.Error(t, err)
require.Contains(t, err.Error(), "http listen address required for server")
}
// Success parsing a valid TLS config, HTTP address, and GRPC address.
func TestGetAddrAndTLSConfigSuccess(t *testing.T) {
httpAddr, grpcAddr, tlsConf, err := getAddrAndTLSConfig(configure(fmt.Sprintf(`{
"server": {
"http_addr": ":2345",
"grpc_addr": ":1234",
"tls_cert_file": "%s",
"tls_key_file": "%s"
}
}`, Cert, Key)))
require.NoError(t, err)
require.Equal(t, ":2345", httpAddr)
require.Equal(t, ":1234", grpcAddr)
require.NotNil(t, tlsConf)
}
// If a default alias is not provided to a DB backend, an error is returned.
func TestSetupCryptoServicesDBStoreNoDefaultAlias(t *testing.T) {
tmpFile, err := ioutil.TempFile("/tmp", "sqlite3")
require.NoError(t, err)
tmpFile.Close()
defer os.Remove(tmpFile.Name())
_, err = setUpCryptoservices(
configure(fmt.Sprintf(
`{"storage": {"backend": "%s", "db_url": "%s"}}`,
notary.SQLiteBackend, tmpFile.Name())),
[]string{notary.SQLiteBackend})
require.Error(t, err)
require.Contains(t, err.Error(), "must provide a default alias for the key DB")
}
// If a default alias is not provided to a rethinkdb backend, an error is returned.
func TestSetupCryptoServicesRethinkDBStoreNoDefaultAlias(t *testing.T) {
_, err := setUpCryptoservices(
configure(fmt.Sprintf(
`{"storage": {
"backend": "%s",
"db_url": "host:port",
"tls_ca_file": "/tls/ca.pem",
"client_cert_file": "/tls/cert.pem",
"client_key_file": "/tls/key.pem",
"database": "rethinkdbtest",
"username": "signer",
"password": "password"
}
}`,
notary.RethinkDBBackend)),
[]string{notary.RethinkDBBackend})
require.Error(t, err)
require.Contains(t, err.Error(), "must provide a default alias for the key DB")
}
func TestSetupCryptoServicesRethinkDBStoreConnectionFails(t *testing.T) {
// We don't have a rethink instance up, so the Connection() call will fail
_, err := setUpCryptoservices(
configure(fmt.Sprintf(
`{"storage": {
"backend": "%s",
"db_url": "host:port",
"tls_ca_file": "../../fixtures/rethinkdb/ca.pem",
"client_cert_file": "../../fixtures/rethinkdb/cert.pem",
"client_key_file": "../../fixtures/rethinkdb/key.pem",
"database": "rethinkdbtest",
"username": "signer",
"password": "password"
},
"default_alias": "timestamp"
}`,
notary.RethinkDBBackend)),
[]string{notary.RethinkDBBackend})
require.Error(t, err)
require.Contains(t, err.Error(), "no connections were made when creating the session")
}
// If a default alias *is* provided to a valid DB backend, a valid
// CryptoService is returned. (This depends on ParseStorage, which is tested
// separately, so this doesn't test all the possible cases of storage
// success/failure).
func TestSetupCryptoServicesDBStoreSuccess(t *testing.T) {
tmpFile, err := ioutil.TempFile("/tmp", "sqlite3")
require.NoError(t, err)
tmpFile.Close()
defer os.Remove(tmpFile.Name())
// Ensure that the private_key table exists
db, err := gorm.Open("sqlite3", tmpFile.Name())
require.NoError(t, err)
var (
gormKey = keydbstore.GormPrivateKey{}
count int
)
db.CreateTable(&gormKey)
db.Model(&gormKey).Count(&count)
require.Equal(t, 0, count)
cryptoServices, err := setUpCryptoservices(
configure(fmt.Sprintf(
`{"storage": {"backend": "%s", "db_url": "%s"},
"default_alias": "timestamp"}`,
notary.SQLiteBackend, tmpFile.Name())),
[]string{notary.SQLiteBackend})
require.NoError(t, err)
require.Len(t, cryptoServices, 2)
edService, ok := cryptoServices[data.ED25519Key]
require.True(t, ok)
ecService, ok := cryptoServices[data.ECDSAKey]
require.True(t, ok)
require.Equal(t, edService, ecService)
// since the keystores are not exposed by CryptoService, try creating
// a key and seeing if it is in the sqlite DB.
os.Setenv("NOTARY_SIGNER_TIMESTAMP", "password")
defer os.Unsetenv("NOTARY_SIGNER_TIMESTAMP")
_, err = ecService.Create("timestamp", "", data.ECDSAKey)
require.NoError(t, err)
db.Model(&gormKey).Count(&count)
require.Equal(t, 1, count)
}
// If a memory backend is specified, then a default alias is not needed, and
// a valid CryptoService is returned.
func TestSetupCryptoServicesMemoryStore(t *testing.T) {
config := configure(fmt.Sprintf(`{"storage": {"backend": "%s"}}`,
notary.MemoryBackend))
cryptoServices, err := setUpCryptoservices(config,
[]string{notary.SQLiteBackend, notary.MemoryBackend})
require.NoError(t, err)
require.Len(t, cryptoServices, 2)
edService, ok := cryptoServices[data.ED25519Key]
require.True(t, ok)
ecService, ok := cryptoServices[data.ECDSAKey]
require.True(t, ok)
require.Equal(t, edService, ecService)
// since the keystores are not exposed by CryptoService, try creating
// and getting the key
pubKey, err := ecService.Create("", "", data.ECDSAKey)
require.NoError(t, err)
privKey, _, err := ecService.GetPrivateKey(pubKey.ID())
require.NoError(t, err)
require.NotNil(t, privKey)
}
func TestSetupCryptoServicesInvalidStore(t *testing.T) {
config := configure(fmt.Sprintf(`{"storage": {"backend": "%s"}}`,
"invalid_backend"))
_, err := setUpCryptoservices(config,
[]string{notary.SQLiteBackend, notary.MemoryBackend, notary.RethinkDBBackend})
require.Error(t, err)
require.Equal(t, err.Error(), fmt.Sprintf("%s is not an allowed backend, must be one of: %s", "invalid_backend", []string{notary.SQLiteBackend, notary.MemoryBackend, notary.RethinkDBBackend}))
}
func TestSetupHTTPServer(t *testing.T) {
httpServer := setupHTTPServer(":4443", nil, make(signer.CryptoServiceIndex))
require.Equal(t, ":4443", httpServer.Addr)
require.Nil(t, httpServer.TLSConfig)
}
func TestSetupGRPCServerInvalidAddress(t *testing.T) {
_, _, err := setupGRPCServer("nope", nil, make(signer.CryptoServiceIndex))
require.Error(t, err)
require.Contains(t, err.Error(), "grpc server failed to listen on nope")
}
func TestSetupGRPCServerSuccess(t *testing.T) {
tlsConf := tls.Config{InsecureSkipVerify: true}
grpcServer, lis, err := setupGRPCServer(":7899", &tlsConf,
make(signer.CryptoServiceIndex))
require.NoError(t, err)
defer lis.Close()
require.Equal(t, "[::]:7899", lis.Addr().String())
require.Equal(t, "tcp", lis.Addr().Network())
require.NotNil(t, grpcServer)
}
func TestBootstrap(t *testing.T) {
var ks trustmanager.KeyStore
err := bootstrap(ks)
require.Error(t, err)
tb := &testutils.TestBootstrapper{}
err = bootstrap(tb)
require.NoError(t, err)
require.True(t, tb.Booted)
}
func TestGetEnv(t *testing.T) {
os.Setenv("NOTARY_SIGNER_TIMESTAMP", "password")
defer os.Unsetenv("NOTARY_SIGNER_TIMESTAMP")
require.Equal(t, "password", getEnv("timestamp"))
}
func TestPassphraseRetrieverInvalid(t *testing.T) {
_, _, err := passphraseRetriever("fakeKey", "fakeAlias", false, 1)
require.Error(t, err)
}
// For sanity, make sure we can always parse the sample config
func TestSampleConfig(t *testing.T) {
// We need to provide a default alias for the key DB.
//
// Generally this is done during the build process
// when using signer.Dockerfile.
os.Setenv("NOTARY_SIGNER_DEFAULT_ALIAS", "timestamp_1")
defer os.Unsetenv("NOTARY_SIGNER_DEFAULT_ALIAS")
_, err := parseSignerConfig("../../fixtures/signer-config-local.json")
require.NoError(t, err)
}

View File

@ -1,7 +0,0 @@
{
"remote_server": {
"url": "https://notary-server:4443",
"root_ca": "root-ca.crt"
}
}

View File

@ -1,299 +0,0 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"github.com/docker/notary"
notaryclient "github.com/docker/notary/client"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/utils"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var cmdDelegationTemplate = usageTemplate{
Use: "delegation",
Short: "Operates on delegations.",
Long: `Operations on TUF delegations.`,
}
var cmdDelegationListTemplate = usageTemplate{
Use: "list [ GUN ]",
Short: "Lists delegations for the Global Unique Name.",
Long: "Lists all delegations known to notary for a specific Global Unique Name.",
}
var cmdDelegationRemoveTemplate = usageTemplate{
Use: "remove [ GUN ] [ Role ] <KeyID 1> ...",
Short: "Remove KeyID(s) from the specified Role delegation.",
Long: "Remove KeyID(s) from the specified Role delegation in a specific Global Unique Name.",
}
var cmdDelegationAddTemplate = usageTemplate{
Use: "add [ GUN ] [ Role ] <X509 file path 1> ...",
Short: "Add a keys to delegation using the provided public key X509 certificates.",
Long: "Add a keys to delegation using the provided public key PEM encoded X509 certificates in a specific Global Unique Name.",
}
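// delegationCommander wires up the `notary delegation` subcommands. Example
// invocations, using a hypothetical GUN, role, and certificate file:
//   notary delegation list example.com/collection
//   notary delegation add example.com/collection targets/releases delegate.crt --paths "releases/"
//   notary delegation remove example.com/collection targets/releases <KeyID>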
type delegationCommander struct {
// these need to be set
configGetter func() (*viper.Viper, error)
retriever notary.PassRetriever
paths []string
allPaths, removeAll, forceYes bool
}
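// GetCommand builds the delegation command tree, registering the list, add,
// and remove subcommands along with their flags.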
func (d *delegationCommander) GetCommand() *cobra.Command {
cmd := cmdDelegationTemplate.ToCommand(nil)
cmd.AddCommand(cmdDelegationListTemplate.ToCommand(d.delegationsList))
cmdRemDelg := cmdDelegationRemoveTemplate.ToCommand(d.delegationRemove)
cmdRemDelg.Flags().StringSliceVar(&d.paths, "paths", nil, "List of paths to remove")
cmdRemDelg.Flags().BoolVarP(&d.forceYes, "yes", "y", false, "Answer yes to the removal question (no confirmation)")
cmdRemDelg.Flags().BoolVar(&d.allPaths, "all-paths", false, "Remove all paths from this delegation")
cmd.AddCommand(cmdRemDelg)
cmdAddDelg := cmdDelegationAddTemplate.ToCommand(d.delegationAdd)
cmdAddDelg.Flags().StringSliceVar(&d.paths, "paths", nil, "List of paths to add")
cmdAddDelg.Flags().BoolVar(&d.allPaths, "all-paths", false, "Add all paths to this delegation")
cmd.AddCommand(cmdAddDelg)
return cmd
}
// delegationsList lists all the delegations for a particular GUN
func (d *delegationCommander) delegationsList(cmd *cobra.Command, args []string) error {
if len(args) != 1 {
cmd.Usage()
return fmt.Errorf(
"Please provide a Global Unique Name as an argument to list")
}
config, err := d.configGetter()
if err != nil {
return err
}
gun := args[0]
rt, err := getTransport(config, gun, true)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// initialize repo with transport to get latest state of the world before listing delegations
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, d.retriever, trustPin)
if err != nil {
return err
}
delegationRoles, err := nRepo.GetDelegationRoles()
if err != nil {
return fmt.Errorf("Error retrieving delegation roles for repository %s: %v", gun, err)
}
cmd.Println("")
prettyPrintRoles(delegationRoles, cmd.Out(), "delegations")
cmd.Println("")
return nil
}
// delegationRemove removes a public key from a specific role in a GUN
func (d *delegationCommander) delegationRemove(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
cmd.Usage()
return fmt.Errorf("must specify the Global Unique Name and the role of the delegation along with optional keyIDs and/or a list of paths to remove")
}
config, err := d.configGetter()
if err != nil {
return err
}
gun := args[0]
role := args[1]
// Check if role is valid delegation name before requiring any user input
if !data.IsDelegation(role) {
return fmt.Errorf("invalid delegation name %s", role)
}
// If we're only given the gun and the role, attempt to remove all data for this delegation
if len(args) == 2 && d.paths == nil && !d.allPaths {
d.removeAll = true
}
keyIDs := []string{}
if len(args) > 2 {
keyIDs = args[2:]
}
// If the user passes --all-paths, don't use any of the passed in --paths
if d.allPaths {
d.paths = nil
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// no online operations are performed by remove so the transport argument
// should be nil
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, d.retriever, trustPin)
if err != nil {
return err
}
if d.removeAll {
cmd.Println("\nAre you sure you want to remove all data for this delegation? (yes/no)")
// Ask for confirmation before force removing delegation
if !d.forceYes {
confirmed := askConfirm(os.Stdin)
if !confirmed {
fatalf("Aborting action.")
}
} else {
cmd.Println("Confirmed `yes` from flag")
}
// Delete the entire delegation
err = nRepo.RemoveDelegationRole(role)
if err != nil {
return fmt.Errorf("failed to remove delegation: %v", err)
}
} else {
if d.allPaths {
err = nRepo.ClearDelegationPaths(role)
if err != nil {
return fmt.Errorf("failed to remove delegation: %v", err)
}
}
// Remove any keys or paths that we passed in
err = nRepo.RemoveDelegationKeysAndPaths(role, keyIDs, d.paths)
if err != nil {
return fmt.Errorf("failed to remove delegation: %v", err)
}
}
cmd.Println("")
if d.removeAll {
cmd.Printf("Forced removal (including all keys and paths) of delegation role %s to repository \"%s\" staged for next publish.\n", role, gun)
} else {
removingItems := ""
if len(keyIDs) > 0 {
removingItems = removingItems + fmt.Sprintf("with keys %s, ", keyIDs)
}
if d.allPaths {
removingItems = removingItems + "with all paths, "
}
if d.paths != nil {
removingItems = removingItems + fmt.Sprintf("with paths [%s], ", prettyPrintPaths(d.paths))
}
cmd.Printf("Removal of delegation role %s %sto repository \"%s\" staged for next publish.\n", role, removingItems, gun)
}
cmd.Println("")
return nil
}
// delegationAdd creates a new delegation by adding a public key from a certificate to a specific role in a GUN
func (d *delegationCommander) delegationAdd(cmd *cobra.Command, args []string) error {
// We must have at least the gun and role name, and at least one key or path (or the --all-paths flag) to add
if len(args) < 2 || len(args) < 3 && d.paths == nil && !d.allPaths {
cmd.Usage()
return fmt.Errorf("must specify the Global Unique Name and the role of the delegation along with the public key certificate paths and/or a list of paths to add")
}
config, err := d.configGetter()
if err != nil {
return err
}
gun := args[0]
role := args[1]
pubKeys := []data.PublicKey{}
if len(args) > 2 {
pubKeyPaths := args[2:]
for _, pubKeyPath := range pubKeyPaths {
// Read public key bytes from PEM file
pubKeyBytes, err := ioutil.ReadFile(pubKeyPath)
if err != nil {
return fmt.Errorf("unable to read public key from file: %s", pubKeyPath)
}
// Parse PEM bytes into type PublicKey
pubKey, err := trustmanager.ParsePEMPublicKey(pubKeyBytes)
if err != nil {
return fmt.Errorf("unable to parse valid public key certificate from PEM file %s: %v", pubKeyPath, err)
}
pubKeys = append(pubKeys, pubKey)
}
}
for _, path := range d.paths {
if path == "" {
d.allPaths = true
break
}
}
// If the user passes --all-paths (or gave the "" path in --paths), give the "" path
if d.allPaths {
d.paths = []string{""}
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// no online operations are performed by add so the transport argument
// should be nil
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, d.retriever, trustPin)
if err != nil {
return err
}
// Add the delegation to the repository
err = nRepo.AddDelegation(role, pubKeys, d.paths)
if err != nil {
return fmt.Errorf("failed to create delegation: %v", err)
}
// Make keyID slice for better CLI print
pubKeyIDs := []string{}
for _, pubKey := range pubKeys {
pubKeyID, err := utils.CanonicalKeyID(pubKey)
if err != nil {
return err
}
pubKeyIDs = append(pubKeyIDs, pubKeyID)
}
cmd.Println("")
addingItems := ""
if len(pubKeyIDs) > 0 {
addingItems = addingItems + fmt.Sprintf("with keys %s, ", pubKeyIDs)
}
if d.paths != nil || d.allPaths {
addingItems = addingItems + fmt.Sprintf("with paths [%s], ", prettyPrintPaths(d.paths))
}
cmd.Printf(
"Addition of delegation role %s %sto repository \"%s\" staged for next publish.\n",
role, addingItems, gun)
cmd.Println("")
return nil
}

View File

@ -1,189 +0,0 @@
package main
import (
"crypto/rand"
"crypto/x509"
"io/ioutil"
"os"
"testing"
"time"
"github.com/docker/notary/cryptoservice"
"github.com/docker/notary/trustmanager"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
var testTrustDir = "trust_dir"
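// setup returns a delegationCommander whose configuration points at the test
// trust directory and which has no passphrase retriever.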
func setup() *delegationCommander {
return &delegationCommander{
configGetter: func() (*viper.Viper, error) {
mainViper := viper.New()
mainViper.Set("trust_dir", testTrustDir)
return mainViper, nil
},
retriever: nil,
}
}
func TestAddInvalidDelegationName(t *testing.T) {
// Cleanup after test
defer os.RemoveAll(testTrustDir)
// Setup certificate
tempFile, err := ioutil.TempFile("/tmp", "pemfile")
require.NoError(t, err)
cert, _, err := generateValidTestCert()
require.NoError(t, err)
_, err = tempFile.Write(trustmanager.CertToPEM(cert))
require.NoError(t, err)
tempFile.Close()
defer os.Remove(tempFile.Name())
// Setup commander
commander := setup()
// Should error due to invalid delegation name (should be prefixed by "targets/")
err = commander.delegationAdd(commander.GetCommand(), []string{"gun", "INVALID_NAME", tempFile.Name()})
require.Error(t, err)
}
func TestAddInvalidDelegationCert(t *testing.T) {
// Cleanup after test
defer os.RemoveAll(testTrustDir)
// Setup certificate
tempFile, err := ioutil.TempFile("/tmp", "pemfile")
require.NoError(t, err)
cert, _, err := generateExpiredTestCert()
require.NoError(t, err)
_, err = tempFile.Write(trustmanager.CertToPEM(cert))
require.NoError(t, err)
tempFile.Close()
defer os.Remove(tempFile.Name())
// Setup commander
commander := setup()
// Should error due to expired cert
err = commander.delegationAdd(commander.GetCommand(), []string{"gun", "targets/delegation", tempFile.Name(), "--paths", "path"})
require.Error(t, err)
}
func TestAddInvalidShortPubkeyCert(t *testing.T) {
// Cleanup after test
defer os.RemoveAll(testTrustDir)
// Setup certificate
tempFile, err := ioutil.TempFile("/tmp", "pemfile")
require.NoError(t, err)
cert, _, err := generateShortRSAKeyTestCert()
require.NoError(t, err)
_, err = tempFile.Write(trustmanager.CertToPEM(cert))
require.NoError(t, err)
tempFile.Close()
defer os.Remove(tempFile.Name())
// Setup commander
commander := setup()
// Should error due to short RSA key
err = commander.delegationAdd(commander.GetCommand(), []string{"gun", "targets/delegation", tempFile.Name(), "--paths", "path"})
require.Error(t, err)
}
func TestRemoveInvalidDelegationName(t *testing.T) {
// Cleanup after test
defer os.RemoveAll(testTrustDir)
// Setup commander
commander := setup()
// Should error due to invalid delegation name (should be prefixed by "targets/")
err := commander.delegationRemove(commander.GetCommand(), []string{"gun", "INVALID_NAME", "fake_key_id1", "fake_key_id2"})
require.Error(t, err)
}
func TestRemoveAllInvalidDelegationName(t *testing.T) {
// Cleanup after test
defer os.RemoveAll(testTrustDir)
// Setup commander
commander := setup()
// Should error due to invalid delegation name (should be prefixed by "targets/")
err := commander.delegationRemove(commander.GetCommand(), []string{"gun", "INVALID_NAME"})
require.Error(t, err)
}
func TestAddInvalidNumArgs(t *testing.T) {
// Setup commander
commander := setup()
// Should error due to invalid number of args (2 instead of 3)
err := commander.delegationAdd(commander.GetCommand(), []string{"not", "enough"})
require.Error(t, err)
}
func TestListInvalidNumArgs(t *testing.T) {
// Setup commander
commander := setup()
// Should error due to invalid number of args (0 instead of 1)
err := commander.delegationsList(commander.GetCommand(), []string{})
require.Error(t, err)
}
func TestRemoveInvalidNumArgs(t *testing.T) {
// Setup commander
commander := setup()
// Should error due to invalid number of args (1 instead of 2)
err := commander.delegationRemove(commander.GetCommand(), []string{"notenough"})
require.Error(t, err)
}
func generateValidTestCert() (*x509.Certificate, string, error) {
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
if err != nil {
return nil, "", err
}
keyID := privKey.ID()
startTime := time.Now()
endTime := startTime.AddDate(10, 0, 0)
cert, err := cryptoservice.GenerateCertificate(privKey, "gun", startTime, endTime)
if err != nil {
return nil, "", err
}
return cert, keyID, nil
}
func generateExpiredTestCert() (*x509.Certificate, string, error) {
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
if err != nil {
return nil, "", err
}
keyID := privKey.ID()
// Set to Unix time 0 start time, valid for one more day
startTime := time.Unix(0, 0)
endTime := startTime.AddDate(0, 0, 1)
cert, err := cryptoservice.GenerateCertificate(privKey, "gun", startTime, endTime)
if err != nil {
return nil, "", err
}
return cert, keyID, nil
}
func generateShortRSAKeyTestCert() (*x509.Certificate, string, error) {
// 1024 bits is too short
privKey, err := trustmanager.GenerateRSAKey(rand.Reader, 1024)
if err != nil {
return nil, "", err
}
keyID := privKey.ID()
startTime := time.Now()
endTime := startTime.AddDate(10, 0, 0)
cert, err := cryptoservice.GenerateCertificate(privKey, "gun", startTime, endTime)
if err != nil {
return nil, "", err
}
return cert, keyID, nil
}

View File

@ -1,30 +0,0 @@
// +build !pkcs11
package main
import (
"testing"
"github.com/docker/notary"
"github.com/docker/notary/passphrase"
"github.com/spf13/cobra"
)
func init() {
NewNotaryCommand = func() *cobra.Command {
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever(testPassphrase) },
}
return commander.GetCommand()
}
}
func rootOnHardware() bool {
return false
}
// Per-test set up that is a no-op
func setUp(t *testing.T) {}
// no-op
func verifyRootKeyOnHardware(t *testing.T, rootKeyID string) {}

View File

@ -1,72 +0,0 @@
// +build pkcs11
package main
import (
"testing"
"github.com/docker/notary"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustmanager/yubikey"
"github.com/docker/notary/tuf/data"
"github.com/spf13/cobra"
"github.com/stretchr/testify/require"
)
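// _retriever is the passphrase retriever used by the pkcs11 test helpers.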
var _retriever notary.PassRetriever
func init() {
yubikey.SetYubikeyKeyMode(yubikey.KeymodeNone)
regRetriever := passphrase.PromptRetriever()
// assign the package-level _retriever so the test helpers below can use it
_retriever = func(k, a string, c bool, n int) (string, bool, error) {
if k == "Yubikey" {
return regRetriever(k, a, c, n)
}
}
return testPassphrase, false, nil
}
// best effort at removing keys here, so nil is fine
s, err := yubikey.NewYubiStore(nil, _retriever)
if err == nil {
for k := range s.ListKeys() {
s.RemoveKey(k)
}
}
NewNotaryCommand = func() *cobra.Command {
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return _retriever },
}
return commander.GetCommand()
}
}
var rootOnHardware = yubikey.IsAccessible
// Per-test set up deletes all keys on the yubikey
func setUp(t *testing.T) {
// we're just removing keys here, so nil is fine
s, err := yubikey.NewYubiStore(nil, _retriever)
require.NoError(t, err)
for k := range s.ListKeys() {
err := s.RemoveKey(k)
require.NoError(t, err)
}
}
// ensures that the root is actually on the yubikey - this makes sure the
// commands are hooked up to interact with the yubikey, rather than just writing
// files on disk
func verifyRootKeyOnHardware(t *testing.T, rootKeyID string) {
// do not bother verifying if there is no yubikey available
if yubikey.IsAccessible() {
// we're just getting keys here, so nil is fine
s, err := yubikey.NewYubiStore(nil, _retriever)
require.NoError(t, err)
privKey, role, err := s.GetKey(rootKeyID)
require.NoError(t, err)
require.NotNil(t, privKey)
require.Equal(t, data.CanonicalRootRole, role)
}
}

File diff suppressed because it is too large

View File

@ -1,634 +0,0 @@
package main
import (
"archive/zip"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
notaryclient "github.com/docker/notary/client"
"github.com/docker/notary/cryptoservice"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary"
"github.com/docker/notary/tuf/data"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var cmdKeyTemplate = usageTemplate{
Use: "key",
Short: "Operates on keys.",
Long: `Operations on private keys.`,
}
var cmdKeyListTemplate = usageTemplate{
Use: "list",
Short: "Lists keys.",
Long: "Lists all keys known to notary.",
}
var cmdRotateKeyTemplate = usageTemplate{
Use: "rotate [ GUN ] [ key role ]",
Short: "Rotate a signing (non-root) key of the given type for the given Globally Unique Name and role.",
Long: `Generates a new key for the given Globally Unique Name and role (one of "snapshot", "targets", "root", or "timestamp"). If rotating to a server-managed key, a new key is requested from the server rather than generated. If the generation or key request is successful, the key rotation is immediately published. No other changes, even if they are staged, will be published.`,
}
var cmdKeyGenerateRootKeyTemplate = usageTemplate{
Use: "generate [ algorithm ]",
Short: "Generates a new root key with a given algorithm.",
Long: "Generates a new root key with a given algorithm. If hardware key storage (e.g. a Yubikey) is available, the key will be stored both on hardware and on disk (so that it can be backed up). Please make sure to back up and then remove this on-key disk immediately afterwards.",
}
var cmdKeysBackupTemplate = usageTemplate{
Use: "backup [ zipfilename ]",
Short: "Backs up all your on-disk keys to a ZIP file.",
Long: "Backs up all of your accessible of keys. The keys are reencrypted with a new passphrase. The output is a ZIP file. If the --gun option is passed, only signing keys and no root keys will be backed up. Does not work on keys that are only in hardware (e.g. Yubikeys).",
}
var cmdKeyExportTemplate = usageTemplate{
Use: "export [ keyID ] [ pemfilename ]",
Short: "Export a private key on disk to a PEM file.",
Long: "Exports a single private key on disk, without reencrypting. The output is a PEM file. Does not work on keys that are only in hardware (e.g. Yubikeys).",
}
var cmdKeysRestoreTemplate = usageTemplate{
Use: "restore [ zipfilename ]",
Short: "Restore multiple keys from a ZIP file.",
Long: "Restores one or more keys from a ZIP file. If hardware key storage (e.g. a Yubikey) is available, root keys will be imported into the hardware, but not backed up to disk in the same location as the other, non-root keys.",
}
var cmdKeyImportTemplate = usageTemplate{
Use: "import [ pemfilename ]",
Short: "Imports a key from a PEM file.",
Long: "Imports a single key from a PEM file. If a hardware key storage (e.g. Yubikey) is available, the root key will be imported into the hardware but not backed up on disk again.",
}
var cmdKeyRemoveTemplate = usageTemplate{
Use: "remove [ keyID ]",
Short: "Removes the key with the given keyID.",
Long: "Removes the key with the given keyID. If the key is stored in more than one location, you will be asked which one to remove.",
}
var cmdKeyPasswdTemplate = usageTemplate{
Use: "passwd [ keyID ]",
Short: "Changes the passphrase for the key with the given keyID.",
Long: "Changes the passphrase for the key with the given keyID. Will require validation of the old passphrase.",
}
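// keyCommander wires up the `notary key` subcommands. Example invocations,
// using a hypothetical GUN, key ID, and file names:
//   notary key list
//   notary key generate ecdsa
//   notary key rotate example.com/collection snapshot -r
//   notary key backup backup.zip -g example.com/collection
//   notary key export <keyID> exported.pem
//   notary key passwd <keyID>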
type keyCommander struct {
// these need to be set
configGetter func() (*viper.Viper, error)
getRetriever func() notary.PassRetriever
// these are for command line parsing - no need to set
keysExportChangePassphrase bool
keysExportGUN string
keysImportGUN string
keysImportRole string
rotateKeyRole string
rotateKeyServerManaged bool
input io.Reader
}
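// GetCommand builds the key command tree, registering the list, generate,
// restore, import, remove, passwd, backup, export, and rotate subcommands
// along with their flags.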
func (k *keyCommander) GetCommand() *cobra.Command {
cmd := cmdKeyTemplate.ToCommand(nil)
cmd.AddCommand(cmdKeyListTemplate.ToCommand(k.keysList))
cmd.AddCommand(cmdKeyGenerateRootKeyTemplate.ToCommand(k.keysGenerateRootKey))
cmd.AddCommand(cmdKeysRestoreTemplate.ToCommand(k.keysRestore))
cmdKeysImport := cmdKeyImportTemplate.ToCommand(k.keysImport)
cmdKeysImport.Flags().StringVarP(
&k.keysImportGUN, "gun", "g", "", "Globally Unique Name to import key to")
cmdKeysImport.Flags().StringVarP(
&k.keysImportRole, "role", "r", "", "Role to import key to (if not in PEM headers)")
cmd.AddCommand(cmdKeysImport)
cmd.AddCommand(cmdKeyRemoveTemplate.ToCommand(k.keyRemove))
cmd.AddCommand(cmdKeyPasswdTemplate.ToCommand(k.keyPassphraseChange))
cmdKeysBackup := cmdKeysBackupTemplate.ToCommand(k.keysBackup)
cmdKeysBackup.Flags().StringVarP(
&k.keysExportGUN, "gun", "g", "", "Globally Unique Name to export keys for")
cmd.AddCommand(cmdKeysBackup)
cmdKeyExport := cmdKeyExportTemplate.ToCommand(k.keysExport)
cmdKeyExport.Flags().BoolVarP(
&k.keysExportChangePassphrase, "change-passphrase", "p", false,
"Set a new passphrase for the key being exported")
cmd.AddCommand(cmdKeyExport)
cmdRotateKey := cmdRotateKeyTemplate.ToCommand(k.keysRotate)
cmdRotateKey.Flags().BoolVarP(&k.rotateKeyServerManaged, "server-managed", "r",
false, "Signing and key management will be handled by the remote server "+
"(no key will be generated or stored locally). "+
"Required for timestamp role, optional for snapshot role")
cmd.AddCommand(cmdRotateKey)
return cmd
}
func (k *keyCommander) keysList(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
cmd.Usage()
return fmt.Errorf("")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
cmd.Println("")
prettyPrintKeys(ks, cmd.Out())
cmd.Println("")
return nil
}
func (k *keyCommander) keysGenerateRootKey(cmd *cobra.Command, args []string) error {
// We require one or no arguments (since we have a default value), but if the
// user passes in more than one argument, we error out.
if len(args) > 1 {
cmd.Usage()
return fmt.Errorf(
"Please provide only one Algorithm as an argument to generate (rsa, ecdsa)")
}
// If no param is given to generate, generates an ecdsa key by default
algorithm := data.ECDSAKey
// If we were provided an argument lets attempt to use it as an algorithm
if len(args) > 0 {
algorithm = args[0]
}
allowedCiphers := map[string]bool{
data.ECDSAKey: true,
data.RSAKey: true,
}
if !allowedCiphers[strings.ToLower(algorithm)] {
return fmt.Errorf("Algorithm not allowed, possible values are: RSA, ECDSA")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, true)
if err != nil {
return err
}
cs := cryptoservice.NewCryptoService(ks...)
pubKey, err := cs.Create(data.CanonicalRootRole, "", algorithm)
if err != nil {
return fmt.Errorf("Failed to create a new root key: %v", err)
}
cmd.Printf("Generated new %s root key with keyID: %s\n", algorithm, pubKey.ID())
return nil
}
// keysBackup exports a collection of keys to a ZIP file
func (k *keyCommander) keysBackup(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify output filename for export")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, false, false)
if err != nil {
return err
}
exportFilename := args[0]
cs := cryptoservice.NewCryptoService(ks...)
exportFile, err := os.Create(exportFilename)
if err != nil {
return fmt.Errorf("Error creating output file: %v", err)
}
// Must use a different passphrase retriever to avoid caching the
// unlocking passphrase and reusing that.
exportRetriever := k.getRetriever()
if k.keysExportGUN != "" {
err = cs.ExportKeysByGUN(exportFile, k.keysExportGUN, exportRetriever)
} else {
err = cs.ExportAllKeys(exportFile, exportRetriever)
}
exportFile.Close()
if err != nil {
os.Remove(exportFilename)
return fmt.Errorf("Error exporting keys: %v", err)
}
return nil
}
// keysExport exports a key by ID to a PEM file
func (k *keyCommander) keysExport(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
cmd.Usage()
return fmt.Errorf("Must specify key ID and output filename for export")
}
keyID := args[0]
exportFilename := args[1]
if len(keyID) != notary.Sha256HexSize {
return fmt.Errorf("Please specify a valid key ID")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
cs := cryptoservice.NewCryptoService(ks...)
keyInfo, err := cs.GetKeyInfo(keyID)
if err != nil {
return fmt.Errorf("Could not retrieve info for key %s", keyID)
}
exportFile, err := os.Create(exportFilename)
if err != nil {
return fmt.Errorf("Error creating output file: %v", err)
}
if k.keysExportChangePassphrase {
// Must use a different passphrase retriever to avoid caching the
// unlocking passphrase and reusing that.
exportRetriever := k.getRetriever()
err = cs.ExportKeyReencrypt(exportFile, keyID, exportRetriever)
} else {
err = cs.ExportKey(exportFile, keyID, keyInfo.Role)
}
exportFile.Close()
if err != nil {
os.Remove(exportFilename)
return fmt.Errorf("Error exporting %s key: %v", keyInfo.Role, err)
}
return nil
}
// keysRestore imports keys from a ZIP file
func (k *keyCommander) keysRestore(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify input filename for import")
}
importFilename := args[0]
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
cs := cryptoservice.NewCryptoService(ks...)
zipReader, err := zip.OpenReader(importFilename)
if err != nil {
return fmt.Errorf("Opening file for import: %v", err)
}
defer zipReader.Close()
err = cs.ImportKeysZip(zipReader.Reader, k.getRetriever())
if err != nil {
return fmt.Errorf("Error importing keys: %v", err)
}
return nil
}
// keysImport imports a private key from a PEM file for a role
func (k *keyCommander) keysImport(cmd *cobra.Command, args []string) error {
if len(args) != 1 {
cmd.Usage()
return fmt.Errorf("Must specify input filename for import")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
importFilename := args[0]
importFile, err := os.Open(importFilename)
if err != nil {
return fmt.Errorf("Opening file for import: %v", err)
}
defer importFile.Close()
pemBytes, err := ioutil.ReadAll(importFile)
if err != nil {
return fmt.Errorf("Error reading input file: %v", err)
}
pemRole := trustmanager.ReadRoleFromPEM(pemBytes)
// If the PEM key doesn't have a role in it, we must have --role set
if pemRole == "" && k.keysImportRole == "" {
return fmt.Errorf("Could not infer role, and no role was specified for key")
}
// If both PEM role and a --role are provided and they don't match, error
if pemRole != "" && k.keysImportRole != "" && pemRole != k.keysImportRole {
return fmt.Errorf("Specified role %s does not match role %s in PEM headers", k.keysImportRole, pemRole)
}
// Determine which role to add to between PEM headers and --role flag:
var importRole string
if k.keysImportRole != "" {
importRole = k.keysImportRole
} else {
importRole = pemRole
}
// If we're importing to targets or snapshot, we need a GUN
if (importRole == data.CanonicalTargetsRole || importRole == data.CanonicalSnapshotRole) && k.keysImportGUN == "" {
return fmt.Errorf("Must specify GUN for %s key", importRole)
}
// Root keys must be encrypted
if importRole == data.CanonicalRootRole {
if err = cryptoservice.CheckRootKeyIsEncrypted(pemBytes); err != nil {
return err
}
}
cs := cryptoservice.NewCryptoService(ks...)
// Convert to a data.PrivateKey, potentially decrypting the key
privKey, err := trustmanager.ParsePEMPrivateKey(pemBytes, "")
if err != nil {
privKey, _, err = trustmanager.GetPasswdDecryptBytes(k.getRetriever(), pemBytes, "", "imported "+importRole)
if err != nil {
return err
}
}
err = cs.AddKey(importRole, k.keysImportGUN, privKey)
if err != nil {
return fmt.Errorf("Error importing key: %v", err)
}
return nil
}
func (k *keyCommander) keysRotate(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN and a key role to rotate")
}
config, err := k.configGetter()
if err != nil {
return err
}
gun := args[0]
rotateKeyRole := args[1]
rt, err := getTransport(config, gun, false)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config),
rt, k.getRetriever(), trustPin)
if err != nil {
return err
}
if rotateKeyRole == data.CanonicalRootRole {
cmd.Print("Warning: you are about to rotate your root key.\n\n" +
"You must use your old key to sign this root rotation. We recommend that\n" +
"you sign all your future root changes with this key as well, so that\n" +
"clients can have a smoother update process. Please do not delete\n" +
"this key after rotating.\n\n" +
"Are you sure you want to proceed? (yes/no) ")
if !askConfirm(k.input) {
fmt.Fprintln(cmd.Out(), "\nAborting action.")
return nil
}
}
return nRepo.RotateKey(rotateKeyRole, k.rotateKeyServerManaged)
}
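// removeKeyInteractively finds the key with the given ID across the provided
// key stores, asks which copy to delete when it exists in more than one store,
// confirms the deletion, and then removes it.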
func removeKeyInteractively(keyStores []trustmanager.KeyStore, keyID string,
in io.Reader, out io.Writer) error {
var foundKeys [][]string
var storesByIndex []trustmanager.KeyStore
for _, store := range keyStores {
for keypath, keyInfo := range store.ListKeys() {
if filepath.Base(keypath) == keyID {
foundKeys = append(foundKeys,
[]string{keypath, keyInfo.Role, store.Name()})
storesByIndex = append(storesByIndex, store)
}
}
}
if len(foundKeys) == 0 {
return fmt.Errorf("No key with ID %s found.", keyID)
}
if len(foundKeys) > 1 {
for {
// ask the user for which key to delete
fmt.Fprintf(out, "Found the following matching keys:\n")
for i, info := range foundKeys {
fmt.Fprintf(out, "\t%d. %s: %s (%s)\n", i+1, info[0], info[1], info[2])
}
fmt.Fprint(out, "Which would you like to delete? Please enter a number: ")
var result string
if _, err := fmt.Fscanln(in, &result); err != nil {
return err
}
index, err := strconv.Atoi(strings.TrimSpace(result))
if err != nil || index > len(foundKeys) || index < 1 {
fmt.Fprintf(out, "\nInvalid choice: %s\n", string(result))
continue
}
foundKeys = [][]string{foundKeys[index-1]}
storesByIndex = []trustmanager.KeyStore{storesByIndex[index-1]}
fmt.Fprintln(out, "")
break
}
}
// Now the length must be 1 - ask for confirmation.
keyDescription := fmt.Sprintf("%s (role %s) from %s", foundKeys[0][0],
foundKeys[0][1], foundKeys[0][2])
fmt.Fprintf(out, "Are you sure you want to remove %s? (yes/no) ",
keyDescription)
if !askConfirm(in) {
fmt.Fprintln(out, "\nAborting action.")
return nil
}
if err := storesByIndex[0].RemoveKey(foundKeys[0][0]); err != nil {
return err
}
fmt.Fprintf(out, "\nDeleted %s.\n", keyDescription)
return nil
}
// keyRemove deletes a private key based on ID
func (k *keyCommander) keyRemove(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("must specify the key ID of the key to remove")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
keyID := args[0]
// This is an invalid ID
if len(keyID) != notary.Sha256HexSize {
return fmt.Errorf("invalid key ID provided: %s", keyID)
}
cmd.Println("")
err = removeKeyInteractively(ks, keyID, k.input, cmd.Out())
cmd.Println("")
return err
}
// keyPassphraseChange changes the passphrase for a private key based on ID
func (k *keyCommander) keyPassphraseChange(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("must specify the key ID of the key to change the passphrase of")
}
config, err := k.configGetter()
if err != nil {
return err
}
ks, err := k.getKeyStores(config, true, false)
if err != nil {
return err
}
keyID := args[0]
// This is an invalid ID
if len(keyID) != notary.Sha256HexSize {
return fmt.Errorf("invalid key ID provided: %s", keyID)
}
// Find which keyStore we should replace the key password in, and replace if we find it
var foundKeyStore trustmanager.KeyStore
var privKey data.PrivateKey
var keyInfo trustmanager.KeyInfo
var cs *cryptoservice.CryptoService
for _, keyStore := range ks {
cs = cryptoservice.NewCryptoService(keyStore)
if privKey, _, err = cs.GetPrivateKey(keyID); err == nil {
foundKeyStore = keyStore
break
}
}
if foundKeyStore == nil {
return fmt.Errorf("could not retrieve local key for key ID provided: %s", keyID)
}
// Must use a different passphrase retriever to avoid caching the
// unlocking passphrase and reusing that.
passChangeRetriever := k.getRetriever()
var addingKeyStore trustmanager.KeyStore
switch foundKeyStore.Name() {
case "yubikey":
addingKeyStore, err = getYubiStore(nil, passChangeRetriever)
keyInfo = trustmanager.KeyInfo{Role: data.CanonicalRootRole}
default:
addingKeyStore, err = trustmanager.NewKeyFileStore(config.GetString("trust_dir"), passChangeRetriever)
if err != nil {
return err
}
keyInfo, err = foundKeyStore.GetKeyInfo(keyID)
}
if err != nil {
return err
}
err = addingKeyStore.AddKey(keyInfo, privKey)
if err != nil {
return err
}
cmd.Println("")
cmd.Printf("Successfully updated passphrase for key ID: %s", keyID)
cmd.Println("")
return nil
}
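// getKeyStores returns the key stores to operate on: the file-based store in
// the configured trust directory, preceded by the Yubikey store when hardware
// support is requested and available.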
func (k *keyCommander) getKeyStores(
config *viper.Viper, withHardware, hardwareBackup bool) ([]trustmanager.KeyStore, error) {
retriever := k.getRetriever()
directory := config.GetString("trust_dir")
fileKeyStore, err := trustmanager.NewKeyFileStore(directory, retriever)
if err != nil {
return nil, fmt.Errorf(
"Failed to create private key store in directory: %s", directory)
}
ks := []trustmanager.KeyStore{fileKeyStore}
if withHardware {
var yubiStore trustmanager.KeyStore
if hardwareBackup {
yubiStore, err = getYubiStore(fileKeyStore, retriever)
} else {
yubiStore, err = getYubiStore(nil, retriever)
}
if err == nil && yubiStore != nil {
// Note that the order is important, since we want to prioritize
// the yubikey store
ks = []trustmanager.KeyStore{yubiStore, fileKeyStore}
}
}
return ks, nil
}

View File

@ -1,14 +0,0 @@
// +build !pkcs11
package main
import (
"errors"
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
)
func getYubiStore(fileKeyStore trustmanager.KeyStore, ret notary.PassRetriever) (trustmanager.KeyStore, error) {
return nil, errors.New("Not built with hardware support")
}

View File

@ -1,13 +0,0 @@
// +build pkcs11
package main
import (
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/trustmanager/yubikey"
)
func getYubiStore(fileKeyStore trustmanager.KeyStore, ret notary.PassRetriever) (trustmanager.KeyStore, error) {
return yubikey.NewYubiStore(fileKeyStore, ret)
}

View File

@ -1,640 +0,0 @@
package main
import (
"bytes"
"crypto/rand"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"strings"
"testing"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
ctxu "github.com/docker/distribution/context"
"github.com/docker/notary"
"github.com/docker/notary/client"
"github.com/docker/notary/cryptoservice"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/server"
"github.com/docker/notary/server/storage"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/trustpinning"
"github.com/docker/notary/tuf/data"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
var ret = passphrase.ConstantRetriever("pass")
// If there are no keys, removeKeyInteractively will just return an error about
// there not being any key
func TestRemoveIfNoKey(t *testing.T) {
setUp(t)
var buf bytes.Buffer
stores := []trustmanager.KeyStore{trustmanager.NewKeyMemoryStore(nil)}
err := removeKeyInteractively(stores, "12345", &buf, &buf)
require.Error(t, err)
require.Contains(t, err.Error(), "No key with ID")
}
// If there is one key, asking to remove it will ask for confirmation. Passing
// anything other than 'yes'/'y'/'' response will abort the deletion and
// not delete the key.
func TestRemoveOneKeyAbort(t *testing.T) {
setUp(t)
nos := []string{"no", "NO", "AAAARGH", " N "}
store := trustmanager.NewKeyMemoryStore(ret)
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
err = store.AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, key)
require.NoError(t, err)
stores := []trustmanager.KeyStore{store}
for _, noAnswer := range nos {
var out bytes.Buffer
in := bytes.NewBuffer([]byte(noAnswer + "\n"))
err := removeKeyInteractively(stores, key.ID(), in, &out)
require.NoError(t, err)
text, err := ioutil.ReadAll(&out)
require.NoError(t, err)
output := string(text)
require.Contains(t, output, "Are you sure")
require.Contains(t, output, "Aborting action")
require.Len(t, store.ListKeys(), 1)
}
}
// If there is one key, asking to remove it will ask for confirmation. Passing
// 'yes'/'y' response will continue the deletion.
func TestRemoveOneKeyConfirm(t *testing.T) {
setUp(t)
yesses := []string{"yes", " Y "}
for _, yesAnswer := range yesses {
store := trustmanager.NewKeyMemoryStore(ret)
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
err = store.AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, key)
require.NoError(t, err)
var out bytes.Buffer
in := bytes.NewBuffer([]byte(yesAnswer + "\n"))
err = removeKeyInteractively(
[]trustmanager.KeyStore{store}, key.ID(), in, &out)
require.NoError(t, err)
text, err := ioutil.ReadAll(&out)
require.NoError(t, err)
output := string(text)
require.Contains(t, output, "Are you sure")
require.Contains(t, output, "Deleted "+key.ID())
require.Len(t, store.ListKeys(), 0)
}
}
// If there is more than one key, removeKeyInteractively will ask which key to
// delete and will do so over and over until the user quits if the answer is
// invalid.
func TestRemoveMultikeysInvalidInput(t *testing.T) {
setUp(t)
in := bytes.NewBuffer([]byte("notanumber\n9999\n-3\n0"))
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
stores := []trustmanager.KeyStore{
trustmanager.NewKeyMemoryStore(ret),
trustmanager.NewKeyMemoryStore(ret),
}
err = stores[0].AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, key)
require.NoError(t, err)
err = stores[1].AddKey(trustmanager.KeyInfo{Role: data.CanonicalTargetsRole, Gun: "gun"}, key)
require.NoError(t, err)
var out bytes.Buffer
err = removeKeyInteractively(stores, key.ID(), in, &out)
require.Error(t, err)
text, err := ioutil.ReadAll(&out)
require.NoError(t, err)
require.Len(t, stores[0].ListKeys(), 1)
require.Len(t, stores[1].ListKeys(), 1)
// It should have listed the keys over and over, asking which key the user
// wanted to delete
output := string(text)
require.Contains(t, output, "Found the following matching keys")
var rootCount, targetCount int
for _, line := range strings.Split(output, "\n") {
if strings.Contains(line, key.ID()) {
if strings.Contains(line, "target") {
targetCount++
} else {
rootCount++
}
}
}
require.Equal(t, rootCount, targetCount)
require.Equal(t, 5, rootCount) // original + 1 for each of the 4 invalid inputs
}
// If there is more than one key, removeKeyInteractively will ask which key to
// delete. Then it will confirm whether they want to delete, and the user can
// abort at that confirmation.
func TestRemoveMultikeysAbortChoice(t *testing.T) {
setUp(t)
in := bytes.NewBuffer([]byte("1\nn\n"))
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
stores := []trustmanager.KeyStore{
trustmanager.NewKeyMemoryStore(ret),
trustmanager.NewKeyMemoryStore(ret),
}
err = stores[0].AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, key)
require.NoError(t, err)
err = stores[1].AddKey(trustmanager.KeyInfo{Role: data.CanonicalTargetsRole, Gun: "gun"}, key)
require.NoError(t, err)
var out bytes.Buffer
err = removeKeyInteractively(stores, key.ID(), in, &out)
require.NoError(t, err) // no error to abort deleting
text, err := ioutil.ReadAll(&out)
require.NoError(t, err)
require.Len(t, stores[0].ListKeys(), 1)
require.Len(t, stores[1].ListKeys(), 1)
// It should have listed the keys, asked whether the user really wanted to
// delete, and then aborted.
output := string(text)
require.Contains(t, output, "Found the following matching keys")
require.Contains(t, output, "Are you sure")
require.Contains(t, output, "Aborting action")
}
// If there is more than one key, removeKeyInteractively will ask which key to
// delete. Then it will confirm whether they want to delete, and if the user
// confirms, will remove it from the correct key store.
func TestRemoveMultikeysRemoveOnlyChosenKey(t *testing.T) {
setUp(t)
in := bytes.NewBuffer([]byte("1\ny\n"))
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
stores := []trustmanager.KeyStore{
trustmanager.NewKeyMemoryStore(ret),
trustmanager.NewKeyMemoryStore(ret),
}
err = stores[0].AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, key)
require.NoError(t, err)
err = stores[1].AddKey(trustmanager.KeyInfo{Role: data.CanonicalTargetsRole, Gun: "gun"}, key)
require.NoError(t, err)
var out bytes.Buffer
err = removeKeyInteractively(stores, key.ID(), in, &out)
require.NoError(t, err)
text, err := ioutil.ReadAll(&out)
require.NoError(t, err)
// It should have listed the keys, asked whether the user really wanted to
// delete, and then deleted.
output := string(text)
require.Contains(t, output, "Found the following matching keys")
require.Contains(t, output, "Are you sure")
require.Contains(t, output, "Deleted "+key.ID())
// figure out which one we picked to delete, and assert it was deleted
for _, line := range strings.Split(output, "\n") {
if strings.HasPrefix(line, "\t1.") { // we picked the first item
if strings.Contains(line, "root") { // first key store
require.Len(t, stores[0].ListKeys(), 0)
require.Len(t, stores[1].ListKeys(), 1)
} else {
require.Len(t, stores[0].ListKeys(), 1)
require.Len(t, stores[1].ListKeys(), 0)
}
}
}
}
// Non-roles and delegation keys can't be rotated with the command line
func TestRotateKeyInvalidRoles(t *testing.T) {
setUp(t)
invalids := []string{
"notevenARole",
"targets/a",
}
for _, role := range invalids {
for _, serverManaged := range []bool{true, false} {
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
rotateKeyRole: role,
rotateKeyServerManaged: serverManaged,
}
commands := []string{"gun", role}
if serverManaged {
commands = append(commands, "-r")
}
err := k.keysRotate(&cobra.Command{}, commands)
require.Error(t, err)
require.Contains(t, err.Error(),
fmt.Sprintf("does not currently permit rotating the %s key", role))
}
}
}
// Cannot rotate a targets key and require that it is server managed
func TestRotateKeyTargetCannotBeServerManaged(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
rotateKeyRole: data.CanonicalTargetsRole,
rotateKeyServerManaged: true,
}
err := k.keysRotate(&cobra.Command{}, []string{"gun", data.CanonicalTargetsRole})
require.Error(t, err)
require.IsType(t, client.ErrInvalidRemoteRole{}, err)
}
// Cannot rotate a timestamp key and require that it is locally managed
func TestRotateKeyTimestampCannotBeLocallyManaged(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
rotateKeyRole: data.CanonicalTimestampRole,
rotateKeyServerManaged: false,
}
err := k.keysRotate(&cobra.Command{}, []string{"gun", data.CanonicalTimestampRole})
require.Error(t, err)
require.IsType(t, client.ErrInvalidLocalRole{}, err)
}
// rotate key must be provided with a gun
func TestRotateKeyNoGUN(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
rotateKeyRole: data.CanonicalTargetsRole,
}
err := k.keysRotate(&cobra.Command{}, []string{})
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify a GUN")
}
// initialize a repo with keys, so they can be rotated
func setUpRepo(t *testing.T, tempBaseDir, gun string, ret notary.PassRetriever) (
*httptest.Server, map[string]string) {
// Set up server
ctx := context.WithValue(
context.Background(), "metaStore", storage.NewMemStorage())
// Do not pass one of the const KeyAlgorithms here as the value! Passing a
// string is in itself a good test that we are handling it correctly, as we
// will be receiving a string from the configuration.
ctx = context.WithValue(ctx, "keyAlgorithm", "ecdsa")
// Eat the logs instead of spewing them out
l := logrus.New()
l.Out = bytes.NewBuffer(nil)
ctx = ctxu.WithLogger(ctx, logrus.NewEntry(l))
cryptoService := cryptoservice.NewCryptoService(trustmanager.NewKeyMemoryStore(ret))
ts := httptest.NewServer(server.RootHandler(nil, ctx, cryptoService, nil, nil, nil))
repo, err := client.NewNotaryRepository(
tempBaseDir, gun, ts.URL, http.DefaultTransport, ret, trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
rootPubKey, err := repo.CryptoService.Create("root", "", data.ECDSAKey)
require.NoError(t, err, "error generating root key: %s", err)
err = repo.Initialize(rootPubKey.ID())
require.NoError(t, err)
return ts, repo.CryptoService.ListAllKeys()
}
// The command line uses NotaryRepository's RotateKey - this is just testing
// that the correct config variables are passed for the client to request a key
// from the remote server.
func TestRotateKeyRemoteServerManagesKey(t *testing.T) {
for _, role := range []string{data.CanonicalSnapshotRole, data.CanonicalTimestampRole} {
setUp(t)
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("/tmp", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
gun := "docker.com/notary"
ret := passphrase.ConstantRetriever("pass")
ts, initialKeys := setUpRepo(t, tempBaseDir, gun, ret)
defer ts.Close()
require.Len(t, initialKeys, 3)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) {
v := viper.New()
v.SetDefault("trust_dir", tempBaseDir)
v.SetDefault("remote_server.url", ts.URL)
return v, nil
},
getRetriever: func() notary.PassRetriever { return ret },
rotateKeyServerManaged: true,
}
require.NoError(t, k.keysRotate(&cobra.Command{}, []string{gun, role, "-r"}))
repo, err := client.NewNotaryRepository(tempBaseDir, gun, ts.URL, http.DefaultTransport, ret, trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
cl, err := repo.GetChangelist()
require.NoError(t, err, "unable to get changelist: %v", err)
require.Len(t, cl.List(), 0, "expected the changes to have been published")
finalKeys := repo.CryptoService.ListAllKeys()
// no keys have been created, since a remote key was specified
if role == data.CanonicalSnapshotRole {
require.Len(t, finalKeys, 2)
for k, r := range initialKeys {
if r != data.CanonicalSnapshotRole {
_, ok := finalKeys[k]
require.True(t, ok)
}
}
} else {
require.Len(t, finalKeys, 3)
for k := range initialKeys {
_, ok := finalKeys[k]
require.True(t, ok)
}
}
}
}
// The command line uses NotaryRepository's RotateKey - this is just testing
// that multiple keys can be rotated at once locally
func TestRotateKeyBothKeys(t *testing.T) {
setUp(t)
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("/tmp", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
gun := "docker.com/notary"
ret := passphrase.ConstantRetriever("pass")
ts, initialKeys := setUpRepo(t, tempBaseDir, gun, ret)
defer ts.Close()
k := &keyCommander{
configGetter: func() (*viper.Viper, error) {
v := viper.New()
v.SetDefault("trust_dir", tempBaseDir)
v.SetDefault("remote_server.url", ts.URL)
return v, nil
},
getRetriever: func() notary.PassRetriever { return ret },
}
require.NoError(t, k.keysRotate(&cobra.Command{}, []string{gun, data.CanonicalTargetsRole}))
require.NoError(t, k.keysRotate(&cobra.Command{}, []string{gun, data.CanonicalSnapshotRole}))
repo, err := client.NewNotaryRepository(tempBaseDir, gun, ts.URL, nil, ret, trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
cl, err := repo.GetChangelist()
require.NoError(t, err, "unable to get changelist: %v", err)
require.Len(t, cl.List(), 0)
// two new keys should have been created to replace the rotated ones
newKeys := repo.CryptoService.ListAllKeys()
// there should be 3 keys - snapshot, targets, and root
require.Len(t, newKeys, 3)
// the old snapshot/targets keys should be gone
for keyID, role := range initialKeys {
r, ok := newKeys[keyID]
switch r {
case data.CanonicalSnapshotRole, data.CanonicalTargetsRole:
require.False(t, ok, "original key %s still there", keyID)
case data.CanonicalRootRole:
require.Equal(t, role, r)
require.True(t, ok, "old root key has changed")
}
}
found := make(map[string]bool)
for _, role := range newKeys {
found[role] = true
}
require.True(t, found[data.CanonicalTargetsRole], "targets key was not created")
require.True(t, found[data.CanonicalSnapshotRole], "snapshot key was not created")
require.True(t, found[data.CanonicalRootRole], "root key was removed somehow")
}
// Rotating the root key requires extra interactive confirmation
func TestRotateKeyRootIsInteractive(t *testing.T) {
setUp(t)
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("/tmp", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
gun := "docker.com/notary"
ret := passphrase.ConstantRetriever("pass")
ts, _ := setUpRepo(t, tempBaseDir, gun, ret)
defer ts.Close()
k := &keyCommander{
configGetter: func() (*viper.Viper, error) {
v := viper.New()
v.SetDefault("trust_dir", tempBaseDir)
v.SetDefault("remote_server.url", ts.URL)
return v, nil
},
getRetriever: func() notary.PassRetriever { return ret },
input: bytes.NewBuffer([]byte("\n")),
}
c := &cobra.Command{}
out := bytes.NewBuffer(make([]byte, 0, 10))
c.SetOutput(out)
require.NoError(t, k.keysRotate(c, []string{gun, data.CanonicalRootRole}))
require.Contains(t, out.String(), "Aborting action")
repo, err := client.NewNotaryRepository(tempBaseDir, gun, ts.URL, nil, ret, trustpinning.TrustPinConfig{})
require.NoError(t, err, "error creating repo: %s", err)
// There should still just be one root key (and one targets and one snapshot)
allKeys := repo.CryptoService.ListAllKeys()
require.Len(t, allKeys, 3)
}
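// changing a key's passphrase fails if the provided key ID is not a valid key ID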
func TestChangeKeyPassphraseInvalidID(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
err := k.keyPassphraseChange(&cobra.Command{}, []string{"too_short"})
require.Error(t, err)
require.Contains(t, err.Error(), "invalid key ID provided")
}
func TestChangeKeyPassphraseInvalidNumArgs(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
err := k.keyPassphraseChange(&cobra.Command{}, []string{})
require.Error(t, err)
require.Contains(t, err.Error(), "must specify the key ID")
}
func TestChangeKeyPassphraseNonexistentID(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
// Valid ID size, but does not exist as a key ID
err := k.keyPassphraseChange(&cobra.Command{}, []string{strings.Repeat("x", notary.Sha256HexSize)})
require.Error(t, err)
require.Contains(t, err.Error(), "could not retrieve local key for key ID provided")
}
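// importing a key fails when the role given on the command line does not match
// the role recorded in the key's PEM headers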
func TestKeyImportMismatchingRoles(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
keysImportRole: "targets",
}
tempFileName := generateTempTestKeyFile(t, "snapshot")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "does not match role")
}
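// the next four tests check that importing a targets or snapshot key requires a
// GUN, whether the role comes from the PEM headers or from the command line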
func TestKeyImportNoGUNForTargetsPEM(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
tempFileName := generateTempTestKeyFile(t, "targets")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify GUN")
}
func TestKeyImportNoGUNForSnapshotPEM(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
tempFileName := generateTempTestKeyFile(t, "snapshot")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify GUN")
}
func TestKeyImportNoGUNForTargetsFlag(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
keysImportRole: "targets",
}
tempFileName := generateTempTestKeyFile(t, "")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify GUN")
}
func TestKeyImportNoGUNForSnapshotFlag(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
keysImportRole: "snapshot",
}
tempFileName := generateTempTestKeyFile(t, "")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify GUN")
}
func TestKeyImportNoRole(t *testing.T) {
setUp(t)
k := &keyCommander{
configGetter: func() (*viper.Viper, error) { return viper.New(), nil },
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
tempFileName := generateTempTestKeyFile(t, "")
defer os.Remove(tempFileName)
err := k.keysImport(&cobra.Command{}, []string{tempFileName})
require.Error(t, err)
require.Contains(t, err.Error(), "Could not infer role, and no role was specified for key")
}
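// generateTempTestKeyFile writes a newly generated ECDSA private key, with the
// given role in its PEM headers, to a temporary file and returns the file path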
func generateTempTestKeyFile(t *testing.T, role string) string {
setUp(t)
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err)
keyBytes, err := trustmanager.KeyToPEM(privKey, role)
require.NoError(t, err)
tempPrivFile, err := ioutil.TempFile("/tmp", "privfile")
require.NoError(t, err)
// Write the private key to a file so we can import it
_, err = tempPrivFile.Write(keyBytes)
require.NoError(t, err)
tempPrivFile.Close()
return tempPrivFile.Name()
}


@@ -1,254 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/notary"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/version"
homedir "github.com/mitchellh/go-homedir"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
configDir = ".notary/"
defaultServerURL = "https://notary-server:4443"
)
type usageTemplate struct {
Use string
Short string
Long string
}
type cobraRunE func(cmd *cobra.Command, args []string) error
func (u usageTemplate) ToCommand(run cobraRunE) *cobra.Command {
c := cobra.Command{
Use: u.Use,
Short: u.Short,
Long: u.Long,
}
if run != nil {
// newer versions of cobra support a run function that returns an error,
// but in the meantime, this should help ease the transition
c.RunE = run
}
return &c
}
func pathRelativeToCwd(path string) string {
if path == "" || filepath.IsAbs(path) {
return path
}
cwd, err := os.Getwd()
if err != nil {
return filepath.Clean(path)
}
return filepath.Clean(filepath.Join(cwd, path))
}
type notaryCommander struct {
// this needs to be set
getRetriever func() notary.PassRetriever
// these are for command line parsing - no need to set
debug bool
verbose bool
trustDir string
configFile string
remoteTrustServer string
tlsCAFile string
tlsCertFile string
tlsKeyFile string
}
func (n *notaryCommander) parseConfig() (*viper.Viper, error) {
n.setVerbosityLevel()
// Get home directory for current user
homeDir, err := homedir.Dir()
if err != nil {
return nil, fmt.Errorf("cannot get current user home directory: %v", err)
}
if homeDir == "" {
return nil, fmt.Errorf("cannot get current user home directory")
}
config := viper.New()
// By default our trust directory (where keys are stored) is in ~/.notary/
defaultTrustDir := filepath.Join(homeDir, filepath.Dir(configDir))
// If there was a commandline configFile set, we parse that.
// If there wasn't we attempt to find it on the default location ~/.notary/config.json
if n.configFile != "" {
config.SetConfigFile(n.configFile)
} else {
config.SetConfigFile(filepath.Join(defaultTrustDir, "config.json"))
}
// Setup the configuration details into viper
config.SetDefault("trust_dir", defaultTrustDir)
config.SetDefault("remote_server", map[string]string{"url": defaultServerURL})
// Find and read the config file
if err := config.ReadInConfig(); err != nil {
logrus.Debugf("Configuration file not found, using defaults")
// If we were passed in a configFile via command line flags, bail if it doesn't exist,
// otherwise ignore it: we can use the defaults
if n.configFile != "" || !os.IsNotExist(err) {
return nil, fmt.Errorf("error opening config file: %v", err)
}
}
// At this point we either have the default value or the one set by the config.
// Either way, some command-line flags take precedence and overwrite the value
if n.trustDir != "" {
config.Set("trust_dir", pathRelativeToCwd(n.trustDir))
}
if n.tlsCAFile != "" {
config.Set("remote_server.root_ca", pathRelativeToCwd(n.tlsCAFile))
}
if n.tlsCertFile != "" {
config.Set("remote_server.tls_client_cert", pathRelativeToCwd(n.tlsCertFile))
}
if n.tlsKeyFile != "" {
config.Set("remote_server.tls_client_key", pathRelativeToCwd(n.tlsKeyFile))
}
if n.remoteTrustServer != "" {
config.Set("remote_server.url", n.remoteTrustServer)
}
// Expand any ~/ in the trust directory, whether it was given through -d or the config.
// If expansion succeeds, use the result; if not, just use whatever the user gave us
expandedTrustDir, err := homedir.Expand(config.GetString("trust_dir"))
if err == nil {
config.Set("trust_dir", expandedTrustDir)
}
logrus.Debugf("Using the following trust directory: %s", config.GetString("trust_dir"))
return config, nil
}
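// As an illustration only (this sketch is not part of the original source), a
// config.json that parseConfig above would accept looks roughly like:
//
//	{
//	  "trust_dir": "~/.notary",
//	  "remote_server": {
//	    "url": "https://notary-server:4443",
//	    "root_ca": "root-ca.crt",
//	    "tls_client_cert": "notary-server.crt",
//	    "tls_client_key": "notary-server.key"
//	  }
//	}
//
// The -d, -s, --tlscacert, --tlscert and --tlskey flags, when given, override
// the corresponding values from the file.
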
func (n *notaryCommander) GetCommand() *cobra.Command {
notaryCmd := cobra.Command{
Use: "notary",
Short: "Notary allows the creation of trusted collections.",
Long: "Notary allows the creation and management of collections of signed targets, allowing the signing and validation of arbitrary content.",
SilenceUsage: true, // we don't want to print out usage for EVERY error
SilenceErrors: true, // we do our own error reporting with fatalf
Run: func(cmd *cobra.Command, args []string) { cmd.Usage() },
}
notaryCmd.SetOutput(os.Stdout)
notaryCmd.AddCommand(&cobra.Command{
Use: "version",
Short: "Print the version number of notary",
Long: "Print the version number of notary",
Run: func(cmd *cobra.Command, args []string) {
fmt.Printf("notary\n Version: %s\n Git commit: %s\n", version.NotaryVersion, version.GitCommit)
},
})
notaryCmd.PersistentFlags().StringVarP(
&n.trustDir, "trustDir", "d", "", "Directory where the trust data is persisted to")
notaryCmd.PersistentFlags().StringVarP(
&n.configFile, "configFile", "c", "", "Path to the configuration file to use")
notaryCmd.PersistentFlags().BoolVarP(&n.verbose, "verbose", "v", false, "Verbose output")
notaryCmd.PersistentFlags().BoolVarP(&n.debug, "debug", "D", false, "Debug output")
notaryCmd.PersistentFlags().StringVarP(&n.remoteTrustServer, "server", "s", "", "Remote trust server location")
notaryCmd.PersistentFlags().StringVar(&n.tlsCAFile, "tlscacert", "", "Trust certs signed only by this CA")
notaryCmd.PersistentFlags().StringVar(&n.tlsCertFile, "tlscert", "", "Path to TLS certificate file")
notaryCmd.PersistentFlags().StringVar(&n.tlsKeyFile, "tlskey", "", "Path to TLS key file")
cmdKeyGenerator := &keyCommander{
configGetter: n.parseConfig,
getRetriever: n.getRetriever,
input: os.Stdin,
}
cmdDelegationGenerator := &delegationCommander{
configGetter: n.parseConfig,
retriever: n.getRetriever(),
}
cmdTUFGenerator := &tufCommander{
configGetter: n.parseConfig,
retriever: n.getRetriever(),
}
notaryCmd.AddCommand(cmdKeyGenerator.GetCommand())
notaryCmd.AddCommand(cmdDelegationGenerator.GetCommand())
cmdTUFGenerator.AddToCommand(&notaryCmd)
return &notaryCmd
}
func main() {
notaryCommander := &notaryCommander{getRetriever: getPassphraseRetriever}
notaryCmd := notaryCommander.GetCommand()
if err := notaryCmd.Execute(); err != nil {
notaryCmd.Println("")
fatalf(err.Error())
}
}
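// fatalf prints the formatted message prefixed with "* fatal:" and exits with
// status 1.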
func fatalf(format string, args ...interface{}) {
fmt.Printf("* fatal: "+format+"\n", args...)
os.Exit(1)
}
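// askConfirm reads a single line from input and returns true only for a
// case-insensitive "y" or "yes".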
func askConfirm(input io.Reader) bool {
var res string
if _, err := fmt.Fscanln(input, &res); err != nil {
return false
}
if strings.EqualFold(res, "y") || strings.EqualFold(res, "yes") {
return true
}
return false
}
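// getPassphraseRetriever builds on the interactive prompt retriever: if the
// NOTARY_ROOT_PASSPHRASE, NOTARY_TARGETS_PASSPHRASE, NOTARY_SNAPSHOT_PASSPHRASE
// or NOTARY_DELEGATION_PASSPHRASE environment variable is set for the requested
// alias, it is used (the delegation passphrase is also tried for any other
// alias); otherwise the user is prompted.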
func getPassphraseRetriever() notary.PassRetriever {
baseRetriever := passphrase.PromptRetriever()
env := map[string]string{
"root": os.Getenv("NOTARY_ROOT_PASSPHRASE"),
"targets": os.Getenv("NOTARY_TARGETS_PASSPHRASE"),
"snapshot": os.Getenv("NOTARY_SNAPSHOT_PASSPHRASE"),
"delegation": os.Getenv("NOTARY_DELEGATION_PASSPHRASE"),
}
return func(keyName string, alias string, createNew bool, numAttempts int) (string, bool, error) {
if v := env[alias]; v != "" {
return v, numAttempts > 1, nil
}
// For delegation roles, we can also try the "delegation" alias if it is specified
// Note that we don't check if the role name is for a delegation to allow for names like "user"
// since delegation keys can be shared across repositories
if v := env["delegation"]; v != "" {
return v, numAttempts > 1, nil
}
return baseRetriever(keyName, alias, createNew, numAttempts)
}
}
// Set the logging level to fatal by default, or to the most verbose level the user specified (debug takes precedence over error)
func (n *notaryCommander) setVerbosityLevel() {
if n.debug {
logrus.SetLevel(logrus.DebugLevel)
} else if n.verbose {
logrus.SetLevel(logrus.ErrorLevel)
} else {
logrus.SetLevel(logrus.FatalLevel)
}
logrus.SetOutput(os.Stderr)
}


@@ -1,563 +0,0 @@
package main
import (
"bytes"
"crypto/tls"
"fmt"
"io/ioutil"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/notary"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/server/storage"
"github.com/docker/notary/tuf/data"
"github.com/stretchr/testify/require"
)
// the default location for the config file is ~/.notary/config.json - even if it doesn't exist.
func TestNotaryConfigFileDefault(t *testing.T) {
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
config, err := commander.parseConfig()
require.NoError(t, err)
configFileUsed := config.ConfigFileUsed()
require.True(t, strings.HasSuffix(configFileUsed,
filepath.Join(".notary", "config.json")), "Unknown config file: %s", configFileUsed)
}
// the default server address is notary-server
func TestRemoteServerDefault(t *testing.T) {
tempDir := tempDirWithConfig(t, "{}")
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
// set a blank config file, so it doesn't check ~/.notary/config.json by default
// and execute a random command so that the flags are parsed
cmd := commander.GetCommand()
cmd.SetArgs([]string{"-c", configFile, "list"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
cmd.Execute()
config, err := commander.parseConfig()
require.NoError(t, err)
require.Equal(t, "https://notary-server:4443", getRemoteTrustServer(config))
}
// providing a config file uses the config file's server url instead
func TestRemoteServerUsesConfigFile(t *testing.T) {
tempDir := tempDirWithConfig(t, `{"remote_server": {"url": "https://myserver"}}`)
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
// set a config file, so it doesn't check ~/.notary/config.json by default,
// and execute a random command so that the flags are parsed
cmd := commander.GetCommand()
cmd.SetArgs([]string{"-c", configFile, "list"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
cmd.Execute()
config, err := commander.parseConfig()
require.NoError(t, err)
require.Equal(t, "https://myserver", getRemoteTrustServer(config))
}
// a command line flag overrides the config file's server url
func TestRemoteServerCommandLineFlagOverridesConfig(t *testing.T) {
tempDir := tempDirWithConfig(t, `{"remote_server": {"url": "https://myserver"}}`)
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
}
// set a config file, so it doesn't check ~/.notary/config.json by default,
// and execute a random command so that the flags are parsed
cmd := commander.GetCommand()
cmd.SetArgs([]string{"-c", configFile, "-s", "http://overridden", "list"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
cmd.Execute()
config, err := commander.parseConfig()
require.NoError(t, err)
require.Equal(t, "http://overridden", getRemoteTrustServer(config))
}
// invalid commands for `notary addhash`
func TestInvalidAddHashCommands(t *testing.T) {
tempDir := tempDirWithConfig(t, `{"remote_server": {"url": "https://myserver"}}`)
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
b := new(bytes.Buffer)
cmd := NewNotaryCommand()
cmd.SetOutput(b)
// No hashes given
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "10"))
err := cmd.Execute()
require.Error(t, err)
require.Contains(t, err.Error(), "Must specify a GUN, target, byte size of target data, and at least one hash")
// Invalid byte size given
cmd = NewNotaryCommand()
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "sizeNotAnInt", "--sha256", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"))
err = cmd.Execute()
require.Error(t, err)
// Invalid sha256 size given
cmd = NewNotaryCommand()
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "1", "--sha256", "a"))
err = cmd.Execute()
require.Error(t, err)
require.Contains(t, err.Error(), "invalid sha256 hex contents provided")
// Invalid sha256 hex given
cmd = NewNotaryCommand()
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "1", "--sha256", "***aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa***"))
err = cmd.Execute()
require.Error(t, err)
// Invalid sha512 size given
cmd = NewNotaryCommand()
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "1", "--sha512", "a"))
err = cmd.Execute()
require.Error(t, err)
require.Contains(t, err.Error(), "invalid sha512 hex contents provided")
// Invalid sha512 hex given
cmd = NewNotaryCommand()
cmd.SetArgs(append([]string{"-c", configFile, "-d", tempDir}, "addhash", "gun", "test", "1", "--sha512", "***aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa******aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa***"))
err = cmd.Execute()
require.Error(t, err)
}
var exampleValidCommands = []string{
"init repo",
"list repo",
"status repo",
"publish repo",
"add repo v1 somefile",
"addhash repo targetv1 --sha256 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa 10",
"verify repo v1",
"key list",
"key rotate repo snapshot",
"key generate rsa",
"key backup tempfile.zip",
"key export e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 backup.pem",
"key restore tempfile.zip",
"key import backup.pem",
"key remove e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"key passwd e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"delegation list repo",
"delegation add repo targets/releases path/to/pem/file.pem",
"delegation remove repo targets/releases",
}
// config parsing errors are propagated by all commands
func TestConfigParsingErrorsPropagatedByCommands(t *testing.T) {
tempdir, err := ioutil.TempDir("", "empty-dir")
require.NoError(t, err)
defer os.RemoveAll(tempdir)
for _, args := range exampleValidCommands {
b := new(bytes.Buffer)
cmd := NewNotaryCommand()
cmd.SetOutput(b)
cmd.SetArgs(append(
[]string{"-c", filepath.Join(tempdir, "idonotexist.json"), "-d", tempdir},
strings.Fields(args)...))
err = cmd.Execute()
require.Error(t, err, "expected error when running `notary %s`", args)
require.Contains(t, err.Error(), "error opening config file", "running `notary %s`", args)
require.NotContains(t, b.String(), "Usage:")
}
}
// insufficient arguments produce an error before any parsing of configs happens
func TestInsufficientArgumentsReturnsErrorAndPrintsUsage(t *testing.T) {
tempdir, err := ioutil.TempDir("", "empty-dir")
require.NoError(t, err)
defer os.RemoveAll(tempdir)
for _, args := range exampleValidCommands {
b := new(bytes.Buffer)
cmd := NewNotaryCommand()
cmd.SetOutput(b)
arglist := strings.Fields(args)
if args == "key list" || args == "key generate rsa" {
// in these cases, "key" or "key generate" are valid commands, so add an arg to them instead
arglist = append(arglist, "extraArg")
} else {
arglist = arglist[:len(arglist)-1]
}
invalid := strings.Join(arglist, " ")
cmd.SetArgs(append(
[]string{"-c", filepath.Join(tempdir, "idonotexist.json"), "-d", tempdir}, arglist...))
err = cmd.Execute()
require.NotContains(t, err.Error(), "error opening config file", "running `notary %s`", invalid)
// it's a usage error, so the usage is printed
require.Contains(t, b.String(), "Usage:", "expected usage when running `notary %s`", invalid)
}
}
// The bare notary command and bare subcommands all print out usage
func TestBareCommandPrintsUsageAndNoError(t *testing.T) {
tempdir, err := ioutil.TempDir("", "empty-dir")
require.NoError(t, err)
defer os.RemoveAll(tempdir)
// just the notary command
b := new(bytes.Buffer)
cmd := NewNotaryCommand()
cmd.SetOutput(b)
cmd.SetArgs([]string{"-c", filepath.Join(tempdir, "idonotexist.json")})
require.NoError(t, cmd.Execute(), "Expected no error from a help request")
// usage is printed
require.Contains(t, b.String(), "Usage:", "expected usage when running `notary`")
// notary key and notary delegation
for _, bareCommand := range []string{"key", "delegation"} {
b := new(bytes.Buffer)
cmd := NewNotaryCommand()
cmd.SetOutput(b)
cmd.SetArgs([]string{"-c", filepath.Join(tempdir, "idonotexist.json"), "-d", tempdir, bareCommand})
require.NoError(t, cmd.Execute(), "Expected no error from a help request")
// usage is printed
require.Contains(t, b.String(), "Usage:", "expected usage when running `notary %s`", bareCommand)
}
}
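// recordingMetaStore wraps a MemStorage and records which (gun, role) metadata
// was requested, so tests can check whether the client ever reached the server.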
type recordingMetaStore struct {
gotten []string
storage.MemStorage
}
// GetCurrent gets the metadata from the underlying MetaStore, but also records
// that the metadata was requested
func (r *recordingMetaStore) GetCurrent(gun, role string) (*time.Time, []byte, error) {
r.gotten = append(r.gotten, fmt.Sprintf("%s.%s", gun, role))
return r.MemStorage.GetCurrent(gun, role)
}
// GetChecksum gets the metadata from the underlying MetaStore, but also records
// that the metadata was requested
func (r *recordingMetaStore) GetChecksum(gun, role, checksum string) (*time.Time, []byte, error) {
r.gotten = append(r.gotten, fmt.Sprintf("%s.%s", gun, role))
return r.MemStorage.GetChecksum(gun, role, checksum)
}
// the TLS files referenced in the config (the root CA file and the TLS client
// cert and key) are resolved relative to the directory of the config file, not
// the cwd - so paths that are only valid relative to the cwd fail
func TestConfigFileTLSCannotBeRelativeToCWD(t *testing.T) {
// Set up a server with a self-signed cert
var err error
// add a handler for getting the root
m := &recordingMetaStore{MemStorage: *storage.NewMemStorage()}
s := httptest.NewUnstartedServer(setupServerHandler(m))
s.TLS, err = tlsconfig.Server(tlsconfig.Options{
CertFile: "../../fixtures/notary-server.crt",
KeyFile: "../../fixtures/notary-server.key",
CAFile: "../../fixtures/root-ca.crt",
ClientAuth: tls.RequireAndVerifyClientCert,
})
require.NoError(t, err)
s.StartTLS()
defer s.Close()
// test that a config file with certs that are relative to the cwd fail
tempDir := tempDirWithConfig(t, fmt.Sprintf(`{
"remote_server": {
"url": "%s",
"root_ca": "../../fixtures/root-ca.crt",
"tls_client_cert": "../../fixtures/notary-server.crt",
"tls_client_key": "../../fixtures/notary-server.key"
}
}`, s.URL))
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
// set a config file, so it doesn't check ~/.notary/config.json by default,
// and execute a random command so that the flags are parsed
cmd := NewNotaryCommand()
cmd.SetArgs([]string{"-c", configFile, "-d", tempDir, "list", "repo"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
err = cmd.Execute()
require.Error(t, err, "expected a failure due to TLS")
require.Contains(t, err.Error(), "TLS", "should have been a TLS error")
// validate that we failed to connect and attempt any downloads at all
require.Len(t, m.gotten, 0)
}
// the config can provide all the TLS information necessary - the root CA file
// and the TLS client files - either relative to the directory of the config
// file (not the cwd) or as absolute paths
func TestConfigFileTLSCanBeRelativeToConfigOrAbsolute(t *testing.T) {
// Set up a server with a self-signed cert
var err error
// add a handler for getting the root
m := &recordingMetaStore{MemStorage: *storage.NewMemStorage()}
s := httptest.NewUnstartedServer(setupServerHandler(m))
s.TLS, err = tlsconfig.Server(tlsconfig.Options{
CertFile: "../../fixtures/notary-server.crt",
KeyFile: "../../fixtures/notary-server.key",
CAFile: "../../fixtures/root-ca.crt",
ClientAuth: tls.RequireAndVerifyClientCert,
})
require.NoError(t, err)
s.StartTLS()
defer s.Close()
tempDir, err := ioutil.TempDir("", "config-test")
require.NoError(t, err)
defer os.RemoveAll(tempDir)
configFile, err := os.Create(filepath.Join(tempDir, "config.json"))
require.NoError(t, err)
fmt.Fprintf(configFile, `{
"remote_server": {
"url": "%s",
"root_ca": "root-ca.crt",
"tls_client_cert": "%s",
"tls_client_key": "notary-server.key"
}
}`, s.URL, filepath.Join(tempDir, "notary-server.crt"))
configFile.Close()
// copy the certs to be relative to the config directory
for _, fname := range []string{"notary-server.crt", "notary-server.key", "root-ca.crt"} {
content, err := ioutil.ReadFile(filepath.Join("../../fixtures", fname))
require.NoError(t, err)
require.NoError(t, ioutil.WriteFile(filepath.Join(tempDir, fname), content, 0766))
}
// set a config file, so it doesn't check ~/.notary/config.json by default,
// and execute a random command so that the flags are parsed
cmd := NewNotaryCommand()
cmd.SetArgs([]string{"-c", configFile.Name(), "-d", tempDir, "list", "repo"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
err = cmd.Execute()
require.Error(t, err, "there was no repository, so list should have failed")
require.NotContains(t, err.Error(), "TLS", "there was no TLS error though!")
// validate that we actually managed to connect and attempted to download the root though
require.Len(t, m.gotten, 1)
require.Equal(t, m.gotten[0], "repo.root")
}
// Whatever TLS config is in the config file can be overridden by the command line
// TLS flags, which are relative to the CWD (not the config) or absolute
func TestConfigFileOverridenByCmdLineFlags(t *testing.T) {
// Set up a server with a self-signed cert
var err error
// add a handler for getting the root
m := &recordingMetaStore{MemStorage: *storage.NewMemStorage()}
s := httptest.NewUnstartedServer(setupServerHandler(m))
s.TLS, err = tlsconfig.Server(tlsconfig.Options{
CertFile: "../../fixtures/notary-server.crt",
KeyFile: "../../fixtures/notary-server.key",
CAFile: "../../fixtures/root-ca.crt",
ClientAuth: tls.RequireAndVerifyClientCert,
})
require.NoError(t, err)
s.StartTLS()
defer s.Close()
tempDir := tempDirWithConfig(t, fmt.Sprintf(`{
"remote_server": {
"url": "%s",
"root_ca": "nope",
"tls_client_cert": "nope",
"tls_client_key": "nope"
}
}`, s.URL))
defer os.RemoveAll(tempDir)
configFile := filepath.Join(tempDir, "config.json")
// set a config file, so it doesn't check ~/.notary/config.json by default,
// and execute a random command so that the flags are parsed
cwd, err := os.Getwd()
require.NoError(t, err)
cmd := NewNotaryCommand()
cmd.SetArgs([]string{
"-c", configFile, "-d", tempDir, "list", "repo",
"--tlscacert", "../../fixtures/root-ca.crt",
"--tlscert", filepath.Clean(filepath.Join(cwd, "../../fixtures/notary-server.crt")),
"--tlskey", "../../fixtures/notary-server.key"})
cmd.SetOutput(new(bytes.Buffer)) // eat the output
err = cmd.Execute()
require.Error(t, err, "there was no repository, so list should have failed")
require.NotContains(t, err.Error(), "TLS", "there was no TLS error though!")
// validate that we actually managed to connect and attempted to download the root though
require.Len(t, m.gotten, 1)
require.Equal(t, m.gotten[0], "repo.root")
}
// the config can specify trust pinning settings for TOFU, as well as pinned certs or CAs
func TestConfigFileTrustPinning(t *testing.T) {
var err error
tempDir := tempDirWithConfig(t, `{
"trust_pinning": {
"disable_tofu": false
}
}`)
defer os.RemoveAll(tempDir)
commander := &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
configFile: filepath.Join(tempDir, "config.json"),
}
// Check that tofu was set correctly
config, err := commander.parseConfig()
require.NoError(t, err)
require.Equal(t, false, config.GetBool("trust_pinning.disable_tofu"))
trustPin, err := getTrustPinning(config)
require.NoError(t, err)
require.Equal(t, false, trustPin.DisableTOFU)
tempDir = tempDirWithConfig(t, `{
"remote_server": {
"url": "%s"
},
"trust_pinning": {
"disable_tofu": true
}
}`)
defer os.RemoveAll(tempDir)
commander = &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
configFile: filepath.Join(tempDir, "config.json"),
}
// Check that tofu was correctly disabled
config, err = commander.parseConfig()
require.NoError(t, err)
require.Equal(t, true, config.GetBool("trust_pinning.disable_tofu"))
trustPin, err = getTrustPinning(config)
require.NoError(t, err)
require.Equal(t, true, trustPin.DisableTOFU)
tempDir = tempDirWithConfig(t, fmt.Sprintf(`{
"trust_pinning": {
"certs": {
"repo3": ["%s"]
}
}
}`, strings.Repeat("x", notary.Sha256HexSize)))
defer os.RemoveAll(tempDir)
commander = &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
configFile: filepath.Join(tempDir, "config.json"),
}
config, err = commander.parseConfig()
require.NoError(t, err)
require.Equal(t, []interface{}{strings.Repeat("x", notary.Sha256HexSize)}, config.GetStringMap("trust_pinning.certs")["repo3"])
trustPin, err = getTrustPinning(config)
require.NoError(t, err)
require.Equal(t, strings.Repeat("x", notary.Sha256HexSize), trustPin.Certs["repo3"][0])
// Check that an invalid cert ID pinning format fails
tempDir = tempDirWithConfig(t, fmt.Sprintf(`{
"trust_pinning": {
"certs": {
"repo3": "%s"
}
}
}`, strings.Repeat("x", notary.Sha256HexSize)))
defer os.RemoveAll(tempDir)
commander = &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
configFile: filepath.Join(tempDir, "config.json"),
}
config, err = commander.parseConfig()
require.NoError(t, err)
trustPin, err = getTrustPinning(config)
require.Error(t, err)
tempDir = tempDirWithConfig(t, fmt.Sprintf(`{
"trust_pinning": {
"ca": {
"repo4": "%s"
}
}
}`, "root-ca.crt"))
defer os.RemoveAll(tempDir)
commander = &notaryCommander{
getRetriever: func() notary.PassRetriever { return passphrase.ConstantRetriever("pass") },
configFile: filepath.Join(tempDir, "config.json"),
}
config, err = commander.parseConfig()
require.NoError(t, err)
require.Equal(t, "root-ca.crt", config.GetStringMap("trust_pinning.ca")["repo4"])
trustPin, err = getTrustPinning(config)
require.NoError(t, err)
require.Equal(t, "root-ca.crt", trustPin.CA["repo4"])
}
func TestPassphraseRetrieverCaching(t *testing.T) {
// Set up passphrase environment vars
require.NoError(t, os.Setenv("NOTARY_ROOT_PASSPHRASE", "root_passphrase"))
require.NoError(t, os.Setenv("NOTARY_TARGETS_PASSPHRASE", "targets_passphrase"))
require.NoError(t, os.Setenv("NOTARY_SNAPSHOT_PASSPHRASE", "snapshot_passphrase"))
require.NoError(t, os.Setenv("NOTARY_DELEGATION_PASSPHRASE", "delegation_passphrase"))
defer os.Clearenv()
// Check the caching
retriever := getPassphraseRetriever()
passphrase, giveup, err := retriever("key", data.CanonicalRootRole, false, 0)
require.NoError(t, err)
require.False(t, giveup)
require.Equal(t, passphrase, "root_passphrase")
passphrase, giveup, err = retriever("key", data.CanonicalTargetsRole, false, 0)
require.NoError(t, err)
require.False(t, giveup)
require.Equal(t, passphrase, "targets_passphrase")
passphrase, giveup, err = retriever("key", data.CanonicalSnapshotRole, false, 0)
require.NoError(t, err)
require.False(t, giveup)
require.Equal(t, passphrase, "snapshot_passphrase")
passphrase, giveup, err = retriever("key", "targets/releases", false, 0)
require.NoError(t, err)
require.False(t, giveup)
require.Equal(t, passphrase, "delegation_passphrase")
// We don't require a targets/ prefix in PEM headers for delegation keys
passphrase, giveup, err = retriever("key", "user", false, 0)
require.NoError(t, err)
require.False(t, giveup)
require.Equal(t, passphrase, "delegation_passphrase")
}


@@ -1,203 +0,0 @@
package main
import (
"encoding/hex"
"fmt"
"io"
"sort"
"strings"
"github.com/docker/notary/client"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/olekukonko/tablewriter"
)
// returns a tablewriter
func getTable(headers []string, writer io.Writer) *tablewriter.Table {
table := tablewriter.NewWriter(writer)
table.SetBorder(false)
table.SetColumnSeparator(" ")
table.SetAlignment(tablewriter.ALIGN_LEFT)
table.SetCenterSeparator("-")
table.SetAutoWrapText(false)
table.SetHeader(headers)
return table
}
// --- pretty printing certs ---
func truncateWithEllipsis(str string, maxWidth int, leftTruncate bool) string {
if len(str) <= maxWidth {
return str
}
if leftTruncate {
return fmt.Sprintf("...%s", str[len(str)-(maxWidth-3):])
}
return fmt.Sprintf("%s...", str[:maxWidth-3])
}
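// For example, truncateWithEllipsis("1234567890", 8, false) returns "12345..."
// and truncateWithEllipsis("1234567890", 8, true) returns "...67890" (see
// TestTruncateWithEllipsis).
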
const (
maxGUNWidth = 25
maxLocWidth = 40
)
type keyInfo struct {
gun string // assumption that this is "" if role is root
role string
keyID string
location string
}
// We want to sort by GUN, then by role, then by key ID, then by location.
// In the case of a root role, there is no GUN, and root roles come first.
type keyInfoSorter []keyInfo
func (k keyInfoSorter) Len() int { return len(k) }
func (k keyInfoSorter) Swap(i, j int) { k[i], k[j] = k[j], k[i] }
func (k keyInfoSorter) Less(i, j int) bool {
// special-case role
if k[i].role != k[j].role {
if k[i].role == data.CanonicalRootRole {
return true
}
if k[j].role == data.CanonicalRootRole {
return false
}
// otherwise, neither of them is root; they're just different, so
// go with the normal sort order.
}
// sort order is GUN, role, keyID, location.
orderedI := []string{k[i].gun, k[i].role, k[i].keyID, k[i].location}
orderedJ := []string{k[j].gun, k[j].role, k[j].keyID, k[j].location}
for x := 0; x < 4; x++ {
switch {
case orderedI[x] < orderedJ[x]:
return true
case orderedI[x] > orderedJ[x]:
return false
}
// continue on and evaluate the next item
}
// this shouldn't happen - that means two values are exactly equal
return false
}
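// For example, root keys (which have no GUN) always sort to the top; the
// remaining keys sort by GUN, then role, then key ID, then location (see
// TestKeyInfoSorter for a concrete ordering).
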
// Given a list of KeyStores in order of listing preference, pretty-prints the
// root keys and then the signing keys.
func prettyPrintKeys(keyStores []trustmanager.KeyStore, writer io.Writer) {
var info []keyInfo
for _, store := range keyStores {
for keyID, keyIDInfo := range store.ListKeys() {
info = append(info, keyInfo{
role: keyIDInfo.Role,
location: store.Name(),
gun: keyIDInfo.Gun,
keyID: keyID,
})
}
}
if len(info) == 0 {
writer.Write([]byte("No signing keys found.\n"))
return
}
sort.Stable(keyInfoSorter(info))
table := getTable([]string{"ROLE", "GUN", "KEY ID", "LOCATION"}, writer)
for _, oneKeyInfo := range info {
table.Append([]string{
oneKeyInfo.role,
truncateWithEllipsis(oneKeyInfo.gun, maxGUNWidth, true),
oneKeyInfo.keyID,
truncateWithEllipsis(oneKeyInfo.location, maxLocWidth, true),
})
}
table.Render()
}
// --- pretty printing targets ---
type targetsSorter []*client.TargetWithRole
func (t targetsSorter) Len() int { return len(t) }
func (t targetsSorter) Swap(i, j int) { t[i], t[j] = t[j], t[i] }
func (t targetsSorter) Less(i, j int) bool {
return t[i].Name < t[j].Name
}
// --- pretty printing roles ---
type roleSorter []*data.Role
func (r roleSorter) Len() int { return len(r) }
func (r roleSorter) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r roleSorter) Less(i, j int) bool {
return r[i].Name < r[j].Name
}
// Pretty-prints the sorted list of TargetWithRoles.
func prettyPrintTargets(ts []*client.TargetWithRole, writer io.Writer) {
if len(ts) == 0 {
writer.Write([]byte("\nNo targets present in this repository.\n\n"))
return
}
sort.Stable(targetsSorter(ts))
table := getTable([]string{"Name", "Digest", "Size (bytes)", "Role"}, writer)
for _, t := range ts {
table.Append([]string{
t.Name,
hex.EncodeToString(t.Hashes["sha256"]),
fmt.Sprintf("%d", t.Length),
t.Role,
})
}
table.Render()
}
// Pretty-prints the list of provided Roles
func prettyPrintRoles(rs []*data.Role, writer io.Writer, roleType string) {
if len(rs) == 0 {
writer.Write([]byte(fmt.Sprintf("\nNo %s present in this repository.\n\n", roleType)))
return
}
// this sorter works for Role types
sort.Stable(roleSorter(rs))
table := getTable([]string{"Role", "Paths", "Key IDs", "Threshold"}, writer)
for _, r := range rs {
table.Append([]string{
r.Name,
prettyPrintPaths(r.Paths),
strings.Join(r.KeyIDs, "\n"),
fmt.Sprintf("%v", r.Threshold),
})
}
table.Render()
}
// Pretty-prints a list of delegation paths, and ensures the empty string is printed as "" in the console
func prettyPrintPaths(paths []string) string {
// sort paths first
sort.Strings(paths)
prettyPaths := []string{}
for _, path := range paths {
// print the empty path explicitly as "" and mark it as matching all paths with <all paths>
if path == "" {
path = "\"\" <all paths>"
}
prettyPaths = append(prettyPaths, path)
}
return strings.Join(prettyPaths, "\n")
}


@@ -1,254 +0,0 @@
package main
import (
"bytes"
"crypto/rand"
"encoding/hex"
"fmt"
"io/ioutil"
"reflect"
"sort"
"strings"
"testing"
"github.com/docker/notary/client"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/stretchr/testify/require"
)
// --- tests for pretty printing keys ---
func TestTruncateWithEllipsis(t *testing.T) {
digits := "1234567890"
// do not truncate
require.Equal(t, truncateWithEllipsis(digits, 10, true), digits)
require.Equal(t, truncateWithEllipsis(digits, 10, false), digits)
require.Equal(t, truncateWithEllipsis(digits, 11, true), digits)
require.Equal(t, truncateWithEllipsis(digits, 11, false), digits)
// left and right truncate
require.Equal(t, truncateWithEllipsis(digits, 8, true), "...67890")
require.Equal(t, truncateWithEllipsis(digits, 8, false), "12345...")
}
func TestKeyInfoSorter(t *testing.T) {
expected := []keyInfo{
{role: data.CanonicalRootRole, gun: "", keyID: "a", location: "i"},
{role: data.CanonicalRootRole, gun: "", keyID: "a", location: "j"},
{role: data.CanonicalRootRole, gun: "", keyID: "z", location: "z"},
{role: "a", gun: "a", keyID: "a", location: "y"},
{role: "b", gun: "a", keyID: "a", location: "y"},
{role: "b", gun: "a", keyID: "b", location: "y"},
{role: "b", gun: "a", keyID: "b", location: "z"},
{role: "a", gun: "b", keyID: "a", location: "z"},
}
jumbled := make([]keyInfo, len(expected))
// randomish indices
for j, e := range []int{3, 6, 1, 4, 0, 7, 5, 2} {
jumbled[j] = expected[e]
}
sort.Sort(keyInfoSorter(jumbled))
require.True(t, reflect.DeepEqual(expected, jumbled),
fmt.Sprintf("Expected %v, Got %v", expected, jumbled))
}
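// otherMemoryStore is a KeyMemoryStore whose Name() is long enough that the
// location column has to be truncated when printed.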
type otherMemoryStore struct {
trustmanager.KeyMemoryStore
}
func (l *otherMemoryStore) Name() string {
return strings.Repeat("z", 70)
}
// If there are no keys in any of the key stores, a message that there are no
// signing keys should be displayed.
func TestPrettyPrintZeroKeys(t *testing.T) {
ret := passphrase.ConstantRetriever("pass")
emptyKeyStore := trustmanager.NewKeyMemoryStore(ret)
var b bytes.Buffer
prettyPrintKeys([]trustmanager.KeyStore{emptyKeyStore}, &b)
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, 1)
require.Equal(t, "No signing keys found.", lines[0])
}
// Given a list of key stores, the keys should be pretty-printed with their
// roles, GUNs, IDs, and locations, in sorted order
func TestPrettyPrintRootAndSigningKeys(t *testing.T) {
ret := passphrase.ConstantRetriever("pass")
keyStores := []trustmanager.KeyStore{
trustmanager.NewKeyMemoryStore(ret),
&otherMemoryStore{KeyMemoryStore: *trustmanager.NewKeyMemoryStore(ret)},
}
longNameShortened := "..." + strings.Repeat("z", 37)
keys := make([]data.PrivateKey, 4)
for i := 0; i < 4; i++ {
key, err := trustmanager.GenerateED25519Key(rand.Reader)
require.NoError(t, err)
keys[i] = key
}
root := data.CanonicalRootRole
// add keys to the key stores
require.NoError(t, keyStores[0].AddKey(trustmanager.KeyInfo{Role: root, Gun: ""}, keys[0]))
require.NoError(t, keyStores[1].AddKey(trustmanager.KeyInfo{Role: root, Gun: ""}, keys[0]))
require.NoError(t, keyStores[0].AddKey(trustmanager.KeyInfo{Role: data.CanonicalTargetsRole, Gun: strings.Repeat("/a", 30)}, keys[1]))
require.NoError(t, keyStores[1].AddKey(trustmanager.KeyInfo{Role: data.CanonicalSnapshotRole, Gun: "short/gun"}, keys[1]))
require.NoError(t, keyStores[0].AddKey(trustmanager.KeyInfo{Role: "targets/a", Gun: ""}, keys[3]))
require.NoError(t, keyStores[0].AddKey(trustmanager.KeyInfo{Role: "invalidRole", Gun: ""}, keys[2]))
expected := [][]string{
// root always comes first
{root, keys[0].ID(), keyStores[0].Name()},
{root, keys[0].ID(), longNameShortened},
// these have no gun, so they come first
{"invalidRole", keys[2].ID(), keyStores[0].Name()},
{"targets/a", keys[3].ID(), keyStores[0].Name()},
// these have guns, and are sorted then by guns
{data.CanonicalTargetsRole, "..." + strings.Repeat("/a", 11), keys[1].ID(), keyStores[0].Name()},
{data.CanonicalSnapshotRole, "short/gun", keys[1].ID(), longNameShortened},
}
var b bytes.Buffer
prettyPrintKeys(keyStores, &b)
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, len(expected)+2)
// starts with headers
require.True(t, reflect.DeepEqual(strings.Fields(lines[0]),
[]string{"ROLE", "GUN", "KEY", "ID", "LOCATION"}))
require.Equal(t, "----", lines[1][:4])
for i, line := range lines[2:] {
// we are purposely not putting spaces in the test data so it is easier to split
splitted := strings.Fields(line)
for j, v := range splitted {
require.Equal(t, expected[i][j], strings.TrimSpace(v))
}
}
}
// --- tests for pretty printing targets ---
// If there are no targets, no table is printed, only a line saying that there
// are no targets.
func TestPrettyPrintZeroTargets(t *testing.T) {
var b bytes.Buffer
prettyPrintTargets([]*client.TargetWithRole{}, &b)
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, 1)
require.Equal(t, "No targets present in this repository.", lines[0])
}
// Targets are sorted by name, and the name, SHA256 digest, size, and role are
// printed.
func TestPrettyPrintSortedTargets(t *testing.T) {
hashes := make([][]byte, 3)
var err error
for i, letter := range []string{"a012", "b012", "c012"} {
hashes[i], err = hex.DecodeString(letter)
require.NoError(t, err)
}
unsorted := []*client.TargetWithRole{
{Target: client.Target{Name: "zebra", Hashes: data.Hashes{"sha256": hashes[0]}, Length: 8}, Role: "targets/b"},
{Target: client.Target{Name: "aardvark", Hashes: data.Hashes{"sha256": hashes[1]}, Length: 1},
Role: "targets"},
{Target: client.Target{Name: "bee", Hashes: data.Hashes{"sha256": hashes[2]}, Length: 5}, Role: "targets/a"},
}
var b bytes.Buffer
prettyPrintTargets(unsorted, &b)
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
expected := [][]string{
{"aardvark", "b012", "1", "targets"},
{"bee", "c012", "5", "targets/a"},
{"zebra", "a012", "8", "targets/b"},
}
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, len(expected)+2)
// starts with headers
require.True(t, reflect.DeepEqual(strings.Fields(lines[0]), strings.Fields(
"NAME DIGEST SIZE (BYTES) ROLE")))
require.Equal(t, "----", lines[1][:4])
for i, line := range lines[2:] {
splitted := strings.Fields(line)
require.Equal(t, expected[i], splitted)
}
}
// --- tests for pretty printing roles ---
// If there are no roles, no table is printed, only a line saying that there
// are no roles.
func TestPrettyPrintZeroRoles(t *testing.T) {
var b bytes.Buffer
prettyPrintRoles([]*data.Role{}, &b, "delegations")
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, 1)
require.Equal(t, "No delegations present in this repository.", lines[0])
}
// Roles are sorted by name, and the name, paths, and KeyIDs are printed.
func TestPrettyPrintSortedRoles(t *testing.T) {
var err error
unsorted := []*data.Role{
{Name: "targets/zebra", Paths: []string{"stripes", "black", "white"}, RootRole: data.RootRole{KeyIDs: []string{"101"}, Threshold: 1}},
{Name: "targets/aardvark/unicorn/pony", Paths: []string{"rainbows"}, RootRole: data.RootRole{KeyIDs: []string{"135"}, Threshold: 1}},
{Name: "targets/bee", Paths: []string{"honey"}, RootRole: data.RootRole{KeyIDs: []string{"246"}, Threshold: 1}},
{Name: "targets/bee/wasp", Paths: []string{"honey/sting", "stuff"}, RootRole: data.RootRole{KeyIDs: []string{"246", "468"}, Threshold: 1}},
}
var b bytes.Buffer
prettyPrintRoles(unsorted, &b, "delegations")
text, err := ioutil.ReadAll(&b)
require.NoError(t, err)
expected := [][]string{
{"targets/aardvark/unicorn/pony", "rainbows", "135", "1"},
{"targets/bee", "honey", "246", "1"},
{"targets/bee/wasp", "honey/sting", "246", "1"},
{"stuff", "468"}, // Extra keys and paths are printed to extra rows
{"targets/zebra", "black", "101", "1"},
{"stripes"},
{"white"},
}
lines := strings.Split(strings.TrimSpace(string(text)), "\n")
require.Len(t, lines, len(expected)+2)
// starts with headers
require.True(t, reflect.DeepEqual(strings.Fields(lines[0]), strings.Fields(
"ROLE PATHS KEY IDS THRESHOLD")))
require.Equal(t, "----", lines[1][:4])
for i, line := range lines[2:] {
splitted := strings.Fields(line)
require.Equal(t, expected[i], splitted)
}
}


@@ -1,33 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFhjCCA26gAwIBAgIJAMJ4Mtt6YhNLMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0G
A1UECgwGRG9ja2VyMRowGAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTAeFw0xNTA3
MTYwNDI1MDBaFw0yNTA3MTMwNDI1MDBaMF8xCzAJBgNVBAYTAlVTMQswCQYDVQQI
DAJDQTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2VyMRow
GAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIP
ADCCAgoCggIBAMzUzq2O07tm3A/4emCN/294jUBnNeGlM4TgsB8W9ingw9CU7oBn
CRTK94cGDHTb5ofcj9Kt4/dSL52uJpkZshmAga4fDDhtntnUHaKYzjoZSKZtq7qV
hC1Dah7s3zftZn4NHiRe82loXH/W0//0MWdQCaLc8E0rd/amrd6EO+5SUwF4dXSk
nWoo3oxtOEnb1uQcWWIiwLRmd1pw3PW/bt/SHssD5dJ+78/nR1qCHhJyLVpylMiy
WijkMKW7mbQFefuCOsQ0QvGG3BrTLu+fVs9GYNzHC+L1bSQbfts4nOSodcB/klhd
mbgVW8mrgeHww/jgb2WJW9Y3RFNp/VEuhVrHiz/NW2qE3nPLEnu0vd50jYIXbvBm
fbhCoJntYAiCY0l8v+POgP3ACtsS41rcn8VyD3Ho4u4186ki71+QRQTsUk2MXRV6
AKQ9u4Cl4d0tV1oHjVyiKDv8PNakNrI48KmnF9R9wMgzDHIoBVQZraVTyPwW9HvS
8K3Lsm6QAE7pErideOyBViOiiqvW7rUaLERTkhGirX2RChwhYLtYIj0LitgzdaT4
JD1JxonqN30g2jk1+mJKMEeWBMTjFqtzuQPYH3HkHKxoNfvEuL5fsZSmhV/mR+yW
lSe1f8r1qpAACj/K3mome/z8UhNxzEW8TCYkwamLkAPF485W64KIYI1tAgMBAAGj
RTBDMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYDVR0PAQH/BAQDAgFGMB0GA1UdDgQW
BBR1DNVNxOFsi9Z7xXfnT2PH+DtoWTANBgkqhkiG9w0BAQsFAAOCAgEAUbbrI3OQ
5XO8HHpoTwVqFzSzKOuSSrcMGrv67rn+2HvVJYfxtusZBS6+Rw7QVG3daPS+pSNX
NM1qyin3BjpNR2lI771yyK/yjjNH9pZPR+8ThJ8/77roLJudTCCPt49PoYgSQQsp
IB75PlqnTWVwccW9pm2zSdqDxFeZpTpwEvgyX8MNCfYeynxp5+S81593z8iav16u
t2I38NyFJKuxin9zNkxkpf/a9Pr/Gk56gw1OfHXp+sW/6KIzx8fjQuL6P8HEpwVG
zXXA8fMX91cIFI4+DTc8mPjtYvT6/PzDWE/q6FZZnbHJ50Ngg5D8uFN5lLgZFNtf
ITeoNjTk2koq8vvTW8FDpMkb50zqGdBoIdDtRFd3oot+MEg+6mba+Kttwg05aJ9a
SIIxjvU4NH6qOXBSgzaI1hMr7DTBnaXxMEBiaNaPg2nqi6uhaUOcVw3F01yBfGfX
aGsNLKpFiKFYQfOR1M2ho/7AL19GYQD3IFWDJqk0/eQLfFR74iKVMz6ndwt9F7A8
0xxGXGpw2NJQTWLQui4Wzt33q541ihzL7EDtybBScUdIOIEO20mHr2czFoTL9IKx
rU0Ck5BMyMBB+DOppP+TeKjutAI1yRVsNoabOuK4oo/FmqysgQoHEE+gVUThrrpE
wV1EBILkX6O4GiMqu1+x92/yCmlKEg0Q6MM=
-----END CERTIFICATE-----


@@ -1,751 +0,0 @@
package main
import (
"bufio"
"encoding/hex"
"fmt"
"net"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/registry/client/auth"
"github.com/docker/distribution/registry/client/transport"
"github.com/docker/docker/pkg/term"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/notary"
notaryclient "github.com/docker/notary/client"
"github.com/docker/notary/trustpinning"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/utils"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var cmdTUFListTemplate = usageTemplate{
Use: "list [ GUN ]",
Short: "Lists targets for a remote trusted collection.",
Long: "Lists all targets for a remote trusted collection identified by the Globally Unique Name. This is an online operation.",
}
var cmdTUFAddTemplate = usageTemplate{
Use: "add [ GUN ] <target> <file>",
Short: "Adds the file as a target to the trusted collection.",
Long: "Adds the file as a target to the local trusted collection identified by the Globally Unique Name. This is an offline operation. Please then use `publish` to push the changes to the remote trusted collection.",
}
var cmdTUFAddHashTemplate = usageTemplate{
Use: "addhash [ GUN ] <target> <byte size> <hashes>",
Short: "Adds the byte size and hash(es) as a target to the trusted collection.",
Long: "Adds the specified byte size and hash(es) as a target to the local trusted collection identified by the Globally Unique Name. This is an offline operation. Please then use `publish` to push the changes to the remote trusted collection.",
}
var cmdTUFRemoveTemplate = usageTemplate{
Use: "remove [ GUN ] <target>",
Short: "Removes a target from a trusted collection.",
Long: "Removes a target from the local trusted collection identified by the Globally Unique Name. This is an offline operation. Please then use `publish` to push the changes to the remote trusted collection.",
}
var cmdTUFInitTemplate = usageTemplate{
Use: "init [ GUN ]",
Short: "Initializes a local trusted collection.",
Long: "Initializes a local trusted collection identified by the Globally Unique Name. This is an online operation.",
}
var cmdTUFLookupTemplate = usageTemplate{
Use: "lookup [ GUN ] <target>",
Short: "Looks up a specific target in a remote trusted collection.",
Long: "Looks up a specific target in a remote trusted collection identified by the Globally Unique Name.",
}
var cmdTUFPublishTemplate = usageTemplate{
Use: "publish [ GUN ]",
Short: "Publishes the local trusted collection.",
Long: "Publishes the local trusted collection identified by the Globally Unique Name, sending the local changes to a remote trusted server.",
}
var cmdTUFStatusTemplate = usageTemplate{
Use: "status [ GUN ]",
Short: "Displays status of unpublished changes to the local trusted collection.",
Long: "Displays status of unpublished changes to the local trusted collection identified by the Globally Unique Name.",
}
var cmdTUFVerifyTemplate = usageTemplate{
Use: "verify [ GUN ] <target>",
Short: "Verifies if the content is included in the remote trusted collection",
Long: "Verifies if the data passed in STDIN is included in the remote trusted collection identified by the Global Unique Name.",
}
type tufCommander struct {
// these need to be set
configGetter func() (*viper.Viper, error)
retriever notary.PassRetriever
// these are for command line parsing - no need to set
roles []string
sha256 string
sha512 string
input string
output string
quiet bool
}
func (t *tufCommander) AddToCommand(cmd *cobra.Command) {
cmd.AddCommand(cmdTUFInitTemplate.ToCommand(t.tufInit))
cmd.AddCommand(cmdTUFStatusTemplate.ToCommand(t.tufStatus))
cmd.AddCommand(cmdTUFPublishTemplate.ToCommand(t.tufPublish))
cmd.AddCommand(cmdTUFLookupTemplate.ToCommand(t.tufLookup))
cmdTUFList := cmdTUFListTemplate.ToCommand(t.tufList)
cmdTUFList.Flags().StringSliceVarP(
&t.roles, "roles", "r", nil, "Delegation roles to list targets for (will shadow targets role)")
cmd.AddCommand(cmdTUFList)
cmdTUFAdd := cmdTUFAddTemplate.ToCommand(t.tufAdd)
cmdTUFAdd.Flags().StringSliceVarP(&t.roles, "roles", "r", nil, "Delegation roles to add this target to")
cmd.AddCommand(cmdTUFAdd)
cmdTUFRemove := cmdTUFRemoveTemplate.ToCommand(t.tufRemove)
cmdTUFRemove.Flags().StringSliceVarP(&t.roles, "roles", "r", nil, "Delegation roles to remove this target from")
cmd.AddCommand(cmdTUFRemove)
cmdTUFAddHash := cmdTUFAddHashTemplate.ToCommand(t.tufAddByHash)
cmdTUFAddHash.Flags().StringSliceVarP(&t.roles, "roles", "r", nil, "Delegation roles to add this target to")
cmdTUFAddHash.Flags().StringVar(&t.sha256, notary.SHA256, "", "hex encoded sha256 of the target to add")
cmdTUFAddHash.Flags().StringVar(&t.sha512, notary.SHA512, "", "hex encoded sha512 of the target to add")
cmd.AddCommand(cmdTUFAddHash)
cmdTUFVerify := cmdTUFVerifyTemplate.ToCommand(t.tufVerify)
cmdTUFVerify.Flags().StringVarP(&t.input, "input", "i", "", "Read from a file, instead of STDIN")
cmdTUFVerify.Flags().StringVarP(&t.output, "output", "o", "", "Write to a file, instead of STDOUT")
cmdTUFVerify.Flags().BoolVarP(&t.quiet, "quiet", "q", false, "No output except for errors")
cmd.AddCommand(cmdTUFVerify)
}
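// For illustration only (examples drawn from the exampleValidCommands list in
// the tests, not from this file), the commands wired up above are invoked as:
//
//	notary init repo
//	notary add repo v1 somefile
//	notary addhash repo targetv1 --sha256 <64 hex characters> 10
//	notary publish repo
//	notary list repo
//	notary verify repo v1
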
func (t *tufCommander) tufAddByHash(cmd *cobra.Command, args []string) error {
if len(args) < 3 || t.sha256 == "" && t.sha512 == "" {
cmd.Usage()
return fmt.Errorf("Must specify a GUN, target, byte size of target data, and at least one hash")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
targetName := args[1]
targetSize := args[2]
targetInt64Len, err := strconv.ParseInt(targetSize, 0, 64)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// no online operations are performed by add so the transport argument
// should be nil
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, t.retriever, trustPin)
if err != nil {
return err
}
targetHash := data.Hashes{}
if t.sha256 != "" {
if len(t.sha256) != notary.Sha256HexSize {
return fmt.Errorf("invalid sha256 hex contents provided")
}
sha256Hash, err := hex.DecodeString(t.sha256)
if err != nil {
return err
}
targetHash[notary.SHA256] = sha256Hash
}
if t.sha512 != "" {
if len(t.sha512) != notary.Sha512HexSize {
return fmt.Errorf("invalid sha512 hex contents provided")
}
sha512Hash, err := hex.DecodeString(t.sha512)
if err != nil {
return err
}
targetHash[notary.SHA512] = sha512Hash
}
// Manually construct the target with the given byte size and hashes
target := &notaryclient.Target{Name: targetName, Hashes: targetHash, Length: targetInt64Len}
// If roles is empty, we default to adding to targets
if err = nRepo.AddTarget(target, t.roles...); err != nil {
return err
}
// Include the hash algorithms we're using for pretty printing
hashesUsed := []string{}
for hashName := range targetHash {
hashesUsed = append(hashesUsed, hashName)
}
cmd.Printf(
"Addition of target \"%s\" by %s hash to repository \"%s\" staged for next publish.\n",
targetName, strings.Join(hashesUsed, ", "), gun)
return nil
}
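// tufAdd stages the addition of a target to the local trusted collection,
// computing its hashes from the file at the given path.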
func (t *tufCommander) tufAdd(cmd *cobra.Command, args []string) error {
if len(args) < 3 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN, target, and path to target data")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
targetName := args[1]
targetPath := args[2]
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// no online operations are performed by add so the transport argument
// should be nil
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, t.retriever, trustPin)
if err != nil {
return err
}
target, err := notaryclient.NewTarget(targetName, targetPath)
if err != nil {
return err
}
// If roles is empty, we default to adding to targets
if err = nRepo.AddTarget(target, t.roles...); err != nil {
return err
}
cmd.Printf(
"Addition of target \"%s\" to repository \"%s\" staged for next publish.\n",
targetName, gun)
return nil
}
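// tufInit initializes the local trusted collection for the given GUN,
// generating a new ECDSA root key if none is available.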
func (t *tufCommander) tufInit(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
rt, err := getTransport(config, gun, false)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, t.retriever, trustPin)
if err != nil {
return err
}
rootKeyList := nRepo.CryptoService.ListKeys(data.CanonicalRootRole)
var rootKeyID string
if len(rootKeyList) < 1 {
cmd.Println("No root keys found. Generating a new root key...")
rootPublicKey, err := nRepo.CryptoService.Create(data.CanonicalRootRole, "", data.ECDSAKey)
if err != nil {
return err
}
rootKeyID = rootPublicKey.ID()
} else {
// Chooses the first root key available, which is initialization-specific
// but should return the hardware-backed one first.
rootKeyID = rootKeyList[0]
cmd.Printf("Root key found, using: %s\n", rootKeyID)
}
if err = nRepo.Initialize(rootKeyID); err != nil {
return err
}
return nil
}
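// tufList prints the targets in the remote trusted collection, preferring any
// delegation roles passed via --roles over the base targets role.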
func (t *tufCommander) tufList(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
rt, err := getTransport(config, gun, true)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, t.retriever, trustPin)
if err != nil {
return err
}
// Retrieve the remote list of signed targets, prioritizing the passed-in list over targets
roles := append(t.roles, data.CanonicalTargetsRole)
targetList, err := nRepo.ListTargets(roles...)
if err != nil {
return err
}
prettyPrintTargets(targetList, cmd.Out())
return nil
}
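// tufLookup prints the name, sha256 digest, and size of a single target in the
// remote trusted collection.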
func (t *tufCommander) tufLookup(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN and target")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
targetName := args[1]
rt, err := getTransport(config, gun, true)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, t.retriever, trustPin)
if err != nil {
return err
}
target, err := nRepo.GetTargetByName(targetName)
if err != nil {
return err
}
cmd.Println(target.Name, fmt.Sprintf("sha256:%x", target.Hashes["sha256"]), target.Length)
return nil
}
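// tufStatus prints the staged, unpublished changes for the given GUN.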
func (t *tufCommander) tufStatus(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, t.retriever, trustPin)
if err != nil {
return err
}
cl, err := nRepo.GetChangelist()
if err != nil {
return err
}
if len(cl.List()) == 0 {
cmd.Printf("No unpublished changes for %s\n", gun)
return nil
}
cmd.Printf("Unpublished changes for %s:\n\n", gun)
cmd.Printf("%-10s%-10s%-12s%s\n", "action", "scope", "type", "path")
cmd.Println("----------------------------------------------------")
for _, ch := range cl.List() {
cmd.Printf("%-10s%-10s%-12s%s\n", ch.Action(), ch.Scope(), ch.Type(), ch.Path())
}
return nil
}
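// tufPublish sends all staged changes for the given GUN to the remote trust
// server.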
func (t *tufCommander) tufPublish(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
cmd.Println("Pushing changes to", gun)
rt, err := getTransport(config, gun, false)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, t.retriever, trustPin)
if err != nil {
return err
}
if err = nRepo.Publish(); err != nil {
return err
}
return nil
}
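// tufRemove stages the removal of a target from the local trusted collection.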
func (t *tufCommander) tufRemove(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
return fmt.Errorf("Must specify a GUN and target")
}
config, err := t.configGetter()
if err != nil {
return err
}
gun := args[0]
targetName := args[1]
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
// no online operations are performed by remove so the transport argument
// should be nil.
repo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), nil, t.retriever, trustPin)
if err != nil {
return err
}
// If roles is empty, we default to removing from targets
if err = repo.RemoveTarget(targetName, t.roles...); err != nil {
return err
}
cmd.Printf("Removal of %s from %s staged for next publish.\n", targetName, gun)
return nil
}
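// tufVerify checks that the payload read from STDIN (or --input) matches the
// hashes of the named target in the remote trusted collection.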
func (t *tufCommander) tufVerify(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
cmd.Usage()
return fmt.Errorf("Must specify a GUN and target")
}
config, err := t.configGetter()
if err != nil {
return err
}
payload, err := getPayload(t)
if err != nil {
return err
}
gun := args[0]
targetName := args[1]
rt, err := getTransport(config, gun, true)
if err != nil {
return err
}
trustPin, err := getTrustPinning(config)
if err != nil {
return err
}
nRepo, err := notaryclient.NewNotaryRepository(
config.GetString("trust_dir"), gun, getRemoteTrustServer(config), rt, t.retriever, trustPin)
if err != nil {
return err
}
target, err := nRepo.GetTargetByName(targetName)
if err != nil {
return fmt.Errorf("error retrieving target by name:%s, error:%v", targetName, err)
}
if err := data.CheckHashes(payload, targetName, target.Hashes); err != nil {
return fmt.Errorf("data not present in the trusted collection, %v", err)
}
return feedback(t, payload)
}
type passwordStore struct {
anonymous bool
}
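// Basic prompts for a username and password on standard input, disabling echo
// while the password is typed when attached to a terminal; it returns empty
// credentials when the store is anonymous.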
func (ps passwordStore) Basic(u *url.URL) (string, string) {
if ps.anonymous {
return "", ""
}
stdin := bufio.NewReader(os.Stdin)
fmt.Fprintf(os.Stdout, "Enter username: ")
userIn, err := stdin.ReadBytes('\n')
if err != nil {
logrus.Errorf("error processing username input: %s", err)
return "", ""
}
username := strings.TrimSpace(string(userIn))
if term.IsTerminal(0) {
state, err := term.SaveState(0)
if err != nil {
logrus.Errorf("error saving terminal state, cannot retrieve password: %s", err)
return "", ""
}
term.DisableEcho(0, state)
defer term.RestoreTerminal(0, state)
}
fmt.Fprintf(os.Stdout, "Enter password: ")
userIn, err = stdin.ReadBytes('\n')
fmt.Fprintln(os.Stdout)
if err != nil {
logrus.Errorf("error processing password input: %s", err)
return "", ""
}
password := strings.TrimSpace(string(userIn))
return username, password
}
// getTransport returns an http.RoundTripper to be used for all http requests.
// It correctly handles the auth challenge/credentials required to interact
// with a notary server over both HTTP Basic Auth and the JWT auth implemented
// in the notary-server.
// The readOnly flag indicates if the operation should be performed as an
// anonymous read-only operation. If the command entered requires write
// permissions on the server, readOnly must be false.
func getTransport(config *viper.Viper, gun string, readOnly bool) (http.RoundTripper, error) {
// Attempt to get a root CA from the config file. An empty value means the host's default CAs are used.
rootCAFile := utils.GetPathRelativeToConfig(config, "remote_server.root_ca")
clientCert := utils.GetPathRelativeToConfig(config, "remote_server.tls_client_cert")
clientKey := utils.GetPathRelativeToConfig(config, "remote_server.tls_client_key")
insecureSkipVerify := false
if config.IsSet("remote_server.skipTLSVerify") {
insecureSkipVerify = config.GetBool("remote_server.skipTLSVerify")
}
if clientCert == "" && clientKey != "" || clientCert != "" && clientKey == "" {
return nil, fmt.Errorf("either pass both client key and cert, or neither")
}
tlsConfig, err := tlsconfig.Client(tlsconfig.Options{
CAFile: rootCAFile,
InsecureSkipVerify: insecureSkipVerify,
CertFile: clientCert,
KeyFile: clientKey,
})
if err != nil {
return nil, fmt.Errorf("unable to configure TLS: %s", err.Error())
}
base := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tlsConfig,
DisableKeepAlives: true,
}
trustServerURL := getRemoteTrustServer(config)
return tokenAuth(trustServerURL, base, gun, readOnly)
}
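// tokenAuth pings the server's /v2/ endpoint to discover its auth challenges
// and wraps the base transport with token and/or basic auth handlers. A nil
// round tripper (with a nil error) is returned when the server cannot be
// reached, which callers treat as offline mode.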
func tokenAuth(trustServerURL string, baseTransport *http.Transport, gun string,
readOnly bool) (http.RoundTripper, error) {
// TODO(dmcgowan): add notary specific headers
authTransport := transport.NewTransport(baseTransport)
pingClient := &http.Client{
Transport: authTransport,
Timeout: 5 * time.Second,
}
endpoint, err := url.Parse(trustServerURL)
if err != nil {
return nil, fmt.Errorf("Could not parse remote trust server url (%s): %s", trustServerURL, err.Error())
}
if endpoint.Scheme == "" {
return nil, fmt.Errorf("Trust server url has to be in the form of http(s)://URL:PORT. Got: %s", trustServerURL)
}
subPath, err := url.Parse("v2/")
if err != nil {
return nil, fmt.Errorf("Failed to parse v2 subpath. This error should not have been reached. Please report it as an issue at https://github.com/docker/notary/issues: %s", err.Error())
}
endpoint = endpoint.ResolveReference(subPath)
req, err := http.NewRequest("GET", endpoint.String(), nil)
if err != nil {
return nil, err
}
resp, err := pingClient.Do(req)
if err != nil {
logrus.Errorf("could not reach %s: %s", trustServerURL, err.Error())
logrus.Info("continuing in offline mode")
return nil, nil
}
// a nil err means we got a response, so the body must be closed
defer resp.Body.Close()
if (resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices) &&
resp.StatusCode != http.StatusUnauthorized {
// If we didn't get a 2XX range or 401 status code, we're not talking to a notary server.
// The http client should be configured to handle redirects so at this point, 3XX is
// not a valid status code.
logrus.Errorf("could not reach %s: %d", trustServerURL, resp.StatusCode)
logrus.Info("continuing in offline mode")
return nil, nil
}
challengeManager := auth.NewSimpleChallengeManager()
if err := challengeManager.AddResponse(resp); err != nil {
return nil, err
}
ps := passwordStore{anonymous: readOnly}
var actions []string
if readOnly {
actions = []string{"pull"}
} else {
actions = []string{"push", "pull"}
}
tokenHandler := auth.NewTokenHandler(authTransport, ps, gun, actions...)
basicHandler := auth.NewBasicHandler(ps)
modifier := auth.NewAuthorizer(challengeManager, tokenHandler, basicHandler)
if !readOnly {
return newAuthRoundTripper(transport.NewTransport(baseTransport, modifier)), nil
}
// Try to authenticate read only repositories using basic username/password authentication
return newAuthRoundTripper(transport.NewTransport(baseTransport, modifier),
transport.NewTransport(baseTransport, auth.NewAuthorizer(challengeManager, auth.NewTokenHandler(authTransport, passwordStore{anonymous: false}, gun, actions...)))), nil
}
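// getRemoteTrustServer returns the remote server URL from the configuration,
// falling back to defaultServerURL.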
func getRemoteTrustServer(config *viper.Viper) string {
if configRemote := config.GetString("remote_server.url"); configRemote != "" {
return configRemote
}
return defaultServerURL
}
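// getTrustPinning parses the trust_pinning section of the configuration into a
// trustpinning.TrustPinConfig. An illustrative config snippet (the GUNs, paths,
// and cert IDs below are hypothetical):
//
//	{
//	  "trust_pinning": {
//	    "disable_tofu": false,
//	    "ca": { "docker.io/library": "/path/to/ca.pem" },
//	    "certs": { "docker.io/library/alpine": [ "abc123..." ] }
//	  }
//	}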
func getTrustPinning(config *viper.Viper) (trustpinning.TrustPinConfig, error) {
var ok bool
// Need to parse out Certs section from config
certMap := config.GetStringMap("trust_pinning.certs")
resultCertMap := make(map[string][]string)
for gun, certSlice := range certMap {
var castedCertSlice []interface{}
if castedCertSlice, ok = certSlice.([]interface{}); !ok {
return trustpinning.TrustPinConfig{}, fmt.Errorf("invalid format for trust_pinning.certs")
}
certsForGun := make([]string, len(castedCertSlice))
for idx, certIDInterface := range castedCertSlice {
if certID, ok := certIDInterface.(string); ok {
certsForGun[idx] = certID
} else {
return trustpinning.TrustPinConfig{}, fmt.Errorf("invalid format for trust_pinning.certs")
}
}
resultCertMap[gun] = certsForGun
}
return trustpinning.TrustPinConfig{
DisableTOFU: config.GetBool("trust_pinning.disable_tofu"),
CA: config.GetStringMapString("trust_pinning.ca"),
Certs: resultCertMap,
}, nil
}
// authRoundTripper tries to authenticate the requests via multiple HTTP transactions (until the first succeeds)
type authRoundTripper struct {
trippers []http.RoundTripper
}
func newAuthRoundTripper(trippers ...http.RoundTripper) http.RoundTripper {
return &authRoundTripper{trippers: trippers}
}
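// RoundTrip tries each configured round tripper in turn and returns the first
// response that is not a 401 Unauthorized (or the first transport error).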
func (a *authRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
var resp *http.Response
// Try each round tripper in order
for _, t := range a.trippers {
var err error
resp, err = t.RoundTrip(req)
// Reject on error
if err != nil {
return resp, err
}
// Stop as soon as the request is authorized or fails with something other than 401
if resp.StatusCode != http.StatusUnauthorized {
return resp, nil
}
}
// Return the last response
return resp, nil
}

View File

@ -1,75 +0,0 @@
package main
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
func TestTokenAuth(t *testing.T) {
var (
readOnly bool
baseTransport = &http.Transport{}
gun = "test"
)
auth, err := tokenAuth("https://localhost:9999", baseTransport, gun, readOnly)
require.NoError(t, err)
require.Nil(t, auth)
}
func StatusOKTestHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
w.Write([]byte("{}"))
}
func TestTokenAuth200Status(t *testing.T) {
var (
readOnly bool
baseTransport = &http.Transport{}
gun = "test"
)
s := httptest.NewServer(http.HandlerFunc(StatusOKTestHandler))
defer s.Close()
auth, err := tokenAuth(s.URL, baseTransport, gun, readOnly)
require.NoError(t, err)
require.NotNil(t, auth)
}
func NotAuthorizedTestHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(401)
}
func TestTokenAuth401Status(t *testing.T) {
var (
readOnly bool
baseTransport = &http.Transport{}
gun = "test"
)
s := httptest.NewServer(http.HandlerFunc(NotAuthorizedTestHandler))
defer s.Close()
auth, err := tokenAuth(s.URL, baseTransport, gun, readOnly)
require.NoError(t, err)
require.NotNil(t, auth)
}
func NotFoundTestHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(404)
}
func TestTokenAuthNon200Non401Status(t *testing.T) {
var (
readOnly bool
baseTransport = &http.Transport{}
gun = "test"
)
s := httptest.NewServer(http.HandlerFunc(NotFoundTestHandler))
defer s.Close()
auth, err := tokenAuth(s.URL, baseTransport, gun, readOnly)
require.NoError(t, err)
require.Nil(t, auth)
}

View File

@ -1,49 +0,0 @@
package main
import (
"fmt"
"io/ioutil"
"os"
)
// getPayload is a helper function to get the content to be verified,
// either from an existing file or STDIN.
func getPayload(t *tufCommander) ([]byte, error) {
// Reads from the given file
if t.input != "" {
// Note that ReadFile loads the entire file into memory, so verifying a
// very large file (for example, over 1GB) will use a correspondingly large
// buffer.
payload, err := ioutil.ReadFile(t.input)
if err != nil {
return nil, err
}
return payload, nil
}
// Reads all of the data on STDIN
payload, err := ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, fmt.Errorf("Error reading content from STDIN: %v", err)
}
return payload, nil
}
// feedback is a helper function that writes the payload to a file or STDOUT,
// or stays silent, depending on the values of the "quiet" and "output" flags.
func feedback(t *tufCommander, payload []byte) error {
// We only get here when everything went well. If the "quiet" flag was
// provided, output nothing and just return.
if t.quiet {
return nil
}
// Flag "quiet" was not "true", that's why we get here.
if t.output != "" {
return ioutil.WriteFile(t.output, payload, 0644)
}
os.Stdout.Write(payload)
return nil
}

View File

@ -1,54 +0,0 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
func TestGetPayload(t *testing.T) {
tempDir, err := ioutil.TempDir("", "test-get-payload")
require.NoError(t, err)
defer os.RemoveAll(tempDir)
file, err := os.Create(filepath.Join(tempDir, "content.txt"))
require.NoError(t, err)
fmt.Fprintf(file, "Release date: June 10, 2016 - Director: Duncan Jones")
file.Close()
commander := &tufCommander{
input: file.Name(),
}
payload, err := getPayload(commander)
require.NoError(t, err)
require.Equal(t, "Release date: June 10, 2016 - Director: Duncan Jones", string(payload))
}
func TestFeedback(t *testing.T) {
tempDir, err := ioutil.TempDir("", "test-feedback")
require.NoError(t, err)
defer os.RemoveAll(tempDir)
file, err := os.Create(filepath.Join(tempDir, "content.txt"))
require.NoError(t, err)
// Expect it to print nothing since "quiet" takes priority.
commander := &tufCommander{
output: file.Name(),
quiet: true,
}
payload := []byte("Release date: June 10, 2016 - Director: Duncan Jones")
err = feedback(commander, payload)
require.NoError(t, err)
content, err := ioutil.ReadFile(file.Name())
require.NoError(t, err)
require.Equal(t, "", string(content))
}

View File

@ -1,19 +0,0 @@
codecov:
notify:
# 2 builds on circleci, 1 jenkins build
after_n_builds: 3
coverage:
status:
# project will give us the diff in the total code coverage between a commit
# and its parent
project:
default:
target: auto
threshold: "0.05%"
# patch would give us the code coverage of the diff only
patch: false
# changes tells us if there are unexpected code coverage changes in other files
# which were not changed by the diff
changes: false
comment: off

View File

@ -1,78 +0,0 @@
package notary
import (
"os"
"syscall"
"time"
)
// application wide constants
const (
// MaxDownloadSize is the maximum size we'll download for metadata if no limit is given
MaxDownloadSize int64 = 100 << 20
// MaxTimestampSize is the maximum size of timestamp metadata - 1MiB.
MaxTimestampSize int64 = 1 << 20
// MinRSABitSize is the minimum bit size for RSA keys allowed in notary
MinRSABitSize = 2048
// MinThreshold requires a minimum of one threshold for roles; currently we do not support a higher threshold
MinThreshold = 1
// PrivKeyPerms are the file permissions to use when writing private keys to disk
PrivKeyPerms = 0700
// PubCertPerms are the file permissions to use when writing public certificates to disk
PubCertPerms = 0755
// Sha256HexSize is the length, in characters, of a hex-encoded SHA256 digest
Sha256HexSize = 64
// Sha512HexSize is the length, in characters, of a hex-encoded SHA512 digest
Sha512HexSize = 128
// SHA256 is the name of SHA256 hash algorithm
SHA256 = "sha256"
// SHA512 is the name of SHA512 hash algorithm
SHA512 = "sha512"
// TrustedCertsDir is the directory, under the notary repo base directory, where trusted certs are stored
TrustedCertsDir = "trusted_certificates"
// PrivDir is the directory, under the notary repo base directory, where private keys are stored
PrivDir = "private"
// RootKeysSubdir is the subdirectory under PrivDir where root private keys are stored
RootKeysSubdir = "root_keys"
// NonRootKeysSubdir is the subdirectory under PrivDir where non-root private keys are stored
NonRootKeysSubdir = "tuf_keys"
// Day is a duration of one day
Day = 24 * time.Hour
Year = 365 * Day
// NotaryRootExpiry is the duration representing the expiry time of the Root role
NotaryRootExpiry = 10 * Year
NotaryTargetsExpiry = 3 * Year
NotarySnapshotExpiry = 3 * Year
NotaryTimestampExpiry = 14 * Day
ConsistentMetadataCacheMaxAge = 30 * Day
CurrentMetadataCacheMaxAge = 5 * time.Minute
// CacheMaxAgeLimit is the generally recommended maximum age for Cache-Control headers
// (one year, in seconds, since one year is forever in terms of internet
// content)
CacheMaxAgeLimit = 1 * Year
MySQLBackend = "mysql"
MemoryBackend = "memory"
SQLiteBackend = "sqlite3"
RethinkDBBackend = "rethinkdb"
)
// NotaryDefaultExpiries is the construct used to configure the default expiry times of
// the various role files.
var NotaryDefaultExpiries = map[string]time.Duration{
"root": NotaryRootExpiry,
"targets": NotaryTargetsExpiry,
"snapshot": NotarySnapshotExpiry,
"timestamp": NotaryTimestampExpiry,
}
// NotarySupportedSignals contains the signals we would like to capture:
// - SIGUSR1, indicates an increment of the log level.
// - SIGUSR2, indicates a decrement of the log level.
var NotarySupportedSignals = []os.Signal{
syscall.SIGUSR1,
syscall.SIGUSR2,
}

View File

@ -1,10 +0,0 @@
#!/usr/bin/env bash
# Given a subpackage and the containing package, figures out which packages
# need to be passed to `go test -coverpkg`: this includes all of the
# subpackage's dependencies within the containing package, as well as the
# subpackage itself.
DEPENDENCIES="$(go list -f $'{{range $f := .Deps}}{{$f}}\n{{end}}' ${1} | grep ${2} | grep -v ${2}/vendor)"
echo "${1} ${DEPENDENCIES}" | xargs echo -n | tr ' ' ','

View File

@ -1,41 +0,0 @@
package cryptoservice
import (
"crypto"
"crypto/rand"
"crypto/x509"
"fmt"
"time"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
)
// GenerateCertificate generates an X509 Certificate from a template, given a GUN and validity interval
func GenerateCertificate(rootKey data.PrivateKey, gun string, startTime, endTime time.Time) (*x509.Certificate, error) {
signer := rootKey.CryptoSigner()
if signer == nil {
return nil, fmt.Errorf("key type not supported for Certificate generation: %s\n", rootKey.Algorithm())
}
return generateCertificate(signer, gun, startTime, endTime)
}
func generateCertificate(signer crypto.Signer, gun string, startTime, endTime time.Time) (*x509.Certificate, error) {
template, err := trustmanager.NewCertificate(gun, startTime, endTime)
if err != nil {
return nil, fmt.Errorf("failed to create the certificate template for: %s (%v)", gun, err)
}
derBytes, err := x509.CreateCertificate(rand.Reader, template, template, signer.Public(), signer)
if err != nil {
return nil, fmt.Errorf("failed to create the certificate for: %s (%v)", gun, err)
}
cert, err := x509.ParseCertificate(derBytes)
if err != nil {
return nil, fmt.Errorf("failed to parse the certificate for key: %s (%v)", gun, err)
}
return cert, nil
}

View File

@ -1,37 +0,0 @@
package cryptoservice
import (
"crypto/rand"
"crypto/x509"
"testing"
"time"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/stretchr/testify/require"
)
func TestGenerateCertificate(t *testing.T) {
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err, "could not generate key")
keyStore := trustmanager.NewKeyMemoryStore(passphraseRetriever)
err = keyStore.AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, privKey)
require.NoError(t, err, "could not add key to store")
// Check GenerateCertificate method
gun := "docker.com/notary"
startTime := time.Now()
cert, err := GenerateCertificate(privKey, gun, startTime, startTime.AddDate(10, 0, 0))
require.NoError(t, err, "could not generate certificate")
// Check public key
ecdsaPrivateKey, err := x509.ParseECPrivateKey(privKey.Private())
require.NoError(t, err)
ecdsaPublicKey := ecdsaPrivateKey.Public()
require.Equal(t, ecdsaPublicKey, cert.PublicKey)
// Check CommonName
require.Equal(t, cert.Subject.CommonName, gun)
}

View File

@ -1,155 +0,0 @@
package cryptoservice
import (
"crypto/rand"
"fmt"
"github.com/Sirupsen/logrus"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
)
const (
rsaKeySize = 2048 // Used for snapshots and targets keys
)
// CryptoService implements Sign and Create, holding a specific GUN and keystore to
// operate on
type CryptoService struct {
keyStores []trustmanager.KeyStore
}
// NewCryptoService returns an instance of CryptoService
func NewCryptoService(keyStores ...trustmanager.KeyStore) *CryptoService {
return &CryptoService{keyStores: keyStores}
}
// Create generates a new private key for the given role and GUN using the requested algorithm, storing it in the first keystore that accepts it
func (cs *CryptoService) Create(role, gun, algorithm string) (data.PublicKey, error) {
var privKey data.PrivateKey
var err error
switch algorithm {
case data.RSAKey:
privKey, err = trustmanager.GenerateRSAKey(rand.Reader, rsaKeySize)
if err != nil {
return nil, fmt.Errorf("failed to generate RSA key: %v", err)
}
case data.ECDSAKey:
privKey, err = trustmanager.GenerateECDSAKey(rand.Reader)
if err != nil {
return nil, fmt.Errorf("failed to generate EC key: %v", err)
}
case data.ED25519Key:
privKey, err = trustmanager.GenerateED25519Key(rand.Reader)
if err != nil {
return nil, fmt.Errorf("failed to generate ED25519 key: %v", err)
}
default:
return nil, fmt.Errorf("private key type not supported for key generation: %s", algorithm)
}
logrus.Debugf("generated new %s key for role: %s and keyID: %s", algorithm, role, privKey.ID())
// Store the private key into our keystore
for _, ks := range cs.keyStores {
err = ks.AddKey(trustmanager.KeyInfo{Role: role, Gun: gun}, privKey)
if err == nil {
return data.PublicKeyFromPrivate(privKey), nil
}
}
if err != nil {
return nil, fmt.Errorf("failed to add key to filestore: %v", err)
}
return nil, fmt.Errorf("keystores would not accept new private keys for unknown reasons")
}
// GetPrivateKey returns a private key and role if present by ID.
func (cs *CryptoService) GetPrivateKey(keyID string) (k data.PrivateKey, role string, err error) {
for _, ks := range cs.keyStores {
if k, role, err = ks.GetKey(keyID); err == nil {
return
}
switch err.(type) {
case trustmanager.ErrPasswordInvalid, trustmanager.ErrAttemptsExceeded:
return
default:
continue
}
}
return // returns whatever the final values were
}
// GetKey returns a key by ID
func (cs *CryptoService) GetKey(keyID string) data.PublicKey {
privKey, _, err := cs.GetPrivateKey(keyID)
if err != nil {
return nil
}
return data.PublicKeyFromPrivate(privKey)
}
// GetKeyInfo returns role and GUN info of a key by ID
func (cs *CryptoService) GetKeyInfo(keyID string) (trustmanager.KeyInfo, error) {
for _, store := range cs.keyStores {
if info, err := store.GetKeyInfo(keyID); err == nil {
return info, nil
}
}
return trustmanager.KeyInfo{}, fmt.Errorf("Could not find info for keyID %s", keyID)
}
// RemoveKey deletes a key by ID
func (cs *CryptoService) RemoveKey(keyID string) (err error) {
for _, ks := range cs.keyStores {
ks.RemoveKey(keyID)
}
return // returns whatever the final values were
}
// AddKey adds a private key to a specified role.
// The GUN is inferred from the cryptoservice itself for non-root roles
func (cs *CryptoService) AddKey(role, gun string, key data.PrivateKey) (err error) {
// First check if this key already exists in any of our keystores
for _, ks := range cs.keyStores {
if keyInfo, err := ks.GetKeyInfo(key.ID()); err == nil {
if keyInfo.Role != role {
return fmt.Errorf("key with same ID already exists for role: %s", keyInfo.Role)
}
logrus.Debugf("key with same ID %s and role %s already exists", key.ID(), keyInfo.Role)
return nil
}
}
// If the key didn't exist in any of our keystores, add and return on the first successful keystore
for _, ks := range cs.keyStores {
// Try to add to this keystore, return if successful
if err = ks.AddKey(trustmanager.KeyInfo{Role: role, Gun: gun}, key); err == nil {
return nil
}
}
return // returns whatever the final values were
}
// ListKeys returns a list of key IDs valid for the given role
func (cs *CryptoService) ListKeys(role string) []string {
var res []string
for _, ks := range cs.keyStores {
for k, r := range ks.ListKeys() {
if r.Role == role {
res = append(res, k)
}
}
}
return res
}
// ListAllKeys returns a map of key IDs to role
func (cs *CryptoService) ListAllKeys() map[string]string {
res := make(map[string]string)
for _, ks := range cs.keyStores {
for k, r := range ks.ListKeys() {
res[k] = r.Role // keys are content addressed so don't care about overwrites
}
}
return res
}

View File

@ -1,419 +0,0 @@
package cryptoservice
import (
"crypto/rand"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/docker/notary/tuf/testutils/interfaces"
)
var algoToSigType = map[string]data.SigAlgorithm{
data.ECDSAKey: data.ECDSASignature,
data.ED25519Key: data.EDDSASignature,
data.RSAKey: data.RSAPSSSignature,
}
var passphraseRetriever = func(string, string, bool, int) (string, bool, error) { return "", false, nil }
type CryptoServiceTester struct {
role string
keyAlgo string
gun string
}
func (c CryptoServiceTester) cryptoServiceFactory() *CryptoService {
return NewCryptoService(trustmanager.NewKeyMemoryStore(passphraseRetriever))
}
// asserts that created key exists
func (c CryptoServiceTester) TestCreateAndGetKey(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
// Test Create
tufKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, c.errorMsg("error creating key"))
// Test GetKey
retrievedKey := cryptoService.GetKey(tufKey.ID())
require.NotNil(t, retrievedKey,
c.errorMsg("Could not find key ID %s", tufKey.ID()))
require.Equal(t, tufKey.Public(), retrievedKey.Public(),
c.errorMsg("retrieved public key didn't match"))
// Test GetPrivateKey
retrievedKey, alias, err := cryptoService.GetPrivateKey(tufKey.ID())
require.NoError(t, err)
require.Equal(t, tufKey.ID(), retrievedKey.ID(),
c.errorMsg("retrieved private key didn't have the right ID"))
require.Equal(t, c.role, alias)
}
// If there are multiple keystores, ensure that a key is only added to one -
// the first in the list of keyStores (which is in order of preference)
func (c CryptoServiceTester) TestCreateAndGetWhenMultipleKeystores(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
cryptoService.keyStores = append(cryptoService.keyStores,
trustmanager.NewKeyMemoryStore(passphraseRetriever))
// Test Create
tufKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, c.errorMsg("error creating key"))
// Only the first keystore should have the key
keyPath := tufKey.ID()
if c.role != data.CanonicalRootRole && c.gun != "" {
keyPath = filepath.Join(c.gun, keyPath)
}
_, _, err = cryptoService.keyStores[0].GetKey(keyPath)
require.NoError(t, err, c.errorMsg(
"First keystore does not have the key %s", keyPath))
_, _, err = cryptoService.keyStores[1].GetKey(keyPath)
require.Error(t, err, c.errorMsg(
"Second keystore has the key %s", keyPath))
// GetKey works across multiple keystores
retrievedKey := cryptoService.GetKey(tufKey.ID())
require.NotNil(t, retrievedKey,
c.errorMsg("Could not find key ID %s", tufKey.ID()))
}
// asserts that getting key fails for a non-existent key
func (c CryptoServiceTester) TestGetNonexistentKey(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
require.Nil(t, cryptoService.GetKey("boguskeyid"),
c.errorMsg("non-nil result for bogus keyid"))
_, _, err := cryptoService.GetPrivateKey("boguskeyid")
require.Error(t, err)
// The underlying error has been correctly propagated.
_, ok := err.(trustmanager.ErrKeyNotFound)
require.True(t, ok)
}
// asserts that signing with a created key creates a valid signature
func (c CryptoServiceTester) TestSignWithKey(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
content := []byte("this is a secret")
tufKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, c.errorMsg("error creating key"))
// Test Sign
privKey, role, err := cryptoService.GetPrivateKey(tufKey.ID())
require.NoError(t, err, c.errorMsg("failed to get private key"))
require.Equal(t, c.role, role)
signature, err := privKey.Sign(rand.Reader, content, nil)
require.NoError(t, err, c.errorMsg("signing failed"))
verifier, ok := signed.Verifiers[algoToSigType[c.keyAlgo]]
require.True(t, ok, c.errorMsg("Unknown verifier for algorithm"))
err = verifier.Verify(tufKey, signature, content)
require.NoError(t, err,
c.errorMsg("verification failed for %s key type", c.keyAlgo))
}
// asserts that signing, if there are no matching keys, produces no signatures
func (c CryptoServiceTester) TestSignNoMatchingKeys(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err, c.errorMsg("error creating key"))
// Test Sign
_, _, err = cryptoService.GetPrivateKey(privKey.ID())
require.Error(t, err, c.errorMsg("Should not have found private key"))
}
// Test GetPrivateKey succeeds when multiple keystores have the same key
func (c CryptoServiceTester) TestGetPrivateKeyMultipleKeystores(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
cryptoService.keyStores = append(cryptoService.keyStores,
trustmanager.NewKeyMemoryStore(passphraseRetriever))
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err, c.errorMsg("error creating key"))
for _, store := range cryptoService.keyStores {
err := store.AddKey(trustmanager.KeyInfo{Role: c.role, Gun: c.gun}, privKey)
require.NoError(t, err)
}
foundKey, role, err := cryptoService.GetPrivateKey(privKey.ID())
require.NoError(t, err, c.errorMsg("failed to get private key"))
require.Equal(t, c.role, role)
require.Equal(t, privKey.ID(), foundKey.ID())
}
func giveUpPassphraseRetriever(_, _ string, _ bool, _ int) (string, bool, error) {
return "", true, nil
}
// Test that ErrPasswordInvalid is correctly propagated
func (c CryptoServiceTester) TestGetPrivateKeyPasswordInvalid(t *testing.T) {
tempBaseDir, err := ioutil.TempDir("", "cs-test-")
require.NoError(t, err, "failed to create a temporary directory: %s", err)
defer os.RemoveAll(tempBaseDir)
// Do not use c.cryptoServiceFactory(), we need a KeyFileStore.
retriever := passphrase.ConstantRetriever("password")
store, err := trustmanager.NewKeyFileStore(tempBaseDir, retriever)
require.NoError(t, err)
cryptoService := NewCryptoService(store)
pubKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, "error generating key: %s", err)
// cryptoService's FileKeyStore caches the unlocked private key, so to test
// private key unlocking we need a new instance.
store, err = trustmanager.NewKeyFileStore(tempBaseDir, giveUpPassphraseRetriever)
require.NoError(t, err)
cryptoService = NewCryptoService(store)
_, _, err = cryptoService.GetPrivateKey(pubKey.ID())
require.EqualError(t, err, trustmanager.ErrPasswordInvalid{}.Error())
}
// Test that ErrAttemptsExceeded is correctly propagated
func (c CryptoServiceTester) TestGetPrivateKeyAttemptsExceeded(t *testing.T) {
tempBaseDir, err := ioutil.TempDir("", "cs-test-")
require.NoError(t, err, "failed to create a temporary directory: %s", err)
defer os.RemoveAll(tempBaseDir)
// Do not use c.cryptoServiceFactory(), we need a KeyFileStore.
retriever := passphrase.ConstantRetriever("password")
store, err := trustmanager.NewKeyFileStore(tempBaseDir, retriever)
require.NoError(t, err)
cryptoService := NewCryptoService(store)
pubKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, "error generating key: %s", err)
// trustmanager.KeyFileStore and trustmanager.KeyMemoryStore both cache the unlocked
// private key, so to test private key unlocking we need a new instance using the
// same underlying storage; this also makes trustmanager.KeyMemoryStore (and
// c.cryptoServiceFactory()) unsuitable.
retriever = passphrase.ConstantRetriever("incorrect password")
store, err = trustmanager.NewKeyFileStore(tempBaseDir, retriever)
require.NoError(t, err)
cryptoService = NewCryptoService(store)
_, _, err = cryptoService.GetPrivateKey(pubKey.ID())
require.EqualError(t, err, trustmanager.ErrAttemptsExceeded{}.Error())
}
// asserts that removing key that exists succeeds
func (c CryptoServiceTester) TestRemoveCreatedKey(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
tufKey, err := cryptoService.Create(c.role, c.gun, c.keyAlgo)
require.NoError(t, err, c.errorMsg("error creating key"))
require.NotNil(t, cryptoService.GetKey(tufKey.ID()))
// Test RemoveKey
err = cryptoService.RemoveKey(tufKey.ID())
require.NoError(t, err, c.errorMsg("could not remove key"))
retrievedKey := cryptoService.GetKey(tufKey.ID())
require.Nil(t, retrievedKey, c.errorMsg("remove didn't work"))
}
// asserts that removing key will remove it from all keystores
func (c CryptoServiceTester) TestRemoveFromMultipleKeystores(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
cryptoService.keyStores = append(cryptoService.keyStores,
trustmanager.NewKeyMemoryStore(passphraseRetriever))
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err, c.errorMsg("error creating key"))
for _, store := range cryptoService.keyStores {
err := store.AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, privKey)
require.NoError(t, err)
}
require.NotNil(t, cryptoService.GetKey(privKey.ID()))
// Remove removes it from all key stores
err = cryptoService.RemoveKey(privKey.ID())
require.NoError(t, err, c.errorMsg("could not remove key"))
for _, store := range cryptoService.keyStores {
_, _, err := store.GetKey(privKey.ID())
require.Error(t, err)
}
}
// asserts that listing keys works with multiple keystores, and that the
// same keys are deduplicated
func (c CryptoServiceTester) TestListFromMultipleKeystores(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
cryptoService.keyStores = append(cryptoService.keyStores,
trustmanager.NewKeyMemoryStore(passphraseRetriever))
expectedKeysIDs := make(map[string]bool) // just want to be able to index by key
for i := 0; i < 3; i++ {
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err, c.errorMsg("error creating key"))
expectedKeysIDs[privKey.ID()] = true
// adds one different key to each keystore, and then one key to
// both keystores
for j, store := range cryptoService.keyStores {
if i == j || i == 2 {
store.AddKey(trustmanager.KeyInfo{Role: data.CanonicalRootRole, Gun: ""}, privKey)
}
}
}
// sanity check - each should have 2
for _, store := range cryptoService.keyStores {
require.Len(t, store.ListKeys(), 2, c.errorMsg("added keys wrong"))
}
keyList := cryptoService.ListKeys("root")
require.Len(t, keyList, 4,
c.errorMsg(
"ListKeys should have 4 keys (not necesarily unique) but does not: %v", keyList))
for _, k := range keyList {
_, ok := expectedKeysIDs[k]
require.True(t, ok, c.errorMsg("Unexpected key %s", k))
}
keyMap := cryptoService.ListAllKeys()
require.Len(t, keyMap, 3,
c.errorMsg("ListAllKeys should have 3 unique keys but does not: %v", keyMap))
for k, role := range keyMap {
_, ok := expectedKeysIDs[k]
require.True(t, ok)
require.Equal(t, "root", role)
}
}
// asserts that adding a key adds to just the first keystore
// and adding an existing key either succeeds if the role matches or fails if it does not
func (c CryptoServiceTester) TestAddKey(t *testing.T) {
cryptoService := c.cryptoServiceFactory()
cryptoService.keyStores = append(cryptoService.keyStores,
trustmanager.NewKeyMemoryStore(passphraseRetriever))
privKey, err := trustmanager.GenerateECDSAKey(rand.Reader)
require.NoError(t, err)
// Add the key to the targets role
require.NoError(t, cryptoService.AddKey(data.CanonicalTargetsRole, c.gun, privKey))
// Check that we added the key and its info to only the first keystore
retrievedKey, retrievedRole, err := cryptoService.keyStores[0].GetKey(privKey.ID())
require.NoError(t, err)
require.Equal(t, privKey.Private(), retrievedKey.Private())
require.Equal(t, data.CanonicalTargetsRole, retrievedRole)
retrievedKeyInfo, err := cryptoService.keyStores[0].GetKeyInfo(privKey.ID())
require.NoError(t, err)
require.Equal(t, data.CanonicalTargetsRole, retrievedKeyInfo.Role)
require.Equal(t, c.gun, retrievedKeyInfo.Gun)
// The key should not exist in the second keystore
_, _, err = cryptoService.keyStores[1].GetKey(privKey.ID())
require.Error(t, err)
_, err = cryptoService.keyStores[1].GetKeyInfo(privKey.ID())
require.Error(t, err)
// We should be able to successfully get the key from the cryptoservice level
retrievedKey, retrievedRole, err = cryptoService.GetPrivateKey(privKey.ID())
require.NoError(t, err)
require.Equal(t, privKey.Private(), retrievedKey.Private())
require.Equal(t, data.CanonicalTargetsRole, retrievedRole)
retrievedKeyInfo, err = cryptoService.GetKeyInfo(privKey.ID())
require.NoError(t, err)
require.Equal(t, data.CanonicalTargetsRole, retrievedKeyInfo.Role)
require.Equal(t, c.gun, retrievedKeyInfo.Gun)
// Add the same key to the targets role, since the info is the same we should have no error
require.NoError(t, cryptoService.AddKey(data.CanonicalTargetsRole, c.gun, privKey))
// Try to add the same key to the snapshot role, which should error due to the role mismatch
require.Error(t, cryptoService.AddKey(data.CanonicalSnapshotRole, c.gun, privKey))
}
// errorMsg returns an error message with information about the key algorithm,
// role, and test name. Ideally we could generate different tests given
// data, without having to put for loops in one giant test function, but
// that involves a lot of boilerplate. So as a compromise, everything will
// still be run in for loops in one giant test function, but we can at
// least provide an error message stating what data/helper test function
// failed.
func (c CryptoServiceTester) errorMsg(message string, args ...interface{}) string {
pc := make([]uintptr, 10) // at least 1 entry needed
runtime.Callers(2, pc) // the caller of errorMsg
f := runtime.FuncForPC(pc[0])
return fmt.Sprintf("%s (role: %s, keyAlgo: %s): %s", f.Name(), c.role,
c.keyAlgo, fmt.Sprintf(message, args...))
}
func testCryptoService(t *testing.T, gun string) {
roles := []string{
data.CanonicalRootRole,
data.CanonicalTargetsRole,
data.CanonicalSnapshotRole,
data.CanonicalTimestampRole,
}
for _, role := range roles {
for algo := range algoToSigType {
cst := CryptoServiceTester{
role: role,
keyAlgo: algo,
gun: gun,
}
cst.TestAddKey(t)
cst.TestCreateAndGetKey(t)
cst.TestCreateAndGetWhenMultipleKeystores(t)
cst.TestGetNonexistentKey(t)
cst.TestSignWithKey(t)
cst.TestSignNoMatchingKeys(t)
cst.TestGetPrivateKeyMultipleKeystores(t)
cst.TestRemoveCreatedKey(t)
cst.TestRemoveFromMultipleKeystores(t)
cst.TestListFromMultipleKeystores(t)
cst.TestGetPrivateKeyPasswordInvalid(t)
cst.TestGetPrivateKeyAttemptsExceeded(t)
}
}
}
func TestCryptoServiceWithNonEmptyGUN(t *testing.T) {
testCryptoService(t, "org/repo")
}
func TestCryptoServiceWithEmptyGUN(t *testing.T) {
testCryptoService(t, "")
}
// CryptoSigner conforms to the signed.CryptoService interface behavior
func TestCryptoSignerInterfaceBehavior(t *testing.T) {
cs := NewCryptoService(trustmanager.NewKeyMemoryStore(passphraseRetriever))
interfaces.EmptyCryptoServiceInterfaceBehaviorTests(t, cs)
interfaces.CreateGetKeyCryptoServiceInterfaceBehaviorTests(t, cs, data.ECDSAKey, true)
cs = NewCryptoService(trustmanager.NewKeyMemoryStore(passphraseRetriever))
interfaces.CreateListKeyCryptoServiceInterfaceBehaviorTests(t, cs, data.ECDSAKey)
cs = NewCryptoService(trustmanager.NewKeyMemoryStore(passphraseRetriever))
interfaces.AddGetKeyCryptoServiceInterfaceBehaviorTests(t, cs, data.ECDSAKey)
cs = NewCryptoService(trustmanager.NewKeyMemoryStore(passphraseRetriever))
interfaces.AddListKeyCryptoServiceInterfaceBehaviorTests(t, cs, data.ECDSAKey)
}

View File

@ -1,313 +0,0 @@
package cryptoservice
import (
"archive/zip"
"crypto/x509"
"encoding/pem"
"errors"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
)
const zipMadeByUNIX = 3 << 8
var (
// ErrNoValidPrivateKey is returned if a key being imported doesn't
// look like a private key
ErrNoValidPrivateKey = errors.New("no valid private key found")
// ErrRootKeyNotEncrypted is returned if a root key being imported is
// unencrypted
ErrRootKeyNotEncrypted = errors.New("only encrypted root keys may be imported")
// ErrNoKeysFoundForGUN is returned if no keys are found for the
// specified GUN during export
ErrNoKeysFoundForGUN = errors.New("no keys found for specified GUN")
)
// ExportKey exports the specified private key to an io.Writer in PEM format.
// The key's existing encryption is preserved.
func (cs *CryptoService) ExportKey(dest io.Writer, keyID, role string) error {
var (
pemBytes []byte
err error
)
for _, ks := range cs.keyStores {
pemBytes, err = ks.ExportKey(keyID)
if err != nil {
continue
}
break
}
if err != nil {
return err
}
nBytes, err := dest.Write(pemBytes)
if err != nil {
return err
}
if nBytes != len(pemBytes) {
return errors.New("Unable to finish writing exported key.")
}
return nil
}
// ExportKeyReencrypt exports the specified private key to an io.Writer in
// PEM format. The key is reencrypted with a new passphrase.
func (cs *CryptoService) ExportKeyReencrypt(dest io.Writer, keyID string, newPassphraseRetriever notary.PassRetriever) error {
privateKey, _, err := cs.GetPrivateKey(keyID)
if err != nil {
return err
}
keyInfo, err := cs.GetKeyInfo(keyID)
if err != nil {
return err
}
// Create temporary keystore to use as a staging area
tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
defer os.RemoveAll(tempBaseDir)
tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
if err != nil {
return err
}
err = tempKeyStore.AddKey(keyInfo, privateKey)
if err != nil {
return err
}
pemBytes, err := tempKeyStore.ExportKey(keyID)
if err != nil {
return err
}
nBytes, err := dest.Write(pemBytes)
if err != nil {
return err
}
if nBytes != len(pemBytes) {
return errors.New("Unable to finish writing exported key.")
}
return nil
}
// ExportAllKeys exports all keys to an io.Writer in zip format.
// newPassphraseRetriever will be used to obtain passphrases to use to encrypt the existing keys.
func (cs *CryptoService) ExportAllKeys(dest io.Writer, newPassphraseRetriever notary.PassRetriever) error {
tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
defer os.RemoveAll(tempBaseDir)
// Create temporary keystore to use as a staging area
tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
if err != nil {
return err
}
for _, ks := range cs.keyStores {
if err := moveKeys(ks, tempKeyStore); err != nil {
return err
}
}
zipWriter := zip.NewWriter(dest)
if err := addKeysToArchive(zipWriter, tempKeyStore); err != nil {
return err
}
zipWriter.Close()
return nil
}
// ImportKeysZip imports keys from a zip file provided as a zip.Reader. The
// keys in the root_keys directory are left encrypted, but the other keys are
// decrypted with the specified passphrase.
func (cs *CryptoService) ImportKeysZip(zipReader zip.Reader, retriever notary.PassRetriever) error {
// Temporarily store the keys in maps, so we can bail early if there's
// an error (for example, wrong passphrase), without leaving the key
// store in an inconsistent state
newKeys := make(map[string][]byte)
// Iterate through the files in the archive, collecting the key bytes without adding them to the store yet
for _, f := range zipReader.File {
fNameTrimmed := strings.TrimSuffix(f.Name, filepath.Ext(f.Name))
rc, err := f.Open()
if err != nil {
return err
}
defer rc.Close()
fileBytes, err := ioutil.ReadAll(rc)
if err != nil {
return err
}
// Note that using / as a separator is okay here - the zip
// package guarantees that the separator will be /
if strings.HasSuffix(fNameTrimmed, "_root") {
if err = CheckRootKeyIsEncrypted(fileBytes); err != nil {
return err
}
}
newKeys[fNameTrimmed] = fileBytes
}
for keyName, pemBytes := range newKeys {
// Get the key role information as well as its data.PrivateKey representation
_, keyInfo, err := trustmanager.KeyInfoFromPEM(pemBytes, keyName)
if err != nil {
return err
}
privKey, err := trustmanager.ParsePEMPrivateKey(pemBytes, "")
if err != nil {
privKey, _, err = trustmanager.GetPasswdDecryptBytes(retriever, pemBytes, "", "imported "+keyInfo.Role)
if err != nil {
return err
}
}
// Add the key to our cryptoservice, will add to the first successful keystore
if err = cs.AddKey(keyInfo.Role, keyInfo.Gun, privKey); err != nil {
return err
}
}
return nil
}
// ExportKeysByGUN exports all keys associated with a specified GUN to an
// io.Writer in zip format. passphraseRetriever is used to select new passphrases to use to
// encrypt the keys.
func (cs *CryptoService) ExportKeysByGUN(dest io.Writer, gun string, passphraseRetriever notary.PassRetriever) error {
tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
defer os.RemoveAll(tempBaseDir)
// Create temporary keystore to use as a staging area
tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, passphraseRetriever)
if err != nil {
return err
}
for _, ks := range cs.keyStores {
if err := moveKeysByGUN(ks, tempKeyStore, gun); err != nil {
return err
}
}
zipWriter := zip.NewWriter(dest)
if len(tempKeyStore.ListKeys()) == 0 {
return ErrNoKeysFoundForGUN
}
if err := addKeysToArchive(zipWriter, tempKeyStore); err != nil {
return err
}
zipWriter.Close()
return nil
}
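// moveKeysByGUN copies every key associated with the given GUN from
// oldKeyStore into newKeyStore.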
func moveKeysByGUN(oldKeyStore, newKeyStore trustmanager.KeyStore, gun string) error {
for keyID, keyInfo := range oldKeyStore.ListKeys() {
// Skip keys that aren't associated with this GUN
if keyInfo.Gun != gun {
continue
}
privKey, _, err := oldKeyStore.GetKey(keyID)
if err != nil {
return err
}
err = newKeyStore.AddKey(keyInfo, privKey)
if err != nil {
return err
}
}
return nil
}
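// moveKeys copies every key from oldKeyStore into newKeyStore.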
func moveKeys(oldKeyStore, newKeyStore trustmanager.KeyStore) error {
for keyID, keyInfo := range oldKeyStore.ListKeys() {
privateKey, _, err := oldKeyStore.GetKey(keyID)
if err != nil {
return err
}
err = newKeyStore.AddKey(keyInfo, privateKey)
if err != nil {
return err
}
}
return nil
}
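// addKeysToArchive writes each key file from the store into the zip archive,
// using the path relative to the store's base directory as the entry name.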
func addKeysToArchive(zipWriter *zip.Writer, newKeyStore *trustmanager.KeyFileStore) error {
for _, relKeyPath := range newKeyStore.ListFiles() {
fullKeyPath, err := newKeyStore.GetPath(relKeyPath)
if err != nil {
return err
}
fi, err := os.Lstat(fullKeyPath)
if err != nil {
return err
}
infoHeader, err := zip.FileInfoHeader(fi)
if err != nil {
return err
}
relPath, err := filepath.Rel(newKeyStore.BaseDir(), fullKeyPath)
if err != nil {
return err
}
infoHeader.Name = relPath
zipFileEntryWriter, err := zipWriter.CreateHeader(infoHeader)
if err != nil {
return err
}
fileContents, err := ioutil.ReadFile(fullKeyPath)
if err != nil {
return err
}
if _, err = zipFileEntryWriter.Write(fileContents); err != nil {
return err
}
}
return nil
}
// CheckRootKeyIsEncrypted makes sure the root key is encrypted. We have
// internal assumptions that depend on this.
func CheckRootKeyIsEncrypted(pemBytes []byte) error {
block, _ := pem.Decode(pemBytes)
if block == nil {
return ErrNoValidPrivateKey
}
if !x509.IsEncryptedPEMBlock(block) {
return ErrRootKeyNotEncrypted
}
return nil
}

View File

@ -1,164 +0,0 @@
// Ensures we can import/export old-style repos
package cryptoservice
import (
"archive/zip"
"io/ioutil"
"os"
"testing"
"github.com/docker/notary"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/stretchr/testify/require"
)
// Zips up the keys in the old repo, and asserts that we can import the zip and
// use said keys. The 0.1 exported format is just a zip file of all the keys.
func TestImport0Dot1Zip(t *testing.T) {
ks, ret, _ := get0Dot1(t)
zipFile, err := ioutil.TempFile("", "notary-test-zipFile")
defer os.RemoveAll(zipFile.Name())
zipWriter := zip.NewWriter(zipFile)
require.NoError(t, err)
require.NoError(t, addKeysToArchive(zipWriter, ks))
zipWriter.Close()
zipFile.Close()
origKeys := make(map[string]string)
for keyID, keyInfo := range ks.ListKeys() {
origKeys[keyID] = keyInfo.Role
}
require.Len(t, origKeys, 3)
// now import the zip file into a new cryptoservice
tempDir, err := ioutil.TempDir("", "notary-test-import")
defer os.RemoveAll(tempDir)
require.NoError(t, err)
ks, err = trustmanager.NewKeyFileStore(tempDir, ret)
require.NoError(t, err)
cs := NewCryptoService(ks)
zipReader, err := zip.OpenReader(zipFile.Name())
require.NoError(t, err)
defer zipReader.Close()
require.NoError(t, cs.ImportKeysZip(zipReader.Reader, passphrase.ConstantRetriever("randompass")))
assertHasKeys(t, cs, origKeys)
}
func get0Dot1(t *testing.T) (*trustmanager.KeyFileStore, notary.PassRetriever, string) {
gun := "docker.com/notary0.1/samplerepo"
ret := passphrase.ConstantRetriever("randompass")
// produce the zip file
ks, err := trustmanager.NewKeyFileStore("../fixtures/compatibility/notary0.1", ret)
require.NoError(t, err)
return ks, ret, gun
}
// Given a map of key IDs to roles, asserts that the cryptoService has all and
// only those keys
func assertHasKeys(t *testing.T, cs *CryptoService, expectedKeys map[string]string) {
keys := cs.ListAllKeys()
require.Len(t, keys, len(expectedKeys))
for keyID, role := range keys {
expectedRole, ok := expectedKeys[keyID]
require.True(t, ok)
require.Equal(t, expectedRole, role)
}
}
// Export all the keys of a cryptoservice to a zipfile, and import it into a
// new cryptoService, and return that new cryptoService
func importExportedZip(t *testing.T, original *CryptoService,
ret notary.PassRetriever, gun string) (*CryptoService, string) {
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
require.NoError(t, err, "failed to create a temporary directory: %s", err)
ks, err := trustmanager.NewKeyFileStore(tempBaseDir, ret)
require.NoError(t, err)
var cs *CryptoService
// export keys
zipFile, err := ioutil.TempFile("", "notary-test-zipFile")
defer os.RemoveAll(zipFile.Name())
if gun != "" {
err = original.ExportKeysByGUN(zipFile, gun, ret)
require.NoError(t, err)
cs = NewCryptoService(ks)
} else {
err = original.ExportAllKeys(zipFile, ret)
require.NoError(t, err)
cs = NewCryptoService(ks)
}
zipFile.Close()
// import keys into the cryptoservice now
zipReader, err := zip.OpenReader(zipFile.Name())
require.NoError(t, err)
defer zipReader.Close()
require.NoError(t, cs.ImportKeysZip(zipReader.Reader, passphrase.ConstantRetriever("randompass")))
return cs, tempBaseDir
}
func TestImportExport0Dot1AllKeys(t *testing.T) {
ks, ret, _ := get0Dot1(t)
cs := NewCryptoService(ks)
newCS, tempDir := importExportedZip(t, cs, ret, "")
defer os.RemoveAll(tempDir)
assertHasKeys(t, newCS, cs.ListAllKeys())
}
func TestImportExport0Dot1GUNKeys(t *testing.T) {
ks, ret, gun := get0Dot1(t)
// remove root from expected key list, because root is not exported when
// we export by gun
expectedKeys := make(map[string]string)
for keyID, keyInfo := range ks.ListKeys() {
if keyInfo.Role != data.CanonicalRootRole {
expectedKeys[keyID] = keyInfo.Role
}
}
// make some other temp directory to create new keys in
tempDir, err := ioutil.TempDir("", "notary-tests-keystore")
defer os.RemoveAll(tempDir)
require.NoError(t, err)
otherKS, err := trustmanager.NewKeyFileStore(tempDir, ret)
require.NoError(t, err)
cs := NewCryptoService(otherKS, ks)
// create a key that is not of the same GUN, and make sure it's in this
// CryptoService
otherPubKey, err := cs.Create(data.CanonicalTargetsRole, "some/other/gun", data.ECDSAKey)
require.NoError(t, err)
k, _, err := cs.GetPrivateKey(otherPubKey.ID())
require.NoError(t, err)
require.NotNil(t, k)
// export/import, and ensure that the other-gun key is not in the new
// CryptoService
newCS, tempDir := importExportedZip(t, cs, ret, gun)
defer os.RemoveAll(tempDir)
assertHasKeys(t, newCS, expectedKeys)
_, _, err = newCS.GetPrivateKey(otherPubKey.ID())
require.Error(t, err)
}

View File

@ -1,491 +0,0 @@
package cryptoservice
import (
"archive/zip"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"testing"
"github.com/docker/notary"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/stretchr/testify/require"
)
const timestampECDSAKeyJSON = `
{"keytype":"ecdsa","keyval":{"public":"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEgl3rzMPMEKhS1k/AX16MM4PdidpjJr+z4pj0Td+30QnpbOIARgpyR1PiFztU8BZlqG3cUazvFclr2q/xHvfrqw==","private":"MHcCAQEEIDqtcdzU7H3AbIPSQaxHl9+xYECt7NpK7B1+6ep5cv9CoAoGCCqGSM49AwEHoUQDQgAEgl3rzMPMEKhS1k/AX16MM4PdidpjJr+z4pj0Td+30QnpbOIARgpyR1PiFztU8BZlqG3cUazvFclr2q/xHvfrqw=="}}`
func createTestServer(t *testing.T) (*httptest.Server, *http.ServeMux) {
mux := http.NewServeMux()
// TUF will request /v2/docker.com/notary/_trust/tuf/timestamp.key
// Return a canned timestamp.key
mux.HandleFunc("/v2/docker.com/notary/_trust/tuf/timestamp.key", func(w http.ResponseWriter, r *http.Request) {
// Also contains the private key, but for the purpose of this
// test, we don't care
fmt.Fprint(w, timestampECDSAKeyJSON)
})
ts := httptest.NewServer(mux)
return ts, mux
}
var oldPassphrase = "oldPassphrase"
var exportPassphrase = "exportPassphrase"
var oldPassphraseRetriever = func(string, string, bool, int) (string, bool, error) { return oldPassphrase, false, nil }
var newPassphraseRetriever = func(string, string, bool, int) (string, bool, error) { return exportPassphrase, false, nil }
func TestImportExportZip(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
pubKey, err := cs.Create(data.CanonicalRootRole, gun, data.ECDSAKey)
require.NoError(t, err)
rootKeyID := pubKey.ID()
tempZipFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempZipFilePath := tempZipFile.Name()
defer os.Remove(tempZipFilePath)
err = cs.ExportAllKeys(tempZipFile, newPassphraseRetriever)
tempZipFile.Close()
require.NoError(t, err)
// Reopen the zip file for importing
zipReader, err := zip.OpenReader(tempZipFilePath)
require.NoError(t, err, "could not open zip file")
// Map of files to expect in the zip file, with the passphrases
passphraseByFile := make(map[string]string)
// Add non-root keys to the map. These should use the new passphrase
// because the passwords were chosen by the newPassphraseRetriever.
privKeyMap := cs.ListAllKeys()
for privKeyName := range privKeyMap {
_, alias, err := cs.GetPrivateKey(privKeyName)
require.NoError(t, err, "privKey %s has no alias", privKeyName)
if alias == data.CanonicalRootRole {
continue
}
relKeyPath := filepath.Join(notary.NonRootKeysSubdir, privKeyName+".key")
passphraseByFile[relKeyPath] = exportPassphrase
}
// Add root key to the map. This will use the export passphrase because it
// will be reencrypted.
relRootKey := filepath.Join(notary.RootKeysSubdir, rootKeyID+".key")
passphraseByFile[relRootKey] = exportPassphrase
// Iterate through the files in the archive, checking that the files
// exist and are encrypted with the expected passphrase.
for _, f := range zipReader.File {
expectedPassphrase, present := passphraseByFile[f.Name]
require.True(t, present, "unexpected file %s in zip file", f.Name)
delete(passphraseByFile, f.Name)
rc, err := f.Open()
require.NoError(t, err, "could not open file inside zip archive")
pemBytes, err := ioutil.ReadAll(rc)
require.NoError(t, err, "could not read file from zip")
_, err = trustmanager.ParsePEMPrivateKey(pemBytes, expectedPassphrase)
require.NoError(t, err, "PEM not encrypted with the expected passphrase")
rc.Close()
}
zipReader.Close()
// Are there any keys that didn't make it to the zip?
require.Len(t, passphraseByFile, 0)
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, newPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
// Reopen the zip file for importing
zipReader, err = zip.OpenReader(tempZipFilePath)
require.NoError(t, err, "could not open zip file")
// Import with a valid passphrase; this should succeed.
err = cs2.ImportKeysZip(zipReader.Reader, newPassphraseRetriever)
require.NoError(t, err)
zipReader.Close()
// Look for keys in private. The filenames should match the key IDs
// in the repo's private key store.
for privKeyName := range privKeyMap {
_, alias, err := cs2.GetPrivateKey(privKeyName)
require.NoError(t, err, "privKey %s has no alias", privKeyName)
if alias == data.CanonicalRootRole {
continue
}
relKeyPath := filepath.Join(notary.NonRootKeysSubdir, privKeyName+".key")
privKeyFileName := filepath.Join(tempBaseDir2, notary.PrivDir, relKeyPath)
_, err = os.Stat(privKeyFileName)
require.NoError(t, err, "missing private key for role %s: %s", alias, privKeyName)
}
// Look for keys in root_keys
// There should be a file named after the key ID of the root key we
// passed in.
rootKeyFilename := rootKeyID + ".key"
_, err = os.Stat(filepath.Join(tempBaseDir2, notary.PrivDir, notary.RootKeysSubdir, rootKeyFilename))
require.NoError(t, err, "missing root key")
}
func TestImportExportGUN(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
_, err = cs.Create(data.CanonicalRootRole, gun, data.ECDSAKey)
require.NoError(t, err)
_, err = cs.Create(data.CanonicalTargetsRole, gun, data.ECDSAKey)
require.NoError(t, err)
_, err = cs.Create(data.CanonicalSnapshotRole, gun, data.ECDSAKey)
require.NoError(t, err)
tempZipFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempZipFilePath := tempZipFile.Name()
defer os.Remove(tempZipFilePath)
err = cs.ExportKeysByGUN(tempZipFile, gun, newPassphraseRetriever)
require.NoError(t, err)
// With an invalid GUN, this should return an error
err = cs.ExportKeysByGUN(tempZipFile, "does.not.exist/in/repository", newPassphraseRetriever)
require.EqualError(t, err, ErrNoKeysFoundForGUN.Error())
tempZipFile.Close()
// Reopen the zip file for importing
zipReader, err := zip.OpenReader(tempZipFilePath)
require.NoError(t, err, "could not open zip file")
// Map of files to expect in the zip file, with the passphrases
passphraseByFile := make(map[string]string)
// Add non-root keys to the map. These should use the new passphrase
// because the passwords were chosen by the newPassphraseRetriever.
privKeyMap := cs.ListAllKeys()
for privKeyName := range privKeyMap {
_, alias, err := cs.GetPrivateKey(privKeyName)
require.NoError(t, err, "privKey %s has no alias", privKeyName)
if alias == data.CanonicalRootRole {
continue
}
relKeyPath := filepath.Join(notary.NonRootKeysSubdir, gun, privKeyName+".key")
passphraseByFile[relKeyPath] = exportPassphrase
}
// Iterate through the files in the archive, checking that the files
// exist and are encrypted with the expected passphrase.
for _, f := range zipReader.File {
expectedPassphrase, present := passphraseByFile[f.Name]
require.True(t, present, "unexpected file %s in zip file", f.Name)
delete(passphraseByFile, f.Name)
rc, err := f.Open()
require.NoError(t, err, "could not open file inside zip archive")
pemBytes, err := ioutil.ReadAll(rc)
require.NoError(t, err, "could not read file from zip")
_, err = trustmanager.ParsePEMPrivateKey(pemBytes, expectedPassphrase)
require.NoError(t, err, "PEM not encrypted with the expected passphrase")
rc.Close()
}
zipReader.Close()
// Are there any keys that didn't make it to the zip?
require.Len(t, passphraseByFile, 0)
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, newPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
// Reopen the zip file for importing
zipReader, err = zip.OpenReader(tempZipFilePath)
require.NoError(t, err, "could not open zip file")
// Import with a valid passphrase; this should succeed.
err = cs2.ImportKeysZip(zipReader.Reader, newPassphraseRetriever)
require.NoError(t, err)
zipReader.Close()
// Look for keys in private. The filenames should match the key IDs
// in the repo's private key store.
for privKeyName, role := range privKeyMap {
if role == data.CanonicalRootRole {
continue
}
_, alias, err := cs2.GetPrivateKey(privKeyName)
require.NoError(t, err, "privKey %s has no alias", privKeyName)
if alias == data.CanonicalRootRole {
continue
}
relKeyPath := filepath.Join(notary.NonRootKeysSubdir, gun, privKeyName+".key")
privKeyFileName := filepath.Join(tempBaseDir2, notary.PrivDir, relKeyPath)
_, err = os.Stat(privKeyFileName)
require.NoError(t, err)
}
}
func TestExportRootKey(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, oldPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
pubKey, err := cs.Create(data.CanonicalRootRole, gun, data.ECDSAKey)
require.NoError(t, err)
rootKeyID := pubKey.ID()
tempKeyFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempKeyFilePath := tempKeyFile.Name()
defer os.Remove(tempKeyFilePath)
err = cs.ExportKey(tempKeyFile, rootKeyID, data.CanonicalRootRole)
require.NoError(t, err)
tempKeyFile.Close()
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, oldPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
keyReader, err := os.Open(tempKeyFilePath)
require.NoError(t, err, "could not open key file")
pemImportBytes, err := ioutil.ReadAll(keyReader)
keyReader.Close()
require.NoError(t, err)
// Convert to a data.PrivateKey, potentially decrypting the key, and add it to the cryptoservice
privKey, _, err := trustmanager.GetPasswdDecryptBytes(oldPassphraseRetriever, pemImportBytes, "", "imported "+data.CanonicalRootRole)
require.NoError(t, err)
err = cs2.AddKey(data.CanonicalRootRole, gun, privKey)
require.NoError(t, err)
// Look for repo's root key in repo2
// There should be a file named after the key ID of the root key we
// imported.
rootKeyFilename := rootKeyID + ".key"
_, err = os.Stat(filepath.Join(tempBaseDir2, notary.PrivDir, notary.RootKeysSubdir, rootKeyFilename))
require.NoError(t, err, "missing root key")
}
func TestExportRootKeyReencrypt(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, oldPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
pubKey, err := cs.Create(data.CanonicalRootRole, gun, data.ECDSAKey)
require.NoError(t, err)
rootKeyID := pubKey.ID()
tempKeyFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempKeyFilePath := tempKeyFile.Name()
defer os.Remove(tempKeyFilePath)
err = cs.ExportKeyReencrypt(tempKeyFile, rootKeyID, newPassphraseRetriever)
require.NoError(t, err)
tempKeyFile.Close()
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, newPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
keyReader, err := os.Open(tempKeyFilePath)
require.NoError(t, err, "could not open key file")
pemImportBytes, err := ioutil.ReadAll(keyReader)
keyReader.Close()
require.NoError(t, err)
// Convert to a data.PrivateKey, potentially decrypting the key, and add it to the cryptoservice
privKey, _, err := trustmanager.GetPasswdDecryptBytes(newPassphraseRetriever, pemImportBytes, "", "imported "+data.CanonicalRootRole)
require.NoError(t, err)
err = cs2.AddKey(data.CanonicalRootRole, gun, privKey)
require.NoError(t, err)
// Look for repo's root key in repo2
// There should be a file named after the key ID of the root key we
// imported.
rootKeyFilename := rootKeyID + ".key"
_, err = os.Stat(filepath.Join(tempBaseDir2, notary.PrivDir, notary.RootKeysSubdir, rootKeyFilename))
require.NoError(t, err, "missing root key")
// Should be able to unlock the root key with the new password
key, alias, err := cs2.GetPrivateKey(rootKeyID)
require.NoError(t, err, "could not unlock root key")
require.Equal(t, data.CanonicalRootRole, alias)
require.Equal(t, rootKeyID, key.ID())
}
func TestExportNonRootKey(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, oldPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
pubKey, err := cs.Create(data.CanonicalTargetsRole, gun, data.ECDSAKey)
require.NoError(t, err)
targetsKeyID := pubKey.ID()
tempKeyFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempKeyFilePath := tempKeyFile.Name()
defer os.Remove(tempKeyFilePath)
err = cs.ExportKey(tempKeyFile, targetsKeyID, data.CanonicalTargetsRole)
require.NoError(t, err)
tempKeyFile.Close()
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, oldPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
keyReader, err := os.Open(tempKeyFilePath)
require.NoError(t, err, "could not open key file")
pemBytes, err := ioutil.ReadAll(keyReader)
require.NoError(t, err, "could not read key file")
// Convert to a data.PrivateKey, potentially decrypting the key, and add it to the cryptoservice
privKey, _, err := trustmanager.GetPasswdDecryptBytes(oldPassphraseRetriever, pemBytes, "", "imported "+data.CanonicalTargetsRole)
require.NoError(t, err)
err = cs2.AddKey(data.CanonicalTargetsRole, gun, privKey)
require.NoError(t, err)
keyReader.Close()
// Look for repo's targets key in repo2
// There should be a file named after the key ID of the targets key we
// imported.
targetsKeyFilename := targetsKeyID + ".key"
_, err = os.Stat(filepath.Join(tempBaseDir2, notary.PrivDir, notary.NonRootKeysSubdir, "docker.com/notary", targetsKeyFilename))
require.NoError(t, err, "missing targets key")
// Check that the key is the same
key, alias, err := cs2.GetPrivateKey(targetsKeyID)
require.NoError(t, err, "could not unlock targets key")
require.Equal(t, data.CanonicalTargetsRole, alias)
require.Equal(t, targetsKeyID, key.ID())
}
func TestExportNonRootKeyReencrypt(t *testing.T) {
gun := "docker.com/notary"
// Temporary directory where test files will be created
tempBaseDir, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore, err := trustmanager.NewKeyFileStore(tempBaseDir, oldPassphraseRetriever)
require.NoError(t, err)
cs := NewCryptoService(fileStore)
pubKey, err := cs.Create(data.CanonicalSnapshotRole, gun, data.ECDSAKey)
require.NoError(t, err)
snapshotKeyID := pubKey.ID()
tempKeyFile, err := ioutil.TempFile("", "notary-test-export-")
require.NoError(t, err)
tempKeyFilePath := tempKeyFile.Name()
defer os.Remove(tempKeyFilePath)
err = cs.ExportKeyReencrypt(tempKeyFile, snapshotKeyID, newPassphraseRetriever)
require.NoError(t, err)
tempKeyFile.Close()
// Create new repo to test import
tempBaseDir2, err := ioutil.TempDir("", "notary-test-")
defer os.RemoveAll(tempBaseDir2)
require.NoError(t, err, "failed to create a temporary directory: %s", err)
fileStore2, err := trustmanager.NewKeyFileStore(tempBaseDir2, newPassphraseRetriever)
require.NoError(t, err)
cs2 := NewCryptoService(fileStore2)
keyReader, err := os.Open(tempKeyFilePath)
require.NoError(t, err, "could not open key file")
pemBytes, err := ioutil.ReadAll(keyReader)
require.NoError(t, err, "could not read key file")
// Convert to a data.PrivateKey, potentially decrypting the key, and add it to the cryptoservice
privKey, _, err := trustmanager.GetPasswdDecryptBytes(newPassphraseRetriever, pemBytes, "", "imported "+data.CanonicalSnapshotRole)
require.NoError(t, err)
err = cs2.AddKey(data.CanonicalSnapshotRole, gun, privKey)
require.NoError(t, err)
keyReader.Close()
// Look for repo's snapshot key in repo2
// There should be a file named after the key ID of the snapshot key we
// imported.
snapshotKeyFilename := snapshotKeyID + ".key"
_, err = os.Stat(filepath.Join(tempBaseDir2, notary.PrivDir, notary.NonRootKeysSubdir, "docker.com/notary", snapshotKeyFilename))
require.NoError(t, err, "missing snapshot key")
// Should be able to unlock the snapshot key with the new password
key, alias, err := cs2.GetPrivateKey(snapshotKeyID)
require.NoError(t, err, "could not unlock snapshot key")
require.Equal(t, data.CanonicalSnapshotRole, alias)
require.Equal(t, snapshotKeyID, key.ID())
}

View File

@ -1,61 +0,0 @@
version: "2"
services:
server:
build:
context: .
dockerfile: server.Dockerfile
networks:
mdb:
sig:
srv:
aliases:
- notary-server
entrypoint: /usr/bin/env sh
command: -c "./migrations/migrate.sh && notary-server -config=fixtures/server-config.json"
depends_on:
- mysql
- signer
signer:
build:
context: .
dockerfile: signer.Dockerfile
networks:
mdb:
sig:
aliases:
- notarysigner
entrypoint: /usr/bin/env sh
command: -c "./migrations/migrate.sh && notary-signer -config=fixtures/signer-config.json"
depends_on:
- mysql
mysql:
networks:
- mdb
volumes:
- ./notarymysql/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
image: mariadb:10.1.10
environment:
- TERM=dumb
- MYSQL_ALLOW_EMPTY_PASSWORD="true"
command: mysqld --innodb_file_per_table
client:
build:
context: .
dockerfile: Dockerfile
command: buildscripts/testclient.sh
volumes:
- ./test_output:/test_output
networks:
- srv
depends_on:
- server
volumes:
notary_data:
external: false
networks:
mdb:
external: false
sig:
external: false
srv:
external: false

View File

@ -1,109 +0,0 @@
version: "2"
services:
server:
build:
context: .
dockerfile: server.Dockerfile
volumes:
- ./fixtures/rethinkdb:/tls
networks:
- rdb
links:
- rdb-proxy:rdb-proxy.rdb
- signer
ports:
- "8080"
- "4443:4443"
entrypoint: /usr/bin/env sh
command: -c "sh migrations/rethink_migrate.sh && notary-server -config=fixtures/server-config.rethink.json"
depends_on:
- rdb-proxy
signer:
build:
context: .
dockerfile: signer.Dockerfile
volumes:
- ./fixtures/rethinkdb:/tls
networks:
rdb:
aliases:
- notarysigner
links:
- rdb-proxy:rdb-proxy.rdb
entrypoint: /usr/bin/env sh
command: -c "sh migrations/rethink_migrate.sh && notary-signer -config=fixtures/signer-config.rethink.json"
depends_on:
- rdb-proxy
rdb-01:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-01-data:/var/data
networks:
rdb:
aliases:
- rdb
- rdb.rdb
- rdb-01.rdb
command: "--bind all --no-http-admin --server-name rdb_01 --canonical-address rdb-01.rdb --directory /var/data/rethinkdb --join rdb.rdb --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
rdb-02:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-02-data:/var/data
networks:
rdb:
aliases:
- rdb
- rdb.rdb
- rdb-02.rdb
command: "--bind all --no-http-admin --server-name rdb_02 --canonical-address rdb-02.rdb --directory /var/data/rethinkdb --join rdb.rdb --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
rdb-03:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-03-data:/var/data
networks:
rdb:
aliases:
- rdb
- rdb.rdb
- rdb-03.rdb
command: "--bind all --no-http-admin --server-name rdb_03 --canonical-address rdb-03.rdb --directory /var/data/rethinkdb --join rdb.rdb --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
rdb-proxy:
image: jlhawn/rethinkdb:2.3.4
ports:
- "8080:8080"
volumes:
- ./fixtures/rethinkdb:/tls
networks:
rdb:
aliases:
- rdb-proxy
- rdb-proxy.rdp
command: "proxy --bind all --join rdb.rdb --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
depends_on:
- rdb-01
- rdb-02
- rdb-03
client:
volumes:
- ./test_output:/test_output
networks:
- rdb
build:
context: .
dockerfile: Dockerfile
links:
- server:notary-server
command: buildscripts/testclient.sh
volumes:
rdb-01-data:
external: false
rdb-02-data:
external: false
rdb-03-data:
external: false
networks:
rdb:
external: false

View File

@ -1,96 +0,0 @@
version: "2"
services:
server:
build:
context: .
dockerfile: server.Dockerfile
volumes:
- ./fixtures/rethinkdb:/tls
networks:
- rdb
links:
- rdb-proxy:rdb-proxy.rdb
- signer
ports:
- "4443:4443"
entrypoint: /usr/bin/env sh
command: -c "sh migrations/rethink_migrate.sh && notary-server -config=fixtures/server-config.rethink.json"
depends_on:
- rdb-proxy
signer:
build:
context: .
dockerfile: signer.Dockerfile
volumes:
- ./fixtures/rethinkdb:/tls
networks:
rdb:
aliases:
- notarysigner
links:
- rdb-proxy:rdb-proxy.rdb
entrypoint: /usr/bin/env sh
command: -c "sh migrations/rethink_migrate.sh && notary-signer -config=fixtures/signer-config.rethink.json"
depends_on:
- rdb-proxy
rdb-01:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-01-data:/var/data
networks:
rdb:
aliases:
- rdb-01.rdb
command: "--bind all --no-http-admin --server-name rdb_01 --canonical-address rdb-01.rdb --directory /var/data/rethinkdb --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
rdb-02:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-02-data:/var/data
networks:
rdb:
aliases:
- rdb-02.rdb
command: "--bind all --no-http-admin --server-name rdb_02 --canonical-address rdb-02.rdb --directory /var/data/rethinkdb --join rdb-01 --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
depends_on:
- rdb-01
rdb-03:
image: jlhawn/rethinkdb:2.3.4
volumes:
- ./fixtures/rethinkdb:/tls
- rdb-03-data:/var/data
networks:
rdb:
aliases:
- rdb-03.rdb
command: "--bind all --no-http-admin --server-name rdb_03 --canonical-address rdb-03.rdb --directory /var/data/rethinkdb --join rdb-02 --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
depends_on:
- rdb-01
- rdb-02
rdb-proxy:
image: jlhawn/rethinkdb:2.3.4
ports:
- "8080:8080"
volumes:
- ./fixtures/rethinkdb:/tls
networks:
rdb:
aliases:
- rdb-proxy
- rdb-proxy.rdp
command: "proxy --bind all --join rdb-03 --driver-tls-ca /tls/ca.pem --driver-tls-key /tls/key.pem --driver-tls-cert /tls/cert.pem --cluster-tls-key /tls/key.pem --cluster-tls-cert /tls/cert.pem --cluster-tls-ca /tls/ca.pem"
depends_on:
- rdb-01
- rdb-02
- rdb-03
volumes:
rdb-01-data:
external: false
rdb-02-data:
external: false
rdb-03-data:
external: false
networks:
rdb:
external: false

View File

@ -1,49 +0,0 @@
version: "2"
services:
server:
build:
context: .
dockerfile: server.Dockerfile
networks:
- mdb
- sig
ports:
- "8080"
- "4443:4443"
entrypoint: /usr/bin/env sh
command: -c "./migrations/migrate.sh && notary-server -config=fixtures/server-config.json"
depends_on:
- mysql
- signer
signer:
build:
context: .
dockerfile: signer.Dockerfile
networks:
mdb:
sig:
aliases:
- notarysigner
entrypoint: /usr/bin/env sh
command: -c "./migrations/migrate.sh && notary-signer -config=fixtures/signer-config.json"
depends_on:
- mysql
mysql:
networks:
- mdb
volumes:
- ./notarymysql/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
- notary_data:/var/lib/mysql
image: mariadb:10.1.10
environment:
- TERM=dumb
- MYSQL_ALLOW_EMPTY_PASSWORD="true"
command: mysqld --innodb_file_per_table
volumes:
notary_data:
external: false
networks:
mdb:
external: false
sig:
external: false

View File

@ -1,17 +0,0 @@
This directory contains sample repositories produced by older versions of the Notary client (TUF metadata, trust anchor certificates, and private keys), used to test backwards compatibility (that newer clients can read old-format repositories); a minimal loading sketch appears after the list below.
The Notary client makes no guarantee of forward compatibility, however: repositories produced by newer clients may not be readable by older clients.
Relevant information for these repositories:
- `notary0.1`
- GUN: `docker.com/notary0.1/samplerepo`
- key passwords: "randompass"
- targets:
```
NAME DIGEST SIZE (BYTES)
---------------------------------------------------------------------------------------------
LICENSE 9395bac6fccb26bcb55efb083d1b4b0fe72a1c25f959f056c016120b3bb56a62 11309
```
- It also has an unpublished changelist that adds a `.gitignore` target.
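
For reference, here is a minimal sketch of how such a fixture can be loaded, using the `trustmanager` and `passphrase` packages the compatibility tests in this repository rely on. The relative fixture path and the `main` wrapper are illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/notary/passphrase"
	"github.com/docker/notary/trustmanager"
)

func main() {
	// All keys in the notary0.1 fixture are encrypted with "randompass".
	ret := passphrase.ConstantRetriever("randompass")

	// Open the old-format key store; a newer client should still be able
	// to read it (backwards compatibility).
	ks, err := trustmanager.NewKeyFileStore("fixtures/compatibility/notary0.1", ret)
	if err != nil {
		log.Fatal(err)
	}

	// List the key IDs and their roles, as the compatibility tests do.
	for keyID, info := range ks.ListKeys() {
		fmt.Printf("%s: %s\n", keyID, info.Role)
	}
}
```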

View File

@ -1,12 +0,0 @@
$ bin/notary -c cmd/notary/config.json -d /tmp/notary0.1 list docker.com/notary0.1/samplerepo
NAME DIGEST SIZE (BYTES)
---------------------------------------------------------------------------------------------
LICENSE 9395bac6fccb26bcb55efb083d1b4b0fe72a1c25f959f056c016120b3bb56a62 11309
$ bin/notary -c cmd/notary/config.json -d /tmp/notary0.1 status docker.com/notary0.1/samplerepo
Unpublished changes for docker.com/notary0.1/samplerepo:
action scope type path
----------------------------------------------------
create targets target .gitignore

View File

@ -1,8 +0,0 @@
-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC,dac5836ee6baf54197daba876c3d84cb
1zj/qNRkW+2N8kWHd18jZ7ddohkABKUDEYdJPvgICDP17v6eZJI/tcTlHWM36Dil
3a/zAwUAyYtbM0hjOXu6/YVP1+2pl+22N0/37PdPTMxb9LOPt3Ujtc70JKP08Kcf
pjM/7YQkjfLdMxLcFJsFHt23+ERzQDRzNSuGv4vn51g=
-----END EC PRIVATE KEY-----

View File

@ -1,8 +0,0 @@
-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC,471240bb633cce396bd638240847fb59
KXNOnT6p7zbia8GC2eXviKWH/cYJXJRrlO2lpAABdH103XAl2YQvUsIcFFG/ZKH9
QgDShODzgf3CF+1yoYnPF0YHvM9VaKkBYKOsQ06wL1j/5VTldUB6wsibMoYWMH1B
owj4RlVMq4M/kP974auTuE28C1AKZfh9yWC4tuWVRXo=
-----END EC PRIVATE KEY-----

View File

@ -1,8 +0,0 @@
-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC,25522de9b9c2ba3fbb65398ee51d2ea2
32PFFLeoX3sfkzYw9uFi4Zt/gVOri9ju9WhTSALwTCSsoG0qlfxbx+gH2c6P56ZT
cnERJWVV2YNPmh8YdeIMczYrqAzfe/YaLokZ5zUPKs706+jYoorviJVHAnSkmQaO
qnnlWyR2ULsFCt1j3flNIZNitNTrzuyQV9yFzPfW0E8=
-----END EC PRIVATE KEY-----

View File

@ -1,11 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIBiTCCAS+gAwIBAgIQRO35ZpmIfqBlRv/yC9GPhTAKBggqhkjOPQQDAjAqMSgw
JgYDVQQDEx9kb2NrZXIuY29tL25vdGFyeTAuMS9zYW1wbGVyZXBvMCAXDTE2MDIw
NTAwNTg1N1oYDzIxMTYwMjA1MDA1ODU3WjAqMSgwJgYDVQQDEx9kb2NrZXIuY29t
L25vdGFyeTAuMS9zYW1wbGVyZXBvMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
2FUY15KjIJySU4Extfrpi3iYGx8rehcHXu7r3BFuYxpzt4K5nLbByd7xF9AP22pN
uE1afYVe4bccXFQfIJAApKM1MDMwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoG
CCsGAQUFBwMDMAwGA1UdEwEB/wQCMAAwCgYIKoZIzj0EAwIDSAAwRQIgHsSH3usp
nHtyyu9vINmdjXeKzBEP7+JhKzv8sgGJTEwCIQDm/HUuxyH5N2CUw4Huq9Q8OZ5P
2pdsVblYj2vJiEIHgw==
-----END CERTIFICATE-----

View File

@ -1 +0,0 @@
{"action":"create","role":"targets","type":"target","path":".gitignore","data":"eyJsZW5ndGgiOjE2MSwiaGFzaGVzIjp7InNoYTI1NiI6IkxITTJUMXZqMDZKVzJDM29jdXdBWXFvRExuc0FYeFFkeVZRZi9ucXA2TVE9In19"}

View File

@ -1 +0,0 @@
{"signed":{"_type":"Root","consistent_snapshot":false,"expires":"2116-01-11T16:58:57.119711158-08:00","keys":{"1192c9d6a8e45e4fa80fd3eb7fff45778ccad29c84f4ce7afcf45f62210a4955":{"keytype":"ecdsa","keyval":{"private":null,"public":"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEwTj8+har32eLdvKi3X7P6njHqzXWTFtOBYEJdJEZ9aqwRbcplVztUeJpdHPA6JoiHAgK9+hXRxVOmM49FAywsA=="}},"4ff57dc987163053a12c066f2dd36b1ae6037a92f5416d381fe311a3db1868d8":{"keytype":"ecdsa-x509","keyval":{"private":null,"public":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpVENDQVMrZ0F3SUJBZ0lRUk8zNVpwbUlmcUJsUnYveUM5R1BoVEFLQmdncWhrak9QUVFEQWpBcU1TZ3cKSmdZRFZRUURFeDlrYjJOclpYSXVZMjl0TDI1dmRHRnllVEF1TVM5ellXMXdiR1Z5WlhCdk1DQVhEVEUyTURJdwpOVEF3TlRnMU4xb1lEekl4TVRZd01qQTFNREExT0RVM1dqQXFNU2d3SmdZRFZRUURFeDlrYjJOclpYSXVZMjl0CkwyNXZkR0Z5ZVRBdU1TOXpZVzF3YkdWeVpYQnZNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUUKMkZVWTE1S2pJSnlTVTRFeHRmcnBpM2lZR3g4cmVoY0hYdTdyM0JGdVl4cHp0NEs1bkxiQnlkN3hGOUFQMjJwTgp1RTFhZllWZTRiY2NYRlFmSUpBQXBLTTFNRE13RGdZRFZSMFBBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HCkNDc0dBUVVGQndNRE1Bd0dBMVVkRXdFQi93UUNNQUF3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnSHNTSDN1c3AKbkh0eXl1OXZJTm1kalhlS3pCRVA3K0poS3p2OHNnR0pURXdDSVFEbS9IVXV4eUg1TjJDVXc0SHVxOVE4T1o1UAoycGRzVmJsWWoydkppRUlIZ3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="}},"7fc757801b9bab4ec9e35bfe7a6b61668ff6f4c81b5632af19e6c728ab799599":{"keytype":"ecdsa","keyval":{"private":null,"public":"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEMNbne5Ki6AlF/B0VHhaZ2Z1xHAL2KqXoL+j5lYUw37Qjhkcav/JG2A3K1qJd6yC+OTa0Bl2PDBEvHvnWNa6WYA=="}},"a55ccf652b0be4b6c4d356cbb02d9ea432bb84a2571665be3df7c7396af8e8b8":{"keytype":"ecdsa","keyval":{"private":null,"public":"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6A7d++A5KSNHAvNGD3r7zCOXE5ztDjGXvYmUxP0VE1AM7pwrIWAdCZVbbZXBMYk+hWVLmNDT6eYqHEhtgyES5Q=="}}},"roles":{"root":{"keyids":["4ff57dc987163053a12c066f2dd36b1ae6037a92f5416d381fe311a3db1868d8"],"threshold":1},"snapshot":{"keyids":["a55ccf652b0be4b6c4d356cbb02d9ea432bb84a2571665be3df7c7396af8e8b8"],"threshold":1},"targets":{"keyids":["7fc757801b9bab4ec9e35bfe7a6b61668ff6f4c81b5632af19e6c728ab799599"],"threshold":1},"timestamp":{"keyids":["1192c9d6a8e45e4fa80fd3eb7fff45778ccad29c84f4ce7afcf45f62210a4955"],"threshold":1}},"version":1},"signatures":[{"keyid":"4ff57dc987163053a12c066f2dd36b1ae6037a92f5416d381fe311a3db1868d8","method":"ecdsa","sig":"pUW6ZM0LVInxuB/M9OMFWimRrAHBhTCwyFczyt49WhLyYVjVGQK87/nLroYW0XQkhY4SAlSXBRt8a5GqEoHv4w=="}]}

View File

@ -1 +0,0 @@
{"signed":{"_type":"Snapshot","expires":"2116-01-11T16:58:57.197958956-08:00","meta":{"root":{"hashes":{"sha256":"fMvVcv5Se773HesG6Or0tdFMvhBa4lyKsi4LSB683F4="},"length":2429},"targets":{"hashes":{"sha256":"HPcl2zJ5gM2Kv1Hz1IihF4HBMvLrrpzjmgccWpJeV7Y="},"length":439}},"version":2},"signatures":[{"keyid":"a55ccf652b0be4b6c4d356cbb02d9ea432bb84a2571665be3df7c7396af8e8b8","method":"ecdsa","sig":"oFwXmCdukz9lJqjGM2MM1/rn3UNvcOAjjXvw3Qo0915qXIJ5/9mABQ7Q8B/7a+GDbi1J4WfOSvAQ16pwQrTv2g=="}]}

View File

@ -1 +0,0 @@
{"signed":{"_type":"Targets","delegations":{"keys":{},"roles":[]},"expires":"2116-01-11T16:58:57.194192511-08:00","targets":{"LICENSE":{"hashes":{"sha256":"k5W6xvzLJry1XvsIPRtLD+cqHCX5WfBWwBYSCzu1amI="},"length":11309}},"version":2},"signatures":[{"keyid":"7fc757801b9bab4ec9e35bfe7a6b61668ff6f4c81b5632af19e6c728ab799599","method":"ecdsa","sig":"Q2YkzUU2dTwanLhtiKd+FogRxi33GmFiX9EreHcyvRNjDU+2hPQwpoKSxlILAoXLxhqA9d7ixDmxmZhyXAzjiQ=="}]}

View File

@ -1 +0,0 @@
{"signed":{"_type":"Timestamp","expires":"2116-01-12T00:58:43.527711489Z","meta":{"snapshot":{"hashes":{"sha256":"lbmRtDEnrK4xv9M42C+vTVUR5Vu8qq373Q+Lnfe9hXk="},"length":488}},"version":1},"signatures":[{"keyid":"1192c9d6a8e45e4fa80fd3eb7fff45778ccad29c84f4ce7afcf45f62210a4955","method":"ecdsa","sig":"W4CVwbbPCBO0hV8KvXNTaOZmysSc3uR5HN7StpkFWfVh4EzqNvxdZc+qZc856mQjE5PYOxFcPNGa+4NIhUV13w=="}]}

View File

@ -1,37 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIGMzCCBBugAwIBAgIBATANBgkqhkiG9w0BAQsFADBfMQswCQYDVQQGEwJVUzEL
MAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0EwHhcNMTUwNzE2MDQyNTAz
WhcNMjUwNzEzMDQyNTAzWjBfMRowGAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTEL
MAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjELMAkGA1UECAwCQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
AQCwVVD4pK7z7pXPpJbaZ1Hg5eRXIcaYtbFPCnN0iqy9HsVEGnEn5BPNSEsuP+m0
5N0qVV7DGb1SjiloLXD1qDDvhXWk+giS9ppqPHPLVPB4bvzsqwDYrtpbqkYvO0YK
0SL3kxPXUFdlkFfgu0xjlczm2PhWG3Jd8aAtspL/L+VfPA13JUaWxSLpui1In8rh
gAyQTK6Q4Of6GbJYTnAHb59UoLXSzB5AfqiUq6L7nEYYKoPflPbRAIWL/UBm0c+H
ocms706PYpmPS2RQv3iOGmnn9hEVp3P6jq7WAevbA4aYGx5EsbVtYABqJBbFWAuw
wTGRYmzn0Mj0eTMge9ztYB2/2sxdTe6uhmFgpUXngDqJI5O9N3zPfvlEImCky3HM
jJoL7g5smqX9o1P+ESLh0VZzhh7IDPzQTXpcPIS/6z0l22QGkK/1N1PaADaUHdLL
vSav3y2BaEmPvf2fkZj8yP5eYgi7Cw5ONhHLDYHFcl9Zm/ywmdxHJETz9nfgXnsW
HNxDqrkCVO46r/u6rSrUt6hr3oddJG8s8Jo06earw6XU3MzM+3giwkK0SSM3uRPq
4AscR1Tv+E31AuOAmjqYQoT29bMIxoSzeljj/YnedwjW45pWyc3JoHaibDwvW9Uo
GSZBVy4hrM/Fa7XCWv1WfHNW1gDwaLYwDnl5jFmRBvcfuQIDAQABo4H5MIH2MIGR
BgNVHSMEgYkwgYaAFHUM1U3E4WyL1nvFd+dPY8f4O2hZoWOkYTBfMQswCQYDVQQG
EwJVUzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNV
BAoMBkRvY2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0GCCQDCeDLbemIT
SzASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEF
BQcDATAOBgNVHQ8BAf8EBAMCAUYwHQYDVR0OBBYEFHe48hcBcAp0bUVlTxXeRA4o
E16pMA0GCSqGSIb3DQEBCwUAA4ICAQAWUtAPdUFpwRq+N1SzGUejSikeMGyPZscZ
JBUCmhZoFufgXGbLO5OpcRLaV3Xda0t/5PtdGMSEzczeoZHWknDtw+79OBittPPj
Sh1oFDuPo35R7eP624lUCch/InZCphTaLx9oDLGcaK3ailQ9wjBdKdlBl8KNKIZp
a13aP5rnSm2Jva+tXy/yi3BSds3dGD8ITKZyI/6AFHxGvObrDIBpo4FF/zcWXVDj
paOmxplRtM4Hitm+sXGvfqJe4x5DuOXOnPrT3dHvRT6vSZUoKobxMqmRTOcrOIPa
EeMpOobshORuRntMDYvvgO3D6p6iciDW2Vp9N6rdMdfOWEQN8JVWvB7IxRHk9qKJ
vYOWVbczAt0qpMvXF3PXLjZbUM0knOdUKIEbqP4YUbgdzx6RtgiiY930Aj6tAtce
0fpgNlvjMRpSBuWTlAfNNjG/YhndMz9uI68TMfFpR3PcgVIv30krw/9VzoLi2Dpe
ow6DrGO6oi+DhN78P4jY/O9UczZK2roZL1Oi5P0RIxf23UZC7x1DlcN3nBr4sYSv
rBx4cFTMNpwU+nzsIi4djcFDKmJdEOyjMnkP2v0Lwe7yvK08pZdEu+0zbrq17kue
XpXLc7K68QB15yxzGylU5rRwzmC/YsAVyE4eoGu8PxWxrERvHby4B8YP0vAfOraL
lKmXlK4dTg==
-----END CERTIFICATE-----

View File

@ -1,68 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFWzCCA0OgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBfMRowGAYDVQQDDBFOb3Rh
cnkgVGVzdGluZyBDQTELMAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lz
Y28xDzANBgNVBAoMBkRvY2tlcjELMAkGA1UECAwCQ0EwHhcNMTUwNzE2MDQyNTMy
WhcNMTYwNzE1MDQyNTMyWjBbMRYwFAYDVQQDDA1ub3Rhcnktc2VydmVyMQswCQYD
VQQGEwJVUzEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2Vy
MQswCQYDVQQIDAJDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKjb
eflOtVrOv0IOeJGKfi5LHH3Di0O2nlZu8AITSJbDZPSXoYc+cprpoEWYncbFFC3C
94z5xBW5vcAqMhLs50ml5ADl86umcLl2C/mX8NuZnlIevMCb0mBiavDtSPV3J5Dq
Ok+trgKEXs9g4hyh5Onh5Y5InPO1lDJ+2cEtVGBMhhddfWRVlV9ZUWxPYVCTt6L0
bD9SeyXJVB0dnFhr3xICayhDlhlvcjXVOTUsewJLo/L2nq0ve93Jb2smKio27ZGE
79bCGqJK213/FNqfAlGUPkhYTfYJTcgjhS1plmtgN6KZF6RVXvOrCBMEDM2yZq1m
EPWjoT0tn0MkWErDANcCAwEAAaOCASQwggEgMIGIBgNVHSMEgYAwfoAUd7jyFwFw
CnRtRWVPFd5EDigTXqmhY6RhMF8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2VyMRowGAYDVQQD
DBFOb3RhcnkgVGVzdGluZyBDQYIBATAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQG
CCsGAQUFBwMCBggrBgEFBQcDATAOBgNVHQ8BAf8EBAMCBaAwNwYDVR0RBDAwLoIN
bm90YXJ5LXNlcnZlcoIMbm90YXJ5c2VydmVygglsb2NhbGhvc3SHBH8AAAEwHQYD
VR0OBBYEFBQcColyhey0o0RTLiiGAtaRhUIuMA0GCSqGSIb3DQEBCwUAA4ICAQAW
jf7f98t5y2C5mjd8om/vfgpJRmnyjFxJD8glCIacnwABAc2MgNoWRISWJnjwSf9W
kZ/tWGeHKdQ4Q7T3+Vu2d8nVpL+cGLZY4iddzxlNqWeaA7Sa7jQSLvOoYYxkb+w5
jUpukvqxGzCToW3dlOaV0qvOhXaOxPD6T8IWivnQdU53oU3kopfYiRjkREA1dIBv
Hwaa6fAjeK4KyBt7pzKScfHzU4X2gXajqc7Ox0NAb5YfIFOqySqcnYNflcZ+lDPd
XVMBdB4eRl1BbVTlonxxATWkhiv8GZUc9YD/bikbFzVYm3N5XRT7LCgyBgrmbH5k
PJUElTP2AsoSRLXUsPgCAhBM9QzHWsMiEh5wcpe61C3Afwv4MLtr7T0T99vp/BJt
OOJ7kJzYhp6P4FTi4uXuT4xcIJ/yTDZcLUTlJSWCuCKCM76yZteWEmlvWBHd9QiF
TDqKzjhrnt2FpPSBSm9Na+hAwsmZfRzQXelXai3aBx55HCIcGZ9o8oGsJaB7uDum
4+lFOhMiGaL2/pxhZcbCCjpLNv/9mCb67iPQV/E8xAY89wsXYpU+i/q1RGbraXLA
K3faVJu6R5taGe0heQr6VGZwF4L+bG64rtxUPqKDCF+Y9FpN4qUDl3vzmYGBTd5u
osKHqyciMmPCpgR7IQd1yYqH1cwlhQX/yTepX9gcrA==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIGMzCCBBugAwIBAgIBATANBgkqhkiG9w0BAQsFADBfMQswCQYDVQQGEwJVUzEL
MAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0EwHhcNMTUwNzE2MDQyNTAz
WhcNMjUwNzEzMDQyNTAzWjBfMRowGAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTEL
MAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjELMAkGA1UECAwCQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
AQCwVVD4pK7z7pXPpJbaZ1Hg5eRXIcaYtbFPCnN0iqy9HsVEGnEn5BPNSEsuP+m0
5N0qVV7DGb1SjiloLXD1qDDvhXWk+giS9ppqPHPLVPB4bvzsqwDYrtpbqkYvO0YK
0SL3kxPXUFdlkFfgu0xjlczm2PhWG3Jd8aAtspL/L+VfPA13JUaWxSLpui1In8rh
gAyQTK6Q4Of6GbJYTnAHb59UoLXSzB5AfqiUq6L7nEYYKoPflPbRAIWL/UBm0c+H
ocms706PYpmPS2RQv3iOGmnn9hEVp3P6jq7WAevbA4aYGx5EsbVtYABqJBbFWAuw
wTGRYmzn0Mj0eTMge9ztYB2/2sxdTe6uhmFgpUXngDqJI5O9N3zPfvlEImCky3HM
jJoL7g5smqX9o1P+ESLh0VZzhh7IDPzQTXpcPIS/6z0l22QGkK/1N1PaADaUHdLL
vSav3y2BaEmPvf2fkZj8yP5eYgi7Cw5ONhHLDYHFcl9Zm/ywmdxHJETz9nfgXnsW
HNxDqrkCVO46r/u6rSrUt6hr3oddJG8s8Jo06earw6XU3MzM+3giwkK0SSM3uRPq
4AscR1Tv+E31AuOAmjqYQoT29bMIxoSzeljj/YnedwjW45pWyc3JoHaibDwvW9Uo
GSZBVy4hrM/Fa7XCWv1WfHNW1gDwaLYwDnl5jFmRBvcfuQIDAQABo4H5MIH2MIGR
BgNVHSMEgYkwgYaAFHUM1U3E4WyL1nvFd+dPY8f4O2hZoWOkYTBfMQswCQYDVQQG
EwJVUzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNV
BAoMBkRvY2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0GCCQDCeDLbemIT
SzASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEF
BQcDATAOBgNVHQ8BAf8EBAMCAUYwHQYDVR0OBBYEFHe48hcBcAp0bUVlTxXeRA4o
E16pMA0GCSqGSIb3DQEBCwUAA4ICAQAWUtAPdUFpwRq+N1SzGUejSikeMGyPZscZ
JBUCmhZoFufgXGbLO5OpcRLaV3Xda0t/5PtdGMSEzczeoZHWknDtw+79OBittPPj
Sh1oFDuPo35R7eP624lUCch/InZCphTaLx9oDLGcaK3ailQ9wjBdKdlBl8KNKIZp
a13aP5rnSm2Jva+tXy/yi3BSds3dGD8ITKZyI/6AFHxGvObrDIBpo4FF/zcWXVDj
paOmxplRtM4Hitm+sXGvfqJe4x5DuOXOnPrT3dHvRT6vSZUoKobxMqmRTOcrOIPa
EeMpOobshORuRntMDYvvgO3D6p6iciDW2Vp9N6rdMdfOWEQN8JVWvB7IxRHk9qKJ
vYOWVbczAt0qpMvXF3PXLjZbUM0knOdUKIEbqP4YUbgdzx6RtgiiY930Aj6tAtce
0fpgNlvjMRpSBuWTlAfNNjG/YhndMz9uI68TMfFpR3PcgVIv30krw/9VzoLi2Dpe
ow6DrGO6oi+DhN78P4jY/O9UczZK2roZL1Oi5P0RIxf23UZC7x1DlcN3nBr4sYSv
rBx4cFTMNpwU+nzsIi4djcFDKmJdEOyjMnkP2v0Lwe7yvK08pZdEu+0zbrq17kue
XpXLc7K68QB15yxzGylU5rRwzmC/YsAVyE4eoGu8PxWxrERvHby4B8YP0vAfOraL
lKmXlK4dTg==
-----END CERTIFICATE-----

View File

@ -1,28 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAqNt5+U61Ws6/Qg54kYp+LkscfcOLQ7aeVm7wAhNIlsNk9Jeh
hz5ymumgRZidxsUULcL3jPnEFbm9wCoyEuznSaXkAOXzq6ZwuXYL+Zfw25meUh68
wJvSYGJq8O1I9XcnkOo6T62uAoRez2DiHKHk6eHljkic87WUMn7ZwS1UYEyGF119
ZFWVX1lRbE9hUJO3ovRsP1J7JclUHR2cWGvfEgJrKEOWGW9yNdU5NSx7Akuj8vae
rS973clvayYqKjbtkYTv1sIaokrbXf8U2p8CUZQ+SFhN9glNyCOFLWmWa2A3opkX
pFVe86sIEwQMzbJmrWYQ9aOhPS2fQyRYSsMA1wIDAQABAoIBAG6mtD1dCJajGM3u
sa+d86XebqMzOtV6nDPDqt+RR2YUUNm/a4g2sd817WLt6aZRizGZq6LkIUyjVObS
P9ILEF1AqjK0fYMkJIZEBwDeQmWFOyxRHBuTgL7Mf4u10rOYC4N5GhEQnRDlMUPw
FvvwUxO4hjdA+ijx+lVErulaDQq0yj5mL4LWu4cHm576OufzgHOIp6fQtfRVJIXD
W2ginblgYFLd+PPiM1RMPR/Pj63VWXWBn1VwLAxWN889E4VG2medl0taQgkNQ3/W
0J04KiTXPrtcUBy2AGoHikvN7gG7Up2IwRRbsXkUdhQNZ/HnIQlkFfteiqqt9VNR
Nsi31nECgYEA0qE+96TvYf8jeZsqrl8YQAvjXWrNA05eKZlT6cm6XpyXq22v9Cgn
2KXEhRwHZF2dQ2C+1PvboeTUbpdPX1nY2shY59L7+t68F/jxotcjx0yL+ZC742Fy
bWsc8Us0Ir2DD5g/+0F+LRLFJKSfJPdLzEkvwuYnlm6RcFlbxIxW6h0CgYEAzTrE
6ulEhN0fKeJY/UaK/8GlLllXc2Z5t7mRicN1s782l5qi0n1R57VJw/Ezx4JN1mcQ
4axe9zzjAA5JfSDfyTyNedP1KOmCaKmBqGa9JppxGcVQpMDg8+QvYnJ8o5JXEXSE
TOnpY4RTEA1RGnA5KbbJ7R1MiHUGXC9nizVHxIMCgYB8cu1DYN5XpmoNddK4CFPJ
s7x4+5t6MpmMNp3P6nMFZ7xte3eU6QzyAq+kfjUX5f//SXA3Y0AX3Z5uYVRyYCGy
0uFEx/I9/dBg0aPjtP3cyauCnzOEW5VCdSE6qFZ7mEGRu0FCcSXd99MnnWSycLMG
Vs+zdk05osan/QQtk0XfOQKBgDfkIWy4SmjEr5AAjKutYn10hz+wJRjQd6WJbBFQ
oeVp1bxD6MPaTUwFGym5rphO7FPPjdFn2BUNB+Uj/u+M3GU5kG31Q3b44QMP5reu
AyVYOiUCj4vO23SQWDc/ZqJFYGDokn8/1Me9acGdXtEMbwTlOujQad9fv3OrlU9c
G0dxAoGAHcntflD6UvQ5/PYOirNJL1GhSspF7u72NrsYjaoZls83uIqucJiB5hMH
Ovq1TJbl0DwDBOyMmt5gZraPQB0P5/5GvnxqGlIAKIwi2VuQ2XHpSBE8Pg5Pveb8
sgFLFnwL5+JyqOP65AV3Eh5b4BJc6kqKz4gVmKLBQeo6lE13sNs=
-----END RSA PRIVATE KEY-----

View File

@ -1,68 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFWzCCA0OgAwIBAgIBATANBgkqhkiG9w0BAQsFADBfMRowGAYDVQQDDBFOb3Rh
cnkgVGVzdGluZyBDQTELMAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lz
Y28xDzANBgNVBAoMBkRvY2tlcjELMAkGA1UECAwCQ0EwHhcNMTUwNzE2MDQyNTIx
WhcNMTYwNzE1MDQyNTIxWjBbMRYwFAYDVQQDDA1ub3Rhcnktc2lnbmVyMQswCQYD
VQQGEwJVUzEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2Vy
MQswCQYDVQQIDAJDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANhO
8+K9xT6M9dQC90Hxs6bmTXWQzE5oV2kLeVKqOjwAvGt6wBE2XJCAbTS3FORIOyoO
VQDVCv2Pk2lZXGWqSrH8SY2umjRJIhPDiqN9V5M/gcmMm2EUgwmp2l4bsDk1MQ6G
Sbud5kjYGZcp9uXxAVO8tfLVLQF7ohJYqiexJN+fZkQyxTgSqrI7MKK1pUvGX/fa
6EXzpKwxTQPJXiG/ZQW0Pn+gdrz+/Cf0PcVyV/Ghc2RR+WjKzqqAiDUJoEtKm/xQ
VRcSPbagVLCe0KZr7VmtDWnHsUv9ZB9BRNlIlRVDOhVDCCcMu/zEtcxuH8ja7faf
i5xNt6vCBmHuCXQtTUsCAwEAAaOCASQwggEgMIGIBgNVHSMEgYAwfoAUd7jyFwFw
CnRtRWVPFd5EDigTXqmhY6RhMF8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2VyMRowGAYDVQQD
DBFOb3RhcnkgVGVzdGluZyBDQYIBATAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQG
CCsGAQUFBwMCBggrBgEFBQcDATAOBgNVHQ8BAf8EBAMCBaAwNwYDVR0RBDAwLoIN
bm90YXJ5LXNpZ25lcoIMbm90YXJ5c2lnbmVygglsb2NhbGhvc3SHBH8AAAEwHQYD
VR0OBBYEFLv4/22eN7pe8IzCbL+gKr2i/o6VMA0GCSqGSIb3DQEBCwUAA4ICAQCR
uX9Wif8uRu5v3tgSWx+EBJleq0nWcWM7VTLPedtpL2Xq+GZldJ7A+BGHgLQ42YjO
/nye92ZcAWllEv676SEInWQmR1wtZ0cnlltvLdsZSCbHpwPpn3CK/afNm8OwtLfC
KmaRU+qlLLtAvnu2fTk8KMTfAc9UJbhtntsH0rjvQPxoMTgjj2gWiWfIQZurkeAT
Bovv7GfvfBsM4jAtAx5ZFOAo6yx1kvCb2rwmnrzzMA7GQTSUzWlwyviNyi8WB+kb
pcm/4e4khDHzIVgCoT+O+gS382CP6cCAUcFfLizxCYvY3uS6P5be+sp8JO4bV9Sc
0nMiDFZWyzEZj1dWMnoWNq1vMEr9NAXexata5B2DIfWZz6pWWMdw3uPo5hZBcNik
6okQacazFCdgmtbXl+TPld8dQEN0beqYhIHQ9aosYyONoBhqn4I/09XQQmxVY2/L
BThsQBIJHh2jIRgFcSePoVDI/lDd6wnqtSwedu+7tShG6bN9tlQsyqf+8MquBC3Q
aw78cRCJG3CZpw0cmMm2vxlraHbB3+XKkQfQGRgEV4C88MO1W7WTyrwCJg9akVYz
l2sG3WANdBs46RHAKDbTBXOKiib5tfTUFRgDqtFJ9/wKJ9mNhhYHPuCkjIt2yPf4
iq/3GeSNdr5stqSN0Wa7w6baqxbuZgqURtOCayAcpA==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIGMzCCBBugAwIBAgIBATANBgkqhkiG9w0BAQsFADBfMQswCQYDVQQGEwJVUzEL
MAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0EwHhcNMTUwNzE2MDQyNTAz
WhcNMjUwNzEzMDQyNTAzWjBfMRowGAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTEL
MAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoMBkRv
Y2tlcjELMAkGA1UECAwCQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
AQCwVVD4pK7z7pXPpJbaZ1Hg5eRXIcaYtbFPCnN0iqy9HsVEGnEn5BPNSEsuP+m0
5N0qVV7DGb1SjiloLXD1qDDvhXWk+giS9ppqPHPLVPB4bvzsqwDYrtpbqkYvO0YK
0SL3kxPXUFdlkFfgu0xjlczm2PhWG3Jd8aAtspL/L+VfPA13JUaWxSLpui1In8rh
gAyQTK6Q4Of6GbJYTnAHb59UoLXSzB5AfqiUq6L7nEYYKoPflPbRAIWL/UBm0c+H
ocms706PYpmPS2RQv3iOGmnn9hEVp3P6jq7WAevbA4aYGx5EsbVtYABqJBbFWAuw
wTGRYmzn0Mj0eTMge9ztYB2/2sxdTe6uhmFgpUXngDqJI5O9N3zPfvlEImCky3HM
jJoL7g5smqX9o1P+ESLh0VZzhh7IDPzQTXpcPIS/6z0l22QGkK/1N1PaADaUHdLL
vSav3y2BaEmPvf2fkZj8yP5eYgi7Cw5ONhHLDYHFcl9Zm/ywmdxHJETz9nfgXnsW
HNxDqrkCVO46r/u6rSrUt6hr3oddJG8s8Jo06earw6XU3MzM+3giwkK0SSM3uRPq
4AscR1Tv+E31AuOAmjqYQoT29bMIxoSzeljj/YnedwjW45pWyc3JoHaibDwvW9Uo
GSZBVy4hrM/Fa7XCWv1WfHNW1gDwaLYwDnl5jFmRBvcfuQIDAQABo4H5MIH2MIGR
BgNVHSMEgYkwgYaAFHUM1U3E4WyL1nvFd+dPY8f4O2hZoWOkYTBfMQswCQYDVQQG
EwJVUzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNV
BAoMBkRvY2tlcjEaMBgGA1UEAwwRTm90YXJ5IFRlc3RpbmcgQ0GCCQDCeDLbemIT
SzASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEF
BQcDATAOBgNVHQ8BAf8EBAMCAUYwHQYDVR0OBBYEFHe48hcBcAp0bUVlTxXeRA4o
E16pMA0GCSqGSIb3DQEBCwUAA4ICAQAWUtAPdUFpwRq+N1SzGUejSikeMGyPZscZ
JBUCmhZoFufgXGbLO5OpcRLaV3Xda0t/5PtdGMSEzczeoZHWknDtw+79OBittPPj
Sh1oFDuPo35R7eP624lUCch/InZCphTaLx9oDLGcaK3ailQ9wjBdKdlBl8KNKIZp
a13aP5rnSm2Jva+tXy/yi3BSds3dGD8ITKZyI/6AFHxGvObrDIBpo4FF/zcWXVDj
paOmxplRtM4Hitm+sXGvfqJe4x5DuOXOnPrT3dHvRT6vSZUoKobxMqmRTOcrOIPa
EeMpOobshORuRntMDYvvgO3D6p6iciDW2Vp9N6rdMdfOWEQN8JVWvB7IxRHk9qKJ
vYOWVbczAt0qpMvXF3PXLjZbUM0knOdUKIEbqP4YUbgdzx6RtgiiY930Aj6tAtce
0fpgNlvjMRpSBuWTlAfNNjG/YhndMz9uI68TMfFpR3PcgVIv30krw/9VzoLi2Dpe
ow6DrGO6oi+DhN78P4jY/O9UczZK2roZL1Oi5P0RIxf23UZC7x1DlcN3nBr4sYSv
rBx4cFTMNpwU+nzsIi4djcFDKmJdEOyjMnkP2v0Lwe7yvK08pZdEu+0zbrq17kue
XpXLc7K68QB15yxzGylU5rRwzmC/YsAVyE4eoGu8PxWxrERvHby4B8YP0vAfOraL
lKmXlK4dTg==
-----END CERTIFICATE-----

View File

@ -1,28 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA2E7z4r3FPoz11AL3QfGzpuZNdZDMTmhXaQt5Uqo6PAC8a3rA
ETZckIBtNLcU5Eg7Kg5VANUK/Y+TaVlcZapKsfxJja6aNEkiE8OKo31Xkz+ByYyb
YRSDCanaXhuwOTUxDoZJu53mSNgZlyn25fEBU7y18tUtAXuiEliqJ7Ek359mRDLF
OBKqsjsworWlS8Zf99roRfOkrDFNA8leIb9lBbQ+f6B2vP78J/Q9xXJX8aFzZFH5
aMrOqoCINQmgS0qb/FBVFxI9tqBUsJ7QpmvtWa0NacexS/1kH0FE2UiVFUM6FUMI
Jwy7/MS1zG4fyNrt9p+LnE23q8IGYe4JdC1NSwIDAQABAoIBAHykYhyRxYrZpv3Y
B6pUIHVX1+Ka4V98+IFrPynHNW9F7UzxmqNQc95AYq0xojQ4+v6s64ZjPMYHaaYW
/AsJKamN+sRNjEX8rko9LzIuE7yhp6QABbjXHPsAiPgZdF5CrFX2Q558yinHfFeC
sualDWK3JxEajaiBGU8BEGt2xAymuWACGblrM1aAEZa8B84TW3CzzcdyzAkn8P3e
piJCe+DWMc33441r0KlV5GruwF9ewXiWzZtXAOiP/0xEDICFdlFWbO39myMpxDdU
Y0uZ+zmn2G3gz2tz25thH0Wl7mDQ3AA0VlHurgPBBEekeZPQmjiKW+F4slCzXvuy
kW/urIECgYEA/LhY+OWlZVXzIEly7z1/cU9/WImqTs2uRKDeQHMwZrd7D9BXkJuQ
jPN+jZlMYBBrxoaCywbMrgB80Z3MgGHaSx9OIDEZmaxyuQv0zQJCMogysYkbCcaD
mHYnyAf7OXa708Z168WAisEhrwa/DXBn3/hPoBkrbMsuPF/J+tEP7lsCgYEA2x2g
86SitgPVeNV3iuZ6D/SV0QIbDWOYoST2GQn2LnfALIOrzpXRClOSQZ2pGtg9gYo1
owUyyOSv2Fke93p3ufHv3Gqvjl55lzBVV0siHkEXwHcol36DDGQcskVnXJqaL3IF
tiOisuJS9A7PW7gEi0miyGzzB/kh/IEWHKqLL9ECgYEAoBOFB+MuqMmQftsHWlLx
7qwUVdidb90IjZ/4J4rPFcESyimFzas8HIv/lWGM5yx/l/iL0F42N+FHLt9tMcTJ
qNvjeLChLp307RGNtm2/0JJEyf+2iLKdmGz/Nc0YbIWw46vJ9dXcXgeHdn4ndjPF
GDEI/rfysa7hUoy6O41BMhECgYBPJsLPgHdufLAOeD44pM0PGnFMERCoo4OtImbr
4JdXbdazvdTASYo7yriYj1VY5yhAtSZu/x+7RjDnXDo9d7XsK6NT4g4Mxb/yh3ks
kW1/tE/aLLEzGHZKcZeUJlISN57e6Ld7dh/9spf4pajuHuk1T6JH+GNKTAqk5hSQ
wmKJIQKBgCGBWGvJrCeT5X9oHdrlHj2YoKvIIG1eibagcjcKemD7sWzi7Q4P7JIo
xeX8K1WVxdBpo4/RiQcGFmwSmSUKwwr1dO00xtjxIl7ip4DU+WAM7CdmcOIOMbr4
rP9T/wy1ZBkERCIw2ElybTzB8yuOlNLuOMhUeU55xUMFNYYrWEp2
-----END RSA PRIVATE KEY-----

View File

@ -1,18 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIC8zCCAdugAwIBAgIJANrYVzo59a4lMA0GCSqGSIb3DQEBCwUAMBAxDjAMBgNV
BAMMBSoucmRiMB4XDTE2MDQwNTIzMDcyNFoXDTI2MDQwMzIzMDcyNFowEDEOMAwG
A1UEAwwFKi5yZGIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQD54K+k
KRjKH33vsZn98YY8bE4p+wJ7OjlVcKdojlVxQ8ZlNM9kip4jDXQK4P90PdxkT8t2
0xxZJqEh2oaOJ9dTi96M0stleeHgud2i862g15iKR9djvLXGaYV50FyT6ZDqaz1y
2KVS0fNy/rKKo8exphhKUymLgroTd9+biNFQ701EfqyNzDHbRCyWD0nIJah218tR
lCCfYfYzPiPIKDc40wPSn16f7pKxLTxYwMSk6iQ2rrF/uRz/Pn0nIjfFsEih15Bz
XibZsToru/SCmJv1T8mYPRccQ+hLfoFpg81pAwcHvOCI8zYkzgNWwTrymlxn65If
EhnjexODf3p7EgnvAgMBAAGjUDBOMB0GA1UdDgQWBBSABTfpeRP7nqHmtXaA4Ai8
E7IGqzAfBgNVHSMEGDAWgBSABTfpeRP7nqHmtXaA4Ai8E7IGqzAMBgNVHRMEBTAD
AQH/MA0GCSqGSIb3DQEBCwUAA4IBAQA3rtK/vl2bAxCk1aF5ub4KVfsQUru7gRfj
UvFZcLGzigAXQ1zHX5TEaivUanDkXmEJ2/sTpHHaZDIDMG6776Jlq7I+rWurVKW3
Lsq05KMW095bn0om4nGrPLZRyfhubJ27nmQhri/+zWCaWkTe1kVpAhjqDyWqkYw4
/roVk4r9P3hf7M1bB9EK/MZU1OLIAGlSn3MaDUewpgwYZDSdItHm1XS56NL3xKgF
r3WtsbRPf71sldL24/YnC/ZLcQq2plrDN7TYv1Xxfo+biI8JWGgQX2bkOSmi7SZ/
46uKF1tdJu6xyZdTko62SFPO9A6+KeY1wosmGc+RAiebPQEoeMUC
-----END CERTIFICATE-----

View File

@ -1,18 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIC8zCCAdugAwIBAgIJANrYVzo59a4lMA0GCSqGSIb3DQEBCwUAMBAxDjAMBgNV
BAMMBSoucmRiMB4XDTE2MDQwNTIzMDcyNFoXDTI2MDQwMzIzMDcyNFowEDEOMAwG
A1UEAwwFKi5yZGIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQD54K+k
KRjKH33vsZn98YY8bE4p+wJ7OjlVcKdojlVxQ8ZlNM9kip4jDXQK4P90PdxkT8t2
0xxZJqEh2oaOJ9dTi96M0stleeHgud2i862g15iKR9djvLXGaYV50FyT6ZDqaz1y
2KVS0fNy/rKKo8exphhKUymLgroTd9+biNFQ701EfqyNzDHbRCyWD0nIJah218tR
lCCfYfYzPiPIKDc40wPSn16f7pKxLTxYwMSk6iQ2rrF/uRz/Pn0nIjfFsEih15Bz
XibZsToru/SCmJv1T8mYPRccQ+hLfoFpg81pAwcHvOCI8zYkzgNWwTrymlxn65If
EhnjexODf3p7EgnvAgMBAAGjUDBOMB0GA1UdDgQWBBSABTfpeRP7nqHmtXaA4Ai8
E7IGqzAfBgNVHSMEGDAWgBSABTfpeRP7nqHmtXaA4Ai8E7IGqzAMBgNVHRMEBTAD
AQH/MA0GCSqGSIb3DQEBCwUAA4IBAQA3rtK/vl2bAxCk1aF5ub4KVfsQUru7gRfj
UvFZcLGzigAXQ1zHX5TEaivUanDkXmEJ2/sTpHHaZDIDMG6776Jlq7I+rWurVKW3
Lsq05KMW095bn0om4nGrPLZRyfhubJ27nmQhri/+zWCaWkTe1kVpAhjqDyWqkYw4
/roVk4r9P3hf7M1bB9EK/MZU1OLIAGlSn3MaDUewpgwYZDSdItHm1XS56NL3xKgF
r3WtsbRPf71sldL24/YnC/ZLcQq2plrDN7TYv1Xxfo+biI8JWGgQX2bkOSmi7SZ/
46uKF1tdJu6xyZdTko62SFPO9A6+KeY1wosmGc+RAiebPQEoeMUC
-----END CERTIFICATE-----

View File

@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA+eCvpCkYyh9977GZ/fGGPGxOKfsCezo5VXCnaI5VcUPGZTTP
ZIqeIw10CuD/dD3cZE/LdtMcWSahIdqGjifXU4vejNLLZXnh4LndovOtoNeYikfX
Y7y1xmmFedBck+mQ6ms9ctilUtHzcv6yiqPHsaYYSlMpi4K6E3ffm4jRUO9NRH6s
jcwx20Qslg9JyCWodtfLUZQgn2H2Mz4jyCg3ONMD0p9en+6SsS08WMDEpOokNq6x
f7kc/z59JyI3xbBIodeQc14m2bE6K7v0gpib9U/JmD0XHEPoS36BaYPNaQMHB7zg
iPM2JM4DVsE68ppcZ+uSHxIZ43sTg396exIJ7wIDAQABAoIBAQCml2riSlfxoY9H
t6OQD29MZ3SxPl0IJOhGk0W5SnOigOoLXWsLf/MwMW71Nc56BCgkZKKkxNi4gy2Y
MWXV7q/7TlwAjST3sYurVJ90XXubqUFUp9Ls9spFzuIjNYwTPPvVncuo/tEx5zGk
sDP+hHTFdpPpMYqYLX67LgdRXaUXjDI7pg9oOAj9Xl8pHi5TP/DXHo+swF0F3r54
EqlS9PnObszI7e/ReQzh940nEWzdHle0hHinfeDCpW3S7P5xb39NEUC55ogkFNWX
2cbJJtS8dqgcBmnSK+0WetXEhydrk/5GmIu+gnyGLzuidZYOQn3gWb7ZiXJJFVX2
xfGji2vZAoGBAP1TfZsxbmcM7i7vxZQuPIbNTAP5qW/6m/DA8e1+ZMe0+UC6gvO9
XgYvJ6BGckVTZWCxmDfsNObqvkjvMS8m2/FeDCL6NCVDtS+i8kq+LkR49sYdAvxw
DMVqJx77bh6FbO8L5TWuvHZ6/0kbD8JEAZ1p8n4WAYDsyMNM/gVePqLtAoGBAPyD
4J64g9549h2qnaNKA9Mph202LhgPgmlctM/DPNM13+su+AXC1S4JSZv2YQMq5nty
yHXin1TUy0p8mt4+w75jCatulkOLKbnl3NYM6uzlXP0RSsStA4EycWQ0BBg1DFwW
BxOxsnTr0rBzSeFTZav8eCp/VYlJnb9sUlwjbzjLAoGAUPva4L0ZtTnt/vVJ7Ygm
c1W4ImEy6IhuR7X24VyRrUJOmIHHkVINd96lRVif+Uei1hmQNvh9JQEQWdKVn6RF
ldDiAmCIQQ13I8ZsvLY1plAhW840gSz0+DtqTD5Gwt0WqQjdep7kwt+pMt7C1/DT
r1YKXoJ8cpG/0KeRYXfygDUCgYEAxRnfM6UM8ZNzcHajszhrweCRn/KBijBY+Arv
65gWmzpbPQUdfcm1gsinF0D6OnG7FDLlO/cXrSyoPc0DSWSuf6ZoftLEIZa3jC5a
8Q2GNkFWEwbzWI8/xBHupmtfotGNgzeCcKHsjQ0iGK70xRfGrbdUyL85sf6vTiKs
KtVR1H8CgYBifRUqy77A0eaR8SjTOuI7izoxcJH9DwfBQmg4h5g9jorf180R9IH7
V8NWqvjLFbNI2kxQ9SbDTex0XRE4gdSmTeFywtCrSBgP594XK/KwKGhDHe96ve1G
/CKGwCAhMKBfZcrPccXDGp0CbWLemTTKmWsfO4i4YvUhTCXCRokYYA==
-----END RSA PRIVATE KEY-----

View File

@ -1,33 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFhjCCA26gAwIBAgIJAMJ4Mtt6YhNLMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0G
A1UECgwGRG9ja2VyMRowGAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTAeFw0xNTA3
MTYwNDI1MDBaFw0yNTA3MTMwNDI1MDBaMF8xCzAJBgNVBAYTAlVTMQswCQYDVQQI
DAJDQTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEPMA0GA1UECgwGRG9ja2VyMRow
GAYDVQQDDBFOb3RhcnkgVGVzdGluZyBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIP
ADCCAgoCggIBAMzUzq2O07tm3A/4emCN/294jUBnNeGlM4TgsB8W9ingw9CU7oBn
CRTK94cGDHTb5ofcj9Kt4/dSL52uJpkZshmAga4fDDhtntnUHaKYzjoZSKZtq7qV
hC1Dah7s3zftZn4NHiRe82loXH/W0//0MWdQCaLc8E0rd/amrd6EO+5SUwF4dXSk
nWoo3oxtOEnb1uQcWWIiwLRmd1pw3PW/bt/SHssD5dJ+78/nR1qCHhJyLVpylMiy
WijkMKW7mbQFefuCOsQ0QvGG3BrTLu+fVs9GYNzHC+L1bSQbfts4nOSodcB/klhd
mbgVW8mrgeHww/jgb2WJW9Y3RFNp/VEuhVrHiz/NW2qE3nPLEnu0vd50jYIXbvBm
fbhCoJntYAiCY0l8v+POgP3ACtsS41rcn8VyD3Ho4u4186ki71+QRQTsUk2MXRV6
AKQ9u4Cl4d0tV1oHjVyiKDv8PNakNrI48KmnF9R9wMgzDHIoBVQZraVTyPwW9HvS
8K3Lsm6QAE7pErideOyBViOiiqvW7rUaLERTkhGirX2RChwhYLtYIj0LitgzdaT4
JD1JxonqN30g2jk1+mJKMEeWBMTjFqtzuQPYH3HkHKxoNfvEuL5fsZSmhV/mR+yW
lSe1f8r1qpAACj/K3mome/z8UhNxzEW8TCYkwamLkAPF485W64KIYI1tAgMBAAGj
RTBDMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYDVR0PAQH/BAQDAgFGMB0GA1UdDgQW
BBR1DNVNxOFsi9Z7xXfnT2PH+DtoWTANBgkqhkiG9w0BAQsFAAOCAgEAUbbrI3OQ
5XO8HHpoTwVqFzSzKOuSSrcMGrv67rn+2HvVJYfxtusZBS6+Rw7QVG3daPS+pSNX
NM1qyin3BjpNR2lI771yyK/yjjNH9pZPR+8ThJ8/77roLJudTCCPt49PoYgSQQsp
IB75PlqnTWVwccW9pm2zSdqDxFeZpTpwEvgyX8MNCfYeynxp5+S81593z8iav16u
t2I38NyFJKuxin9zNkxkpf/a9Pr/Gk56gw1OfHXp+sW/6KIzx8fjQuL6P8HEpwVG
zXXA8fMX91cIFI4+DTc8mPjtYvT6/PzDWE/q6FZZnbHJ50Ngg5D8uFN5lLgZFNtf
ITeoNjTk2koq8vvTW8FDpMkb50zqGdBoIdDtRFd3oot+MEg+6mba+Kttwg05aJ9a
SIIxjvU4NH6qOXBSgzaI1hMr7DTBnaXxMEBiaNaPg2nqi6uhaUOcVw3F01yBfGfX
aGsNLKpFiKFYQfOR1M2ho/7AL19GYQD3IFWDJqk0/eQLfFR74iKVMz6ndwt9F7A8
0xxGXGpw2NJQTWLQui4Wzt33q541ihzL7EDtybBScUdIOIEO20mHr2czFoTL9IKx
rU0Ck5BMyMBB+DOppP+TeKjutAI1yRVsNoabOuK4oo/FmqysgQoHEE+gVUThrrpE
wV1EBILkX6O4GiMqu1+x92/yCmlKEg0Q6MM=
-----END CERTIFICATE-----

View File

@ -1,32 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFVzCCAz+gAwIBAgIBAzANBgkqhkiG9w0BAQsFADBfMRowGAYDVQQDDBFOb3Rh
cnkgVGVzdGluZyBDQTELMAkGA1UEBhMCVVMxFjAUBgNVBAcMDVNhbiBGcmFuY2lz
Y28xDzANBgNVBAoMBkRvY2tlcjELMAkGA1UECAwCQ0EwHhcNMTUwNzE2MDQyNTUw
WhcNMTYwNzE1MDQyNTUwWjBgMRswGQYDVQQDDBJzZWN1cmUuZXhhbXBsZS5jb20x
CzAJBgNVBAYTAlVTMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ8wDQYDVQQKDAZE
b2NrZXIxCzAJBgNVBAgMAkNBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAmLYiYCTAWJBWAuxZLqVmV4FiUdGgEqoQvCbN73zF/mQfhq0CITo6xSxs1QiG
DOzUtkpzXzziSj4J5+et4JkFleeEKaMcHadeIsSlHGvVtXDv93oR3ydmfZO+ULRU
8xHloqcLr1KrOP1daLfdMRbactd75UQgvw9XTsdeMVX5AlicSENVKV+AQXvVpv8P
T10MSvlBFam4reXuY/SkeMbIaW5pFu6AQv3Zmftt2ta0CB9kb1mYd+OKru8Hnnq5
aJw6R3GhP0TBd25P1PkiSxM2KGYZZk0W/NZqLK9/LTFKTNCv7VjCbysVo7HxCY0b
Qe/bDP82v7SnLtb3aZogfva4HQIDAQABo4IBGzCCARcwgYgGA1UdIwSBgDB+gBR3
uPIXAXAKdG1FZU8V3kQOKBNeqaFjpGEwXzELMAkGA1UEBhMCVVMxCzAJBgNVBAgM
AkNBMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ8wDQYDVQQKDAZEb2NrZXIxGjAY
BgNVBAMMEU5vdGFyeSBUZXN0aW5nIENBggEBMAwGA1UdEwEB/wQCMAAwHQYDVR0l
BBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMA4GA1UdDwEB/wQEAwIFoDAuBgNVHREE
JzAlghJzZWN1cmUuZXhhbXBsZS5jb22CCWxvY2FsaG9zdIcEfwAAATAdBgNVHQ4E
FgQUDPD4CaXRbu5QBb5e8y8odvTqW4IwDQYJKoZIhvcNAQELBQADggIBAJOylmc4
n7J64GKsP/xhUdKKV9/KD+ufzpKbrLIojWn7rTye70vY0OjQFuOXc54yjMSIL+/5
mlNQ7Y/fJS8xdH79ER+4nWMuD2eciLnsLgbYUk4hiyby8/5V+/YqPeCpPCn6TJRK
a0E6lV/UjXJdrigJvJoNOR8ZgtEZ/QPgjJEVUsg47dtqzsDpgeS8dcjuMWpZxP02
qavFLDjSFzVH+2D6Oty1DQplm//3XaRXh23dOCP8wj/bxvnVToFWs+zO4uT1LF/S
KXCNQoeiGxWHyzrXFVVtVnC9FSNz0Gg2/Em1tfRgvhUn4KLJcvZW9o1R7VVCX0L1
0x0fyK3VWeWc86a5a681amKZSEbjAmIVZF9zOX0PODC8oy+zqOPWa0WCl4K6zDC6
2IIFBBNy50ZS2iON6RY6mE7NmA78gckf415cqIVrloYJbbTDepfhTV218SLepph4
uGb2/sxklfHOYE+rpHciibWwXrwlODJaXuzXFhplUd/ovdujBNAIHkBfzy+Y6z2s
bwZcfqD4NIb/AGhIyW2vbvu4zslDp1MEsLoaO+SzirMzkyMBlKRt120tws4EkUlm
/QhjSUoZpCAsy5C/pV4+bx0SysNd/S+kKaRZc/U6Y3ZYBFhszLh7JaLXKmk7wHnE
rggm6oz4L/GyPWc/FjfnsefWKM2yC3QDhjvj
-----END CERTIFICATE-----

View File

@ -1,28 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAmLYiYCTAWJBWAuxZLqVmV4FiUdGgEqoQvCbN73zF/mQfhq0C
ITo6xSxs1QiGDOzUtkpzXzziSj4J5+et4JkFleeEKaMcHadeIsSlHGvVtXDv93oR
3ydmfZO+ULRU8xHloqcLr1KrOP1daLfdMRbactd75UQgvw9XTsdeMVX5AlicSENV
KV+AQXvVpv8PT10MSvlBFam4reXuY/SkeMbIaW5pFu6AQv3Zmftt2ta0CB9kb1mY
d+OKru8Hnnq5aJw6R3GhP0TBd25P1PkiSxM2KGYZZk0W/NZqLK9/LTFKTNCv7VjC
bysVo7HxCY0bQe/bDP82v7SnLtb3aZogfva4HQIDAQABAoIBAQCLPj+X5MrRtkIH
BlTHGJ95mIr6yaYofpMlzEgoX1/1dnvcg/IWNA8UbE6L7Oq17FiEItyR8WTwhyLn
JrO/wCd8qQ40HPrs+wf1sdJPWPATMfhMcizLihSE2mtFETkILcByD9iyszFWlIdQ
jZ4NPaZP4rWgtf8Z1zYnqdf0Kk0T2imFya0qyoRLo40kxeb4p5K53JD7rPLQNyvO
YeFXTuKxBrFEMs6/wFjl+TO4nfHQXQlgQp4MNd9L5fEQBj+TvGVX+zcQEmzxljK8
zFNXyxvXgjBPD+0V7yRhTYjrUfZJ4RX1yKDpdsva6BXL7t9hNEg/aGnKRDYF3i5q
WQz8csCBAoGBAMfdtAr3RCuCxe0TIVBon5wubau6HLOxorcXSvxO5PO2kzhy3+GY
xcCMJ+Wo0dTFXjQD3oxRKuDrPRK7AX/grYn7qJo6W7SM9xYEq3HspJJFGkcRsvem
MALt8bvG5NkGmLJD+pTOKVaTZRjW3BM6GcMzBgsLynQcLllRtNI8Hcw9AoGBAMOa
CMsWQfoOUjUffrXN0UnXLEPEeazPobnCHVtE244FdX/BFu5WMA7qqaPRyvnfK0Vl
vF5sGNiBCOnq1zjYee6FD2eyAzVmWJXM1DB4Ewp4ZaABS0ZCZgNfyd1badY4IZpw
pjYEQprguw+J8yZItNJRo+WBmnSgZy6o1bpDaflhAoGAYf61GS9VkFPlQbFAg1FY
+NXW1f1Bt2VgV48nKAByx3/8PRAt70ndo+PUaAlXIJDI+I3xHzFo6bDNWBKy0IVT
8TSf3UbB0gvP1k7h1NDnfAQ/txrZeg1Uuwr5nE0Pxc0zLyyffzh6EkXgqsYmT5MM
MKYiz2WvlTCAFTE3jGEHZy0CgYBti/cgxnZs9VhVKC5u47YzBK9lxMPgZOjOgEiw
tP/Bqo0D38BX+y0vLX2UogprpvE1DKVSvHetyZaUa1HeJF8llp/qE2h4n7k9LFoq
SxVe588CrbbawpUfjqYfsvKzZvxq4mw0FG65DuO08C2dY1rh75c7EjrO1obzOtt4
VgkkAQKBgDnRyLnzlMfvjCyW9+cHbURQNe2iupfnlrXWEntg56USBVrFtfRQxDRp
fBtlq+0BNfDVdoVNasTCBW16UKoRBH1/k5idz5QPEbKY2055sNxHMVg0uzdb4HXr
73uaYzNrT8P7wyHFF3UL5bd0aO5DT1VYvGlHHgOhCyqcM+RBgPBS
-----END RSA PRIVATE KEY-----

View File

@ -1,12 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIBpDCCAUqgAwIBAgIRAIquZ7lRJj1Um030Kd7GFXgwCgYIKoZIzj0EAwIwODEa
MBgGA1UEChMRZG9ja2VyLmNvbS9ub3RhcnkxGjAYBgNVBAMTEWRvY2tlci5jb20v
bm90YXJ5MB4XDTE1MDcxNzAwMzE1NFoXDTE3MDcxNjAwMzE1NFowODEaMBgGA1UE
ChMRZG9ja2VyLmNvbS9ub3RhcnkxGjAYBgNVBAMTEWRvY2tlci5jb20vbm90YXJ5
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEjnnozttLzYgIN5fL8ZwYbsMig0pj
HSNupVTPjDIrLUYUnoQfG6IQ0E2BMixEGnI/A9WreeXP2oz06LZ4SROMQqM1MDMw
DgYDVR0PAQH/BAQDAgCgMBMGA1UdJQQMMAoGCCsGAQUFBwMDMAwGA1UdEwEB/wQC
MAAwCgYIKoZIzj0EAwIDSAAwRQIgT9cxottjza9BBQcMsoB/Uf2JYXWgSkp9QMXT
8mG4mMICIQDMYWFdgn5u8nDeThJ+bG8Lu5nIGb/NWEOFtU0xQv913Q==
-----END CERTIFICATE-----

View File

@ -1,11 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIBqDCCAU6gAwIBAgIRAM1vKVhmZuWcrogc3ASBaZUwCgYIKoZIzj0EAwIwOjEb
MBkGA1UEChMSc2VjdXJlLmV4YW1wbGUuY29tMRswGQYDVQQDExJzZWN1cmUuZXhh
bXBsZS5jb20wHhcNMTUwNzE3MDU1NTIzWhcNMTcwNzE2MDU1NTIzWjA6MRswGQYD
VQQKExJzZWN1cmUuZXhhbXBsZS5jb20xGzAZBgNVBAMTEnNlY3VyZS5leGFtcGxl
LmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABI556M7bS82ICDeXy/GcGG7D
IoNKYx0jbqVUz4wyKy1GFJ6EHxuiENBNgTIsRBpyPwPVq3nlz9qM9Oi2eEkTjEKj
NTAzMA4GA1UdDwEB/wQEAwIAoDATBgNVHSUEDDAKBggrBgEFBQcDAzAMBgNVHRMB
Af8EAjAAMAoGCCqGSM49BAMCA0gAMEUCIER2XCkQ8dUWBZEUeT5kABg7neiHPtSL
VVE6bJxu2sxlAiEAkRG6u1ieXKGl38gUkCn75Yvo9nOSLdh0gtxUUcOXvUc=
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.