Compare commits

..

No commits in common. "main" and "v1.11.1.redhat1" have entirely different histories.

6,794 changed files with 2,314,124 additions and 131,507 deletions

@@ -1,51 +0,0 @@
Thanks for submitting your Operator. Please check the list below before you create your Pull Request.
### New Submissions
* [x] Are you familiar with our [contribution guidelines](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-via-pr.md)?
* [x] Have you [packaged and deployed](https://github.com/operator-framework/community-operators/blob/master/docs/testing-operators.md) your Operator for Operator Framework?
* [x] Have you tested your Operator with all Custom Resource Definitions?
* [x] Have you tested your Operator in all supported [installation modes](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md#operator-metadata)?
* [x] Have you considered whether you want to use [semantic versioning order](https://github.com/operator-framework/community-operators/blob/master/docs/operator-ci-yaml.md#semver-mode)?
* [x] Is your submission [signed](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-prerequisites.md#sign-your-work)?
* [x] Is the operator [icon](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#operator-icon) set?
### Updates to existing Operators
* [x] Did you create a `ci.yaml` file according to the [update instructions](https://github.com/operator-framework/community-operators/blob/master/docs/operator-ci-yaml.md)?
* [x] Is your new CSV pointing to the previous version with the `replaces` property if you chose `replaces-mode` via the `updateGraph` property in `ci.yaml`?
* [x] Is your new CSV referenced in the [appropriate channel](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#channels) defined in the `package.yaml` or `annotations.yaml`?
* [ ] Have you tested an update to your Operator when deployed via OLM?
* [x] Is your submission [signed](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-prerequisites.md#sign-your-work)?
### Your submission should not
* [x] Modify more than one operator
* [x] Modify an Operator you don't own
* [x] Rename an operator - please remove and add with a different name instead
* [x] Submit operators to both `upstream-community-operators` and `community-operators` at once
* [x] Modify any files outside the above mentioned folders
* [x] Contain more than one commit. **Please squash your commits.**
### Operator Description must contain (in order)
1. [x] A description of the managed application and where to find more information
2. [x] Features and capabilities of your Operator and how to use it
3. [x] Any manual steps and potential prerequisites for using your Operator
### Operator Metadata should contain
* [x] A human-readable name and a one-line description of your Operator
* [x] Valid [category name](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#categories)<sup>1</sup>
* [x] One of the pre-defined [capability levels](https://github.com/operator-framework/operator-courier/blob/4d1a25d2c8d52f7de6297ec18d8afd6521236aa2/operatorcourier/validate.py#L556)<sup>2</sup>
* [x] Links to the maintainer, source code and documentation
* [x] Example templates for all Custom Resource Definitions intended to be used
* [x] A square (quadratic) logo
Remember that you can preview your CSV [here](https://operatorhub.io/preview).
--
<sup>1</sup> If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need
<sup>2</sup> For more information see [here](https://sdk.operatorframework.io/docs/overview/#operator-capability-level)

@@ -1,51 +0,0 @@
Thanks for submitting your Operator. Please check the list below before you create your Pull Request.
### New Submissions
* [ ] Are you familiar with our [contribution guidelines](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-via-pr.md)?
* [ ] Have you [packaged and deployed](https://github.com/operator-framework/community-operators/blob/master/docs/testing-operators.md) your Operator for Operator Framework?
* [ ] Have you tested your Operator with all Custom Resource Definitions?
* [ ] Have you tested your Operator in all supported [installation modes](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md#operator-metadata)?
* [ ] Have you considered whether you want to use [semantic versioning order](https://github.com/operator-framework/community-operators/blob/master/docs/operator-ci-yaml.md#semver-mode)?
* [ ] Is your submission [signed](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-prerequisites.md#sign-your-work)?
* [ ] Is the operator [icon](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#operator-icon) set?
### Updates to existing Operators
* [ ] Did you create a `ci.yaml` file according to the [update instructions](https://github.com/operator-framework/community-operators/blob/master/docs/operator-ci-yaml.md)?
* [ ] Is your new CSV pointing to the previous version with the `replaces` property if you chose `replaces-mode` via the `updateGraph` property in `ci.yaml`?
* [ ] Is your new CSV referenced in the [appropriate channel](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#channels) defined in the `package.yaml` or `annotations.yaml`?
* [ ] Have you tested an update to your Operator when deployed via OLM?
* [ ] Is your submission [signed](https://github.com/operator-framework/community-operators/blob/master/docs/contributing-prerequisites.md#sign-your-work)?
### Your submission should not
* [ ] Modify more than one operator
* [ ] Modify an Operator you don't own
* [ ] Rename an operator - please remove and add with a different name instead
* [ ] Submit operators to both `upstream-community-operators` and `community-operators` at once
* [ ] Modify any files outside the above mentioned folders
* [ ] Contain more than one commit. **Please squash your commits.**
### Operator Description must contain (in order)
1. [ ] A description of the managed application and where to find more information
2. [ ] Features and capabilities of your Operator and how to use it
3. [ ] Any manual steps and potential prerequisites for using your Operator
### Operator Metadata should contain
* [ ] A human-readable name and a one-line description of your Operator
* [ ] Valid [category name](https://github.com/operator-framework/community-operators/blob/master/docs/packaging-operator.md#categories)<sup>1</sup>
* [ ] One of the pre-defined [capability levels](https://github.com/operator-framework/operator-courier/blob/4d1a25d2c8d52f7de6297ec18d8afd6521236aa2/operatorcourier/validate.py#L556)<sup>2</sup>
* [ ] Links to the maintainer, source code and documentation
* [ ] Example templates for all Custom Resource Definitions intended to be used
* [ ] A square (quadratic) logo
Remember that you can preview your CSV [here](https://operatorhub.io/preview).
--
<sup>1</sup> If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need
<sup>2</sup> For more information see [here](https://sdk.operatorframework.io/docs/overview/#operator-capability-level)

@@ -1,10 +0,0 @@
#!/bin/bash
OPERATOR_VERSION=$(git describe --tags)
echo "${GITHUB_TOKEN}" | gh auth login --with-token
gh config set prompt disabled
gh release create \
  -t "Release ${OPERATOR_VERSION}" \
  "${OPERATOR_VERSION}" \
  'dist/jaeger-operator.yaml#Installation manifest for Kubernetes'

@@ -1,3 +0,0 @@
#!/bin/bash
./bin/goimports -local "github.com/jaegertracing/jaeger-operator" -l -w $(git ls-files "*\.go" | grep -v vendor)

@@ -1,80 +0,0 @@
#!/bin/bash
OPENAPIGEN=openapi-gen
if ! command -v "${OPENAPIGEN}" > /dev/null; then
  # note: the test must be quoted, otherwise `[ -n ]` is always true
  if [ -n "${GOPATH}" ]; then
    OPENAPIGEN="${GOPATH}/bin/openapi-gen"
  fi
fi
CONTROLLERGEN=controller-gen
if ! command -v "${CONTROLLERGEN}" > /dev/null; then
  if [ -n "${GOPATH}" ]; then
    CONTROLLERGEN="${GOPATH}/bin/controller-gen"
  fi
fi
CLIENTGEN=client-gen
if ! command -v "${CLIENTGEN}" > /dev/null; then
  if [ -n "${GOPATH}" ]; then
    CLIENTGEN="${GOPATH}/bin/client-gen"
  fi
fi
# generate the CRD(s)
${CONTROLLERGEN} crd paths=./pkg/apis/jaegertracing/... crd:maxDescLen=0,trivialVersions=true output:dir=./deploy/crds/
RT=$?
if [ ${RT} != 0 ]; then
  echo "Failed to generate CRDs."
  exit ${RT}
fi
# move the generated CRD to the same location the operator-sdk places
mv deploy/crds/jaegertracing.io_jaegers.yaml deploy/crds/jaegertracing.io_jaegers_crd.yaml
# controller-gen generates a list of CRDs, but the operator-sdk tooling expects
# a single item
# the proper solutions are, in order:
# 1) find a controller-gen switch that makes it write only one CRD. Such a switch doesn't exist yet: https://git.io/JvX5D
# 2) use a YAML command line tool to get the first item from the file
# 3) chop off the first two lines of the file
# the last option is the easiest to implement for now, also because `tail` is found everywhere
echo "$(tail -n +3 deploy/crds/jaegertracing.io_jaegers_crd.yaml)" > deploy/crds/jaegertracing.io_jaegers_crd.yaml
if ! [[ "$(head -n 1 deploy/crds/jaegertracing.io_jaegers_crd.yaml)" == "apiVersion"* ]]; then
  echo "The generated CRD doesn't seem valid. Make sure the controller-gen is generating the CRD in the expected format. Aborting."
  exit 1
fi
# generate the schema validation (openapi) stubs
${OPENAPIGEN} --logtostderr=true -o "" -i ./pkg/apis/jaegertracing/v1 -O zz_generated.openapi -p ./pkg/apis/jaegertracing/v1 -h /dev/null -r "-"
RT=$?
if [ ${RT} != 0 ]; then
  echo "Failed to generate the openapi (schema validation) stubs."
  exit ${RT}
fi
# generate the Kubernetes stubs
operator-sdk generate k8s
RT=$?
if [ ${RT} != 0 ]; then
  echo "Failed to generate the Kubernetes stubs."
  exit ${RT}
fi
# generate the clients
${CLIENTGEN} \
  --input "jaegertracing/v1" \
  --input-base github.com/jaegertracing/jaeger-operator/pkg/apis \
  --go-header-file /dev/null \
  --output-package github.com/jaegertracing/jaeger-operator/pkg/client \
  --clientset-name versioned \
  --output-base ../../../
RT=$?
if [ ${RT} != 0 ]; then
  echo "Failed to generate the Jaeger Tracing clients."
  exit ${RT}
fi
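The script above settles on option 3, chopping the first two lines off the generated CRD with `tail`. That trick can be sanity-checked in isolation; a minimal sketch using a throwaway file (the file path and contents are illustrative, not from the repo):

```shell
# tail -n +N prints from line N onward, so '+3' drops the first two lines
tmpfile=$(mktemp)
printf 'item-1\nitem-2\napiVersion: apiextensions.k8s.io/v1\n' > "${tmpfile}"
stripped=$(tail -n +3 "${tmpfile}")
echo "${stripped}"
rm -f "${tmpfile}"
```

The `apiVersion` check in the script is the guard for exactly this: after stripping, the first remaining line must start the real YAML document.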

@@ -1,8 +0,0 @@
#!/usr/bin/env bash
RE='\([0-9]\+\)[.]\([0-9]\+\)[.]\([0-9]\+\)\([0-9A-Za-z-]*\)'
MAJOR=$(echo ${1} | sed -e "s#${RE}#\1#")
MINOR=$(echo ${1} | sed -e "s#${RE}#\2#")
PATCH=$(echo ${1} | sed -e "s#${RE}#\3#")
PATCH=$(( $PATCH + 1 ))
echo "${MAJOR}.${MINOR}.${PATCH}"
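The sed capture groups above split a semver string into major, minor, and patch before incrementing the patch number. The same logic as a self-contained sketch (the `bump` helper name is illustrative, not part of the script):

```shell
# same regex as the script: three numeric groups plus an optional suffix group
RE='\([0-9]\+\)[.]\([0-9]\+\)[.]\([0-9]\+\)\([0-9A-Za-z-]*\)'
bump() {
  local major minor patch
  major=$(echo "$1" | sed -e "s#${RE}#\1#")
  minor=$(echo "$1" | sed -e "s#${RE}#\2#")
  patch=$(echo "$1" | sed -e "s#${RE}#\3#")
  echo "${major}.${minor}.$(( patch + 1 ))"
}
bump "1.22.3"   # prints 1.22.4
```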

@@ -1,58 +0,0 @@
#!/bin/bash
COMMUNITY_OPERATORS_REPOSITORY="k8s-operatorhub/community-operators"
UPSTREAM_REPOSITORY="redhat-openshift-ecosystem/community-operators-prod"
LOCAL_REPOSITORIES_PATH=${LOCAL_REPOSITORIES_PATH:-"$(dirname $(dirname $(pwd)))"}
if [[ ! -d "${LOCAL_REPOSITORIES_PATH}/${COMMUNITY_OPERATORS_REPOSITORY}" ]]; then
  echo "${LOCAL_REPOSITORIES_PATH}/${COMMUNITY_OPERATORS_REPOSITORY} doesn't exist, aborting."
  exit 1
fi
if [[ ! -d "${LOCAL_REPOSITORIES_PATH}/${UPSTREAM_REPOSITORY}" ]]; then
  echo "${LOCAL_REPOSITORIES_PATH}/${UPSTREAM_REPOSITORY} doesn't exist, aborting."
  exit 1
fi
OLD_PWD=$(pwd)
VERSION=$(grep operator= versions.txt | awk -F= '{print $2}')
for dest in ${COMMUNITY_OPERATORS_REPOSITORY} ${UPSTREAM_REPOSITORY}; do
  cd "${LOCAL_REPOSITORIES_PATH}/${dest}"
  if ! git remote | grep upstream > /dev/null; then
    echo "Cannot find a remote named 'upstream'. Adding one."
    git remote add upstream "git@github.com:${dest}.git"
  fi
  git fetch -q upstream
  git checkout -q main
  git rebase -q upstream/main
  cp -r "${OLD_PWD}/bundle" "operators/jaeger/${VERSION}"
  if ! git checkout -q -b "Update-Jaeger-to-${VERSION}"; then
    echo "Cannot switch to the new branch Update-Jaeger-to-${VERSION}. Aborting."
    exit 1
  fi
  git add .
  git commit -sqm "Update Jaeger to v${VERSION}"
  if ! command -v gh > /dev/null; then
    echo "'gh' command not found, can't submit the PR on your behalf."
    break
  fi
  echo "Submitting PR on your behalf via 'gh'"
  gh pr create --title "Update Jaeger to v${VERSION}" --body-file "${OLD_PWD}/.ci/.checked-pr-template.md"
done
cd "${OLD_PWD}"
echo "Completed."
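The default `LOCAL_REPOSITORIES_PATH` above resolves to the grandparent of the working directory through two nested `dirname` calls; a small illustration with a hypothetical path (the directory names are made up):

```shell
# dirname strips one path component per call, so two calls give the grandparent
path="/home/user/src/jaegertracing/jaeger-operator"
repositories_path=$(dirname "$(dirname "${path}")")
echo "${repositories_path}"   # prints /home/user/src
```

That is why the script expects the two community-operators checkouts to live as siblings of the operator repository's parent directory.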

@@ -1,37 +0,0 @@
#!/bin/bash
if [[ -z $OPERATOR_VERSION ]]; then
  echo "OPERATOR_VERSION isn't set. Skipping process."
  exit 1
fi
JAEGER_VERSION=$(echo $JAEGER_VERSION | tr -d '"')
JAEGER_AGENT_VERSION=$(echo $JAEGER_AGENT_VERSION | tr -d '"')
PREVIOUS_VERSION=$(grep operator= versions.txt | awk -F= '{print $2}')
# change the versions.txt, bump only operator version.
sed "s~operator=${PREVIOUS_VERSION}~operator=${OPERATOR_VERSION}~gi" -i versions.txt
# changes to deploy/operator.yaml
sed "s~replaces: jaeger-operator.v.*~replaces: jaeger-operator.v${PREVIOUS_VERSION}~i" -i config/manifests/bases/jaeger-operator.clusterserviceversion.yaml
# Update the examples according to the release
sed -i "s~all-in-one:.*~all-in-one:${JAEGER_VERSION}~gi" examples/all-in-one-with-options.yaml
# statefulset-manual-sidecar
sed -i "s~jaeger-agent:.*~jaeger-agent:${JAEGER_AGENT_VERSION}~gi" examples/statefulset-manual-sidecar.yaml
# operator-with-tracing
sed -i "s~jaeger-operator:.*~jaeger-operator:${OPERATOR_VERSION}~gi" examples/operator-with-tracing.yaml
sed -i "s~jaeger-agent:.*~jaeger-agent:${JAEGER_AGENT_VERSION}~gi" examples/operator-with-tracing.yaml
# tracegen
sed -i "s~jaeger-tracegen:.*~jaeger-tracegen:${JAEGER_VERSION}~gi" examples/tracegen.yaml
VERSION=${OPERATOR_VERSION} USER=jaegertracing make bundle
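The substitutions above use `~` as the sed delimiter so the `/` in image names needs no escaping. One of them, exercised standalone (the image tag values are illustrative):

```shell
# '~' as the delimiter means the '/' in the image name needs no escaping
line="    image: jaegertracing/all-in-one:1.52.0"
updated=$(echo "${line}" | sed "s~all-in-one:.*~all-in-one:1.53.0~gi")
echo "${updated}"
```

Everything from the old tag to the end of the line is replaced, so the indentation and the `image:` key are preserved.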

@@ -1,43 +0,0 @@
#!/bin/bash
BASE_BUILD_IMAGE=${BASE_BUILD_IMAGE:-"jaegertracing/jaeger-operator"}
OPERATOR_VERSION=${OPERATOR_VERSION:-$(git describe --tags)}
## if we are on a release tag, let's extract the version number
## the other possible value, currently, is 'main' (or another branch name)
## if we are not running in the CI, it falls back to the `git describe` above
if [[ $OPERATOR_VERSION == v* ]]; then
  OPERATOR_VERSION=$(echo ${OPERATOR_VERSION} | grep -Po "([\d\.]+)")
  MAJOR_MINOR=$(echo ${OPERATOR_VERSION} | awk -F. '{print $1"."$2}')
fi
BUILD_IMAGE=${BUILD_IMAGE:-"${BASE_BUILD_IMAGE}:${OPERATOR_VERSION}"}
DOCKER_USERNAME=${DOCKER_USERNAME:-"jaegertracingbot"}
if [ -n "${DOCKER_PASSWORD}" ] && [ -n "${DOCKER_USERNAME}" ]; then
  echo "Performing a 'docker login'"
  echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
fi
IMAGE_TAGS="--tag ${BUILD_IMAGE}"
if [ -n "${MAJOR_MINOR}" ]; then
  MAJOR_MINOR_IMAGE="${BASE_BUILD_IMAGE}:${MAJOR_MINOR}"
  IMAGE_TAGS="${IMAGE_TAGS} --tag ${MAJOR_MINOR_IMAGE}"
fi
## now, push to quay.io
if [ -n "${QUAY_PASSWORD}" ] && [ -n "${QUAY_USERNAME}" ]; then
  echo "Performing a 'docker login' for Quay"
  echo "${QUAY_PASSWORD}" | docker login -u "${QUAY_USERNAME}" quay.io --password-stdin
  echo "Tagging ${BUILD_IMAGE} as quay.io/${BUILD_IMAGE}"
  IMAGE_TAGS="${IMAGE_TAGS} --tag quay.io/${BUILD_IMAGE}"
  if [ -n "${MAJOR_MINOR_IMAGE}" ]; then
    IMAGE_TAGS="${IMAGE_TAGS} --tag quay.io/${MAJOR_MINOR_IMAGE}"
  fi
fi
echo "Building with tags ${IMAGE_TAGS}"
IMAGE_TAGS=${IMAGE_TAGS} make dockerx

@@ -1,16 +1,5 @@
-coverage:
-  status:
-    project:
-      default:
-        target: auto
-        # this allows a 0.1% drop from the previous base commit coverage
-        threshold: 0.1%
 ignore:
-  - "apis/v1/zz_generated.deepcopy.go"
-  - "apis/v1/zz_generated.defaults.go"
-  - "apis/v1/zz_generated.openapi.go"
-  - "apis/v1/groupversion_info.go"
-  - "pkg/kafka/v1beta2/zz_generated.deepcopy.go"
-  - "pkg/kafka/v1beta2/zz_generated.openapi.go"
-  - "pkg/kafka/v1beta2/groupversion_info.go"
-  - "pkg/util/k8s_utils.go"
+  - "pkg/apis/io/v1alpha1/zz_generated.deepcopy.go"
+  - "pkg/apis/jaegertracing/v1/zz_generated.deepcopy.go"
+  - "pkg/apis/io/v1alpha1/zz_generated.defaults.go"
+  - "pkg/apis/jaegertracing/v1/zz_generated.defaults.go"

@@ -1,4 +0,0 @@
# More info: https://docs.docker.com/engine/reference/builder/#dockerignore-file
# Ignore build and test binaries.
bin/
testbin/

@@ -1,62 +0,0 @@
version: 2
updates:
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: daily
      time: "03:00"
      timezone: "Europe/Berlin"
    labels:
      - dependencies
      - docker
      - ok-to-test
  - package-ecosystem: docker
    directory: "/tests"
    schedule:
      interval: daily
      time: "03:00"
      timezone: "Europe/Berlin"
    labels:
      - dependencies
      - docker
      - ok-to-test
  - package-ecosystem: gomod
    directory: "/"
    schedule:
      interval: daily
      time: "03:00"
      timezone: "Europe/Berlin"
    labels:
      - dependencies
      - go
      - ok-to-test
    groups:
      golang-org-x:
        patterns:
          - "golang.org/x/*"
      opentelemetry:
        patterns:
          - "go.opentelemetry.io/*"
      prometheus:
        patterns:
          - "github.com/prometheus-operator/prometheus-operator"
          - "github.com/prometheus-operator/prometheus-operator/*"
          - "github.com/prometheus/prometheus"
          - "github.com/prometheus/prometheus/*"
          - "github.com/prometheus/client_go"
          - "github.com/prometheus/client_go/*"
      kubernetes:
        patterns:
          - "k8s.io/*"
          - "sigs.k8s.io/*"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
      time: "03:00"
      timezone: "Europe/Berlin"
    labels:
      - dependencies
      - github_actions
      - ok-to-test

@@ -1,42 +0,0 @@
name: "CI Workflow"
on:
  push:
    branches: [ main ]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [ main ]
    paths-ignore:
      - '**.md'
permissions:
  contents: read
jobs:
  basic-checks:
    runs-on: ubuntu-20.04
    env:
      USER: jaegertracing
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Set up Go
        uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
        with:
          go-version: "1.22"
      - name: "install kubebuilder"
        run: ./hack/install/install-kubebuilder.sh
      - name: "install kustomize"
        run: ./hack/install/install-kustomize.sh
      - name: "basic checks"
        run: make install-tools ci
      - name: "upload test coverage report"
        uses: codecov/codecov-action@0565863a31f2c772f9f0395002a31e3f06189574 # v5.4.0
        with:
          token: ${{ secrets.CODECOV_TOKEN }}

@@ -1,52 +0,0 @@
name: "CodeQL"
on:
  push:
    branches: [ main ]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [ main ]
    paths-ignore:
      - '**.md'
permissions:
  contents: read
jobs:
  codeql-analyze:
    permissions:
      actions: read # for github/codeql-action/init to get workflow details
      contents: read # for actions/checkout to fetch code
      security-events: write # for github/codeql-action/autobuild to send a status report
    name: CodeQL Analyze
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        language: [ 'go' ]
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: "Set up Go"
        uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
        with:
          go-version-file: "go.mod"
      # Disable CodeQL for tests
      # https://github.com/github/codeql/issues/4786
      - run: rm -rf ./tests
      - name: Initialize CodeQL
        uses: github/codeql-action/init@b56ba49b26e50535fa1e7f7db0f4f7b4bf65d80d # v3.28.10
        with:
          languages: go
      - name: Autobuild
        uses: github/codeql-action/autobuild@b56ba49b26e50535fa1e7f7db0f4f7b4bf65d80d # v3.28.10
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@b56ba49b26e50535fa1e7f7db0f4f7b4bf65d80d # v3.28.10

@@ -1,84 +0,0 @@
name: E2E tests
on:
  push:
    branches: [ main ]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [ main ]
    paths-ignore:
      - '**.md'
concurrency:
  group: e2e-tests-${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true
permissions:
  contents: read
jobs:
  e2e:
    name: "Run ${{ matrix.testsuite.label }} E2E tests (${{ matrix.kube-version }})"
    runs-on: ubuntu-20.04
    strategy:
      fail-fast: false
      matrix:
        kube-version:
          - "1.19"
          - "1.30"
        testsuite:
          - { name: "elasticsearch", label: "Elasticsearch" }
          - { name: "examples", label: "Examples" }
          - { name: "generate", label: "Generate" }
          - { name: "miscellaneous", label: "Miscellaneous" }
          - { name: "sidecar", label: "Sidecar" }
          - { name: "streaming", label: "Streaming" }
          - { name: "ui", label: "UI" }
          - { name: "upgrade", label: "Upgrade" }
    steps:
      - name: "Check out code into the Go module directory"
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
      - name: "Set up Go"
        uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
        with:
          go-version: "1.22"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
        with:
          install: true
      - name: Cache Docker layers
        uses: actions/cache@d4323d4df104b026a6aa633fdb11d772146be0bf # v4.2.2
        with:
          path: /tmp/.buildx-cache
          key: e2e-${{ github.sha }}
          restore-keys: |
            e2e-
      - name: "Install KIND"
        run: ./hack/install/install-kind.sh
        shell: bash
      - name: "Install KUTTL"
        run: ./hack/install/install-kuttl.sh
        shell: bash
      - name: "Install gomplate"
        run: ./hack/install/install-gomplate.sh
        shell: bash
      - name: "Install dependencies"
        run: make install-tools
        shell: bash
      - name: "Run ${{ matrix.testsuite.label }} E2E test suite on Kube ${{ matrix.kube-version }}"
        env:
          VERBOSE: "true"
          KUBE_VERSION: "${{ matrix.kube-version }}"
          DOCKER_BUILD_OPTIONS: "--cache-from type=local,src=/tmp/.buildx-cache --cache-to type=local,dest=/tmp/.buildx-cache-new,mode=max --load"
        run: make run-e2e-tests-${{ matrix.testsuite.name }}
        shell: bash
      # Temp fix
      # https://github.com/docker/build-push-action/issues/252
      # https://github.com/moby/buildkit/issues/1896
      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
        shell: bash

@@ -1,54 +0,0 @@
name: Scorecard supply-chain security
on:
  # For Branch-Protection check. Only the default branch is supported. See
  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
  branch_protection_rule:
  # To guarantee Maintained check is occasionally updated. See
  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
  schedule:
    - cron: '45 13 * * 1'
  push:
    branches: [ "main" ]
permissions: read-all
jobs:
  analysis:
    name: Scorecard analysis
    runs-on: ubuntu-latest
    permissions:
      # Needed to upload the results to code-scanning dashboard.
      security-events: write
      # Needed to publish results and get a badge (see publish_results below).
      id-token: write
      # Uncomment the permissions below if installing in a private repository.
      # contents: read
      # actions: read
    steps:
      - name: "Checkout code"
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          persist-credentials: false
      - name: "Run analysis"
        uses: ossf/scorecard-action@f49aabe0b5af0936a0987cfb85d86b75731b0186 # v2.4.1
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
      # format to the repository Actions tab.
      - name: "Upload artifact"
        uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1
        with:
          name: SARIF file
          path: results.sarif
          retention-days: 5
      # Upload the results to GitHub's code scanning dashboard.
      - name: "Upload to code-scanning"
        uses: github/codeql-action/upload-sarif@b56ba49b26e50535fa1e7f7db0f4f7b4bf65d80d # v3.28.10
        with:
          sarif_file: results.sarif

@@ -1,28 +0,0 @@
name: "Publish images"
on:
  push:
    branches: [ main ]
    paths-ignore:
      - '**.md'
permissions:
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    env:
      USER: jaegertracing
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
      - uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: "publishes the images"
        env:
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
          QUAY_USERNAME: ${{ secrets.QUAY_USERNAME }}
          QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
          OPERATOR_VERSION: main
        run: ./.ci/publish-images.sh

@@ -1,43 +0,0 @@
name: "Prepare the release"
on:
  push:
    tags:
      - 'v*'
jobs:
  release:
    runs-on: ubuntu-20.04
    env:
      USER: jaegertracing
    steps:
      - name: Set up Go
        uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
        with:
          go-version: "1.22"
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: "install kubebuilder"
        run: ./hack/install/install-kubebuilder.sh
      - name: "install kustomize"
        run: ./hack/install/install-kustomize.sh
      - uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
      - uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: "generate release resources"
        run: make release-artifacts USER=jaegertracing
      - name: "create the release in GitHub"
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: ./.ci/create-release-github.sh
      - name: "publishes the images"
        env:
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
          QUAY_USERNAME: ${{ secrets.QUAY_USERNAME }}
          QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
        run: ./.ci/publish-images.sh

@@ -1,30 +0,0 @@
name: "Operator-SDK Scorecard"
on:
  push:
    branches: [ main ]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [ main ]
    paths-ignore:
      - '**.md'
permissions:
  contents: read
jobs:
  operator-sdk-scorecard:
    name: "Operator-SDK Scorecard"
    runs-on: ubuntu-latest
    steps:
      - name: "Check out code"
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: "Install KIND"
        run: ./hack/install/install-kind.sh
      - name: "Install KUTTL"
        run: ./hack/install/install-kuttl.sh
      - name: "Run Operator-SDK scorecard test"
        env:
          DOCKER_BUILD_OPTIONS: "--cache-from type=local,src=/tmp/.buildx-cache --cache-to type=local,dest=/tmp/.buildx-cache-new,mode=max --load"
        run: make scorecard-tests-local

.gitignore

@@ -2,11 +2,6 @@
 build/_output
 build/_test
 deploy/test
-vendor
-bin
-tests/_build
-_build
-logs
 # Created by https://www.gitignore.io/api/go,vim,emacs,visualstudiocode
 ### Emacs ###
 # -*- mode: gitignore; -*-
@@ -82,17 +77,6 @@ tags
 ### VisualStudioCode ###
 .vscode/*
 .history
-### Goland ###
-.idea
 # End of https://www.gitignore.io/api/go,vim,emacs,visualstudiocode
 fmt.log
 import.log
-### Kubernetes ###
-kubeconfig
-bin
-### Timestamp files to avoid rebuilding Docker images if not needed ###
-build-assert-job
-docker-e2e-upgrade-image
-build-e2e-upgrade-image
-### Reports for E2E tests
-reports

@@ -1,33 +0,0 @@
issues:
  # Excluding configuration per-path, per-linter, per-text and per-source
  exclude-rules:
    # Exclude some linters from running on tests files.
    - path: _test\.go
      linters:
        - gosec
    - linters:
        - staticcheck
      text: "SA1019:"
linters-settings:
  goimports:
    local-prefixes: github.com/jaegertracing/jaeger-operator
  gosimple:
    go: "1.22"
linters:
  enable:
    - bidichk
    - errorlint
    - gofumpt
    - goimports
    - gosec
    - govet
    - misspell
    - testifylint
  disable:
    - errcheck
run:
  go: '1.22'
  timeout: 10m

.travis.yml

@@ -0,0 +1,42 @@
language: go
sudo: required
go:
  - 1.11.1
stages:
  - name: build
  - name: deploy
    # only deploy if:
    ## we are not in a PR
    ## tag is blank (ie, master or any other branch)
    ## tag is a release tag (release/v1.6.1, which is the release build)
    if: (type != pull_request) AND ((tag IS blank) OR (tag =~ /^release\/v.[\d\.]+(\-.*)?$/))
jobs:
  include:
    - stage: build
      env:
        - OPERATOR_VERSION="JOB_${TRAVIS_JOB_NUMBER}"
      name: "Build"
      install:
        - "./.travis/install.sh"
      script:
        - "./.travis/script.sh"
      after_success:
        - "./.travis/after_success.sh"
    - stage: deploy
      name: "Publish latest image"
      env:
        - OPERATOR_VERSION="${TRAVIS_BRANCH}"
      script:
        - "./.travis/publish-images.sh"
    - stage: deploy
      name: "Release"
      env:
        - OPERATOR_VERSION="${TRAVIS_BRANCH}"
      script:
        - "./.travis/release.sh"
      if: tag =~ /^release\/v.[\d\.]+(\-.*)?$/

.travis/after_success.sh

@@ -0,0 +1,4 @@
#!/bin/bash
echo "Uploading code coverage results"
bash <(curl -s https://codecov.io/bash)

@@ -0,0 +1,90 @@
import argparse


def cleanup_imports_and_return(imports):
    # Split a Go import block into stdlib, third-party, and jaeger-operator
    # groups, separated by blank lines.
    os_packages = []
    jaeger_packages = []
    thirdparty_packages = []
    for i in imports:
        if i.strip() == "":
            continue
        if i.find("github.com/jaegertracing/jaeger-operator/") != -1:
            jaeger_packages.append(i)
        elif i.find(".com") != -1 or i.find(".net") != -1 or i.find(".org") != -1 or i.find(".in") != -1 or i.find("k8s.") != -1:
            thirdparty_packages.append(i)
        else:
            os_packages.append(i)
    l = []
    needs_new_line = False
    if os_packages:
        l.extend(os_packages)
        needs_new_line = True
    if thirdparty_packages:
        if needs_new_line:
            l.append("")
        l.extend(thirdparty_packages)
        needs_new_line = True
    if jaeger_packages:
        if needs_new_line:
            l.append("")
        l.extend(jaeger_packages)
    imports_reordered = imports != l
    l.insert(0, "import (")
    l.append(")")
    return l, imports_reordered


def parse_go_file(f):
    with open(f, 'r') as go_file:
        lines = [i.rstrip() for i in go_file.readlines()]
    in_import_block = False
    imports_reordered = False
    imports = []
    output_lines = []
    for line in lines:
        if in_import_block:
            endIdx = line.find(")")
            if endIdx != -1:
                in_import_block = False
                ordered_imports, imports_reordered = cleanup_imports_and_return(imports)
                output_lines.extend(ordered_imports)
                imports = []
                continue
            imports.append(line)
        else:
            importIdx = line.find("import (")
            if importIdx != -1:
                in_import_block = True
                continue
            output_lines.append(line)
    output_lines.append("")
    return "\n".join(output_lines), imports_reordered


def main():
    parser = argparse.ArgumentParser(
        description='Tool to make cleaning up import orders easily')
    parser.add_argument('-o', '--output', default='stdout',
                        choices=['inplace', 'stdout'],
                        help='output target [default: stdout]')
    parser.add_argument('-t', '--target',
                        help='list of filenames to operate upon',
                        nargs='+',
                        required=True)
    args = parser.parse_args()
    output = args.output
    go_files = args.target
    for f in go_files:
        parsed, imports_reordered = parse_go_file(f)
        if output == "stdout":
            # report-only mode: never rewrite the file
            if imports_reordered:
                print(f + " imports out of order")
        else:
            with open(f, 'w') as ofile:
                ofile.write(parsed)


if __name__ == '__main__':
    main()
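The grouping heuristic above (standard library first, then third-party, then jaeger-operator packages) can be restated compactly; this is an independent sketch for illustration, not part of the script, and the `classify` name is made up:

```python
def classify(imp: str) -> int:
    # 0 = stdlib, 1 = third-party, 2 = jaeger-operator,
    # using the same substring heuristic as the script above
    if "github.com/jaegertracing/jaeger-operator/" in imp:
        return 2
    if any(t in imp for t in (".com", ".net", ".org", ".in", "k8s.")):
        return 1
    return 0

samples = ['"fmt"',
           '"k8s.io/apimachinery/pkg/runtime"',
           '"github.com/jaegertracing/jaeger-operator/pkg/apis"']
print([classify(i) for i in samples])  # [0, 1, 2]
```

Note the jaeger-operator check runs first, since those paths would otherwise also match the `.com` third-party test.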

@@ -0,0 +1,5 @@
#!/bin/bash
set -e
python .travis/import-order-cleanup.py -o "$1" -t $(git ls-files "*\.go" | grep -v -e vendor)

.travis/install.sh

@@ -0,0 +1,11 @@
#!/bin/bash
echo "Installing gosec"
go get github.com/securego/gosec/cmd/gosec/...
echo "Installing golint"
go get -u golang.org/x/lint/golint
echo "Installing operator-sdk"
curl https://github.com/operator-framework/operator-sdk/releases/download/v0.5.0/operator-sdk-v0.5.0-x86_64-linux-gnu -sLo $GOPATH/bin/operator-sdk
chmod +x $GOPATH/bin/operator-sdk

.travis/publish-images.sh

@@ -0,0 +1,28 @@
#!/bin/bash
BASE_BUILD_IMAGE=${BASE_BUILD_IMAGE:-"jaegertracing/jaeger-operator"}
OPERATOR_VERSION=${OPERATOR_VERSION:-$(git describe --tags)}
## if we are on a release tag, let's extract the version number
## the other possible value, currently, is 'master' (or another branch name)
## if we are not running in Travis, it falls back to the `git describe` above
if [[ $OPERATOR_VERSION == release* ]]; then
  OPERATOR_VERSION=$(echo ${OPERATOR_VERSION} | grep -Po "([\d\.]+)")
  MAJOR_MINOR=$(echo ${OPERATOR_VERSION} | awk -F. '{print $1"."$2}')
fi
BUILD_IMAGE=${BUILD_IMAGE:-"${BASE_BUILD_IMAGE}:${OPERATOR_VERSION}"}
if [ -n "${DOCKER_PASSWORD}" ] && [ -n "${DOCKER_USERNAME}" ]; then
  echo "Performing a 'docker login'"
  echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
fi
echo "Building and publishing image ${BUILD_IMAGE}"
make build docker push BUILD_IMAGE="${BUILD_IMAGE}"
if [ -n "${MAJOR_MINOR}" ]; then
  MAJOR_MINOR_IMAGE="${BASE_BUILD_IMAGE}:${MAJOR_MINOR}"
  docker tag "${BUILD_IMAGE}" "${MAJOR_MINOR_IMAGE}"
  docker push "${MAJOR_MINOR_IMAGE}"
fi

.travis/release.sh

@@ -0,0 +1,55 @@
#!/bin/bash
if ! git diff -s --exit-code; then
  echo "The repository isn't clean. We won't proceed, as we don't know if we should commit those changes or not."
  exit 1
fi
BASE_BUILD_IMAGE=${BASE_BUILD_IMAGE:-"jaegertracing/jaeger-operator"}
OPERATOR_VERSION=${OPERATOR_VERSION:-$(git describe --tags)}
OPERATOR_VERSION=$(echo ${OPERATOR_VERSION} | grep -Po "([\d\.]+)")
JAEGER_VERSION=$(echo ${OPERATOR_VERSION} | grep -Po "([\d]+\.[\d]+)")
TAG=${TAG:-"v${OPERATOR_VERSION}"}
BUILD_IMAGE=${BUILD_IMAGE:-"${BASE_BUILD_IMAGE}:${OPERATOR_VERSION}"}
CREATED_AT=$(date -u -Isecond)
# changes to deploy/operator.yaml
sed "s~image: jaegertracing/jaeger-operator.*~image: ${BUILD_IMAGE}~gi" -i deploy/operator.yaml
# changes to deploy/olm-catalog/jaeger.package.yaml
sed "s/currentCSV: jaeger-operator.*/currentCSV: jaeger-operator.v${OPERATOR_VERSION}/gi" -i deploy/olm-catalog/jaeger.package.yaml
# changes to deploy/olm-catalog/jaeger.clusterserviceversion.yaml
sed "s~containerImage: docker.io/jaegertracing/jaeger-operator.*~containerImage: docker.io/${BUILD_IMAGE}~gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
sed "s/name: jaeger-operator\.v.*/name: jaeger-operator.v${OPERATOR_VERSION}/gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
sed "s~image: jaegertracing/jaeger-operator.*~image: ${BUILD_IMAGE}~gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
sed "s/all-in-one:.*\"/all-in-one:${JAEGER_VERSION}\"/gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
sed "s/createdAt: .*/createdAt: \"${CREATED_AT}\"/gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
export PREVIOUS_OPERATOR_VERSION=$(grep "version: [0-9]" deploy/olm-catalog/jaeger.clusterserviceversion.yaml | cut -f4 -d' ')
sed "s/replaces: jaeger-operator\.v.*/replaces: jaeger-operator.v${PREVIOUS_OPERATOR_VERSION}/gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
## there's a "version: v1" there somewhere that we want to avoid
sed -E "s/version: ([0-9\.]+).*/version: ${OPERATOR_VERSION}/gi" -i deploy/olm-catalog/jaeger.clusterserviceversion.yaml
# changes to test/operator.yaml
sed "s~image: jaegertracing/jaeger-operator.*~image: ${BUILD_IMAGE}~gi" -i test/operator.yaml
git diff -s --exit-code
if [[ $? == 0 ]]; then
echo "No changes detected. Skipping."
else
git add \
deploy/operator.yaml \
deploy/olm-catalog/jaeger.package.yaml \
deploy/olm-catalog/jaeger.clusterserviceversion.yaml \
test/operator.yaml
git commit -qm "Release ${TAG}" --author="Jaeger Release <jaeger-release@jaegertracing.io>"
git tag ${TAG}
git push --repo=https://${GH_WRITE_TOKEN}@github.com/jaegertracing/jaeger-operator.git --tags
git push https://${GH_WRITE_TOKEN}@github.com/jaegertracing/jaeger-operator.git refs/tags/${TAG}:master
fi
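The guarded `version:` substitution above can be checked on a small synthetic input (hypothetical, GNU sed assumed): the regex requires digits right after `version: `, so a numeric version is rewritten while the unrelated `version: v1` line is left alone:

```shell
# Only the numeric version line matches; "version: v1" is untouched.
printf 'version: 1.11.0\n  version: v1\n' \
  | sed -E "s/version: ([0-9.]+).*/version: 1.11.1/g"
# version: 1.11.1
#   version: v1
```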

.travis/script.sh Executable file

@ -0,0 +1,8 @@
#!/bin/bash
make ci
RT=$?
if [ ${RT} != 0 ]; then
echo "Failed to build the operator."
exit ${RT}
fi

CHANGELOG.adoc Normal file

@ -0,0 +1,230 @@
:toc:
= Changelog
== v1.11.1 (2019-04-09)
* Include docs for common config (https://github.com/jaegertracing/jaeger-operator/pull/367[#367])
* Reinstated the registration of ES types (https://github.com/jaegertracing/jaeger-operator/pull/366[#366])
* Add support for affinity and tolerations (https://github.com/jaegertracing/jaeger-operator/pull/361[#361])
* Support injection of JAEGER_SERVICE_NAME based on app or k8s recommended labels (https://github.com/jaegertracing/jaeger-operator/pull/362[#362])
* Change ES operator apiversion (https://github.com/jaegertracing/jaeger-operator/pull/360[#360])
* Update test to run on OpenShift (https://github.com/jaegertracing/jaeger-operator/pull/350[#350])
* Add prometheus scrape 'false' annotation to headless collector service (https://github.com/jaegertracing/jaeger-operator/pull/348[#348])
* Derive agent container/host ports from options if specified (https://github.com/jaegertracing/jaeger-operator/pull/353[#353])
== v1.11.0 (2019-03-22)
=== Breaking changes
* Moved from v1alpha1 to v1 (https://github.com/jaegertracing/jaeger-operator/pull/265[#265])
* Use storage flags instead of CR properties for spark job (https://github.com/jaegertracing/jaeger-operator/pull/295[#295])
* Changed from 'size' to 'replicas' (https://github.com/jaegertracing/jaeger-operator/pull/271[#271]). "Size" will still work for the next couple of releases.
=== Other changes
* Initialise menu to include Log Out option when using OAuth Proxy (https://github.com/jaegertracing/jaeger-operator/pull/344[#344])
* Change Operator provider to CNCF (https://github.com/jaegertracing/jaeger-operator/pull/263[#263])
* Added note about the apiVersion used up to 1.10.0 (https://github.com/jaegertracing/jaeger-operator/pull/283[#283])
* Implemented a second service for the collector (https://github.com/jaegertracing/jaeger-operator/pull/339[#339])
* Enabled DNS as the service discovery mechanism for agent => collector communication (https://github.com/jaegertracing/jaeger-operator/pull/333[#333])
* Sorted the container arguments inside deployments (https://github.com/jaegertracing/jaeger-operator/pull/337[#337])
* Use client certs for elasticsearch (https://github.com/jaegertracing/jaeger-operator/pull/325[#325])
* Load back Elasticsearch certs from secrets (https://github.com/jaegertracing/jaeger-operator/pull/324[#324])
* Disable spark dependencies for self provisioned es (https://github.com/jaegertracing/jaeger-operator/pull/319[#319])
* Remove index cleaner from prod-es-deploy example (https://github.com/jaegertracing/jaeger-operator/pull/314[#314])
* Set default query timeout for provisioned ES (https://github.com/jaegertracing/jaeger-operator/pull/313[#313])
* Automatically enable/disable dependencies tab (https://github.com/jaegertracing/jaeger-operator/pull/311[#311])
* Unmarshall numbers in options to number not float64 (https://github.com/jaegertracing/jaeger-operator/pull/308[#308])
* Inject archive index configuration for provisioned ES (https://github.com/jaegertracing/jaeger-operator/pull/309[#309])
* Update #305, add gRPC and health port to jaeger collector service (https://github.com/jaegertracing/jaeger-operator/pull/306[#306])
* Enable archive button if archive storage is enabled (https://github.com/jaegertracing/jaeger-operator/pull/303[#303])
* Fix reverting ingress security to oauth-proxy on openshift if set to none (https://github.com/jaegertracing/jaeger-operator/pull/301[#301])
* Change agent reporter to GRPC (https://github.com/jaegertracing/jaeger-operator/pull/299[#299])
* Bump jaeger version to 1.11 (https://github.com/jaegertracing/jaeger-operator/pull/300[#300])
* Enable agent readiness probe (https://github.com/jaegertracing/jaeger-operator/pull/297[#297])
* Use storage flags instead of CR properties for spark job (https://github.com/jaegertracing/jaeger-operator/pull/295[#295])
* Change operator.yaml to use master, to keep the readme up to date with the latest version (https://github.com/jaegertracing/jaeger-operator/pull/296[#296])
* Add Elasticsearch image to CR and flag (https://github.com/jaegertracing/jaeger-operator/pull/289[#289])
* Updated to Operator SDK 0.5.0 (https://github.com/jaegertracing/jaeger-operator/pull/273[#273])
* Block until objects have been created and are ready (https://github.com/jaegertracing/jaeger-operator/pull/279[#279])
* Add rollover support (https://github.com/jaegertracing/jaeger-operator/pull/267[#267])
* Added publishing of major.minor image for the operator (https://github.com/jaegertracing/jaeger-operator/pull/274[#274])
* Use only ES data nodes to calculate shards (https://github.com/jaegertracing/jaeger-operator/pull/257[#257])
* Reinstated sidecar for query, plus small refactoring of sidecar (https://github.com/jaegertracing/jaeger-operator/pull/246[#246])
* Remove ES master certs (https://github.com/jaegertracing/jaeger-operator/pull/256[#256])
* Store back the CR only if it has changed (https://github.com/jaegertracing/jaeger-operator/pull/249[#249])
* Fixed role rule for Elasticsearch (https://github.com/jaegertracing/jaeger-operator/pull/251[#251])
* Wait for elasticsearch cluster to be up (https://github.com/jaegertracing/jaeger-operator/pull/242[#242])
== v1.10.0 (2019-02-28)
* Automatically detect when the ES operator is available (https://github.com/jaegertracing/jaeger-operator/pull/239[#239])
* Adjusted logs to be consistent across the code base (https://github.com/jaegertracing/jaeger-operator/pull/237[#237])
* Fixed deployment of Elasticsearch via its operator (https://github.com/jaegertracing/jaeger-operator/pull/234[#234])
* Set ES shards and replicas based on redundancy policy (https://github.com/jaegertracing/jaeger-operator/pull/229[#229])
* Update Jaeger CR (https://github.com/jaegertracing/jaeger-operator/pull/193[#193])
* Add storage secrets to es-index-cleaner cronjob (https://github.com/jaegertracing/jaeger-operator/pull/197[#197])
* Removed constraint on namespace when obtaining available Jaeger instances (https://github.com/jaegertracing/jaeger-operator/pull/213[#213])
* Added workaround for kubectl logs and get pods commands (https://github.com/jaegertracing/jaeger-operator/pull/225[#225])
* Add -n observability so kubectl get deployment command works correctly (https://github.com/jaegertracing/jaeger-operator/pull/223[#223])
* Added capability of detecting the platform (https://github.com/jaegertracing/jaeger-operator/pull/217[#217])
* Deploy one ES node (https://github.com/jaegertracing/jaeger-operator/pull/221[#221])
* Use centos image (https://github.com/jaegertracing/jaeger-operator/pull/220[#220])
* Add support for deploying elasticsearch (https://github.com/jaegertracing/jaeger-operator/pull/191[#191])
* Replaced use of strings.ToLower comparison with EqualFold (https://github.com/jaegertracing/jaeger-operator/pull/214[#214])
* Bump Jaeger to 1.10 (https://github.com/jaegertracing/jaeger-operator/pull/212[#212])
* Ignore golang coverage html (https://github.com/jaegertracing/jaeger-operator/pull/208[#208])
== v1.9.2 (2019-02-11)
* Enable single operator to monitor all namespaces (https://github.com/jaegertracing/jaeger-operator/pull/188[#188])
* Added flag to control the logging level (https://github.com/jaegertracing/jaeger-operator/pull/202[#202])
* Updated operator-sdk to v0.4.1 (https://github.com/jaegertracing/jaeger-operator/pull/200[#200])
* Added newline to the end of the role YAML file (https://github.com/jaegertracing/jaeger-operator/pull/199[#199])
* Added mention to WATCH_NAMESPACE when running for OpenShift (https://github.com/jaegertracing/jaeger-operator/pull/195[#195])
* Added openshift route to role (https://github.com/jaegertracing/jaeger-operator/pull/198[#198])
* Added Route to SDK Scheme (https://github.com/jaegertracing/jaeger-operator/pull/194[#194])
* Add Jaeger CSV and Package for OLM integration and deployment of the … (https://github.com/jaegertracing/jaeger-operator/pull/173[#173])
== v1.9.1 (2019-01-30)
* Remove debug logging from simple-streaming example (https://github.com/jaegertracing/jaeger-operator/pull/185[#185])
* Add ingester (and kafka) support (https://github.com/jaegertracing/jaeger-operator/pull/168[#168])
* When filtering storage options, also include '-archive' related options (https://github.com/jaegertracing/jaeger-operator/pull/182[#182])
== v1.9.0 (2019-01-23)
* Changed to use recommended labels (https://github.com/jaegertracing/jaeger-operator/pull/172[#172])
* Enable dependencies and index cleaner by default (https://github.com/jaegertracing/jaeger-operator/pull/162[#162])
* Fix log when spark dependencies are used with unsupported storage (https://github.com/jaegertracing/jaeger-operator/pull/161[#161])
* Fix serviceaccount that could not be created by the operator on OpenShift (https://github.com/jaegertracing/jaeger-operator/pull/165[#165])
* Add Elasticsearch index cleaner as cron job (https://github.com/jaegertracing/jaeger-operator/pull/155[#155])
* Fix import order for collector-test (https://github.com/jaegertracing/jaeger-operator/pull/158[#158])
* Smoke test (https://github.com/jaegertracing/jaeger-operator/pull/145[#145])
* Add deploy clean target and rename es/cass to deploy- (https://github.com/jaegertracing/jaeger-operator/pull/149[#149])
* Add spark job (https://github.com/jaegertracing/jaeger-operator/pull/140[#140])
* Automatically format imports (https://github.com/jaegertracing/jaeger-operator/pull/151[#151])
* Silence 'mkdir' from e2e-tests (https://github.com/jaegertracing/jaeger-operator/pull/153[#153])
* Move pkg/configmap to pkg/config/ui (https://github.com/jaegertracing/jaeger-operator/pull/152[#152])
* Fix secrets readme (https://github.com/jaegertracing/jaeger-operator/pull/150[#150])
== v1.8.2 (2018-12-03)
* Configure sampling strategies (https://github.com/jaegertracing/jaeger-operator/pull/139[#139])
* Add support for secrets (https://github.com/jaegertracing/jaeger-operator/pull/114[#114])
* Fix crd links (https://github.com/jaegertracing/jaeger-operator/pull/132[#132])
* Create e2e testdir, fix contributing readme (https://github.com/jaegertracing/jaeger-operator/pull/131[#131])
* Enable JAEGER_SERVICE_NAME and JAEGER_PROPAGATION env vars to be set … (https://github.com/jaegertracing/jaeger-operator/pull/128[#128])
* Add CRD to install steps, and update cleanup instructions (https://github.com/jaegertracing/jaeger-operator/pull/129[#129])
* Rename controller to strategy (https://github.com/jaegertracing/jaeger-operator/pull/125[#125])
* Add tests for new operator-sdk related code (https://github.com/jaegertracing/jaeger-operator/pull/122[#122])
* Update README.adoc to match yaml files in deploy (https://github.com/jaegertracing/jaeger-operator/pull/124[#124])
== v1.8.1 (2018-11-21)
* Add support for UI configuration (https://github.com/jaegertracing/jaeger-operator/pull/115[#115])
* Use proper jaeger-operator version for e2e tests and remove readiness check from DaemonSet (https://github.com/jaegertracing/jaeger-operator/pull/120[#120])
* Migrate to Operator SDK 0.1.0 (https://github.com/jaegertracing/jaeger-operator/pull/116[#116])
* Fix changelog 'new features' header for 1.8 (https://github.com/jaegertracing/jaeger-operator/pull/113[#113])
== v1.8.0 (2018-11-13)
*Notable new Features*
* Query base path should be used to configure correct path in ingress (https://github.com/jaegertracing/jaeger-operator/pull/108[#108])
* Enable resources to be defined at top level and overridden at compone… (https://github.com/jaegertracing/jaeger-operator/pull/110[#110])
* Add OAuth Proxy to UI when on OpenShift (https://github.com/jaegertracing/jaeger-operator/pull/100[#100])
* Enable top level annotations to be defined (https://github.com/jaegertracing/jaeger-operator/pull/97[#97])
* Support volumes and volumeMounts (https://github.com/jaegertracing/jaeger-operator/pull/82[#82])
* Add support for OpenShift routes (https://github.com/jaegertracing/jaeger-operator/pull/93[#93])
* Enable annotations to be specified with the deployable components (https://github.com/jaegertracing/jaeger-operator/pull/86[#86])
* Add support for Cassandra create-schema job (https://github.com/jaegertracing/jaeger-operator/pull/71[#71])
* Inject sidecar in properly annotated pods (https://github.com/jaegertracing/jaeger-operator/pull/58[#58])
* Support deployment of agent as a DaemonSet (https://github.com/jaegertracing/jaeger-operator/pull/52[#52])
*Breaking changes*
* Change CRD to use lower camel case (https://github.com/jaegertracing/jaeger-operator/pull/87[#87])
* Factor out ingress from all-in-one and query, as common to both but i… (https://github.com/jaegertracing/jaeger-operator/pull/91[#91])
* Remove zipkin service (https://github.com/jaegertracing/jaeger-operator/pull/75[#75])
*Full list of commits:*
* Query base path should be used to configure correct path in ingress (https://github.com/jaegertracing/jaeger-operator/pull/108[#108])
* Enable resources to be defined at top level and overridden at compone… (https://github.com/jaegertracing/jaeger-operator/pull/110[#110])
* Fix disable-oauth-proxy example (https://github.com/jaegertracing/jaeger-operator/pull/107[#107])
* Add OAuth Proxy to UI when on OpenShift (https://github.com/jaegertracing/jaeger-operator/pull/100[#100])
* Refactor common spec elements into a single struct with common proces… (https://github.com/jaegertracing/jaeger-operator/pull/105[#105])
* Ensure 'make generate' has been executed when model changes are made (https://github.com/jaegertracing/jaeger-operator/pull/101[#101])
* Enable top level annotations to be defined (https://github.com/jaegertracing/jaeger-operator/pull/97[#97])
* Update generated code and reverted change to 'all-in-one' in CRD (https://github.com/jaegertracing/jaeger-operator/pull/98[#98])
* Support volumes and volumeMounts (https://github.com/jaegertracing/jaeger-operator/pull/82[#82])
* Update readme to include info about storage options being located in … (https://github.com/jaegertracing/jaeger-operator/pull/96[#96])
* Enable storage options to be filtered out based on specified storage … (https://github.com/jaegertracing/jaeger-operator/pull/94[#94])
* Add support for OpenShift routes (https://github.com/jaegertracing/jaeger-operator/pull/93[#93])
* Change CRD to use lower camel case (https://github.com/jaegertracing/jaeger-operator/pull/87[#87])
* Factor out ingress from all-in-one and query, as common to both but i… (https://github.com/jaegertracing/jaeger-operator/pull/91[#91])
* Fix operator SDK version as master is too unpredictable at the moment (https://github.com/jaegertracing/jaeger-operator/pull/92[#92])
* Update generated file after new annotations field (https://github.com/jaegertracing/jaeger-operator/pull/90[#90])
* Enable annotations to be specified with the deployable components (https://github.com/jaegertracing/jaeger-operator/pull/86[#86])
* Remove zipkin service (https://github.com/jaegertracing/jaeger-operator/pull/75[#75])
* Add support for Cassandra create-schema job (https://github.com/jaegertracing/jaeger-operator/pull/71[#71])
* Fix table of contents on readme (https://github.com/jaegertracing/jaeger-operator/pull/73[#73])
* Update the Operator SDK version (https://github.com/jaegertracing/jaeger-operator/pull/69[#69])
* Add sidecar.istio.io/inject=false annotation to all-in-one, agent (da… (https://github.com/jaegertracing/jaeger-operator/pull/67[#67])
* Fix zipkin port issue (https://github.com/jaegertracing/jaeger-operator/pull/65[#65])
* Go 1.11.1 (https://github.com/jaegertracing/jaeger-operator/pull/61[#61])
* Inject sidecar in properly annotated pods (https://github.com/jaegertracing/jaeger-operator/pull/58[#58])
* Support deployment of agent as a DaemonSet (https://github.com/jaegertracing/jaeger-operator/pull/52[#52])
* Normalize options on the stub and update the normalized CR (https://github.com/jaegertracing/jaeger-operator/pull/54[#54])
* Document the disable ingress feature (https://github.com/jaegertracing/jaeger-operator/pull/55[#55])
* dep ensure (https://github.com/jaegertracing/jaeger-operator/pull/51[#51])
* Add support for JaegerIngressSpec to all-in-one
== v1.7.0 (2018-09-25)
This release brings Jaeger v1.7 to the Operator.
*Full list of commits:*
* Release v1.7.0
* Bump Jaeger to 1.7 (https://github.com/jaegertracing/jaeger-operator/pull/41[#41])
== v1.6.5 (2018-09-21)
This is our initial release based on Jaeger 1.6.
*Full list of commits:*
* Release v1.6.5
* Push the tag with the new commit to master, not the release tag
* Fix git push syntax
* Push tag to master
* Merge release commit into master (https://github.com/jaegertracing/jaeger-operator/pull/39[#39])
* Add query ingress enable switch (https://github.com/jaegertracing/jaeger-operator/pull/36[#36])
* Fix the run goal (https://github.com/jaegertracing/jaeger-operator/pull/35[#35])
* Release v1.6.1
* Add 'build' step when publishing image
* Fix docker push command and update release instructions
* Add release scripts (https://github.com/jaegertracing/jaeger-operator/pull/32[#32])
* Fix command to deploy the simplest operator (https://github.com/jaegertracing/jaeger-operator/pull/34[#34])
* Add IntelliJ specific files to gitignore (https://github.com/jaegertracing/jaeger-operator/pull/33[#33])
* Add prometheus scrape annotations to Jaeger collector, query and all-in-one (https://github.com/jaegertracing/jaeger-operator/pull/27[#27])
* Remove work in progress notice
* Add instructions on how to run the operator on OpenShift
* Support Jaeger version and image override
* Fix publishing of release
* Release Docker image upon merge to master
* Reuse the same ES for all tests
* Improved how to execute the e2e tests
* Correct uninstall doc to reference delete not create (https://github.com/jaegertracing/jaeger-operator/pull/16[#16])
* Set ENTRYPOINT for Dockerfile
* Run 'docker' target only before e2e-tests
* 'dep ensure' after adding Cobra/Viper
* Update the Jaeger Operator version at build time
* Add ingress permission to the jaeger-operator
* Install golint/gosec
* Disabled e2e tests on Travis
* Initial working version
* INITIAL COMMIT


@ -1,860 +0,0 @@
Changes by Version
==================
## v1.65.0 (2025-01-22)
* Pin agent version to 1.62.0 ([#2790](https://github.com/jaegertracing/jaeger-operator/pull/2790), [@rubenvp8510](https://github.com/rubenvp8510))
* Added compatibility for Jaeger Operator v1.61.x and v1.62.x ([#2725](https://github.com/jaegertracing/jaeger-operator/pull/2725), [@mooneeb](https://github.com/mooneeb))
## v1.62.0 (2024-10-10)
* TRACING-4238 | Fix gateway 502 timeout ([#2694](https://github.com/jaegertracing/jaeger-operator/pull/2694), [@pavolloffay](https://github.com/pavolloffay))
* feat: added missing test for elasticsearch reconciler ([#2662](https://github.com/jaegertracing/jaeger-operator/pull/2662), [@Ankit152](https://github.com/Ankit152))
## v1.61.0 (2024-09-16)
* Bump google.golang.org/grpc from 1.66.0 to 1.66.1 ([#2675](https://github.com/jaegertracing/jaeger-operator/pull/2675), [@dependabot[bot]](https://github.com/apps/dependabot))
* Bump google.golang.org/grpc from 1.65.0 to 1.66.0 ([#2670](https://github.com/jaegertracing/jaeger-operator/pull/2670), [@dependabot[bot]](https://github.com/apps/dependabot))
* Bump the opentelemetry group with 9 updates ([#2668](https://github.com/jaegertracing/jaeger-operator/pull/2668), [@dependabot[bot]](https://github.com/apps/dependabot))
## v1.60.0 (2024-08-13)
* Fix Golang version in go.mod ([#2652](https://github.com/jaegertracing/jaeger-operator/pull/2652), [@iblancasa](https://github.com/iblancasa))
## v1.60.0 (2024-08-09)
* Test on k8s 1.30 ([#2647](https://github.com/jaegertracing/jaeger-operator/pull/2647), [@pavolloffay](https://github.com/pavolloffay))
* Bump go to 1.22 and controller-gen to 1.14 ([#2646](https://github.com/jaegertracing/jaeger-operator/pull/2646), [@pavolloffay](https://github.com/pavolloffay))
## v1.59.0 (2024-08-06)
* Update compatibility matrix for v1.57.x ([#2594](https://github.com/jaegertracing/jaeger-operator/pull/2594), [@mooneeb](https://github.com/mooneeb))
* imagePullSecrets is not set for agent DaemonSet ([#2563](https://github.com/jaegertracing/jaeger-operator/pull/2563), [@antoniomerlin](https://github.com/antoniomerlin))
## v1.57.0 (2024-05-06)
## v1.55.0 (2024-03-22)
* Add server URL to JaegerMetricsStorageSpec ([#2481](https://github.com/jaegertracing/jaeger-operator/pull/2481), [@antoniomerlin](https://github.com/antoniomerlin))
* Use the host set in the Ingress field for the OpenShift Route ([#2409](https://github.com/jaegertracing/jaeger-operator/pull/2409), [@iblancasa](https://github.com/iblancasa))
* Add minimum Kubernetes and OpenShift versions ([#2492](https://github.com/jaegertracing/jaeger-operator/pull/2492), [@andreasgerstmayr](https://github.com/andreasgerstmayr))
## v1.54.0 (2024-02-14)
* apis/v1: add jaeger agent deprecation warning ([#2471](https://github.com/jaegertracing/jaeger-operator/pull/2471), [@frzifus](https://github.com/frzifus))
## v1.53.0 (2024-01-17)
* Choose the newer autoscaling version by default ([#2374](https://github.com/jaegertracing/jaeger-operator/pull/2374), [@iblancasa](https://github.com/iblancasa))
* Upgrade operator-sdk to 1.32.0 ([#2388](https://github.com/jaegertracing/jaeger-operator/pull/2388), [@iblancasa](https://github.com/iblancasa))
* Fix containerImage field and remove statement about failing CI ([#2386](https://github.com/jaegertracing/jaeger-operator/pull/2386), [@iblancasa](https://github.com/iblancasa))
* Fix injection: prefer jaeger in the same namespace ([#2383](https://github.com/jaegertracing/jaeger-operator/pull/2383), [@pavolloffay](https://github.com/pavolloffay))
## v1.52.0 (2023-12-07)
* Add missing container security context settings and tests ([#2354](https://github.com/jaegertracing/jaeger-operator/pull/2354), [@tingeltangelthomas](https://github.com/tingeltangelthomas))
## v1.51.0 (2023-11-17)
* Support configuring images via RELATED_IMAGE_ environment variables ([#2355](https://github.com/jaegertracing/jaeger-operator/pull/2355), [@andreasgerstmayr](https://github.com/andreasgerstmayr))
* Regenerate ES certificates when they are within 1 day of expiry ([#2356](https://github.com/jaegertracing/jaeger-operator/pull/2356), [@rubenvp8510](https://github.com/rubenvp8510))
* Bump actions/checkout from 3 to 4 ([#2316](https://github.com/jaegertracing/jaeger-operator/pull/2316), [@dependabot[bot]](https://github.com/apps/dependabot))
* bump grpc to 1.58.3 ([#2346](https://github.com/jaegertracing/jaeger-operator/pull/2346), [@rubenvp8510](https://github.com/rubenvp8510))
* Bump golang version to 1.21 ([#2347](https://github.com/jaegertracing/jaeger-operator/pull/2347), [@rubenvp8510](https://github.com/rubenvp8510))
* Ensure oauth-proxy ImageStream is detected eventually ([#2340](https://github.com/jaegertracing/jaeger-operator/pull/2340), [@bverschueren](https://github.com/bverschueren))
* Check if envFrom has ConfigMapRef set ([#2342](https://github.com/jaegertracing/jaeger-operator/pull/2342), [@edwardecook](https://github.com/edwardecook))
* Bump golang.org/x/net from 0.13.0 to 0.17.0 ([#2343](https://github.com/jaegertracing/jaeger-operator/pull/2343), [@dependabot[bot]](https://github.com/apps/dependabot))
* Fix issue related to new encoding in oauth-proxy image ([#2345](https://github.com/jaegertracing/jaeger-operator/pull/2345), [@iblancasa](https://github.com/iblancasa))
* Always generate new oauth-proxy password ([#2333](https://github.com/jaegertracing/jaeger-operator/pull/2333), [@pavolloffay](https://github.com/pavolloffay))
* Add v1.48.x and v1.49.x to the support map ([#2332](https://github.com/jaegertracing/jaeger-operator/pull/2332), [@ishaqkhattana](https://github.com/ishaqkhattana))
* Pass proxy env vars to operands ([#2330](https://github.com/jaegertracing/jaeger-operator/pull/2330), [@pavolloffay](https://github.com/pavolloffay))
* Protect auth delegator behind a mutex ([#2318](https://github.com/jaegertracing/jaeger-operator/pull/2318), [@iblancasa](https://github.com/iblancasa))
## v1.49.1 (2023-09-07)
* fix: protect the kafka-provision setting behind a mutex ([#2308](https://github.com/jaegertracing/jaeger-operator/pull/2308), [@iblancasa](https://github.com/iblancasa))
## v1.48.1 (2023-09-04)
* Use base image that does not require subscription (centos 9 stream) ([#2313](https://github.com/jaegertracing/jaeger-operator/pull/2313), [@pavolloffay](https://github.com/pavolloffay))
* Update go dependencies to Kubernetes 0.28.1 ([#2301](https://github.com/jaegertracing/jaeger-operator/pull/2301), [@pavolloffay](https://github.com/pavolloffay))
* Protect the ESProvisioning setting behind a mutex ([#2287](https://github.com/jaegertracing/jaeger-operator/pull/2287), [@iblancasa](https://github.com/iblancasa))
## v1.48.0 (2023-08-28)
* Remove the TokenReview after checking we can create it ([#2286](https://github.com/jaegertracing/jaeger-operator/pull/2286), [@iblancasa](https://github.com/iblancasa))
* Fix apiVersion and kind are missing in jaeger-operator generate output ([#2281](https://github.com/jaegertracing/jaeger-operator/pull/2281), [@hiteshwani29](https://github.com/hiteshwani29))
* Fix custom labels for the deployable components in production strategy ([#2277](https://github.com/jaegertracing/jaeger-operator/pull/2277), [@hiteshwani29](https://github.com/hiteshwani29))
* Ensure the OAuth Proxy image detection is run after the platform detection ([#2280](https://github.com/jaegertracing/jaeger-operator/pull/2280), [@iblancasa](https://github.com/iblancasa))
* Added changes to respect env variable set from envFrom configMaps ([#2272](https://github.com/jaegertracing/jaeger-operator/pull/2272), [@hiteshwani29](https://github.com/hiteshwani29))
* Refactor the autodetect module to reduce the number of writes/reads in viper configuration ([#2274](https://github.com/jaegertracing/jaeger-operator/pull/2274), [@iblancasa](https://github.com/iblancasa))
## v1.47.0 (2023-07-12)
* Expose admin ports for agent, collector, and query Deployments via the equivalent Service ([#2262](https://github.com/jaegertracing/jaeger-operator/pull/2262), [@thomaspaulin](https://github.com/thomaspaulin))
* update otel sdk to v1.16.0/v0.39.0 ([#2261](https://github.com/jaegertracing/jaeger-operator/pull/2261), [@frzifus](https://github.com/frzifus))
* Extended compatibility matrix ([#2255](https://github.com/jaegertracing/jaeger-operator/pull/2255), [@shazib-summar](https://github.com/shazib-summar))
* Add support for Kubernetes 1.27 ([#2235](https://github.com/jaegertracing/jaeger-operator/pull/2235), [@iblancasa](https://github.com/iblancasa))
* Jaeger Collector Config: `Lifecycle` and `TerminationGracePeriodSeconds` ([#2242](https://github.com/jaegertracing/jaeger-operator/pull/2242), [@taj-p](https://github.com/taj-p))
## v1.46.0 (2023-06-16)
* Missing exposed port 16685 in query deployments ([#2239](https://github.com/jaegertracing/jaeger-operator/pull/2239), [@iblancasa](https://github.com/iblancasa))
* Use Golang 1.20 ([#2205](https://github.com/jaegertracing/jaeger-operator/pull/2205), [@iblancasa](https://github.com/iblancasa))
* [BugFix] Properly set imagePullPolicy and containerSecurityContext for EsIndexCleaner cronjob container ([#2224](https://github.com/jaegertracing/jaeger-operator/pull/2224), [@michalschott](https://github.com/michalschott))
* Remove resource limitation for the operator pod ([#2221](https://github.com/jaegertracing/jaeger-operator/pull/2221), [@iblancasa](https://github.com/iblancasa))
* Add PriorityClass for AllInOne strategy ([#2218](https://github.com/jaegertracing/jaeger-operator/pull/2218), [@sonofgibs](https://github.com/sonofgibs))
## v1.45.0 (2023-05-16)
## v1.44.0 (2023-04-13)
* Feat: add `NodeSelector` to jaeger collector, query, and ingestor ([#2200](https://github.com/jaegertracing/jaeger-operator/pull/2200), [@AhmedGrati](https://github.com/AhmedGrati))
## v1.43.0 (2023-02-07)
* update operator-sdk to 1.27.0 ([#2178](https://github.com/jaegertracing/jaeger-operator/pull/2178), [@iblancasa](https://github.com/iblancasa))
* Support JaegerCommonSpec in JaegerCassandraCreateSchemaSpec ([#2176](https://github.com/jaegertracing/jaeger-operator/pull/2176), [@haanhvu](https://github.com/haanhvu))
## v1.42.0 (2023-02-07)
* Upgrade Kafka Operator default version to 0.32.0 ([#2150](https://github.com/jaegertracing/jaeger-operator/pull/2150), [@iblancasa](https://github.com/iblancasa))
* Upgrade Kind, Kind images and add Kubernetes 1.26 ([#2161](https://github.com/jaegertracing/jaeger-operator/pull/2161), [@iblancasa](https://github.com/iblancasa))
## v1.41.1 (2023-01-23)
* Fix the Jaeger version for the Jaeger Operator 1.41.x ([#2157](https://github.com/jaegertracing/jaeger-operator/pull/2157), [@iblancasa](https://github.com/iblancasa))
## v1.40.0 (2022-12-23)
* Support e2e tests on multi architecture environment ([#2139](https://github.com/jaegertracing/jaeger-operator/pull/2139), [@jkandasa](https://github.com/jkandasa))
* limit the get of deployments to WATCH_NAMESPACE on sync ([#2126](https://github.com/jaegertracing/jaeger-operator/pull/2126), [@rubenvp8510](https://github.com/rubenvp8510))
* choose first server address ([#2087](https://github.com/jaegertracing/jaeger-operator/pull/2087), [@Efrat19](https://github.com/Efrat19))
* Fix query ingress when using streaming strategy ([#2120](https://github.com/jaegertracing/jaeger-operator/pull/2120), [@kevinearls](https://github.com/kevinearls))
* Fix Liveness Probe for Ingester and Query ([#2122](https://github.com/jaegertracing/jaeger-operator/pull/2122), [@ricoberger](https://github.com/ricoberger))
* Fix for min tls version to v1.2 ([#2119](https://github.com/jaegertracing/jaeger-operator/pull/2119), [@kangsheng89](https://github.com/kangsheng89))
1.39.0 (2022-11-03)
-------------------
* Fix: svc port doesn't match Istio convention ([#2101](https://github.com/jaegertracing/jaeger-operator/pull/2101), [@frzifus](https://github.com/frzifus))
1.38.1 (2022-10-11)
-------------------
* Add ability to specify es proxy resources ([#2079](https://github.com/jaegertracing/jaeger-operator/pull/2079), [@rubenvp8510](https://github.com/rubenvp8510))
* Fix: CVE-2022-27664 ([#2081](https://github.com/jaegertracing/jaeger-operator/pull/2081), [@albertlockett](https://github.com/albertlockett))
* Add liveness and readiness probes to injected sidecar ([#2077](https://github.com/jaegertracing/jaeger-operator/pull/2077), [@MacroPower](https://github.com/MacroPower))
* Add http- port prefix to follow istio naming conventions ([#2075](https://github.com/jaegertracing/jaeger-operator/pull/2075), [@cnvergence](https://github.com/cnvergence))
1.38.0 (2022-09-19)
-------------------
* added pathType to ingress ([#2066](https://github.com/jaegertracing/jaeger-operator/pull/2066), [@giautm](https://github.com/giautm))
* set alias enable variable for spark cronjob ([#2061](https://github.com/jaegertracing/jaeger-operator/pull/2061), [@miyunari](https://github.com/miyunari))
* migrate autoscaling v2beta2 to v2 for Kubernetes 1.26 ([#2055](https://github.com/jaegertracing/jaeger-operator/pull/2055), [@iblancasa](https://github.com/iblancasa))
* add container security context support ([#2033](https://github.com/jaegertracing/jaeger-operator/pull/2033), [@mjnagel](https://github.com/mjnagel))
* change verbosity level and message of the log for autoprovisioned kafka ([#2026](https://github.com/jaegertracing/jaeger-operator/pull/2026), [@iblancasa](https://github.com/iblancasa))
1.37.0 (2022-08-11)
-------------------
* Upgrade operator-sdk to 1.22.2 ([#2021](https://github.com/jaegertracing/jaeger-operator/pull/2021), [@iblancasa](https://github.com/iblancasa))
* es-dependencies: support image pull secret ([#2012](https://github.com/jaegertracing/jaeger-operator/pull/2012), [@frzifus](https://github.com/frzifus))
1.36.0 (2022-07-18)
-------------------
* added flag to change webhook port ([#1991](https://github.com/jaegertracing/jaeger-operator/pull/1991), [@klubi](https://github.com/klubi))
* Upgrade operator-sdk to 1.22.0 ([#1951](https://github.com/jaegertracing/jaeger-operator/pull/1951), [@iblancasa](https://github.com/iblancasa))
* Add elasticsearch storage date format config. ([#1325](https://github.com/jaegertracing/jaeger-operator/pull/1325), [@sniperking1234](https://github.com/sniperking1234))
* Add support for custom liveness probe ([#1605](https://github.com/jaegertracing/jaeger-operator/pull/1605), [@ricoberger](https://github.com/ricoberger))
* Add service annotations ([#1526](https://github.com/jaegertracing/jaeger-operator/pull/1526), [@herbguo](https://github.com/herbguo))
1.35.0 (2022-06-16)
-------------------
* fix: point to a newer openshift oauth image 4.12 ([#1955](https://github.com/jaegertracing/jaeger-operator/pull/1955), [@frzifus](https://github.com/frzifus))
* Expose OTLP collector and allInOne ports ([#1948](https://github.com/jaegertracing/jaeger-operator/pull/1948), [@rubenvp8510](https://github.com/rubenvp8510))
* Add support for ImagePullSecrets in cronjobs ([#1935](https://github.com/jaegertracing/jaeger-operator/pull/1935), [@alexandrevilain](https://github.com/alexandrevilain))
* fix: ocp es rollover #1932 ([#1937](https://github.com/jaegertracing/jaeger-operator/pull/1937), [@frzifus](https://github.com/frzifus))
* add kafkaSecretName for collector and ingester ([#1910](https://github.com/jaegertracing/jaeger-operator/pull/1910), [@luohua13](https://github.com/luohua13))
* Add autoscalability E2E test for OpenShift ([#1936](https://github.com/jaegertracing/jaeger-operator/pull/1936), [@iblancasa](https://github.com/iblancasa))
* Fix version in Docker container. ([#1924](https://github.com/jaegertracing/jaeger-operator/pull/1924), [@iblancasa](https://github.com/iblancasa))
* Verify namespace permissions before adding ns controller ([#1914](https://github.com/jaegertracing/jaeger-operator/pull/1914), [@rubenvp8510](https://github.com/rubenvp8510))
* fix: skip dependencies on openshift platform ([#1921](https://github.com/jaegertracing/jaeger-operator/pull/1921), [@frzifus](https://github.com/frzifus))
* fix: remove common name label ([#1920](https://github.com/jaegertracing/jaeger-operator/pull/1920), [@frzifus](https://github.com/frzifus))
* Ignore not found error on 1.31.0 upgrade routine ([#1913](https://github.com/jaegertracing/jaeger-operator/pull/1913), [@rubenvp8510](https://github.com/rubenvp8510))
1.34.1 (2022-05-24)
-------------------
* Fix: storage.es.tls.enabled flag not passed to es-index-cleaner ([#1896](https://github.com/jaegertracing/jaeger-operator/pull/1896), [@indigostar-kr](https://github.com/indigostar-kr))
1.34.0 (2022-05-18)
-------------------
* Fix: jaeger operator fails to parse Jaeger instance version ([#1885](https://github.com/jaegertracing/jaeger-operator/pull/1885), [@rubenvp8510](https://github.com/rubenvp8510))
* Support Kubernetes 1.24 ([#1882](https://github.com/jaegertracing/jaeger-operator/pull/1882), [@iblancasa](https://github.com/iblancasa))
* Cronjob migration ([#1856](https://github.com/jaegertracing/jaeger-operator/pull/1856), [@kevinearls](https://github.com/kevinearls))
* Fix: setting default Istio annotation in Pod instead of Deployment ([#1860](https://github.com/jaegertracing/jaeger-operator/pull/1860), [@cnvergence](https://github.com/cnvergence))
* Add http- prefix to port names in collector and agent services ([#1862](https://github.com/jaegertracing/jaeger-operator/pull/1862), [@cnvergence](https://github.com/cnvergence))
1.33.0 (2022-04-14)
-------------------
* Adding priority-class for esIndexCleaner ([#1732](https://github.com/jaegertracing/jaeger-operator/pull/1732), [@swapnilpotnis](https://github.com/swapnilpotnis))
* Fix: webhook deadlock ([#1850](https://github.com/jaegertracing/jaeger-operator/pull/1850), [@frzifus](https://github.com/frzifus))
* Fix: take namespace modifications into account ([#1839](https://github.com/jaegertracing/jaeger-operator/pull/1839), [@frzifus](https://github.com/frzifus))
* Replace deployment reconciler with webhook ([#1828](https://github.com/jaegertracing/jaeger-operator/pull/1828), [@frzifus](https://github.com/frzifus))
* Add managed by metric ([#1831](https://github.com/jaegertracing/jaeger-operator/pull/1831), [@rubenvp8510](https://github.com/rubenvp8510))
* Fix admissionReviews version for operator-sdk upgrade ([#1827](https://github.com/jaegertracing/jaeger-operator/pull/1827), [@kevinearls](https://github.com/kevinearls))
* Make RHOL Elasticsearch cert-management feature optional ([#1824](https://github.com/jaegertracing/jaeger-operator/pull/1824), [@pavolloffay](https://github.com/pavolloffay))
* Update the operator-sdk to v1.17.0 ([#1825](https://github.com/jaegertracing/jaeger-operator/pull/1825), [@kevinearls](https://github.com/kevinearls))
* Fix metrics selectors ([#1742](https://github.com/jaegertracing/jaeger-operator/pull/1742), [@rubenvp8510](https://github.com/rubenvp8510))
1.32.0 (2022-03-09)
-------------------
* Custom Image Pull Policy ([#1798](https://github.com/jaegertracing/jaeger-operator/pull/1798), [@edenkoveshi](https://github.com/edenkoveshi))
* add METRICS_STORAGE_TYPE for metrics query ([#1755](https://github.com/jaegertracing/jaeger-operator/pull/1755), [@JaredTan95](https://github.com/JaredTan95))
* Make operator more resilient to etcd defrag activity ([#1795](https://github.com/jaegertracing/jaeger-operator/pull/1795), [@pavolloffay](https://github.com/pavolloffay))
* Automatically set num shards and replicas from referenced OCP ES ([#1737](https://github.com/jaegertracing/jaeger-operator/pull/1737), [@pavolloffay](https://github.com/pavolloffay))
* support image pull secrets ([#1740](https://github.com/jaegertracing/jaeger-operator/pull/1740), [@frzifus](https://github.com/frzifus))
* Fix webhook secret cert name ([#1772](https://github.com/jaegertracing/jaeger-operator/pull/1772), [@rubenvp8510](https://github.com/rubenvp8510))
1.31.0 (2022-02-09)
-------------------
* Fix panic caused by an invalid type assertion ([#1738](https://github.com/jaegertracing/jaeger-operator/pull/1738), [@frzifus](https://github.com/frzifus))
* Add ES autoprovisioning CR metric ([#1728](https://github.com/jaegertracing/jaeger-operator/pull/1728), [@rubenvp8510](https://github.com/rubenvp8510))
* Use Elasticsearch provisioning from OpenShift Elasticsearch operator ([#1708](https://github.com/jaegertracing/jaeger-operator/pull/1708), [@pavolloffay](https://github.com/pavolloffay))
1.30.0 (2022-01-18)
-------------------
* Only expose the query-http[s] port in the OpenShift route ([#1719](https://github.com/jaegertracing/jaeger-operator/pull/1719), [@rkukura](https://github.com/rkukura))
* Add CR Metrics for Jaeger Kind. ([#1706](https://github.com/jaegertracing/jaeger-operator/pull/1706), [@rubenvp8510](https://github.com/rubenvp8510))
* Avoid calling k8s api for each resource kind on the cluster ([#1712](https://github.com/jaegertracing/jaeger-operator/pull/1712), [@rubenvp8510](https://github.com/rubenvp8510))
* First call of autodetect should be synchronous ([#1713](https://github.com/jaegertracing/jaeger-operator/pull/1713), [@rubenvp8510](https://github.com/rubenvp8510))
* Add permissions for imagestreams ([#1714](https://github.com/jaegertracing/jaeger-operator/pull/1714), [@rubenvp8510](https://github.com/rubenvp8510))
* Restore default metrics port to avoid breaking helm ([#1703](https://github.com/jaegertracing/jaeger-operator/pull/1703), [@rubenvp8510](https://github.com/rubenvp8510))
* Add leases permissions to manifest. ([#1704](https://github.com/jaegertracing/jaeger-operator/pull/1704), [@rubenvp8510](https://github.com/rubenvp8510))
* Change spark-dependencies image to GHCR ([#1701](https://github.com/jaegertracing/jaeger-operator/pull/1701), [@pavolloffay](https://github.com/pavolloffay))
* Register ES types ([#1688](https://github.com/jaegertracing/jaeger-operator/pull/1688), [@rubenvp8510](https://github.com/rubenvp8510))
* Add support for IBM Power (ppc64le) arch ([#1672](https://github.com/jaegertracing/jaeger-operator/pull/1672), [@Abhijit-Mane](https://github.com/Abhijit-Mane))
* util.Truncate add the values to the truncated after the excess is 0 ([#1678](https://github.com/jaegertracing/jaeger-operator/pull/1678), [@mmatache](https://github.com/mmatache))
1.29.1 (2021-12-15)
-------------------
* Register oschema for openshift resources ([#1673](https://github.com/jaegertracing/jaeger-operator/pull/1673), [@rubenvp8510](https://github.com/rubenvp8510))
1.29.0 (2021-12-10)
-------------------
* Fix default namespace ([#1651](https://github.com/jaegertracing/jaeger-operator/pull/1651), [@rubenvp8510](https://github.com/rubenvp8510))
* Fix finding the correct instance when there are multiple jaeger instances during injecting the sidecar ([#1639](https://github.com/jaegertracing/jaeger-operator/pull/1639), [@alibo](https://github.com/alibo))
* Migrate to operator-sdk 1.13 ([#1623](https://github.com/jaegertracing/jaeger-operator/pull/1623), [@rubenvp8510](https://github.com/rubenvp8510))
1.28.0 (2021-11-08)
-------------------
* Use CRDs to detect features in the cluster ([#1608](https://github.com/jaegertracing/jaeger-operator/pull/1608), [@pavolloffay](https://github.com/pavolloffay))
* Make ServiceMonitor creation optional ([#1323](https://github.com/jaegertracing/jaeger-operator/pull/1323), [@igorwwwwwwwwwwwwwwwwwwww](https://github.com/igorwwwwwwwwwwwwwwwwwwww))
* Change default OpenShift query ingress SAR to pods in the jaeger namespace ([#1583](https://github.com/jaegertracing/jaeger-operator/pull/1583), [@pavolloffay](https://github.com/pavolloffay))
* Fix gRPC flags for OpenShift when 'reporter.grpc.host-port' is defined ([#1584](https://github.com/jaegertracing/jaeger-operator/pull/1584), [@Git-Jiro](https://github.com/Git-Jiro))
1.27.0 (2021-10-07)
-------------------
* Allow sidecar injection for query pod from other Jaeger instances ([#1569](https://github.com/jaegertracing/jaeger-operator/pull/1569), [@pavolloffay](https://github.com/pavolloffay))
* Avoid touching jaeger deps on deployment/ns controller ([#1529](https://github.com/jaegertracing/jaeger-operator/pull/1529), [@rubenvp8510](https://github.com/rubenvp8510))
1.26.0 (2021-09-30)
-------------------
* Add ingressClassName field to query ingress ([#1557](https://github.com/jaegertracing/jaeger-operator/pull/1557), [@rubenvp8510](https://github.com/rubenvp8510))
* Add disconnected annotation to csv ([#1536](https://github.com/jaegertracing/jaeger-operator/pull/1536), [@rubenvp8510](https://github.com/rubenvp8510))
1.25.0 (2021-08-08)
-------------------
* Add support for repetitive arguments to operand ([#1434](https://github.com/jaegertracing/jaeger-operator/pull/1434), [@rubenvp8510](https://github.com/rubenvp8510))
* Allow TLS flags to be disabled ([#1440](https://github.com/jaegertracing/jaeger-operator/pull/1440), [@rubenvp8510](https://github.com/rubenvp8510))
* Add gRPC port for jaeger-query into its service resource ([#1521](https://github.com/jaegertracing/jaeger-operator/pull/1521), [@rubenvp8510](https://github.com/rubenvp8510))
* Sidecar removed when annotation is false ([#1508](https://github.com/jaegertracing/jaeger-operator/pull/1508), [@mfz85](https://github.com/mfz85))
* Add support for GRPC storage plugin ([#1517](https://github.com/jaegertracing/jaeger-operator/pull/1517), [@pavolloffay](https://github.com/pavolloffay))
* Fix overwritten default labels in label selectors of `Service` ([#1490](https://github.com/jaegertracing/jaeger-operator/pull/1490), [@rudeigerc](https://github.com/rudeigerc))
* Add resources requests and limits to the operator ([#1515](https://github.com/jaegertracing/jaeger-operator/pull/1515), [@brunopadz](https://github.com/brunopadz))
* Instrument instances types ([#1484](https://github.com/jaegertracing/jaeger-operator/pull/1484), [@rubenvp8510](https://github.com/rubenvp8510))
1.24.0 (2021-07-08)
-------------------
* Include OIDC plugin in binary ([#1501](https://github.com/jaegertracing/jaeger-operator/pull/1501), [@esnible](https://github.com/esnible))
* Update jaeger operator to support strimzi operator 0.23.0 ([#1495](https://github.com/jaegertracing/jaeger-operator/pull/1495), [@rubenvp8510](https://github.com/rubenvp8510))
* Feature/add deployment strategy to crd ([#1499](https://github.com/jaegertracing/jaeger-operator/pull/1499), [@ethernoy](https://github.com/ethernoy))
* Add cassandraCreateSchema affinity ([#1475](https://github.com/jaegertracing/jaeger-operator/pull/1475), [@chasekiefer](https://github.com/chasekiefer))
* Allow to pass ES_TIME_RANGE var to Spark dependencies job ([#1481](https://github.com/jaegertracing/jaeger-operator/pull/1481), [@Gr1N](https://github.com/Gr1N))
* Pass secretName to cassandra dependencies job (#1162) ([#1447](https://github.com/jaegertracing/jaeger-operator/pull/1447), [@Gerrit-K](https://github.com/Gerrit-K))
1.23.0 (2021-06-11)
-------------------
* Implement backoff limit for jobs ([#1468](https://github.com/jaegertracing/jaeger-operator/pull/1468), [@chasekiefer](https://github.com/chasekiefer))
* Remove OwnerReferences from CA configmaps ([#1467](https://github.com/jaegertracing/jaeger-operator/pull/1467), [@rubenvp8510](https://github.com/rubenvp8510))
* Add compatibility matrix ([#1465](https://github.com/jaegertracing/jaeger-operator/pull/1465), [@jpkrohling](https://github.com/jpkrohling))
* Promote crd to apiextensions.k8s.io/v1 ([#1456](https://github.com/jaegertracing/jaeger-operator/pull/1456), [@rubenvp8510](https://github.com/rubenvp8510))
* Add preserve unknown fields annotation to FreeForm and Options fields ([#1435](https://github.com/jaegertracing/jaeger-operator/pull/1435), [@rubenvp8510](https://github.com/rubenvp8510))
* Migrate remaining flags and some env vars to 1.22 ([#1449](https://github.com/jaegertracing/jaeger-operator/pull/1449), [@rubenvp8510](https://github.com/rubenvp8510))
* Fix override storage and ingress values when upgrade to 1.22 ([#1439](https://github.com/jaegertracing/jaeger-operator/pull/1439), [@rubenvp8510](https://github.com/rubenvp8510))
* Add agent dnsPolicy option ([#1370](https://github.com/jaegertracing/jaeger-operator/pull/1370), [@faceair](https://github.com/faceair))
1.22.1 (2021-04-19)
-------------------
* Allow configure custom certificates to collector ([#1418](https://github.com/jaegertracing/jaeger-operator/pull/1418), [@rubenvp8510](https://github.com/rubenvp8510))
* Add support for NodePort in Jaeger Query Service ([#1394](https://github.com/jaegertracing/jaeger-operator/pull/1394), [@CSP197](https://github.com/CSP197))
1.22.0 (2021-03-16)
-------------------
* Add ability to indicate PriorityClass for collector and query ([#1413](https://github.com/jaegertracing/jaeger-operator/pull/1413), [@majidazimi](https://github.com/majidazimi))
* simplest example file should be as simple as possible ([#1404](https://github.com/jaegertracing/jaeger-operator/pull/1404), [@jkandasa](https://github.com/jkandasa))
* Add ability to indicate PriorityClass for agent ([#1392](https://github.com/jaegertracing/jaeger-operator/pull/1392), [@elkh510](https://github.com/elkh510))
* Migrate jaeger.tags in existing CRs ([#1380](https://github.com/jaegertracing/jaeger-operator/pull/1380), [@jpkrohling](https://github.com/jpkrohling))
1.21.3 (2021-02-09)
-------------------
* Remove support for the experimental OpenTelemetry-based Jaeger ([#1379](https://github.com/jaegertracing/jaeger-operator/pull/1379), [@jpkrohling](https://github.com/jpkrohling))
* Fix way we force es secret reconcile ([#1374](https://github.com/jaegertracing/jaeger-operator/pull/1374), [@kevinearls](https://github.com/kevinearls))
* added the codeql.yml ([#1313](https://github.com/jaegertracing/jaeger-operator/pull/1313), [@KrishnaSindhur](https://github.com/KrishnaSindhur))
* Fix service port naming convention ([#1368](https://github.com/jaegertracing/jaeger-operator/pull/1368), [@lujiajing1126](https://github.com/lujiajing1126))
* Add volumes and volume-mounts for spark dependencies ([#1359](https://github.com/jaegertracing/jaeger-operator/pull/1359), [@kevinearls](https://github.com/kevinearls))
* Create missing CA config maps on deployment controller ([#1347](https://github.com/jaegertracing/jaeger-operator/pull/1347), [@jpkrohling](https://github.com/jpkrohling))
* set non root group ([#1339](https://github.com/jaegertracing/jaeger-operator/pull/1339), [@UsaninMax](https://github.com/UsaninMax))
* Kafka 2.4 not supported by RH AMQ operator 1.6 ([#1335](https://github.com/jaegertracing/jaeger-operator/pull/1335), [@jkandasa](https://github.com/jkandasa))
* Trigger deployments reconciliation when jaeger instance is created ([#1334](https://github.com/jaegertracing/jaeger-operator/pull/1334), [@rubenvp8510](https://github.com/rubenvp8510))
* Copy common spec to avoid touching persisted CR spec ([#1333](https://github.com/jaegertracing/jaeger-operator/pull/1333), [@rubenvp8510](https://github.com/rubenvp8510))
* Try to resolve container.name from the injected agent args ([#1319](https://github.com/jaegertracing/jaeger-operator/pull/1319), [@lujiajing1126](https://github.com/lujiajing1126))
* Fix typo in CONTRIBUTING.md ([#1321](https://github.com/jaegertracing/jaeger-operator/pull/1321), [@sniperking1234](https://github.com/sniperking1234))
1.21.2 (2020-11-20)
-------------------
* Fixes jaeger version ([#1311](https://github.com/jaegertracing/jaeger-operator/pull/1311), [@rubenvp8510](https://github.com/rubenvp8510))
1.21.1 (2020-11-19)
-------------------
* Update UI documentation link if is present ([#1290](https://github.com/jaegertracing/jaeger-operator/pull/1290), [@rubenvp8510](https://github.com/rubenvp8510))
1.21.0 (2020-11-17)
-------------------
* Regenerate self-provisioned ES TLS cert when it's outdated ([#1301](https://github.com/jaegertracing/jaeger-operator/pull/1301), [@kevinearls](https://github.com/kevinearls))
* Enable tolerations support in elasticsearch config ([#1296](https://github.com/jaegertracing/jaeger-operator/pull/1296), [@kevinearls](https://github.com/kevinearls))
* Update github.com/miekg/dns to v1.1.35 ([#1298](https://github.com/jaegertracing/jaeger-operator/pull/1298), [@objectiser](https://github.com/objectiser))
* Add serviceType for the collector service ([#1286](https://github.com/jaegertracing/jaeger-operator/pull/1286), [@sschne](https://github.com/sschne))
* Add env var JAEGER_DISABLED ([#1285](https://github.com/jaegertracing/jaeger-operator/pull/1285), [@sschne](https://github.com/sschne))
* Fix secret creation when using self provisioned elasticsearch instances ([#1288](https://github.com/jaegertracing/jaeger-operator/pull/1288), [@kevinearls](https://github.com/kevinearls))
* Convert storage type to typed string ([#1282](https://github.com/jaegertracing/jaeger-operator/pull/1282), [@SezalAgrawal](https://github.com/SezalAgrawal))
* Use New Admin Port Flag ([#1281](https://github.com/jaegertracing/jaeger-operator/pull/1281), [@johanavril](https://github.com/johanavril))
* Update instances status using client.Status().update interface ([#1253](https://github.com/jaegertracing/jaeger-operator/pull/1253), [@rubenvp8510](https://github.com/rubenvp8510))
* Remove gRPC host-port from being added to the CR (agent) ([#1272](https://github.com/jaegertracing/jaeger-operator/pull/1272), [@jpkrohling](https://github.com/jpkrohling))
* Sync OTEL config volume/mount and args ([#1268](https://github.com/jaegertracing/jaeger-operator/pull/1268), [@jpkrohling](https://github.com/jpkrohling))
* Publish container - dockerx should not use tag BUILD_IMAGE ([#1270](https://github.com/jaegertracing/jaeger-operator/pull/1270), [@morlay](https://github.com/morlay))
* Speed up buildx process ([#1267](https://github.com/jaegertracing/jaeger-operator/pull/1267), [@morlay](https://github.com/morlay))
* Fix the dependencies ([#1264](https://github.com/jaegertracing/jaeger-operator/pull/1264), [@faceair](https://github.com/faceair))
* Add agent hostNetwork option ([#1257](https://github.com/jaegertracing/jaeger-operator/pull/1257), [@faceair](https://github.com/faceair))
* Skip detectClusterRoles for Kubernetes ([#1262](https://github.com/jaegertracing/jaeger-operator/pull/1262), [@johanavril](https://github.com/johanavril))
* Elasticsearch: add SYS_CHROOT capability ([#1260](https://github.com/jaegertracing/jaeger-operator/pull/1260), [@haircommander](https://github.com/haircommander))
* Allow overriding the vertx example app image and config values ([#1259](https://github.com/jaegertracing/jaeger-operator/pull/1259), [@kevinearls](https://github.com/kevinearls))
* Simplify OTEL related environment variables ([#1255](https://github.com/jaegertracing/jaeger-operator/pull/1255), [@kevinearls](https://github.com/kevinearls))
* Add CQLSH_PORT environment variable ([#1243](https://github.com/jaegertracing/jaeger-operator/pull/1243), [@Ashmita152](https://github.com/Ashmita152))
* Expose elasticsearch container ports ([#1224](https://github.com/jaegertracing/jaeger-operator/pull/1224), [@jkandasa](https://github.com/jkandasa))
* Adding samples for ingress hosts and annotations ([#1231](https://github.com/jaegertracing/jaeger-operator/pull/1231), [@prageethw](https://github.com/prageethw))
* Don't set kafka batch options when using otel collector ([#1227](https://github.com/jaegertracing/jaeger-operator/pull/1227), [@kevinearls](https://github.com/kevinearls))
1.20.0 (2020-09-30)
-------------------
* Added configuration for the agent's securityContext ([#1190](https://github.com/jaegertracing/jaeger-operator/pull/1190), [@chgl](https://github.com/chgl))
* Completely replace the sidecar on each reconciliation, call patch instead of update. ([#1212](https://github.com/jaegertracing/jaeger-operator/pull/1212), [@rubenvp8510](https://github.com/rubenvp8510))
* Remove sidecars of annotated namespaces when annotation is deleted ([#1209](https://github.com/jaegertracing/jaeger-operator/pull/1209), [@rubenvp8510](https://github.com/rubenvp8510))
* Create service accounts before storage dependencies/init schemas ([#1196](https://github.com/jaegertracing/jaeger-operator/pull/1196), [@pavolloffay](https://github.com/pavolloffay))
* Added 'w3c' to the injected JAEGER_PROPAGATION env var ([#1192](https://github.com/jaegertracing/jaeger-operator/pull/1192), [@chgl](https://github.com/chgl))
* Create daemonsets after services and deployments. ([#1176](https://github.com/jaegertracing/jaeger-operator/pull/1176), [@jpkrohling](https://github.com/jpkrohling))
* Add consolelink permissions to cluster role ([#1177](https://github.com/jaegertracing/jaeger-operator/pull/1177), [@rubenvp8510](https://github.com/rubenvp8510))
1.19.0 (2020-08-27)
-------------------
Breaking changes:
* None
Other noteworthy changes:
* Remove explicitly setting agent's reporter type ([#1168](https://github.com/jaegertracing/jaeger-operator/pull/1168), [@pavolloffay](https://github.com/pavolloffay))
* Apply the securityContext to the cassandraCreateSchema job ([#1167](https://github.com/jaegertracing/jaeger-operator/pull/1167), [@chgl](https://github.com/chgl))
* Disabled service links ([#1161](https://github.com/jaegertracing/jaeger-operator/pull/1161), [@mikelorant](https://github.com/mikelorant))
* Create option to specify type for Query service ([#1132](https://github.com/jaegertracing/jaeger-operator/pull/1132), [@Aneurysm9](https://github.com/Aneurysm9))
* Added missing metrics port to operator's deployment ([#1157](https://github.com/jaegertracing/jaeger-operator/pull/1157), [@jpkrohling](https://github.com/jpkrohling))
* Support custom labels in Jaeger all-in-one deployments (#629) ([#1153](https://github.com/jaegertracing/jaeger-operator/pull/1153), [@albertteoh](https://github.com/albertteoh))
* Added interactive flag for docker to fix issue 1150 ([#1154](https://github.com/jaegertracing/jaeger-operator/pull/1154), [@sundar-cs](https://github.com/sundar-cs))
* Avoid error message assertions on OS dependent errors (#716) ([#1151](https://github.com/jaegertracing/jaeger-operator/pull/1151), [@albertteoh](https://github.com/albertteoh))
* Add link to openshift console ([#1142](https://github.com/jaegertracing/jaeger-operator/pull/1142), [@rubenvp8510](https://github.com/rubenvp8510))
* Add common field to jaeger-es-rollover-create-mapping ([#1144](https://github.com/jaegertracing/jaeger-operator/pull/1144), [@lighteness](https://github.com/lighteness))
* Refined Jaeger instance injection logic ([#1146](https://github.com/jaegertracing/jaeger-operator/pull/1146), [@rubenvp8510](https://github.com/rubenvp8510))
* Update downloaded SDK version and update deprecated struct name ([#1133](https://github.com/jaegertracing/jaeger-operator/pull/1133), [@chlunde](https://github.com/chlunde))
* Update x/crypto version ([#1136](https://github.com/jaegertracing/jaeger-operator/pull/1136), [@objectiser](https://github.com/objectiser))
* Fixed binding of command line flags ([#1129](https://github.com/jaegertracing/jaeger-operator/pull/1129), [@jpkrohling](https://github.com/jpkrohling))
* Updated Operator SDK to v0.18.2 ([#1126](https://github.com/jaegertracing/jaeger-operator/pull/1126), [@jpkrohling](https://github.com/jpkrohling))
* Create and mount service CA via ConfigMap ([#1124](https://github.com/jaegertracing/jaeger-operator/pull/1124), [@jpkrohling](https://github.com/jpkrohling))
* Set the grpc port name to include http(s) prefix. ([#1122](https://github.com/jaegertracing/jaeger-operator/pull/1122), [@jpkrohling](https://github.com/jpkrohling))
* Fix duplicate mount path for /etc/pki/ca-trust/extracted/pem ([#1121](https://github.com/jaegertracing/jaeger-operator/pull/1121), [@objectiser](https://github.com/objectiser))
* Adjusted gRPC options for OpenShift when TLS is enabled ([#1119](https://github.com/jaegertracing/jaeger-operator/pull/1119), [@jpkrohling](https://github.com/jpkrohling))
* Add support for imagePullSecrets to sidecar's Deployment ([#1115](https://github.com/jaegertracing/jaeger-operator/pull/1115), [@Saad-Hussain1](https://github.com/Saad-Hussain1))
* Add TraceTTL to cassandra schema spec ([#1111](https://github.com/jaegertracing/jaeger-operator/pull/1111), [@moolen](https://github.com/moolen))
* Deploy trusted CA config map in OpenShift when agent injected into a … ([#1110](https://github.com/jaegertracing/jaeger-operator/pull/1110), [@objectiser](https://github.com/objectiser))
* Mount volumes from agent spec ([#1102](https://github.com/jaegertracing/jaeger-operator/pull/1102), [@Saad-Hussain1](https://github.com/Saad-Hussain1))
* Added missing displayName to CSV 1.18.1 ([#1095](https://github.com/jaegertracing/jaeger-operator/pull/1095), [@jpkrohling](https://github.com/jpkrohling))
1.18.1 (2020-06-19)
-------------------
Breaking changes:
* None
Other noteworthy changes:
* Add trusted CA bundle support for OpenShift ([#1079](https://github.com/jaegertracing/jaeger-operator/pull/1079), [@objectiser](https://github.com/objectiser))
* create Jaeger resource in the watched namespace ([#1036](https://github.com/jaegertracing/jaeger-operator/pull/1036), [@therealmitchconnors](https://github.com/therealmitchconnors))
* Set correct branch for ES 4.4 ([#1081](https://github.com/jaegertracing/jaeger-operator/pull/1081), [@pavolloffay](https://github.com/pavolloffay))
* Add OTEL config to all-in-one ([#1080](https://github.com/jaegertracing/jaeger-operator/pull/1080), [@pavolloffay](https://github.com/pavolloffay))
1.18.0 (2020-05-15)
-------------------
Breaking changes:
* None
Other noteworthy changes:
* Migrate Ingress from API extensions/v1beta1 to networking.k8s.io/v1beta1 ([#1039](https://github.com/jaegertracing/jaeger-operator/pull/1039), [@rubenvp8510](https://github.com/rubenvp8510))
* Make sure truncated labels are valid ([#1055](https://github.com/jaegertracing/jaeger-operator/pull/1055), [@rubenvp8510](https://github.com/rubenvp8510))
* Add CLI command to generate k8s manifests ([#1046](https://github.com/jaegertracing/jaeger-operator/pull/1046), [@chlunde](https://github.com/chlunde))
* Add OTEL config to Jaeger CR ([#1056](https://github.com/jaegertracing/jaeger-operator/pull/1056), [@pavolloffay](https://github.com/pavolloffay))
* Missing components added to func JaegerServiceAccountFor() ([#1057](https://github.com/jaegertracing/jaeger-operator/pull/1057), [@AdrieVanDijk](https://github.com/AdrieVanDijk))
* Fix typo in godoc ([#1052](https://github.com/jaegertracing/jaeger-operator/pull/1052), [@jjmengze](https://github.com/jjmengze))
* Change source of oauth-proxy image from the imagestream ([#1049](https://github.com/jaegertracing/jaeger-operator/pull/1049), [@objectiser](https://github.com/objectiser))
* Handle normalization of host:port addresses in operator upgrade for 1.18 ([#1033](https://github.com/jaegertracing/jaeger-operator/pull/1033), [@rubenvp8510](https://github.com/rubenvp8510))
* Use semver on the upgrade process ([#1034](https://github.com/jaegertracing/jaeger-operator/pull/1034), [@rubenvp8510](https://github.com/rubenvp8510))
* Do not set the default index cleaner, rollover and dependencies image in CR ([#1037](https://github.com/jaegertracing/jaeger-operator/pull/1037), [@objectiser](https://github.com/objectiser))
* Allow oauth proxy imagestream to be used by specifying the namespace/… ([#1035](https://github.com/jaegertracing/jaeger-operator/pull/1035), [@objectiser](https://github.com/objectiser))
* Added auto-scale to the ingester ([#1006](https://github.com/jaegertracing/jaeger-operator/pull/1006), [@rubenvp8510](https://github.com/rubenvp8510))
* Sync changes in cert generation script with CLO ([#1008](https://github.com/jaegertracing/jaeger-operator/pull/1008), [@pavolloffay](https://github.com/pavolloffay))
* Fix autodetect restarting platform from OpenShift to Kubernetes ([#1003](https://github.com/jaegertracing/jaeger-operator/pull/1003), [@objectiser](https://github.com/objectiser))
* Update deployment sidecar when flags change ([#961](https://github.com/jaegertracing/jaeger-operator/pull/961), [@rubenvp8510](https://github.com/rubenvp8510))
* Marked specific fields as nullable to keep backwards compatibility ([#985](https://github.com/jaegertracing/jaeger-operator/pull/985), [@jpkrohling](https://github.com/jpkrohling))
* Restored the displayName in the CSV ([#987](https://github.com/jaegertracing/jaeger-operator/pull/987), [@jpkrohling](https://github.com/jpkrohling))
* Change 'make generate' to write only a single CRD ([#978](https://github.com/jaegertracing/jaeger-operator/pull/978), [@jpkrohling](https://github.com/jpkrohling))
* Prevent operator from overriding .Spec.Replicas ([#979](https://github.com/jaegertracing/jaeger-operator/pull/979), [@jpkrohling](https://github.com/jpkrohling))
1.17.1 (2020-03-18)
-------------------
Breaking changes:
* None
Other noteworthy changes:
* Do not modify annotations when injecting ([#902](https://github.com/jaegertracing/jaeger-operator/pull/902), [@rubenvp8510](https://github.com/rubenvp8510))
* Add Jaeger client generated code through client-gen ([#921](https://github.com/jaegertracing/jaeger-operator/pull/921), [@rareddy](https://github.com/rareddy))
* Use non-cached CR on reconciliation ([#940](https://github.com/jaegertracing/jaeger-operator/pull/940), [@jpkrohling](https://github.com/jpkrohling))
* Update README.md ([#954](https://github.com/jaegertracing/jaeger-operator/pull/954), [@slikk66](https://github.com/slikk66))
* Add example StatefulSet with manual sidecar definition ([#949](https://github.com/jaegertracing/jaeger-operator/pull/949), [@ewohltman](https://github.com/ewohltman))
* [oc] Auto create TLS cert in collector deployment ([#914](https://github.com/jaegertracing/jaeger-operator/pull/914), [@annanay25](https://github.com/annanay25))
* Reorganized cluster roles, added rules to watch all namespaces ([#936](https://github.com/jaegertracing/jaeger-operator/pull/936), [@jpkrohling](https://github.com/jpkrohling))
* Replaced client.List with reader.List ([#937](https://github.com/jaegertracing/jaeger-operator/pull/937), [@jpkrohling](https://github.com/jpkrohling))
* Removed descriptions from CRD ([#932](https://github.com/jaegertracing/jaeger-operator/pull/932), [@jpkrohling](https://github.com/jpkrohling))
1.17.0 (2020-02-26)
-------------------
Breaking changes:
* Removed 'Size' property from components ([#850](https://github.com/jaegertracing/jaeger-operator/pull/850))
Other noteworthy changes:
* Use ubi as base image ([#924](https://github.com/jaegertracing/jaeger-operator/pull/924))
* Changed the operator to gracefully degrade when not on cluster-wide scope ([#916](https://github.com/jaegertracing/jaeger-operator/pull/916))
* Updated admin-port for the Agent ([#922](https://github.com/jaegertracing/jaeger-operator/pull/922))
* Limit some properties to use at most 63 chars ([#904](https://github.com/jaegertracing/jaeger-operator/pull/904))
* Add http- prefix to collector service port names ([#911](https://github.com/jaegertracing/jaeger-operator/pull/911))
* Change query service portname to 'http-query' ([#909](https://github.com/jaegertracing/jaeger-operator/pull/909))
* Disable agent injection for Jaeger instances and when a false value is used ([#903](https://github.com/jaegertracing/jaeger-operator/pull/903))
* Per namespace agent injection ([#897](https://github.com/jaegertracing/jaeger-operator/pull/897))
* Preserve generated cookie secret on the reconciliation process ([#883](https://github.com/jaegertracing/jaeger-operator/pull/883))
* Add additional printer columns ([#898](https://github.com/jaegertracing/jaeger-operator/pull/898))
* cassandra-create-schema job: set job deadline to 1 day, improve resilience ([#893](https://github.com/jaegertracing/jaeger-operator/pull/893))
* Removed user_setup script ([#890](https://github.com/jaegertracing/jaeger-operator/pull/890))
* Updated Operator SDK to v0.15.1 ([#891](https://github.com/jaegertracing/jaeger-operator/pull/891))
* Auto-inject the IP tag for operator-injected agent ([#871](https://github.com/jaegertracing/jaeger-operator/pull/871))
* Remove deployment updates from autodetect loop ([#869](https://github.com/jaegertracing/jaeger-operator/pull/869))
* Auto-inject agent tags in multi-container pods ([#864](https://github.com/jaegertracing/jaeger-operator/pull/864))
* Include the Log Out option when a custom menu is used ([#867](https://github.com/jaegertracing/jaeger-operator/pull/867))
* Added auto-scale to the collector ([#856](https://github.com/jaegertracing/jaeger-operator/pull/856))
* Support self provisioned ES in streaming strategy ([#842](https://github.com/jaegertracing/jaeger-operator/pull/842))
* Fix hardcoded self provisioned kafka broker URL ([#841](https://github.com/jaegertracing/jaeger-operator/pull/841))
* Configure keyspace in cassandra init job ([#837](https://github.com/jaegertracing/jaeger-operator/pull/837))
* Added 'openapi' generated resources ([#819](https://github.com/jaegertracing/jaeger-operator/pull/819))
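For context on the injection entries above (#897, #903): the agent sidecar is opted in or out via an annotation on the namespace or deployment. A minimal sketch, assuming the documented `sidecar.jaegertracing.io/inject` annotation and a hypothetical namespace name:

```yaml
# Annotating a namespace enables agent sidecar injection for its pods;
# the same annotation with "false" (or on a Jaeger instance) disables it.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                  # example name, not from the changelog
  annotations:
    sidecar.jaegertracing.io/inject: "true"     # set to "false" to opt out
```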
1.16.0 (2019-12-17)
-------------------
Breaking changes:
* None
Other noteworthy changes:
* Fixed permissions for ServiceMonitor objects ([#831](https://github.com/jaegertracing/jaeger-operator/pull/831))
* Add timeout for Cassandra Schema creation job ([#820](https://github.com/jaegertracing/jaeger-operator/pull/820))
* Fixed the with-badger-and-volume example ([#827](https://github.com/jaegertracing/jaeger-operator/pull/827))
* Run rollover cronjob by default daily at midnight ([#812](https://github.com/jaegertracing/jaeger-operator/pull/812))
* Added basic status to CR{D} ([#802](https://github.com/jaegertracing/jaeger-operator/pull/802))
* Disabled tracing by default ([#805](https://github.com/jaegertracing/jaeger-operator/pull/805))
* Remove unnecessary options from auto-kafka-prov example ([#810](https://github.com/jaegertracing/jaeger-operator/pull/810))
* Use APIReader for Get/List resources on the autodetect functions ([#814](https://github.com/jaegertracing/jaeger-operator/pull/814))
* Updated Operator SDK to v0.12.0 ([#799](https://github.com/jaegertracing/jaeger-operator/pull/799))
* Added OpenTelemetry instrumentation ([#738](https://github.com/jaegertracing/jaeger-operator/pull/738))
* Fixed nil pointer when no Jaeger is suitable for sidecar injection ([#783](https://github.com/jaegertracing/jaeger-operator/pull/783))
* CSV changes to be picked up for next release ([#772](https://github.com/jaegertracing/jaeger-operator/pull/772))
* Correctly expose UDP container ports of injected sidecar containers ([#773](https://github.com/jaegertracing/jaeger-operator/pull/773))
* Scan deployments for agent injection ([#454](https://github.com/jaegertracing/jaeger-operator/pull/454))
1.15.0 (2019-11-09)
-------------------
Breaking changes:
* Breaking change - removed legacy io.jaegertracing CRD ([#649](https://github.com/jaegertracing/jaeger-operator/pull/649))
Other noteworthy changes:
* Fix sampling strategy file issue in Jaeger Collector ([#741](https://github.com/jaegertracing/jaeger-operator/pull/741))
* Enable tag/digest to be specified in the image parameters to the operator ([#743](https://github.com/jaegertracing/jaeger-operator/pull/743))
* Upgrade deprecated flags from 1.14 and previous, to 1.15 ([#730](https://github.com/jaegertracing/jaeger-operator/pull/730))
* Use StatefulSet from apps/v1 API for ES and Cassandra ([#727](https://github.com/jaegertracing/jaeger-operator/pull/727))
* Read the service account's namespace when POD_NAMESPACE is missing ([#722](https://github.com/jaegertracing/jaeger-operator/pull/722))
* Added automatic provisioning of Kafka when its operator is available ([#713](https://github.com/jaegertracing/jaeger-operator/pull/713))
* New DeploymentStrategy type for JaegerSpec.Strategy ([#704](https://github.com/jaegertracing/jaeger-operator/pull/704))
* Added workflows publishing the 'master' container image ([#718](https://github.com/jaegertracing/jaeger-operator/pull/718))
* Added labels to cronjob pod template ([#707](https://github.com/jaegertracing/jaeger-operator/pull/707))
* Pass only specified options to spark dependencies ([#708](https://github.com/jaegertracing/jaeger-operator/pull/708))
* Updated Operator SDK to v0.11.0 ([#695](https://github.com/jaegertracing/jaeger-operator/pull/695))
* Update gopkg.in/yaml.v2 dependency to v2.2.4 ([#699](https://github.com/jaegertracing/jaeger-operator/pull/699))
* Added Cassandra credentials ([#590](https://github.com/jaegertracing/jaeger-operator/pull/590))
* Updated the business-application example ([#693](https://github.com/jaegertracing/jaeger-operator/pull/693))
* Add support for TLS on ingress ([#681](https://github.com/jaegertracing/jaeger-operator/pull/681))
* Add support to SuccessfulJobsHistoryLimit ([#621](https://github.com/jaegertracing/jaeger-operator/pull/621))
* Add prometheus annotations to sidecar's deployment ([#684](https://github.com/jaegertracing/jaeger-operator/pull/684))
* Add missing gRPC port ([#680](https://github.com/jaegertracing/jaeger-operator/pull/680))
* Recognize when a resource has been deleted while the operator waits ([#672](https://github.com/jaegertracing/jaeger-operator/pull/672))
* Enable the documentation URL in the default menu items to be configured via the operator CLI ([#666](https://github.com/jaegertracing/jaeger-operator/pull/666))
* Adjusted the ALM examples and operator capabilities in CSV ([#665](https://github.com/jaegertracing/jaeger-operator/pull/665))
* Bring jaeger operator repo in line with contributing guidelines in mai… ([#664](https://github.com/jaegertracing/jaeger-operator/pull/664))
* Fix error handling when getting environment variable value ([#661](https://github.com/jaegertracing/jaeger-operator/pull/661))
* Update install-sdk to work on Mac ([#660](https://github.com/jaegertracing/jaeger-operator/pull/660))
* Improved the install-sdk target ([#653](https://github.com/jaegertracing/jaeger-operator/pull/653))
* Use elasticsearch operator 4.2, add workflow for 4.1 ([#631](https://github.com/jaegertracing/jaeger-operator/pull/631))
* Load env variables in the given secretName in Spark dependencies ([#651](https://github.com/jaegertracing/jaeger-operator/pull/651))
* Added default agent tags ([#648](https://github.com/jaegertracing/jaeger-operator/pull/648))
1.14.0 (2019-09-04)
-------------------
* Add commonSpec to other jobs (es-index-cleaner, es-rollover, cassandr… ([#640](https://github.com/jaegertracing/jaeger-operator/pull/640))
* Add common spec to dependencies ([#637](https://github.com/jaegertracing/jaeger-operator/pull/637))
* Add resource limits for spark dependencies cronjob ([#620](https://github.com/jaegertracing/jaeger-operator/pull/620))
* Add Jaeger version to Elasticsearch job images ([#628](https://github.com/jaegertracing/jaeger-operator/pull/628))
* Add badger to supported list of storage types ([#616](https://github.com/jaegertracing/jaeger-operator/pull/616))
* Get rid of finalizer, clean sidecars when no jaeger instance found ([#575](https://github.com/jaegertracing/jaeger-operator/pull/575))
* Deploy production ready self provisioned ES by default ([#585](https://github.com/jaegertracing/jaeger-operator/pull/585))
* Always deploy client,data nodes with master node ([#586](https://github.com/jaegertracing/jaeger-operator/pull/586))
* Configure index cleaner properly when rollover is enabled ([#587](https://github.com/jaegertracing/jaeger-operator/pull/587))
* Agent service ports with correct protocol ([#579](https://github.com/jaegertracing/jaeger-operator/pull/579))
* Renamed the ManagedBy label to OperatedBy ([#576](https://github.com/jaegertracing/jaeger-operator/pull/576))
* Added htpasswd option to the OpenShift OAuth type ([#573](https://github.com/jaegertracing/jaeger-operator/pull/573))
* Changed Operator to set ownership of the instances it manages ([#571](https://github.com/jaegertracing/jaeger-operator/pull/571))
* Added upgrade mechanism for managed Jaeger instances ([#476](https://github.com/jaegertracing/jaeger-operator/pull/476))
* Check and update finalizers before setting APIVersion and Kind ([#558](https://github.com/jaegertracing/jaeger-operator/pull/558))
* Remove sidecar when instance is deleted ([#453](https://github.com/jaegertracing/jaeger-operator/pull/453))
* Allow setting es-operator-image ([#549](https://github.com/jaegertracing/jaeger-operator/pull/549))
* Use zero redundancy when number of ES nodes is 1 ([#539](https://github.com/jaegertracing/jaeger-operator/pull/539))
* Use es-operator from 4.1 branch ([#537](https://github.com/jaegertracing/jaeger-operator/pull/537))
* Reinstated the service metrics ([#530](https://github.com/jaegertracing/jaeger-operator/pull/530))
* Use ES single redundancy by default ([#531](https://github.com/jaegertracing/jaeger-operator/pull/531))
* Change replace method, to remain compatible with golang 1.11 ([#529](https://github.com/jaegertracing/jaeger-operator/pull/529))
* Avoid touching the original structure of the options. ([#523](https://github.com/jaegertracing/jaeger-operator/pull/523))
* Prevented the Operator from overriding Secrets/ImagePullSecrets on ServiceAccounts ([#526](https://github.com/jaegertracing/jaeger-operator/pull/526))
* Added support for OpenShift-specific OAuth Proxy options ([#508](https://github.com/jaegertracing/jaeger-operator/pull/508))
* Allowed usage of custom SA for OAuth Proxy ([#520](https://github.com/jaegertracing/jaeger-operator/pull/520))
* Make sure the ES operator's UUID is a valid DNS name ([#515](https://github.com/jaegertracing/jaeger-operator/pull/515))
* Set the ES node GenUUID to explicit value based on jaeger instance namespace and name ([#495](https://github.com/jaegertracing/jaeger-operator/pull/495))
* Add linkerd.io/inject=disabled annotation ([#507](https://github.com/jaegertracing/jaeger-operator/pull/507))
1.13.1 (2019-07-05)
-------------------
* Bump Jaeger to 1.13 ([#504](https://github.com/jaegertracing/jaeger-operator/pull/504))
* Disable the property ttlSecondsAfterFinished ([#503](https://github.com/jaegertracing/jaeger-operator/pull/503))
* Set default redundancy policy to zero ([#501](https://github.com/jaegertracing/jaeger-operator/pull/501))
1.13.0 (2019-07-02)
-------------------
* Changed to always use namespace when a name is involved ([#485](https://github.com/jaegertracing/jaeger-operator/pull/485))
* Sanitize names that must follow DNS naming rules ([#483](https://github.com/jaegertracing/jaeger-operator/pull/483))
* Added instructions for daemonsets on OpenShift ([#346](https://github.com/jaegertracing/jaeger-operator/pull/346))
* Enable completion time-to-live to be set on all jobs ([#407](https://github.com/jaegertracing/jaeger-operator/pull/407))
1.12.1 (2019-06-06)
-------------------
* Removed 'expose metrics port' to prevent 'failed to create or get service' error ([#462](https://github.com/jaegertracing/jaeger-operator/pull/462))
* Add support for securityContext and serviceAccount ([#456](https://github.com/jaegertracing/jaeger-operator/pull/456))
* Add install SDK goal to make ([#458](https://github.com/jaegertracing/jaeger-operator/pull/458))
* Upgraded the operator-sdk version to 0.8.1 ([#449](https://github.com/jaegertracing/jaeger-operator/pull/449))
* Switch to go modules from dep ([#449](https://github.com/jaegertracing/jaeger-operator/pull/449))
* Do not set a default Elasticsearch image ([#450](https://github.com/jaegertracing/jaeger-operator/pull/450))
* Log the operator image name when created ([#452](https://github.com/jaegertracing/jaeger-operator/pull/452))
* Add label to the common spec ([#445](https://github.com/jaegertracing/jaeger-operator/pull/445))
* Fix injecting volumes into rollover jobs ([#446](https://github.com/jaegertracing/jaeger-operator/pull/446))
* Remove race condition by disabling esIndexCleaner till after SmokeTes… ([#437](https://github.com/jaegertracing/jaeger-operator/pull/437))
* Fix runtime panic when trying to update operator controlled resources that don't have annotation or labels ([#433](https://github.com/jaegertracing/jaeger-operator/pull/433))
1.12.0 (2019-05-22)
-------------------
* Update to 1.12 and use new admin ports ([#425](https://github.com/jaegertracing/jaeger-operator/pull/425))
* Use ephemeral storage for Kafka tests ([#419](https://github.com/jaegertracing/jaeger-operator/pull/419))
* Fix csv example and add spec.maturity ([#416](https://github.com/jaegertracing/jaeger-operator/pull/416))
* Add resources requests/limits to oauth_proxy ([#410](https://github.com/jaegertracing/jaeger-operator/pull/410))
* Check that context is not nil before calling cleanup ([#413](https://github.com/jaegertracing/jaeger-operator/pull/413))
* Improve error message when queries fail ([#402](https://github.com/jaegertracing/jaeger-operator/pull/402))
* Add resource requirements to sidecar agent ([#401](https://github.com/jaegertracing/jaeger-operator/pull/401))
* Add streaming e2e tests ([#400](https://github.com/jaegertracing/jaeger-operator/pull/400))
* Make sure to call ctx.cleanup if prepare() fails ([#389](https://github.com/jaegertracing/jaeger-operator/pull/389))
* Change how Kafka is configured for collector and ingester ([#390](https://github.com/jaegertracing/jaeger-operator/pull/390))
* Use storage namespace in index cleaner test ([#382](https://github.com/jaegertracing/jaeger-operator/pull/382))
* Fix rbac policy issue with blockOwnerDeletion ([#384](https://github.com/jaegertracing/jaeger-operator/pull/384))
* Reinstate gosec with fix for OOM error ([#381](https://github.com/jaegertracing/jaeger-operator/pull/381))
* Enhance ES index cleaner e2e test to verify indices have been removed ([#378](https://github.com/jaegertracing/jaeger-operator/pull/378))
* Add owner ref on operator's service to ensure it gets deleted when op… ([#377](https://github.com/jaegertracing/jaeger-operator/pull/377))
* Update CSV description to comply with guidelines ([#374](https://github.com/jaegertracing/jaeger-operator/pull/374))
* Include elasticsearch statefulset nodes in availability check ([#371](https://github.com/jaegertracing/jaeger-operator/pull/371))
* Fail lint goal if not empty ([#372](https://github.com/jaegertracing/jaeger-operator/pull/372))
1.11.1 (2019-04-09)
-------------------
* Include docs for common config ([#367](https://github.com/jaegertracing/jaeger-operator/pull/367))
* Reinstated the registration of ES types ([#366](https://github.com/jaegertracing/jaeger-operator/pull/366))
* Add support for affinity and tolerations ([#361](https://github.com/jaegertracing/jaeger-operator/pull/361))
* Support injection of JAEGER_SERVICE_NAME based on app or k8s recommended labels ([#362](https://github.com/jaegertracing/jaeger-operator/pull/362))
* Change ES operator apiversion ([#360](https://github.com/jaegertracing/jaeger-operator/pull/360))
* Update test to run on OpenShift ([#350](https://github.com/jaegertracing/jaeger-operator/pull/350))
* Add prometheus scrape 'false' annotation to headless collector service ([#348](https://github.com/jaegertracing/jaeger-operator/pull/348))
* Derive agent container/host ports from options if specified ([#353](https://github.com/jaegertracing/jaeger-operator/pull/353))
1.11.0 (2019-03-22)
-------------------
### Breaking changes
* Moved from v1alpha1 to v1 ([#265](https://github.com/jaegertracing/jaeger-operator/pull/265))
* Use storage flags instead of CR properties for spark job ([#295](https://github.com/jaegertracing/jaeger-operator/pull/295))
* Changed from 'size' to 'replicas' ([#271](https://github.com/jaegertracing/jaeger-operator/pull/271)). "Size" will still work for the next couple of releases.
### Other changes
* Initialise menu to include Log Out option when using OAuth Proxy ([#344](https://github.com/jaegertracing/jaeger-operator/pull/344))
* Change Operator provider to CNCF ([#263](https://github.com/jaegertracing/jaeger-operator/pull/263))
* Added note about the apiVersion used up to 1.10.0 ([#283](https://github.com/jaegertracing/jaeger-operator/pull/283))
* Implemented a second service for the collector ([#339](https://github.com/jaegertracing/jaeger-operator/pull/339))
* Enabled DNS as the service discovery mechanism for agent => collector communication ([#333](https://github.com/jaegertracing/jaeger-operator/pull/333))
* Sorted the container arguments inside deployments ([#337](https://github.com/jaegertracing/jaeger-operator/pull/337))
* Use client certs for elasticsearch ([#325](https://github.com/jaegertracing/jaeger-operator/pull/325))
* Load back Elasticsearch certs from secrets ([#324](https://github.com/jaegertracing/jaeger-operator/pull/324))
* Disable spark dependencies for self provisioned es ([#319](https://github.com/jaegertracing/jaeger-operator/pull/319))
* Remove index cleaner from prod-es-deploy example ([#314](https://github.com/jaegertracing/jaeger-operator/pull/314))
* Set default query timeout for provisioned ES ([#313](https://github.com/jaegertracing/jaeger-operator/pull/313))
* Automatically enable/disable dependencies tab ([#311](https://github.com/jaegertracing/jaeger-operator/pull/311))
* Unmarshall numbers in options to number not float64 ([#308](https://github.com/jaegertracing/jaeger-operator/pull/308))
* Inject archive index configuration for provisioned ES ([#309](https://github.com/jaegertracing/jaeger-operator/pull/309))
* Update #305: add gRPC and health ports to the Jaeger collector service ([#306](https://github.com/jaegertracing/jaeger-operator/pull/306))
* Enable archive button if archive storage is enabled ([#303](https://github.com/jaegertracing/jaeger-operator/pull/303))
* Fix reverting ingress security to oauth-proxy on OpenShift if set to none ([#301](https://github.com/jaegertracing/jaeger-operator/pull/301))
* Change agent reporter to GRPC ([#299](https://github.com/jaegertracing/jaeger-operator/pull/299))
* Bump jaeger version to 1.11 ([#300](https://github.com/jaegertracing/jaeger-operator/pull/300))
* Enable agent readiness probe ([#297](https://github.com/jaegertracing/jaeger-operator/pull/297))
* Use storage flags instead of CR properties for spark job ([#295](https://github.com/jaegertracing/jaeger-operator/pull/295))
* Change operator.yaml to use master, to keep the README up to date with the latest version ([#296](https://github.com/jaegertracing/jaeger-operator/pull/296))
* Add Elasticsearch image to CR and flag ([#289](https://github.com/jaegertracing/jaeger-operator/pull/289))
* Updated to Operator SDK 0.5.0 ([#273](https://github.com/jaegertracing/jaeger-operator/pull/273))
* Block until objects have been created and are ready ([#279](https://github.com/jaegertracing/jaeger-operator/pull/279))
* Add rollover support ([#267](https://github.com/jaegertracing/jaeger-operator/pull/267))
* Added publishing of major.minor image for the operator ([#274](https://github.com/jaegertracing/jaeger-operator/pull/274))
* Use only ES data nodes to calculate shards ([#257](https://github.com/jaegertracing/jaeger-operator/pull/257))
* Reinstated sidecar for query, plus small refactoring of sidecar ([#246](https://github.com/jaegertracing/jaeger-operator/pull/246))
* Remove ES master certs ([#256](https://github.com/jaegertracing/jaeger-operator/pull/256))
* Store back the CR only if it has changed ([#249](https://github.com/jaegertracing/jaeger-operator/pull/249))
* Fixed role rule for Elasticsearch ([#251](https://github.com/jaegertracing/jaeger-operator/pull/251))
* Wait for elasticsearch cluster to be up ([#242](https://github.com/jaegertracing/jaeger-operator/pull/242))
1.10.0 (2019-02-28)
-------------------
* Automatically detect when the ES operator is available ([#239](https://github.com/jaegertracing/jaeger-operator/pull/239))
* Adjusted logs to be consistent across the code base ([#237](https://github.com/jaegertracing/jaeger-operator/pull/237))
* Fixed deployment of Elasticsearch via its operator ([#234](https://github.com/jaegertracing/jaeger-operator/pull/234))
* Set ES shards and replicas based on redundancy policy ([#229](https://github.com/jaegertracing/jaeger-operator/pull/229))
* Update Jaeger CR ([#193](https://github.com/jaegertracing/jaeger-operator/pull/193))
* Add storage secrets to es-index-cleaner cronjob ([#197](https://github.com/jaegertracing/jaeger-operator/pull/197))
* Removed constraint on namespace when obtaining available Jaeger instances ([#213](https://github.com/jaegertracing/jaeger-operator/pull/213))
* Added workaround for kubectl logs and get pods commands ([#225](https://github.com/jaegertracing/jaeger-operator/pull/225))
* Add -n observability so kubectl get deployment command works correctly ([#223](https://github.com/jaegertracing/jaeger-operator/pull/223))
* Added capability of detecting the platform ([#217](https://github.com/jaegertracing/jaeger-operator/pull/217))
* Deploy one ES node ([#221](https://github.com/jaegertracing/jaeger-operator/pull/221))
* Use centos image ([#220](https://github.com/jaegertracing/jaeger-operator/pull/220))
* Add support for deploying elasticsearch ([#191](https://github.com/jaegertracing/jaeger-operator/pull/191))
* Replaced use of strings.ToLower comparison with EqualFold ([#214](https://github.com/jaegertracing/jaeger-operator/pull/214))
* Bump Jaeger to 1.10 ([#212](https://github.com/jaegertracing/jaeger-operator/pull/212))
* Ignore golang coverage html ([#208](https://github.com/jaegertracing/jaeger-operator/pull/208))
1.9.2 (2019-02-11)
------------------
* Enable single operator to monitor all namespaces ([#188](https://github.com/jaegertracing/jaeger-operator/pull/188))
* Added flag to control the logging level ([#202](https://github.com/jaegertracing/jaeger-operator/pull/202))
* Updated operator-sdk to v0.4.1 ([#200](https://github.com/jaegertracing/jaeger-operator/pull/200))
* Added newline to the end of the role YAML file ([#199](https://github.com/jaegertracing/jaeger-operator/pull/199))
* Added mention to WATCH_NAMESPACE when running for OpenShift ([#195](https://github.com/jaegertracing/jaeger-operator/pull/195))
* Added openshift route to role ([#198](https://github.com/jaegertracing/jaeger-operator/pull/198))
* Added Route to SDK Scheme ([#194](https://github.com/jaegertracing/jaeger-operator/pull/194))
* Add Jaeger CSV and Package for OLM integration and deployment of the … ([#173](https://github.com/jaegertracing/jaeger-operator/pull/173))
1.9.1 (2019-01-30)
------------------
* Remove debug logging from simple-streaming example ([#185](https://github.com/jaegertracing/jaeger-operator/pull/185))
* Add ingester (and kafka) support ([#168](https://github.com/jaegertracing/jaeger-operator/pull/168))
* When filtering storage options, also include '-archive' related options ([#182](https://github.com/jaegertracing/jaeger-operator/pull/182))
1.9.0 (2019-01-23)
------------------
* Changed to use recommended labels ([#172](https://github.com/jaegertracing/jaeger-operator/pull/172))
* Enable dependencies and index cleaner by default ([#162](https://github.com/jaegertracing/jaeger-operator/pull/162))
* Fix log when spark dependencies are used with unsupported storage ([#161](https://github.com/jaegertracing/jaeger-operator/pull/161))
* Fix issue where the service account could not be created by the operator on OpenShift ([#165](https://github.com/jaegertracing/jaeger-operator/pull/165))
* Add Elasticsearch index cleaner as cron job ([#155](https://github.com/jaegertracing/jaeger-operator/pull/155))
* Fix import order for collector-test ([#158](https://github.com/jaegertracing/jaeger-operator/pull/158))
* Smoke test ([#145](https://github.com/jaegertracing/jaeger-operator/pull/145))
* Add deploy clean target and rename es/cass to deploy- ([#149](https://github.com/jaegertracing/jaeger-operator/pull/149))
* Add spark job ([#140](https://github.com/jaegertracing/jaeger-operator/pull/140))
* Automatically format imports ([#151](https://github.com/jaegertracing/jaeger-operator/pull/151))
* Silence 'mkdir' from e2e-tests ([#153](https://github.com/jaegertracing/jaeger-operator/pull/153))
* Move pkg/configmap to pkg/config/ui ([#152](https://github.com/jaegertracing/jaeger-operator/pull/152))
* Fix secrets readme ([#150](https://github.com/jaegertracing/jaeger-operator/pull/150))
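The sampling-strategies entry above (#139) configures collector sampling through the Jaeger CR. A minimal sketch, assuming the documented `spec.sampling.options` field and using the current `jaegertracing.io/v1` apiVersion (at the time of 1.8.2 the legacy v1alpha1 group was still in use; the instance name is an example):

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: with-sampling          # example name, not from the changelog
spec:
  sampling:
    options:                   # serialized into the sampling strategies file
      default_strategy:
        type: probabilistic
        param: 0.5             # sample ~50% of traces by default
```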
1.8.2 (2018-12-03)
------------------
* Configure sampling strategies ([#139](https://github.com/jaegertracing/jaeger-operator/pull/139))
* Add support for secrets ([#114](https://github.com/jaegertracing/jaeger-operator/pull/114))
* Fix crd links ([#132](https://github.com/jaegertracing/jaeger-operator/pull/132))
* Create e2e testdir, fix contributing readme ([#131](https://github.com/jaegertracing/jaeger-operator/pull/131))
* Enable JAEGER_SERVICE_NAME and JAEGER_PROPAGATION env vars to be set … ([#128](https://github.com/jaegertracing/jaeger-operator/pull/128))
* Add CRD to install steps, and update cleanup instructions ([#129](https://github.com/jaegertracing/jaeger-operator/pull/129))
* Rename controller to strategy ([#125](https://github.com/jaegertracing/jaeger-operator/pull/125))
* Add tests for new operator-sdk related code ([#122](https://github.com/jaegertracing/jaeger-operator/pull/122))
* Update README.adoc to match yaml files in deploy ([#124](https://github.com/jaegertracing/jaeger-operator/pull/124))
1.8.1 (2018-11-21)
------------------
* Add support for UI configuration ([#115](https://github.com/jaegertracing/jaeger-operator/pull/115))
* Use proper jaeger-operator version for e2e tests and remove readiness check from DaemonSet ([#120](https://github.com/jaegertracing/jaeger-operator/pull/120))
* Migrate to Operator SDK 0.1.0 ([#116](https://github.com/jaegertracing/jaeger-operator/pull/116))
* Fix changelog 'new features' header for 1.8 ([#113](https://github.com/jaegertracing/jaeger-operator/pull/113))
1.8.0 (2018-11-13)
------------------
*Notable new Features*
* Query base path should be used to configure correct path in ingress ([#108](https://github.com/jaegertracing/jaeger-operator/pull/108))
* Enable resources to be defined at top level and overridden at compone… ([#110](https://github.com/jaegertracing/jaeger-operator/pull/110))
* Add OAuth Proxy to UI when on OpenShift ([#100](https://github.com/jaegertracing/jaeger-operator/pull/100))
* Enable top level annotations to be defined ([#97](https://github.com/jaegertracing/jaeger-operator/pull/97))
* Support volumes and volumeMounts ([#82](https://github.com/jaegertracing/jaeger-operator/pull/82))
* Add support for OpenShift routes ([#93](https://github.com/jaegertracing/jaeger-operator/pull/93))
* Enable annotations to be specified with the deployable components ([#86](https://github.com/jaegertracing/jaeger-operator/pull/86))
* Add support for Cassandra create-schema job ([#71](https://github.com/jaegertracing/jaeger-operator/pull/71))
* Inject sidecar in properly annotated pods ([#58](https://github.com/jaegertracing/jaeger-operator/pull/58))
* Support deployment of agent as a DaemonSet ([#52](https://github.com/jaegertracing/jaeger-operator/pull/52))
*Breaking changes*
* Change CRD to use lower camel case ([#87](https://github.com/jaegertracing/jaeger-operator/pull/87))
* Factor out ingress from all-in-one and query, as common to both but i… ([#91](https://github.com/jaegertracing/jaeger-operator/pull/91))
* Remove zipkin service ([#75](https://github.com/jaegertracing/jaeger-operator/pull/75))
*Full list of commits:*
* Query base path should be used to configure correct path in ingress ([#108](https://github.com/jaegertracing/jaeger-operator/pull/108))
* Enable resources to be defined at top level and overridden at compone… ([#110](https://github.com/jaegertracing/jaeger-operator/pull/110))
* Fix disable-oauth-proxy example ([#107](https://github.com/jaegertracing/jaeger-operator/pull/107))
* Add OAuth Proxy to UI when on OpenShift ([#100](https://github.com/jaegertracing/jaeger-operator/pull/100))
* Refactor common spec elements into a single struct with common proces… ([#105](https://github.com/jaegertracing/jaeger-operator/pull/105))
* Ensure 'make generate' has been executed when model changes are made ([#101](https://github.com/jaegertracing/jaeger-operator/pull/101))
* Enable top level annotations to be defined ([#97](https://github.com/jaegertracing/jaeger-operator/pull/97))
* Update generated code and reverted change to 'all-in-one' in CRD ([#98](https://github.com/jaegertracing/jaeger-operator/pull/98))
* Support volumes and volumeMounts ([#82](https://github.com/jaegertracing/jaeger-operator/pull/82))
* Update readme to include info about storage options being located in … ([#96](https://github.com/jaegertracing/jaeger-operator/pull/96))
* Enable storage options to be filtered out based on specified storage … ([#94](https://github.com/jaegertracing/jaeger-operator/pull/94))
* Add support for OpenShift routes ([#93](https://github.com/jaegertracing/jaeger-operator/pull/93))
* Change CRD to use lower camel case ([#87](https://github.com/jaegertracing/jaeger-operator/pull/87))
* Factor out ingress from all-in-one and query, as common to both but i… ([#91](https://github.com/jaegertracing/jaeger-operator/pull/91))
* Fix operator SDK version as master is too unpredicatable at the moment ([#92](https://github.com/jaegertracing/jaeger-operator/pull/92))
* Update generated file after new annotations field ([#90](https://github.com/jaegertracing/jaeger-operator/pull/90))
* Enable annotations to be specified with the deployable components ([#86](https://github.com/jaegertracing/jaeger-operator/pull/86))
* Remove zipkin service ([#75](https://github.com/jaegertracing/jaeger-operator/pull/75))
* Add support for Cassandra create-schema job ([#71](https://github.com/jaegertracing/jaeger-operator/pull/71))
* Fix table of contents on readme ([#73](https://github.com/jaegertracing/jaeger-operator/pull/73))
* Update the Operator SDK version ([#69](https://github.com/jaegertracing/jaeger-operator/pull/69))
* Add sidecar.istio.io/inject=false annotation to all-in-one, agent (da… ([#67](https://github.com/jaegertracing/jaeger-operator/pull/67))
* Fix zipkin port issue ([#65](https://github.com/jaegertracing/jaeger-operator/pull/65))
* Go 1.11.1 ([#61](https://github.com/jaegertracing/jaeger-operator/pull/61))
* Inject sidecar in properly annotated pods ([#58](https://github.com/jaegertracing/jaeger-operator/pull/58))
* Support deployment of agent as a DaemonSet ([#52](https://github.com/jaegertracing/jaeger-operator/pull/52))
* Normalize options on the stub and update the normalized CR ([#54](https://github.com/jaegertracing/jaeger-operator/pull/54))
* Document the disable ingress feature ([#55](https://github.com/jaegertracing/jaeger-operator/pull/55))
* dep ensure ([#51](https://github.com/jaegertracing/jaeger-operator/pull/51))
* Add support for JaegerIngressSpec to all-in-one
1.7.0 (2018-09-25)
------------------
This release brings Jaeger v1.7 to the Operator.
*Full list of commits:*
* Release v1.7.0
* Bump Jaeger to 1.7 ([#41](https://github.com/jaegertracing/jaeger-operator/pull/41))
1.6.5 (2018-09-21)
------------------
This is our initial release based on Jaeger 1.6.
*Full list of commits:*
* Release v1.6.5
* Push the tag with the new commit to master, not the release tag
* Fix git push syntax
* Push tag to master
* Merge release commit into master ([#39](https://github.com/jaegertracing/jaeger-operator/pull/39))
* Add query ingress enable switch ([#36](https://github.com/jaegertracing/jaeger-operator/pull/36))
* Fix the run goal ([#35](https://github.com/jaegertracing/jaeger-operator/pull/35))
* Release v1.6.1
* Add 'build' step when publishing image
* Fix docker push command and update release instructions
* Add release scripts ([#32](https://github.com/jaegertracing/jaeger-operator/pull/32))
* Fix command to deploy the simplest operator ([#34](https://github.com/jaegertracing/jaeger-operator/pull/34))
* Add IntelliJ specific files to gitignore ([#33](https://github.com/jaegertracing/jaeger-operator/pull/33))
* Add prometheus scrape annotations to Jaeger collector, query and all-in-one ([#27](https://github.com/jaegertracing/jaeger-operator/pull/27))
* Remove work in progress notice
* Add instructions on how to run the operator on OpenShift
* Support Jaeger version and image override
* Fix publishing of release
* Release Docker image upon merge to master
* Reuse the same ES for all tests
* Improved how to execute the e2e tests
* Correct uninstall doc to reference delete not create ([#16](https://github.com/jaegertracing/jaeger-operator/pull/16))
* Set ENTRYPOINT for Dockerfile
* Run 'docker' target only before e2e-tests
* 'dep ensure' after adding Cobra/Viper
* Update the Jaeger Operator version at build time
* Add ingress permission to the jaeger-operator
* Install golint/gosec
* Disabled e2e tests on Travis
* Initial working version
* INITIAL COMMIT


@@ -1,34 +0,0 @@
The following table shows the compatibility of Jaeger Operator with three different components: Kubernetes, Strimzi Operator, and Cert-Manager.
| Jaeger Operator | Kubernetes | Strimzi Operator | Cert-Manager |
|-----------------|----------------|--------------------|--------------|
| v1.62.x | v1.19 to v1.30 | v0.32 | v1.6.1 |
| v1.61.x | v1.19 to v1.30 | v0.32 | v1.6.1 |
| v1.60.x | v1.19 to v1.30 | v0.32 | v1.6.1 |
| v1.59.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.58.x | skipped | skipped | skipped |
| v1.57.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.56.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.55.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.54.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.53.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.52.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.51.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.50.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.49.x | v1.19 to v1.28 | v0.32 | v1.6.1 |
| v1.48.x | v1.19 to v1.27 | v0.32 | v1.6.1 |
| v1.47.x | v1.19 to v1.27 | v0.32 | v1.6.1 |
| v1.46.x | v1.19 to v1.26 | v0.32 | v1.6.1 |
| v1.45.x | v1.19 to v1.26 | v0.32 | v1.6.1 |
| v1.44.x | v1.19 to v1.26 | v0.32 | v1.6.1 |
| v1.43.x | v1.19 to v1.26 | v0.32 | v1.6.1 |
| v1.42.x | v1.19 to v1.26 | v0.32 | v1.6.1 |
| v1.41.x | v1.19 to v1.25 | v0.30 | v1.6.1 |
| v1.40.x | v1.19 to v1.25 | v0.30 | v1.6.1 |
| v1.39.x | v1.19 to v1.25 | v0.30 | v1.6.1 |
| v1.38.x | v1.19 to v1.25 | v0.30 | v1.6.1 |
| v1.37.x | v1.19 to v1.24 | v0.23 | v1.6.1 |
| v1.36.x | v1.19 to v1.24 | v0.23 | v1.6.1 |
| v1.35.x | v1.19 to v1.24 | v0.23 | v1.6.1 |
| v1.34.x | v1.19 to v1.24 | v0.23 | v1.6.1 |
| v1.33.x | v1.19 to v1.23 | v0.23 | v1.6.1 |

CONTRIBUTING.adoc Normal file

@@ -0,0 +1,217 @@
= How to Contribute to the Jaeger Operator for Kubernetes
:toc:
We'd love your help!
This project is link:LICENSE[Apache 2.0 licensed] and accepts contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted.
We gratefully welcome improvements to documentation as well as to code.
== Certificate of Origin
By contributing to this project you agree to the link:https://developercertificate.org/[Developer Certificate of Origin] (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the link:DCO[DCO] file for details.
== Getting Started
This project is a regular link:https://coreos.com/operators/[Kubernetes Operator] built using the Operator SDK. Refer to the Operator SDK documentation to understand the basic architecture of this operator.
=== Installing the Operator SDK command line tool
At the time of this writing, the link:https://github.com/operator-framework/operator-sdk[Operator SDK GitHub page] listed the following commands as required to install the command line tool:
[source,bash]
----
mkdir -p $GOPATH/src/github.com/operator-framework
cd $GOPATH/src/github.com/operator-framework
git clone https://github.com/operator-framework/operator-sdk
cd operator-sdk
git checkout v0.5.0
make dep
make install
----
Alternatively, a released binary can be used instead:
[source,bash]
----
curl https://github.com/operator-framework/operator-sdk/releases/download/v0.5.0/operator-sdk-v0.5.0-x86_64-linux-gnu -sLo $GOPATH/bin/operator-sdk
chmod +x $GOPATH/bin/operator-sdk
----
NOTE: Make sure your `$GOPATH/bin` is part of your regular `$PATH`.
=== Developing
As usual for operators following the Operator SDK, the dependencies are checked into the source repository under the `vendor` directory. The dependencies are managed using link:https://github.com/golang/dep[`go dep`]. Refer to that project's documentation for instructions on how to add or update dependencies.
The first step is to get a local Kubernetes instance up and running. The recommended approach is using `minikube`. Refer to the Kubernetes' link:https://kubernetes.io/docs/tasks/tools/install-minikube/[documentation] for instructions on how to install it.
Once `minikube` is installed, it can be started with:
[source,bash]
----
minikube start
----
NOTE: Make sure to read the documentation to learn the performance switches that can be applied to your platform.
Once minikube has finished starting, get the Operator running:
[source,bash]
----
make run
----
At this point, a Jaeger instance can be installed:
[source,bash]
----
kubectl apply -f deploy/examples/simplest.yaml
kubectl get jaegers
kubectl get pods
----
To remove the instance:
[source,bash]
----
kubectl delete -f deploy/examples/simplest.yaml
----
Tests should be simple unit tests and/or end-to-end tests. For small changes, unit tests should be sufficient, but every new feature should be accompanied with end-to-end tests as well. Tests can be executed with:
[source,bash]
----
make test
----
NOTE: you can adjust the Docker image namespace by overriding the variable `NAMESPACE`, like: `make test NAMESPACE=quay.io/my-username`. The full Docker image name can be customized by overriding `BUILD_IMAGE` instead, like: `make test BUILD_IMAGE=quay.io/my-username/jaeger-operator:0.0.1`
Similar instructions also work for OpenShift, but the target `run-openshift` can be used instead of `run`. Make sure you are using the `default` namespace or that you are overriding the target namespace by setting `NAMESPACE`, like: `make run-openshift WATCH_NAMESPACE=myproject`
==== Model changes
The Operator SDK generates the `pkg/apis/jaegertracing/v1/zz_generated.deepcopy.go` file via the command `make generate`. This should be executed whenever there's a model change (`pkg/apis/jaegertracing/v1/jaeger_types.go`)
==== Ingress configuration
Kubernetes comes with no ingress provider by default. For development purposes, when running `minikube`, the following command can be executed to install an ingress provider:
[source,bash]
----
make ingress
----
This will install the `NGINX` ingress provider. It's recommended to wait for the ingress pods to be in the `READY` and `RUNNING` state before starting the operator. You can check it by running:
[source,bash]
----
kubectl get pods -n ingress-nginx
----
To verify that it's working, deploy the `simplest.yaml` and check the ingress routes:
[source,bash]
----
$ kubectl apply -f deploy/examples/simplest.yaml
jaeger.jaegertracing.io/simplest created
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
simplest-query * 192.168.122.69 80 12s
----
Accessing the provided "address" in your web browser should display the Jaeger UI.
==== Storage configuration
There are a set of templates under the `test` directory that can be used to setup an Elasticsearch and/or Cassandra cluster. Alternatively, the following commands can be executed to install it:
[source,bash]
----
make es
make cassandra
----
==== Operator-Lifecycle-Manager Integration
The link:https://github.com/operator-framework/operator-lifecycle-manager/[Operator-Lifecycle-Manager (OLM)] can install, manage, and upgrade operators and their dependencies in a cluster.
With OLM, users can:
* Define applications as a single Kubernetes resource that encapsulates requirements and metadata
* Install applications automatically with dependency resolution or manually with nothing but kubectl
* Upgrade applications automatically with different approval policies
OLM also enforces some constraints on the components it manages in order to ensure a good user experience.
The Jaeger community provides and maintains a link:https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/building-your-csv.md/[ClusterServiceVersion (CSV) YAML] to integrate with OLM.
Starting from operator-sdk v0.5.0, one can generate and update CSVs based on the yaml files in the deploy folder.
The Jaeger CSV can be updated to version 1.9.0 with the following command:
[source,bash]
----
$ operator-sdk olm-catalog gen-csv --csv-version 1.9.0
INFO[0000] Generating CSV manifest version 1.9.0
INFO[0000] Create deploy/olm-catalog/jaeger-operator.csv.yaml
INFO[0000] Create deploy/olm-catalog/_generated.concat_crd.yaml
----
The generated CSV yaml should then be compared and used to update the `deploy/olm-catalog/jaeger.clusterserviceversion.yaml` file, which represents the stable version copied to the OperatorHub following each Jaeger Operator release. Once merged, the `jaeger-operator.csv.yaml` file should be removed.
The jaeger.clusterserviceversion.yaml file can then be tested with this command:
[source,bash]
----
$ operator-sdk scorecard --cr-manifest deploy/examples/simplest.yaml --csv-path deploy/olm-catalog/jaeger.clusterserviceversion.yaml --init-timeout 30
Checking for existence of spec and status blocks in CR
Checking that operator actions are reflected in status
Checking that writing into CRs has an effect
Checking for CRD resources
Checking for existence of example CRs
Checking spec descriptors
Checking status descriptors
Basic Operator:
Spec Block Exists: 1/1 points
Status Block Exist: 1/1 points
Operator actions are reflected in status: 0/1 points
Writing into CRs has an effect: 1/1 points
OLM Integration:
Owned CRDs have resources listed: 0/1 points
CRs have at least 1 example: 1/1 points
Spec fields with descriptors: 0/12 points
Status fields with descriptors: N/A (depends on an earlier test that failed)
Total Score: 4/18 points
----
==== E2E tests
The whole set of end-to-end tests can be executed via:
[source,bash]
----
$ make e2e-tests
----
The end-to-end tests are split into tags and can be executed in separate groups, such as:
[source,bash]
----
$ make e2e-tests-smoke
----
Other targets include `e2e-tests-cassandra` and `e2e-tests-elasticsearch`. Refer to the `Makefile` for an up-to-date list of targets.
If you face issues like the one below, make sure you don't have any Jaeger instances (`kubectl get jaegers`) or Ingresses (`kubectl get ingresses`) running:
[source]
----
--- FAIL: TestSmoke (316.59s)
--- FAIL: TestSmoke/smoke (316.55s)
--- FAIL: TestSmoke/smoke/daemonset (115.54s)
...
...
daemonset.go:30: timed out waiting for the condition
...
...
----


@@ -1,266 +0,0 @@
# How to Contribute to the Jaeger Operator for Kubernetes
We'd love your help!
This project is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted.
We gratefully welcome improvements to documentation as well as to code.
This project is a regular [Kubernetes Operator](https://coreos.com/operators/) built using the Operator SDK. Refer to the Operator SDK documentation to understand the basic architecture of this operator.
## Installing the Operator SDK command line tool
Follow the installation guidelines from [Operator SDK GitHub page](https://github.com/operator-framework/operator-sdk)
## Developing
As usual for operators following the Operator SDK in recent versions, the dependencies are managed using [`go modules`](https://golang.org/doc/go1.11#modules). Refer to that project's documentation for instructions on how to add or update dependencies.
The first step is to get a local Kubernetes instance up and running. The recommended approach for development is using `minikube` with *ingress* enabled. Refer to the Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) for instructions on how to install it.
Once `minikube` is installed, it can be started with:
```sh
minikube start --addons=ingress
```
NOTE: Make sure to read the documentation to learn the performance switches that can be applied to your platform.
Log into docker (or another image registry):
```sh
docker login --username <dockerusername>
```
Once minikube has finished starting, get the Operator running:
```sh
make cert-manager
IMG=docker.io/$USER/jaeger-operator:latest make generate bundle docker push deploy
```
NOTE: If your registry username is not the same as $USER, modify the previous command before executing it. Also change *docker.io* if you are using a different image registry.
At this point, a Jaeger instance can be installed:
```sh
kubectl apply -f examples/simplest.yaml
kubectl get jaegers
kubectl get pods
```
To verify the Jaeger instance is running, execute *minikube ip* and open that address in a browser, or follow the steps below:
```sh
export MINIKUBE_IP=`minikube ip`
curl http://{$MINIKUBE_IP}/api/services
```
NOTE: you may have to execute the *curl* command twice to get a non-empty result
Tests should be simple unit tests and/or end-to-end tests. For small changes, unit tests should be sufficient, but every new feature should be accompanied with end-to-end tests as well. Tests can be executed with:
```sh
make test
```
#### Cleaning up
To remove the instance:
```sh
kubectl delete -f examples/simplest.yaml
```
#### Model changes
The Operator SDK generates the `pkg/apis/jaegertracing/v1/zz_generated.*.go` files via the command `make generate`. This should be executed whenever there's a model change (`pkg/apis/jaegertracing/v1/jaeger_types.go`)
### Storage configuration
There are a set of templates under the `test` directory that can be used to setup an Elasticsearch and/or Cassandra cluster. Alternatively, the following commands can be executed to install it:
```sh
make es
make cassandra
```
### Operator-Lifecycle-Manager Integration
The [Operator-Lifecycle-Manager (OLM)](https://github.com/operator-framework/operator-lifecycle-manager/) can install, manage, and upgrade operators and their dependencies in a cluster.
With OLM, users can:
* Define applications as a single Kubernetes resource that encapsulates requirements and metadata
* Install applications automatically with dependency resolution or manually with nothing but kubectl
* Upgrade applications automatically with different approval policies
OLM also enforces some constraints on the components it manages in order to ensure a good user experience.
The Jaeger community provides and maintains a [ClusterServiceVersion (CSV) YAML](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md) to integrate with OLM.
Starting from operator-sdk v0.5.0, one can generate and update CSVs based on the yaml files in the deploy folder.
The Jaeger CSV can be updated to version 1.9.0 with the following command:
```sh
$ operator-sdk generate csv --csv-version 1.9.0
INFO[0000] Generating CSV manifest version 1.9.0
INFO[0000] Create deploy/olm-catalog/jaeger-operator.csv.yaml
INFO[0000] Create deploy/olm-catalog/_generated.concat_crd.yaml
```
The generated CSV yaml should then be compared and used to update the `deploy/olm-catalog/jaeger.clusterserviceversion.yaml` file, which represents the stable version copied to the OperatorHub following each Jaeger Operator release. Once merged, the `jaeger-operator.csv.yaml` file should be removed.
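One straightforward way to do that comparison locally (a sketch, assuming both files exist in your working tree):

```sh
$ diff -u deploy/olm-catalog/jaeger-operator.csv.yaml \
          deploy/olm-catalog/jaeger.clusterserviceversion.yaml
```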
The `jaeger.clusterserviceversion.yaml` file can then be tested with this command:
```sh
$ operator-sdk scorecard --cr-manifest examples/simplest.yaml --csv-path deploy/olm-catalog/jaeger.clusterserviceversion.yaml --init-timeout 30
Checking for existence of spec and status blocks in CR
Checking that operator actions are reflected in status
Checking that writing into CRs has an effect
Checking for CRD resources
Checking for existence of example CRs
Checking spec descriptors
Checking status descriptors
Basic Operator:
Spec Block Exists: 1/1 points
Status Block Exist: 1/1 points
Operator actions are reflected in status: 0/1 points
Writing into CRs has an effect: 1/1 points
OLM Integration:
Owned CRDs have resources listed: 0/1 points
CRs have at least 1 example: 1/1 points
Spec fields with descriptors: 0/12 points
Status fields with descriptors: N/A (depends on an earlier test that failed)
Total Score: 4/18 points
```
## E2E tests
### Requisites
Before running the E2E tests you need to install:
* [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation): a tool for running local Kubernetes clusters
* [KUTTL](https://kuttl.dev/docs/cli.html#setup-the-kuttl-kubectl-plugin): a tool to run the Kubernetes tests
### Running the E2E tests
#### Using KIND cluster
The whole set of end-to-end tests can be executed via:
```sh
$ make run-e2e-tests
```
The end-to-end tests are split into tags and can be executed in separate groups, such as:
```sh
$ make run-e2e-tests-examples
```
Other targets include `run-e2e-tests-cassandra` and `run-e2e-tests-elasticsearch`. You can list them by running:
```sh
$ make e2e-test-suites
```
**Note**: the following variables can be used to tune how the E2E tests run.
| Variable name | Description | Example usage |
|-------------------|-----------------------------------------------------|------------------------------------|
| KUTTL_OPTIONS | Options to pass directly to the KUTTL call | KUTTL_OPTIONS="--test es-rollover" |
| E2E_TESTS_TIMEOUT | Timeout for each step in the E2E tests. In seconds | E2E_TESTS_TIMEOUT=500 |
| USE_KIND_CLUSTER | Start a KIND cluster to run the E2E tests | USE_KIND_CLUSTER=true |
| KIND_KEEP_CLUSTER | Do not remove the KIND cluster after running the tests | KIND_KEEP_CLUSTER=true |
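For example, several of these variables can be combined in one invocation (the values below are purely illustrative):

```sh
$ make run-e2e-tests USE_KIND_CLUSTER=true KIND_KEEP_CLUSTER=true \
    E2E_TESTS_TIMEOUT=500 KUTTL_OPTIONS="--test es-rollover"
```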
Also, you can enable/disable the installation of the different operators needed
to run the tests:
| Variable name | Description | Example usage |
|----------------|---------------------------------------------|---------------------|
| JAEGER_OLM | Jaeger Operator was installed using OLM | JAEGER_OLM=true |
| KAFKA_OLM | Kafka Operator was installed using OLM | KAFKA_OLM=true |
| PROMETHEUS_OLM | Prometheus Operator was installed using OLM | PROMETHEUS_OLM=true |
#### An external cluster (like OpenShift)
The commands from the previous section also apply when running the E2E tests in an
external cluster such as OpenShift, minikube, or another Kubernetes environment. The only
differences are:
* You need to log in your Kubernetes cluster before running the E2E tests
* You need to provide the `USE_KIND_CLUSTER=false` parameter when calling `make`
```sh
$ make run-e2e-tests USE_KIND_CLUSTER=false
```
### Developing new E2E tests
E2E tests are located under `tests/e2e`. Each folder is associated with an E2E test suite.
Tests are developed using KUTTL. Before developing a new test, [learn how KUTTL tests work](https://kuttl.dev/docs/what-is-kuttl.html).
To add a new suite, create a new folder named after the suite under `tests/e2e`.
Each suite folder contains:
* `Makefile`: describes the rules for rendering the files needed for your tests and for running the tests
* `render.sh`: renders all the files needed for your tests (or to skip them)
* A folder per test to run
When the tests are rendered, each test folder is copied to `_build`. The files generated
by `render.sh` are created under `_build/<test name>`.
##### Makefile
The `Makefile` file must contain two rules:
```Makefile
render-e2e-tests-<suite name>: set-assert-e2e-img-name
./tests/e2e/<suite name>/render.sh
run-e2e-tests-<suite name>: TEST_SUITE_NAME=<suite name>
run-e2e-tests-<suite name>: run-suite-tests
```
Where `<suite name>` is the name of your E2E test suite. Your E2E test suite
will be automatically indexed in the `run-e2e-tests` Makefile target.
##### render.sh
This file renders all the YAML files that are part of the E2E test. The `render.sh`
file must start with:
```bash
#!/bin/bash
source $(dirname "$0")/../render-utils.sh
```
The `render-utils.sh` file contains multiple functions that make it easier to develop E2E tests and reuse logic. Review the documentation of each function to
understand its parameters and effects.
#### Building [OCI Images](https://github.com/opencontainers/image-spec/blob/master/spec.md) for multiple arch (linux/arm64, linux/amd64)
OCI images can be built and published with [buildx](https://github.com/docker/buildx). To run it for a local test, execute:
```sh
$ OPERATOR_VERSION=devel ./.ci/publish-images.sh
```
To support more architectures, adjust the `--platform=linux/amd64,linux/arm64` option accordingly.
To execute this in a local environment, buildx needs to be set up first:
1. install docker cli plugin
```sh
$ export DOCKER_BUILDKIT=1
$ docker build --platform=local -o . git://github.com/docker/buildx
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx
```
(via https://github.com/docker/buildx#with-buildx-or-docker-1903)
2. install qemu for multi arch
```sh
$ docker run --privileged --rm tonistiigi/binfmt --install all
```
(via https://github.com/docker/buildx#building-multi-platform-images)
3. create a builder
```sh
$ docker buildx create --use --name builder
```


@@ -1,157 +0,0 @@
# How to Contribute to Jaeger
We'd love your help!
Jaeger is [Apache 2.0 licensed](./LICENSE) and accepts contributions via GitHub
pull requests. This document outlines some of the conventions on development
workflow, commit message formatting, contact points and other resources to make
it easier to get your contribution accepted.
We gratefully welcome improvements to documentation as well as to code.
Table of Contents:
* [Making a Change](#making-a-change)
* [License](#license)
* [Certificate of Origin - Sign your work](#certificate-of-origin---sign-your-work)
* [Branches](#branches)
## Making a Change
**Before making any significant changes, please open an issue**. Each issue
should describe the following:
* Requirement - what kind of business use case are you trying to solve?
* Problem - what in Jaeger blocks you from solving the requirement?
* Proposal - what do you suggest to solve the problem or improve the existing
situation?
* Any open questions to address
Discussing your proposed changes ahead of time will make the contribution
process smooth for everyone. Once the approach is agreed upon, make your changes
and open a pull request (PR). Each PR should describe:
* Which problem it is solving. Normally it should be simply a reference to the
corresponding issue, e.g. `Resolves #123`.
* What changes are made to achieve that.
Your pull request is most likely to be accepted if **each commit**:
* Has a [good commit message][good-commit-msg]. In summary:
* Separate subject from body with a blank line
* Limit the subject line to 50 characters
* Capitalize the subject line
* Do not end the subject line with a period
* Use the imperative mood in the subject line
* Wrap the body at 72 characters
* Use the body to explain _what_ and _why_ instead of _how_
* Has been signed by the author ([see below](#certificate-of-origin---sign-your-work)).
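Some of the subject-line rules above can even be checked mechanically. The snippet below is an illustrative helper only, not part of the Jaeger tooling:

```sh
#!/bin/sh
# Sanity-check a commit subject line against the guidelines above.
subject="Add support for Cassandra create-schema job"

# Limit the subject line to 50 characters
if [ "${#subject}" -le 50 ]; then
  echo "subject length ok"
fi

# Do not end the subject line with a period
case "$subject" in
  *.) echo "warning: subject ends with a period" ;;
  *)  echo "no trailing period" ;;
esac
```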
## License
By contributing your code, you agree to license your contribution under the
terms of the [Apache License](./LICENSE).
If you are adding a new file it should have a header like below. In some
languages, e.g. Python, you may need to change the comments to start with `#`.
The easiest way is to copy the header from one of the existing source files and
make sure the year is current and the copyright says "The Jaeger Authors".
```
// Copyright (c) 2019 The Jaeger Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
```
## Certificate of Origin - Sign your work
By contributing to this project you agree to the
[Developer Certificate of Origin](https://developercertificate.org/) (or simply
[DCO](./DCO)). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution.
The sign-off is a simple line at the end of the explanation for the patch, which
certifies that you wrote it or otherwise have the right to pass it on as an
open-source patch. The rules are pretty simple: if you can certify the
conditions in the [DCO](./DCO), then just add a line to every git commit
message:
Signed-off-by: Bender Bending Rodriguez <bender.is.great@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions.) You can
add the sign off when creating the git commit via `git commit -s`.
If you want signing to be automatic, you can set up git aliases.
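For example (the alias name `cs` is just an illustration, not a project-mandated convention):

```sh
# Make "git cs" shorthand for a signed-off commit
git config --global alias.cs 'commit -s'

# From now on, "git cs -m 'My change'" behaves like
# "git commit -s -m 'My change'"
```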
### Missing sign-offs
Note that **every commit in the pull request must be signed**. Jaeger
repositories are configured with a [DCO-bot][dco-bot] that will check sign-offs
on every commit and block the PR from being merged if some commits are missing
sign-offs. If you only have one commit or the latest commit in the PR is missing
a sign-off, the simplest way to fix this is to run:
```
git commit --amend -s
```
which will prompt you to edit the commit message while adding a signature.
Simply accept the text as is, and push the branch:
```
git push --force
```
If some commit in the middle of your commit history is missing the sign-off, the
simplest solution is to squash the commits into one and sign it. For example,
suppose that your branch history looks like this:
```
fe43631 - Fix HotROD Docker command
933efb3 - Add files for ingester
214c133 - Rename gas to gosec
0a40309 - Update Makefile build_ui target to lerna structure
7919cd9 - Add support for Cassandra reconnect interval
a0dc40e - Fix deploy step
77a0573 - (tag: v1.6.0) Prepare release 1.6.0
```
Let's assume that the first commit `77a0573` was the commit before you started
work on your PR, and commits from `a0dc40e` to `fe43631` are your changes that
you want to squash. You can run the soft reset command:
```
git reset --soft 77a0573
```
It will undo all changes after commit `77a0573` and stage them. You can commit
them all at once while adding the signature:
```
git commit -s -m 'your commit message, e.g. the PR title'
```
Then push the branch:
```
git push --force
```
[good-commit-msg]: https://chris.beams.io/posts/git-commit/
[dco-bot]: https://github.com/probot/dco#how-it-works
## Branches
The upstream repository should contain only maintenance branches (e.g. `release-1.0`).
Use your own fork for feature branches.
DCO
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.22@sha256:f43c6f049f04cbbaeb28f0aad3eea15274a7d0a7899a617d0037aec48d7ab010 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
COPY hack/install/install-dependencies.sh hack/install/
COPY hack/install/install-utils.sh hack/install/
COPY go.mod .
COPY go.sum .
RUN ./hack/install/install-dependencies.sh
# Copy the go source
COPY main.go main.go
COPY apis/ apis/
COPY cmd/ cmd/
COPY controllers/ controllers/
COPY pkg/ pkg/
COPY versions.txt versions.txt
ARG JAEGER_VERSION
ARG JAEGER_AGENT_VERSION
ARG VERSION_PKG
ARG VERSION
ARG VERSION_DATE
# `FROM --platform=${BUILDPLATFORM}` prepares the builder image for the build
# platform (e.g. linux/amd64) instead of the target platform. This avoids QEMU
# emulation, which slows down compilation, and works well for languages with
# native cross-compilation support such as Go.
# see the last part of https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images
ARG TARGETARCH
# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} GO111MODULE=on go build -ldflags="-X ${VERSION_PKG}.version=${VERSION} -X ${VERSION_PKG}.buildDate=${VERSION_DATE} -X ${VERSION_PKG}.defaultJaeger=${JAEGER_VERSION} -X ${VERSION_PKG}.defaultAgent=${JAEGER_AGENT_VERSION}" -a -o jaeger-operator main.go
FROM quay.io/centos/centos:stream9
ENV USER_UID=1001 \
USER_NAME=jaeger-operator
RUN INSTALL_PKGS="openssl" && \
dnf install -y $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
dnf clean all && \
mkdir /tmp/_working_dir && \
chmod og+w /tmp/_working_dir
WORKDIR /
COPY --from=builder /workspace/jaeger-operator .
COPY scripts/cert_generation.sh scripts/cert_generation.sh
USER ${USER_UID}:${USER_UID}
ENTRYPOINT ["/jaeger-operator"]
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.22@sha256:f43c6f049f04cbbaeb28f0aad3eea15274a7d0a7899a617d0037aec48d7ab010 as builder
WORKDIR /workspace
# Download the dependencies first. This way, if the source code changes but the
# dependencies do not, the image builder can reuse the cached dependency layer
COPY hack/install/install-dependencies.sh hack/install/
COPY hack/install/install-utils.sh hack/install/
COPY go.mod .
COPY go.sum .
RUN ./hack/install/install-dependencies.sh
COPY tests tests
ENV CGO_ENABLED=0
# Build
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o ./reporter -a ./tests/assert-jobs/reporter/main.go
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o ./reporter-otlp -a ./tests/assert-jobs/reporter-otlp/main.go
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o ./query -a ./tests/assert-jobs/query/main.go
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o ./index -a ./tests/assert-jobs/index/main.go
# Use the curl container image to guarantee curl is installed; it is also a
# minimal container image
FROM curlimages/curl@sha256:94e9e444bcba979c2ea12e27ae39bee4cd10bc7041a472c4727a558e213744e6
WORKDIR /
COPY --from=builder /workspace/reporter .
COPY --from=builder /workspace/reporter-otlp .
COPY --from=builder /workspace/query .
COPY --from=builder /workspace/index .
Gopkg.lock (generated file; diff suppressed because it is too large)

Gopkg.toml
# Force dep to vendor the code generators, which aren't imported, only used at dev time.
required = [
"k8s.io/code-generator/cmd/defaulter-gen",
"k8s.io/code-generator/cmd/deepcopy-gen",
"k8s.io/code-generator/cmd/conversion-gen",
"k8s.io/code-generator/cmd/client-gen",
"k8s.io/code-generator/cmd/lister-gen",
"k8s.io/code-generator/cmd/informer-gen",
"k8s.io/kube-openapi/cmd/openapi-gen",
"k8s.io/gengo/args",
"sigs.k8s.io/controller-tools/pkg/crd/generator",
]
[[override]]
name = "k8s.io/code-generator"
# revision for tag "kubernetes-1.13.1"
revision = "c2090bec4d9b1fb25de3812f868accc2bc9ecbae"
[[override]]
name = "k8s.io/kube-openapi"
revision = "0cf8f7e6ed1d2e3d47d02e3b6e559369af24d803"
[[override]]
name = "github.com/go-openapi/spec"
branch = "master"
[[override]]
name = "sigs.k8s.io/controller-tools"
version = "=v0.1.8"
[[override]]
name = "k8s.io/api"
# revision for tag "kubernetes-1.13.1"
revision = "05914d821849570fba9eacfb29466f2d8d3cd229"
[[override]]
name = "k8s.io/apiextensions-apiserver"
# revision for tag "kubernetes-1.13.1"
revision = "0fe22c71c47604641d9aa352c785b7912c200562"
[[override]]
name = "k8s.io/apimachinery"
# revision for tag "kubernetes-1.13.1"
revision = "2b1284ed4c93a43499e781493253e2ac5959c4fd"
[[override]]
name = "k8s.io/client-go"
# revision for tag "kubernetes-1.13.1"
revision = "8d9ed539ba3134352c586810e749e58df4e94e4f"
[[override]]
name = "github.com/coreos/prometheus-operator"
version = "=v0.26.0"
[[override]]
name = "sigs.k8s.io/controller-runtime"
version = "=v0.1.10"
[[constraint]]
name = "github.com/operator-framework/operator-sdk"
version = "=v0.5.0" #osdk_version_annotation
[[constraint]]
name = "github.com/spf13/cobra"
version = "0.0.3"
[[constraint]]
name = "github.com/spf13/viper"
version = "1.1.0"
[[constraint]]
name = "github.com/mitchellh/go-homedir"
version = "v1.0.0"
[[constraint]]
name = "github.com/sirupsen/logrus"
version = "v1.2.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "v1.2.2"
[[constraint]]
name = "github.com/openshift/api"
branch = "release-3.11" # why don't they have tags/versions??
[prune]
go-tests = true
non-go = true
[[prune.project]]
name = "k8s.io/code-generator"
non-go = false
[[prune.project]]
name = "k8s.io/gengo"
non-go = false
Makefile
include tests/e2e/Makefile
# When the VERBOSE variable is set to true, all the commands are shown
ifeq ("$(VERBOSE)","true")
echo_prefix=">>>>"
else
VECHO = @
endif
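The `VECHO`/`echo_prefix` pair implements the usual quiet-by-default recipe
pattern: `$(VECHO)` expands to `@` (suppressing command echo) unless
`VERBOSE=true`. A standalone sketch with a hypothetical `hello` target:
```
tmp=$(mktemp -d) && cd "$tmp"
# Recipe lines must start with a tab, hence the separate printf '\t...'
printf '%s\n' \
  'ifeq ("$(VERBOSE)","true")' \
  'echo_prefix=">>>>"' \
  'else' \
  'VECHO = @' \
  'endif' \
  'hello:' > Makefile
printf '\t$(VECHO)echo hello\n' >> Makefile
make hello               # prints only "hello"
make hello VERBOSE=true  # also echoes the command itself
```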
VERSION_DATE ?= $(shell date -u +'%Y-%m-%dT%H:%M:%SZ')
PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
GOARCH ?= $(shell go env GOARCH)
GOOS ?= $(shell go env GOOS)
GO_FLAGS ?= GOOS=$(GOOS) GOARCH=$(GOARCH) CGO_ENABLED=0 GO111MODULE=on
GOPATH ?= "$(HOME)/go"
GOROOT ?= "$(shell go env GOROOT)"
GO_FLAGS ?= GOOS=linux GOARCH=amd64 CGO_ENABLED=0
KUBERNETES_CONFIG ?= "$(HOME)/.kube/config"
WATCH_NAMESPACE ?= ""
BIN_DIR ?= bin
BIN_DIR ?= "build/_output/bin"
IMPORT_LOG=import.log
FMT_LOG=fmt.log
ECHO ?= @echo $(echo_prefix)
SED ?= "sed"
# Jaeger Operator build variables
OPERATOR_NAME ?= jaeger-operator
IMG_PREFIX ?= quay.io/${USER}
OPERATOR_VERSION ?= "$(shell grep -v '\#' versions.txt | grep operator | awk -F= '{print $$2}')"
VERSION ?= "$(shell grep operator= versions.txt | awk -F= '{print $$2}')"
IMG ?= ${IMG_PREFIX}/${OPERATOR_NAME}:${VERSION}
BUNDLE_IMG ?= ${IMG_PREFIX}/${OPERATOR_NAME}-bundle:$(addprefix v,${VERSION})
OUTPUT_BINARY ?= "$(BIN_DIR)/jaeger-operator"
NAMESPACE ?= "$(USER)"
BUILD_IMAGE ?= "$(NAMESPACE)/$(OPERATOR_NAME):latest"
OUTPUT_BINARY ?= "$(BIN_DIR)/$(OPERATOR_NAME)"
VERSION_PKG ?= "github.com/jaegertracing/jaeger-operator/pkg/version"
export JAEGER_VERSION ?= "$(shell grep jaeger= versions.txt | awk -F= '{print $$2}')"
# The agent was removed in Jaeger 1.62.0, and newer versions of Jaeger no longer
# distribute the agent images. For that reason the last agent version, 1.62.0, is
# pinned here so we can update Jaeger while keeping the latest agent image.
export JAEGER_AGENT_VERSION ?= "1.62.0"
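The version variables above are scraped out of `versions.txt` with a grep/awk
pipeline; in isolation (file contents invented for the demo):
```
tmp=$(mktemp -d) && cd "$tmp"
printf 'jaeger=1.62.0\noperator=1.57.0\n' > versions.txt
# Select the operator line and print everything after the '=' separator
grep operator= versions.txt | awk -F= '{print $2}'   # -> 1.57.0
```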
# Kafka and Kafka Operator variables
JAEGER_VERSION ?= "$(shell grep -v '\#' jaeger.version)"
OPERATOR_VERSION ?= "$(shell git describe --tags)"
STORAGE_NAMESPACE ?= "${shell kubectl get sa default -o jsonpath='{.metadata.namespace}' || oc project -q}"
KAFKA_NAMESPACE ?= "kafka"
KAFKA_VERSION ?= 0.32.0
KAFKA_EXAMPLE ?= "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/${KAFKA_VERSION}/examples/kafka/kafka-persistent-single.yaml"
KAFKA_YAML ?= "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${KAFKA_VERSION}/strimzi-cluster-operator-${KAFKA_VERSION}.yaml"
# Prometheus Operator variables
PROMETHEUS_OPERATOR_TAG ?= v0.39.0
PROMETHEUS_BUNDLE ?= https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${PROMETHEUS_OPERATOR_TAG}/bundle.yaml
# Metrics server variables
METRICS_SERVER_TAG ?= v0.6.1
METRICS_SERVER_YAML ?= https://github.com/kubernetes-sigs/metrics-server/releases/download/${METRICS_SERVER_TAG}/components.yaml
# Ingress controller variables
INGRESS_CONTROLLER_TAG ?= v1.0.1
INGRESS_CONTROLLER_YAML ?= https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-${INGRESS_CONTROLLER_TAG}/deploy/static/provider/kind/deploy.yaml
## Location to install tool dependencies
LOCALBIN ?= $(shell pwd)/bin
# Cert manager version to use
CERTMANAGER_VERSION ?= 1.6.1
CMCTL ?= $(LOCALBIN)/cmctl
# Operator SDK
OPERATOR_SDK ?= $(LOCALBIN)/operator-sdk
OPERATOR_SDK_VERSION ?= 1.32.0
# Minimum Kubernetes and OpenShift versions
MIN_KUBERNETES_VERSION ?= 1.19.0
MIN_OPENSHIFT_VERSION ?= 4.12
# Use a KIND cluster for the E2E tests
USE_KIND_CLUSTER ?= true
# Is Jaeger Operator installed via OLM?
JAEGER_OLM ?= false
# Is Kafka Operator installed via OLM?
KAFKA_OLM ?= false
# Is Prometheus Operator installed via OLM?
PROMETHEUS_OLM ?= false
# Istio binary path and version
ISTIOCTL ?= $(LOCALBIN)/istioctl
# Tools
CRDOC ?= $(LOCALBIN)/crdoc
KIND ?= $(LOCALBIN)/kind
KUSTOMIZE ?= $(LOCALBIN)/kustomize
ES_OPERATOR_NAMESPACE = openshift-logging
$(LOCALBIN):
mkdir -p $(LOCALBIN)
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
GOBIN=$(shell go env GOPATH)/bin
else
GOBIN=$(shell go env GOBIN)
endif
LD_FLAGS ?= "-X $(VERSION_PKG).version=$(VERSION) -X $(VERSION_PKG).buildDate=$(VERSION_DATE) -X $(VERSION_PKG).defaultJaeger=$(JAEGER_VERSION) -X $(VERSION_PKG).defaultAgent=$(JAEGER_AGENT_VERSION)"
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST ?= $(LOCALBIN)/setup-envtest
ENVTEST_K8S_VERSION = 1.30
# Options for KIND version to use
export KUBE_VERSION ?= 1.30
KIND_CONFIG ?= kind-$(KUBE_VERSION).yaml
SCORECARD_TEST_IMG ?= quay.io/operator-framework/scorecard-test:v$(OPERATOR_SDK_VERSION)
LD_FLAGS ?= "-X $(VERSION_PKG).version=$(OPERATOR_VERSION) -X $(VERSION_PKG).buildDate=$(VERSION_DATE) -X $(VERSION_PKG).defaultJaeger=$(JAEGER_VERSION)"
PACKAGES := $(shell go list ./cmd/... ./pkg/...)
.DEFAULT_GOAL := build
# Options for 'bundle-build'
ifneq ($(origin CHANNELS), undefined)
BUNDLE_CHANNELS := --channels=$(CHANNELS)
endif
ifneq ($(origin DEFAULT_CHANNEL), undefined)
BUNDLE_DEFAULT_CHANNEL := --default-channel=$(DEFAULT_CHANNEL)
endif
BUNDLE_METADATA_OPTS ?= $(BUNDLE_CHANNELS) $(BUNDLE_DEFAULT_CHANNEL)
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:maxDescLen=0,generateEmbeddedObjectMeta=true"
# If we are running in CI, run go test in verbose mode
ifeq (,$(CI))
GOTEST_OPTS=
else
GOTEST_OPTS=-v
endif
all: manager
.PHONY: check
check: install-tools
$(ECHO) Checking...
$(VECHO)./.ci/format.sh > $(FMT_LOG)
$(VECHO)[ ! -s "$(FMT_LOG)" ] || (echo "Go fmt, license check, or import ordering failures, run 'make format'" | cat - $(FMT_LOG) && false)
ensure-generate-is-noop: VERSION=$(OPERATOR_VERSION)
ensure-generate-is-noop: set-image-controller generate bundle
$(VECHO)# on make bundle config/manager/kustomization.yaml includes changes, which should be ignored for the below check
$(VECHO)git restore config/manager/kustomization.yaml
$(VECHO)git diff -s --exit-code api/v1/zz_generated.*.go || (echo "Build failed: a model has been changed but the generated resources aren't up to date. Run 'make generate' and update your PR." && exit 1)
$(VECHO)git diff -s --exit-code bundle config || (echo "Build failed: the bundle, config files has been changed but the generated bundle, config files aren't up to date. Run 'make bundle' and update your PR." && git diff && exit 1)
$(VECHO)git diff -s --exit-code docs/api.md || (echo "Build failed: the api.md file has been changed but the generated api.md file isn't up to date. Run 'make api-docs' and update your PR." && git diff && exit 1)
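`ensure-generate-is-noop` relies on `git diff -s --exit-code`, which exits
non-zero when a tracked file has uncommitted changes. The check in isolation
(file name is hypothetical):
```
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo v1 > zz_generated.txt && git add zz_generated.txt && git commit -q -m "generated code"
git diff -s --exit-code zz_generated.txt && echo "generated files are up to date"
echo v2 > zz_generated.txt
git diff -s --exit-code zz_generated.txt || echo "run 'make generate' and update your PR"
```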
check:
@echo Checking...
@go fmt $(PACKAGES) > $(FMT_LOG)
@.travis/import-order-cleanup.sh stdout > $(IMPORT_LOG)
@[ ! -s "$(FMT_LOG)" -a ! -s "$(IMPORT_LOG)" ] || (echo "Go fmt, license check, or import ordering failures, run 'make format'" | cat - $(FMT_LOG) $(IMPORT_LOG) && false)
.PHONY: ensure-generate-is-noop
ensure-generate-is-noop: generate
@git diff -s --exit-code pkg/apis/jaegertracing/v1/zz_generated.deepcopy.go || (echo "Build failed: a model has been changed but the deep copy functions aren't up to date. Run 'make generate' and update your PR." && exit 1)
.PHONY: format
format: install-tools
$(ECHO) Formatting code...
$(VECHO)./.ci/format.sh
format:
@echo Formatting code...
@.travis/import-order-cleanup.sh inplace
@go fmt $(PACKAGES)
PHONY: lint
lint: install-tools
$(ECHO) Linting...
$(VECHO)$(LOCALBIN)/golangci-lint -v run
.PHONY: vet
vet: ## Run go vet against code.
go vet ./...
.PHONY: lint
lint:
@echo Linting...
@golint $(PACKAGES)
@gosec -quiet -exclude=G104 $(PACKAGES) 2>/dev/null
.PHONY: build
build: format
$(ECHO) Building...
$(VECHO)./hack/install/install-dependencies.sh
$(VECHO)${GO_FLAGS} go build -ldflags $(LD_FLAGS) -o $(OUTPUT_BINARY) main.go
@echo Building...
@${GO_FLAGS} go build -o $(OUTPUT_BINARY) -ldflags $(LD_FLAGS)
.PHONY: docker
docker:
$(VECHO)[ ! -z "$(PIPELINE)" ] || docker build --build-arg=GOPROXY=${GOPROXY} --build-arg=VERSION=${VERSION} --build-arg=JAEGER_VERSION=${JAEGER_VERSION} --build-arg=JAEGER_AGENT_VERSION=${JAEGER_AGENT_VERSION} --build-arg=TARGETARCH=$(GOARCH) --build-arg VERSION_DATE=${VERSION_DATE} --build-arg VERSION_PKG=${VERSION_PKG} -t "$(IMG)" . ${DOCKER_BUILD_OPTIONS}
.PHONY: dockerx
dockerx:
$(VECHO)[ ! -z "$(PIPELINE)" ] || docker buildx build --push --progress=plain --build-arg=VERSION=${VERSION} --build-arg=JAEGER_VERSION=${JAEGER_VERSION} --build-arg=JAEGER_AGENT_VERSION=${JAEGER_AGENT_VERSION} --build-arg=GOPROXY=${GOPROXY} --build-arg VERSION_DATE=${VERSION_DATE} --build-arg VERSION_PKG=${VERSION_PKG} --platform=$(PLATFORMS) $(IMAGE_TAGS) .
@docker build --file build/Dockerfile -t "$(BUILD_IMAGE)" .
.PHONY: push
push:
ifeq ($(CI),true)
$(ECHO) Skipping push, as the build is running within a CI environment
else
$(ECHO) "Pushing image $(IMG)..."
$(VECHO)docker push $(IMG) > /dev/null
endif
@echo Pushing image $(BUILD_IMAGE)...
@docker push $(BUILD_IMAGE) > /dev/null
.PHONY: unit-tests
unit-tests: envtest
unit-tests:
@echo Running unit tests...
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test -p 1 ${GOTEST_OPTS} ./... -cover -coverprofile=cover.out -ldflags $(LD_FLAGS)
@go test $(PACKAGES) -cover -coverprofile=cover.out
.PHONY: set-node-os-linux
set-node-os-linux:
# Elasticsearch requires labeled nodes. These labels are by default present in OCP 4.2
$(VECHO)kubectl label nodes --all kubernetes.io/os=linux --overwrite
.PHONY: e2e-tests
e2e-tests: prepare-e2e-tests e2e-tests-smoke e2e-tests-cassandra e2e-tests-es e2e-tests-self-provisioned-es
cert-manager: cmctl
# Consider using cmctl to install cert-manager once the install command is no longer experimental
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v${CERTMANAGER_VERSION}/cert-manager.yaml
$(CMCTL) check api --wait=5m
.PHONY: prepare-e2e-tests
prepare-e2e-tests: crd build docker push
@mkdir -p deploy/test
@cp test/role_binding.yaml deploy/test/namespace-manifests.yaml
@echo "---" >> deploy/test/namespace-manifests.yaml
undeploy-cert-manager:
kubectl delete --ignore-not-found=true -f https://github.com/jetstack/cert-manager/releases/download/v${CERTMANAGER_VERSION}/cert-manager.yaml
@cat test/role.yaml >> deploy/test/namespace-manifests.yaml
@echo "---" >> deploy/test/namespace-manifests.yaml
cmctl: $(CMCTL)
$(CMCTL): $(LOCALBIN)
./hack/install/install-cmctl.sh $(CERTMANAGER_VERSION)
@cat test/service_account.yaml >> deploy/test/namespace-manifests.yaml
@echo "---" >> deploy/test/namespace-manifests.yaml
@cat test/operator.yaml | sed "s~image: jaegertracing\/jaeger-operator\:.*~image: $(BUILD_IMAGE)~gi" >> deploy/test/namespace-manifests.yaml
.PHONY: e2e-tests-smoke
e2e-tests-smoke: prepare-e2e-tests
@echo Running Smoke end-to-end tests...
@go test -tags=smoke ./test/e2e/... -kubeconfig $(KUBERNETES_CONFIG) -namespacedMan ../../deploy/test/namespace-manifests.yaml -globalMan ../../deploy/crds/jaegertracing_v1_jaeger_crd.yaml -root .
.PHONY: e2e-tests-cassandra
e2e-tests-cassandra: prepare-e2e-tests cassandra
@echo Running Cassandra end-to-end tests...
@STORAGE_NAMESPACE=$(STORAGE_NAMESPACE) go test -tags=cassandra ./test/e2e/... -kubeconfig $(KUBERNETES_CONFIG) -namespacedMan ../../deploy/test/namespace-manifests.yaml -globalMan ../../deploy/crds/jaegertracing_v1_jaeger_crd.yaml -root .
.PHONY: e2e-tests-es
e2e-tests-es: prepare-e2e-tests es
@echo Running Elasticsearch end-to-end tests...
@STORAGE_NAMESPACE=$(STORAGE_NAMESPACE) go test -tags=elasticsearch ./test/e2e/... -kubeconfig $(KUBERNETES_CONFIG) -namespacedMan ../../deploy/test/namespace-manifests.yaml -globalMan ../../deploy/crds/jaegertracing_v1_jaeger_crd.yaml -root .
.PHONY: e2e-tests-self-provisioned-es
e2e-tests-self-provisioned-es: prepare-e2e-tests deploy-es-operator
@echo Running Self provisioned Elasticsearch end-to-end tests...
@go test -tags=self_provisioned_elasticsearch ./test/e2e/... -kubeconfig $(KUBERNETES_CONFIG) -namespacedMan ../../deploy/test/namespace-manifests.yaml -globalMan ../../deploy/crds/jaegertracing_v1_jaeger_crd.yaml -root .
.PHONY: run
run: crd
@rm -rf /tmp/_cert*
@bash -c 'trap "exit 0" INT; OPERATOR_NAME=${OPERATOR_NAME} KUBERNETES_CONFIG=${KUBERNETES_CONFIG} WATCH_NAMESPACE=${WATCH_NAMESPACE} go run -ldflags ${LD_FLAGS} main.go start'
.PHONY: set-max-map-count
set-max-map-count:
@minishift ssh -- 'sudo sysctl -w vm.max_map_count=262144' > /dev/null 2>&1 || true
.PHONY: deploy-es-operator
deploy-es-operator: set-max-map-count
@kubectl create namespace ${ES_OPERATOR_NAMESPACE} 2>&1 | grep -v "already exists" || true
@kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheusrule.crd.yaml
@kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/servicemonitor.crd.yaml
@kubectl apply -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/master/manifests/01-service-account.yaml -n ${ES_OPERATOR_NAMESPACE}
@kubectl apply -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/master/manifests/02-role.yaml
@kubectl apply -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/master/manifests/03-role-bindings.yaml
@kubectl apply -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/master/manifests/04-crd.yaml -n ${ES_OPERATOR_NAMESPACE}
@kubectl apply -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/master/manifests/05-deployment.yaml -n ${ES_OPERATOR_NAMESPACE}
.PHONY: es
es: storage
ifeq ($(SKIP_ES_EXTERNAL),true)
$(ECHO) Skipping creation of external Elasticsearch instance
else
$(VECHO)kubectl create -f ./tests/elasticsearch.yml --namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
endif
.PHONY: istio
istio:
$(ECHO) Install istio with minimal profile
$(VECHO)./hack/install/install-istio.sh
$(VECHO)${ISTIOCTL} install --set profile=minimal -y
.PHONY: undeploy-istio
undeploy-istio:
$(VECHO)${ISTIOCTL} manifest generate --set profile=demo | kubectl delete --ignore-not-found=true -f - || true
$(VECHO)kubectl delete namespace istio-system --ignore-not-found=true || true
@kubectl create -f ./test/elasticsearch.yml --namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
.PHONY: cassandra
cassandra: storage
$(VECHO)kubectl create -f ./tests/cassandra.yml --namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
@kubectl create -f ./test/cassandra.yml --namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
.PHONY: storage
storage:
$(ECHO) Creating namespace $(STORAGE_NAMESPACE)
$(VECHO)kubectl create namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
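The `2>&1 | grep -v "already exists" || true` idiom used throughout these
targets makes the `create` calls idempotent: the expected error text is
filtered out and the non-zero exit status is swallowed. The same pattern with
`mkdir` standing in for `kubectl create` (path is made up):
```
demo_dir=$(mktemp -d)/ns
mkdir "$demo_dir"
# Second create fails, but the pipeline hides the expected error and succeeds
mkdir "$demo_dir" 2>&1 | grep -v "File exists" || true
```
Unexpected errors still reach the log, since only the "already exists" (here
"File exists") lines are filtered.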
.PHONY: deploy-kafka-operator
deploy-kafka-operator:
$(ECHO) Creating namespace $(KAFKA_NAMESPACE)
$(VECHO)kubectl create namespace $(KAFKA_NAMESPACE) 2>&1 | grep -v "already exists" || true
ifeq ($(KAFKA_OLM),true)
$(ECHO) Skipping kafka-operator deployment, assuming it has been installed via OperatorHub
else
$(VECHO)curl --fail --location https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.32.0/strimzi-0.32.0.tar.gz --output tests/_build/kafka-operator.tar.gz --create-dirs
$(VECHO)tar xf tests/_build/kafka-operator.tar.gz
$(VECHO)${SED} -i 's/namespace: .*/namespace: ${KAFKA_NAMESPACE}/' strimzi-${KAFKA_VERSION}/install/cluster-operator/*RoleBinding*.yaml
$(VECHO)kubectl create -f strimzi-${KAFKA_VERSION}/install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n ${KAFKA_NAMESPACE}
$(VECHO)kubectl create -f strimzi-${KAFKA_VERSION}/install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n ${KAFKA_NAMESPACE}
$(VECHO)kubectl create -f strimzi-${KAFKA_VERSION}/install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n ${KAFKA_NAMESPACE}
$(VECHO)kubectl apply -f strimzi-${KAFKA_VERSION}/install/cluster-operator/ -n ${KAFKA_NAMESPACE}
endif
.PHONY: undeploy-kafka-operator
undeploy-kafka-operator:
ifeq ($(KAFKA_OLM),true)
$(ECHO) Skipping kafka-operator undeploy
else
$(VECHO)kubectl delete --namespace $(KAFKA_NAMESPACE) -f tests/_build/kafka-operator.yaml --ignore-not-found=true 2>&1 || true
$(VECHO)kubectl delete clusterrolebinding strimzi-cluster-operator-namespaced --ignore-not-found=true || true
$(VECHO)kubectl delete clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --ignore-not-found=true || true
$(VECHO)kubectl delete clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --ignore-not-found=true || true
endif
.PHONY: kafka
kafka: deploy-kafka-operator
ifeq ($(SKIP_KAFKA),true)
$(ECHO) Skipping Kafka/external ES related tests
else
$(ECHO) Creating namespace $(KAFKA_NAMESPACE)
$(VECHO)mkdir -p tests/_build/
$(VECHO)kubectl create namespace $(KAFKA_NAMESPACE) 2>&1 | grep -v "already exists" || true
$(VECHO)curl --fail --location $(KAFKA_EXAMPLE) --output tests/_build/kafka-example.yaml --create-dirs
$(VECHO)${SED} -i 's/size: 100Gi/size: 10Gi/g' tests/_build/kafka-example.yaml
$(VECHO)kubectl -n $(KAFKA_NAMESPACE) apply --dry-run=client -f tests/_build/kafka-example.yaml
$(VECHO)kubectl -n $(KAFKA_NAMESPACE) apply -f tests/_build/kafka-example.yaml 2>&1 | grep -v "already exists" || true
endif
.PHONY: undeploy-kafka
undeploy-kafka: undeploy-kafka-operator
$(VECHO)kubectl delete --namespace $(KAFKA_NAMESPACE) -f tests/_build/kafka-example.yaml 2>&1 || true
.PHONY: deploy-prometheus-operator
deploy-prometheus-operator:
ifeq ($(PROMETHEUS_OLM),true)
$(ECHO) Skipping prometheus-operator deployment, assuming it has been installed via OperatorHub
else
$(VECHO)kubectl apply -f ${PROMETHEUS_BUNDLE}
endif
.PHONY: undeploy-prometheus-operator
undeploy-prometheus-operator:
ifeq ($(PROMETHEUS_OLM),true)
$(ECHO) Skipping prometheus-operator undeployment, as it should have been installed via OperatorHub
else
$(VECHO)kubectl delete -f ${PROMETHEUS_BUNDLE} --ignore-not-found=true || true
endif
@echo Creating namespace $(STORAGE_NAMESPACE)
@kubectl create namespace $(STORAGE_NAMESPACE) 2>&1 | grep -v "already exists" || true
.PHONY: clean
clean: undeploy-kafka undeploy-prometheus-operator undeploy-istio undeploy-cert-manager
$(VECHO)kubectl delete namespace $(KAFKA_NAMESPACE) --ignore-not-found=true 2>&1 || true
$(VECHO)if [ -d tests/_build ]; then rm -rf tests/_build ; fi
$(VECHO)kubectl delete -f ./tests/cassandra.yml --ignore-not-found=true -n $(STORAGE_NAMESPACE) || true
$(VECHO)kubectl delete -f ./tests/elasticsearch.yml --ignore-not-found=true -n $(STORAGE_NAMESPACE) || true
clean:
@rm -f deploy/test/*.yaml
@if [ -d deploy/test ]; then rmdir deploy/test ; fi
@kubectl delete -f ./test/cassandra.yml --ignore-not-found=true -n $(STORAGE_NAMESPACE) || true
@kubectl delete -f ./test/elasticsearch.yml --ignore-not-found=true -n $(STORAGE_NAMESPACE) || true
@kubectl delete namespace ${ES_OPERATOR_NAMESPACE} || true
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
.PHONY: crd
crd:
@kubectl create -f deploy/crds/jaegertracing_v1_jaeger_crd.yaml 2>&1 | grep -v "already exists" || true
.PHONY: ingress
ingress:
# see https://kubernetes.github.io/ingress-nginx/deploy/#verify-installation
@kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.18.0/deploy/mandatory.yaml
@minikube addons enable ingress
.PHONY: generate
generate: controller-gen api-docs ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
generate:
@operator-sdk generate k8s
.PHONY: test
test: unit-tests run-e2e-tests
test: unit-tests e2e-tests
.PHONY: all
all: check format lint build test
.PHONY: ci
ci: install-tools ensure-generate-is-noop check format lint build unit-tests
ci: ensure-generate-is-noop check format lint build unit-tests
##@ Deployment
ignore-not-found ?= false
.PHONY: install
install: manifests kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/crd | kubectl apply -f -
.PHONY: uninstall
uninstall: manifests kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/crd | kubectl delete --ignore-not-found=$(ignore-not-found) -f -
.PHONY: deploy
deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
kubectl create namespace observability 2>&1 | grep -v "already exists" || true
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
./hack/enable-operator-features.sh
$(KUSTOMIZE) build config/default | kubectl apply -f -
.PHONY: undeploy
undeploy: kustomize ## Undeploy controller from the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=$(ignore-not-found) -f -
.PHONY: operatorhub
operatorhub: check-operatorhub-pr-template
$(VECHO)./.ci/operatorhub.sh
.PHONY: check-operatorhub-pr-template
check-operatorhub-pr-template:
$(VECHO)curl https://raw.githubusercontent.com/operator-framework/community-operators/master/docs/pull_request_template.md -o .ci/.operatorhub-pr-template.md -s > /dev/null 2>&1
$(VECHO)git diff -s --exit-code .ci/.operatorhub-pr-template.md || (echo "Build failed: the PR template for OperatorHub has changed. Sync it and try again." && exit 1)
.PHONY: changelog
changelog:
$(ECHO) "Set env variable OAUTH_TOKEN before invoking, https://github.com/settings/tokens/new?description=GitHub%20Changelog%20Generator%20token"
$(VECHO)docker run --rm -v "${PWD}:/app" pavolloffay/gch:latest --oauth-token ${OAUTH_TOKEN} --branch main --owner jaegertracing --repo jaeger-operator
CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
controller-gen: ## Download controller-gen locally if necessary.
$(VECHO)./hack/install/install-controller-gen.sh
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
test -s $(ENVTEST) || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
.PHONY: bundle
bundle: manifests kustomize operator-sdk ## Generate bundle manifests and metadata, then validate generated files.
$(SED) -i "s#containerImage: quay.io/jaegertracing/jaeger-operator:$(OPERATOR_VERSION)#containerImage: quay.io/jaegertracing/jaeger-operator:$(VERSION)#g" config/manifests/bases/jaeger-operator.clusterserviceversion.yaml
$(SED) -i 's/minKubeVersion: .*/minKubeVersion: $(MIN_KUBERNETES_VERSION)/' config/manifests/bases/jaeger-operator.clusterserviceversion.yaml
$(SED) -i 's/com.redhat.openshift.versions=.*/com.redhat.openshift.versions=v$(MIN_OPENSHIFT_VERSION)/' bundle.Dockerfile
$(SED) -i 's/com.redhat.openshift.versions: .*/com.redhat.openshift.versions: v$(MIN_OPENSHIFT_VERSION)/' bundle/metadata/annotations.yaml
$(OPERATOR_SDK) generate kustomize manifests -q
cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
$(KUSTOMIZE) build config/manifests | $(OPERATOR_SDK) generate bundle -q --overwrite --manifests --version $(VERSION) $(BUNDLE_METADATA_OPTS)
$(OPERATOR_SDK) bundle validate ./bundle
./hack/ignore-createdAt-bundle.sh
.PHONY: bundle-build
bundle-build: ## Build the bundle image.
docker build -f bundle.Dockerfile -t $(BUNDLE_IMG) .
.PHONY: bundle-push
bundle-push: ## Push the bundle image.
docker push $(BUNDLE_IMG)
.PHONY: opm
OPM = ./bin/opm
opm: ## Download opm locally if necessary.
ifeq (,$(wildcard $(OPM)))
ifeq (,$(shell which opm 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPM)) ;\
OS=$(shell go env GOOS) && ARCH=$(shell go env GOARCH) && \
curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.15.1/$${OS}-$${ARCH}-opm ;\
chmod +x $(OPM) ;\
}
else
OPM = $(shell which opm)
endif
endif
# A comma-separated list of bundle images (e.g. make catalog-build BUNDLE_IMGS=example.com/operator-bundle:v0.1.0,example.com/operator-bundle:v0.2.0).
# These images MUST exist in a registry and be pull-able.
BUNDLE_IMGS ?= $(BUNDLE_IMG)
# The image tag given to the resulting catalog image (e.g. make catalog-build CATALOG_IMG=example.com/operator-catalog:v0.2.0).
CATALOG_IMG ?= $(IMAGE_TAG_BASE)-catalog:v$(VERSION)
# Set CATALOG_BASE_IMG to an existing catalog image tag to add $BUNDLE_IMGS to that image.
ifneq ($(origin CATALOG_BASE_IMG), undefined)
FROM_INDEX_OPT := --from-index $(CATALOG_BASE_IMG)
endif
# Build a catalog image by adding bundle images to an empty catalog using the operator package manager tool, 'opm'.
# This recipe invokes 'opm' in 'semver' bundle add mode. For more information on add modes, see:
# https://github.com/operator-framework/community-operators/blob/7f1438c/docs/packaging-operator.md#updating-your-existing-operator
.PHONY: catalog-build
catalog-build: opm ## Build a catalog image.
$(OPM) index add --container-tool docker --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)
# Push the catalog image.
.PHONY: catalog-push
catalog-push: ## Push a catalog image.
$(MAKE) docker-push IMG=$(CATALOG_IMG)
.PHONY: start-kind
start-kind: kind
ifeq ($(USE_KIND_CLUSTER),true)
$(ECHO) Starting KIND cluster...
# Instead of letting KUTTL create the Kind cluster (using the CLI or the kuttl-tests.yaml
# file), the cluster is created here. There are multiple reasons for this:
# * The kubectl command will not work outside KUTTL
# * Some KUTTL versions are not able to properly start a Kind cluster
# * The cluster will be removed after running KUTTL (this can be disabled). Sometimes,
#   the cluster teardown is not done properly and KUTTL cannot be run with the --start-kind flag
# When the Kind cluster is not created by KUTTL, the kindContainers parameter
# from kuttl-tests.yaml has no effect, so the container images need to be loaded here.
$(VECHO)$(KIND) create cluster --config $(KIND_CONFIG) 2>&1 | grep -v "already exists" || true
# Install metrics-server for HPA
$(ECHO)"Installing the metrics-server in the kind cluster"
$(VECHO)kubectl apply -f $(METRICS_SERVER_YAML)
$(VECHO)kubectl patch deployment -n kube-system metrics-server --type "json" -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
# Install the ingress-controller
$(ECHO)"Installing the Ingress controller in the kind cluster"
$(VECHO)kubectl apply -f $(INGRESS_CONTROLLER_YAML)
# Check the deployments were done properly
$(ECHO)"Checking the metrics-server was deployed properly"
$(VECHO)kubectl wait --for=condition=available deployment/metrics-server -n kube-system --timeout=5m
$(ECHO)"Checking the Ingress controller deployment was done successfully"
$(VECHO)kubectl wait --for=condition=available deployment ingress-nginx-controller -n ingress-nginx --timeout=5m
else
$(ECHO)"KIND cluster creation disabled. Skipping..."
endif
.PHONY: stop-kind
stop-kind:
$(ECHO)"Stopping the kind cluster"
$(VECHO)kind delete cluster
.PHONY: install-git-hooks
install-git-hooks:
$(VECHO)cp scripts/git-hooks/pre-commit .git/hooks
# Generates the released manifests
release-artifacts: set-image-controller
mkdir -p dist
$(KUSTOMIZE) build config/default -o dist/jaeger-operator.yaml
# Set the controller image parameters
set-image-controller: manifests kustomize
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
.PHONY: tools
tools: kustomize controller-gen operator-sdk
.PHONY: install-tools
install-tools: operator-sdk
$(VECHO)./hack/install/install-golangci-lint.sh
$(VECHO)./hack/install/install-goimports.sh
.PHONY: kustomize
kustomize: $(KUSTOMIZE)
$(KUSTOMIZE): $(LOCALBIN)
./hack/install/install-kustomize.sh
.PHONY: kind
kind: $(KIND)
$(KIND): $(LOCALBIN)
./hack/install/install-kind.sh
.PHONY: prepare-release
prepare-release:
$(VECHO)./.ci/prepare-release.sh
scorecard-tests: operator-sdk
echo "Operator sdk is $(OPERATOR_SDK)"
$(OPERATOR_SDK) scorecard bundle -w 10m || (echo "scorecard test failed" && exit 1)
scorecard-tests-local: kind
$(VECHO)$(KIND) create cluster --config $(KIND_CONFIG) 2>&1 | grep -v "already exists" || true
$(VECHO)docker pull $(SCORECARD_TEST_IMG)
$(VECHO)$(KIND) load docker-image $(SCORECARD_TEST_IMG)
$(VECHO)kubectl wait --timeout=5m --for=condition=available deployment/coredns -n kube-system
$(VECHO)$(MAKE) scorecard-tests
.PHONY: operator-sdk
operator-sdk: $(OPERATOR_SDK)
$(OPERATOR_SDK): $(LOCALBIN)
test -s $(OPERATOR_SDK) || curl -sLo $(OPERATOR_SDK) https://github.com/operator-framework/operator-sdk/releases/download/v${OPERATOR_SDK_VERSION}/operator-sdk_`go env GOOS`_`go env GOARCH`
@chmod +x $(OPERATOR_SDK)
api-docs: crdoc kustomize
@{ \
set -e ;\
TMP_DIR=$$(mktemp -d) ; \
$(KUSTOMIZE) build config/crd -o $$TMP_DIR/crd-output.yaml ;\
$(CRDOC) --resources $$TMP_DIR/crd-output.yaml --output docs/api.md ;\
}
.PHONY: crdoc
crdoc: $(CRDOC)
$(CRDOC): $(LOCALBIN)
test -s $(CRDOC) || GOBIN=$(LOCALBIN) go install fybrik.io/crdoc@v0.5.2
@chmod +x $(CRDOC)
.PHONY: scorecard
scorecard:
@operator-sdk scorecard --cr-manifest deploy/examples/simplest.yaml --csv-path deploy/olm-catalog/jaeger.clusterserviceversion.yaml --init-timeout 30

PROJECT
domain: jaegertracing.io
layout:
- go.kubebuilder.io/v3
multigroup: true
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
projectName: jaeger-operator
repo: github.com/jaegertracing/jaeger-operator
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: jaegertracing.io
kind: Jaeger
path: github.com/jaegertracing/jaeger-operator/apis/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
version: "3"

README.adoc
:toc: macro
image:https://travis-ci.org/jaegertracing/jaeger-operator.svg?branch=master["Build Status", link="https://travis-ci.org/jaegertracing/jaeger-operator"]
image:https://goreportcard.com/badge/github.com/jaegertracing/jaeger-operator["Go Report Card", link="https://goreportcard.com/report/github.com/jaegertracing/jaeger-operator"]
image:https://codecov.io/gh/jaegertracing/jaeger-operator/branch/master/graph/badge.svg["Code Coverage", link="https://codecov.io/gh/jaegertracing/jaeger-operator"]
= Jaeger Operator for Kubernetes
toc::[]
IMPORTANT: The Jaeger Operator version is related to the version of the Jaeger components (Query, Collector, Agent) up to the minor portion. The patch version portion does *not* follow the ones from the Jaeger components. For instance, the Operator version 1.8.1 uses the Jaeger Docker images tagged with version 1.8 by default.
== Installing the operator
NOTE: The following instructions will deploy a version of the operator that is using the latest `master` version. If
you want to install a particular stable version of the operator, you will need to edit the `operator.yaml` and specify
the version as the tag in the container image - and then use the relevant `apiVersion` for the Jaeger operator.
|===
|Up to version |apiVersion |CRD yaml
|master
|jaegertracing.io/v1
|https://github.com/jaegertracing/jaeger-operator/blob/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml[jaegertracing_v1_jaeger_crd.yaml]
|1.10.0
|io.jaegertracing/v1alpha1
|https://github.com/jaegertracing/jaeger-operator/blob/master/deploy/crds/io_v1alpha1_jaeger_crd.yaml[io_v1alpha1_jaeger_crd.yaml]
|===
=== Kubernetes
NOTE: Make sure your `kubectl` command is properly configured to talk to a valid Kubernetes cluster. If you don't have one yet, check link:https://kubernetes.io/docs/tasks/tools/install-minikube/[`minikube`] out.
To install the operator, run:
[source,bash]
----
kubectl create namespace observability # <1>
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml # <2>
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
----
<1> This creates the namespace used by default in the deployment files.
<2> This installs the "Custom Resource Definition" for the `apiVersion: jaegertracing.io/v1`
IMPORTANT: when using a Jaeger Operator up to v1.10.0, install the CRD file `io_v1alpha1_jaeger_crd.yaml` in addition to `jaegertracing_v1_jaeger_crd.yaml`. This is because up to that version, the `apiVersion` in use was `io.jaegertracing/v1alpha1`.
If you want to install the Jaeger operator in a different namespace, you will need to edit the deployment
files to change `observability` to the required value.
At this point, there should be a `jaeger-operator` deployment available:
[source,bash]
----
$ kubectl get deployment jaeger-operator -n observability
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
jaeger-operator 1 1 1 1 48s
----
The operator is now ready to create Jaeger instances!
=== OpenShift
The instructions from the previous section also work on OpenShift. Make sure to install the RBAC rules, the CRD and the operator as a privileged user, such as `system:admin`.
[source,bash]
----
oc login -u system:admin
oc new-project observability # <1>
oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml # <2>
oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
----
<1> This creates the namespace used by default in the deployment files.
<2> This installs the "Custom Resource Definition" for the `apiVersion: jaegertracing.io/v1`
IMPORTANT: when using a Jaeger Operator up to v1.10.0, install the CRD file `io_v1alpha1_jaeger_crd.yaml` in addition to `jaegertracing_v1_jaeger_crd.yaml`. This is because up to that version, the `apiVersion` in use was `io.jaegertracing/v1alpha1`.
If you want to install the Jaeger operator in a different namespace, you will need to edit the deployment
files to change `observability` to the required value.
Once the operator is installed, grant the role `jaeger-operator` to users who should be able to install individual Jaeger instances. The following example creates a role binding allowing the user `developer` to create Jaeger instances:
[source,bash]
----
oc create \
rolebinding developer-jaeger-operator \
--role=jaeger-operator \
--user=developer
----
After the role is granted, switch back to a non-privileged user.
== Creating a new Jaeger instance
Example custom resources, for different configurations of Jaeger, can be found https://github.com/jaegertracing/jaeger-operator/tree/master/deploy/examples[here].
The simplest possible way to install is by creating a YAML file like the following:
.simplest.yaml
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simplest
----
The YAML file can then be used with `kubectl`:
[source,bash]
----
kubectl apply -f simplest.yaml
----
In a few seconds, a new in-memory all-in-one instance of Jaeger will be available, suitable for quick demos and development purposes. To check the instances that were created, list the `jaeger` objects:
[source,bash]
----
$ kubectl get jaeger
NAME CREATED AT
simplest 28s
----
To get the pod name, query for the pods belonging to the `simplest` Jaeger instance:
[source,bash]
----
$ kubectl get pods -l app.kubernetes.io/instance=simplest
NAME READY STATUS RESTARTS AGE
simplest-6499bb6cdd-kqx75 1/1 Running 0 2m
----
Similarly, the logs can be queried either from the pod directly using the pod name obtained from the previous example, or from all pods belonging to our instance:
[source,bash]
----
$ kubectl logs -l app.kubernetes.io/instance=simplest
...
{"level":"info","ts":1535385688.0951214,"caller":"healthcheck/handler.go:133","msg":"Health Check state change","status":"ready"}
----
NOTE: On OpenShift the container name must be specified
[source,bash]
----
$ kubectl logs -l app.kubernetes.io/instance=simplest -c jaeger
...
{"level":"info","ts":1535385688.0951214,"caller":"healthcheck/handler.go:133","msg":"Health Check state change","status":"ready"}
----
For reference, here's how a more complex all-in-one instance can be created:
.all-in-one.yaml
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: my-jaeger
spec:
strategy: allInOne # <1>
allInOne:
image: jaegertracing/all-in-one:latest # <2>
options: # <3>
log-level: debug # <4>
storage:
type: memory # <5>
options: # <6>
memory: # <7>
max-traces: 100000
ingress:
enabled: false # <8>
agent:
strategy: DaemonSet # <9>
annotations:
scheduler.alpha.kubernetes.io/critical-pod: "" # <10>
----
<1> The default strategy is `allInOne`. The only other possible values are `production` and `streaming`.
<2> The image to use, in a regular Docker syntax
<3> The (non-storage related) options to be passed verbatim to the underlying binary. Refer to the Jaeger documentation and/or to the `--help` option from the related binary for all the available options.
<4> The option is a simple `key: value` map. In this case, we want the option `--log-level=debug` to be passed to the binary.
<5> The storage type to be used. By default it will be `memory`, but it can be any other supported storage type (e.g. `elasticsearch`, `cassandra`, `kafka`).
<6> All storage related options should be placed here, rather than under the 'allInOne' or other component options.
<7> Some options are namespaced and we can alternatively break them into nested objects. We could have specified `memory.max-traces: 100000`.
<8> By default, an ingress object is created for the query service. It can be disabled by setting its `enabled` option to `false`. If deploying on OpenShift, this will be represented by a Route object.
<9> By default, the operator assumes that agents are deployed as sidecars within the target pods. Specifying the strategy as "DaemonSet" changes that and makes the operator deploy the agent as DaemonSet. Note that your tracer client will probably have to override the "JAEGER_AGENT_HOST" env var to use the node's IP.
<10> Define annotations to be applied to all deployments (not services). These can be overridden by annotations defined on the individual components.
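For instance, the namespaced storage option above could equivalently be written in its flattened form (a sketch showing only the `storage` fragment):

[source,yaml]
----
storage:
  type: memory
  options:
    memory.max-traces: 100000 # equivalent to the nested memory/max-traces form above
----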
== Updating a Jaeger instance (experimental)
A Jaeger instance can be updated by changing the `CustomResource`, either via `kubectl edit jaeger simplest`, where `simplest` is the Jaeger's instance name, or by applying the updated YAML file via `kubectl apply -f simplest.yaml`.
IMPORTANT: the name of the Jaeger instance cannot be updated, as it's part of the identifying information for the resource
Simpler changes such as changing the replica sizes can be applied without much concern, whereas changes to the strategy should be watched closely and might potentially cause an outage for individual components (collector/query/agent).
While changing the backing storage is supported, migration of the data is not.
== Strategies
As shown in the example above, the Jaeger instance is associated with a strategy. The strategy determines the architecture to be used for the Jaeger backend.
The available strategies are described in the following sections.
=== AllInOne (Default)
This strategy is intended for development, testing and demo purposes.
The main backend components, agent, collector and query service, are all packaged into a single executable which is configured (by default) to use in-memory storage.
=== Production
The `production` strategy is intended, as the name suggests, for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately.
The agent can be injected as a sidecar on the instrumented application or as a daemonset.
The query and collector services are configured with a supported storage type - currently cassandra or elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes.
The main additional requirement is to provide the details of the storage type and options, e.g.
[source,yaml]
----
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
----
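Putting this together, a minimal `production` instance could look like the following (a sketch; the Elasticsearch URL is illustrative):

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
----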
=== Streaming
The `streaming` strategy augments the `production` strategy by providing a streaming capability that effectively sits between the collector and the backend storage (e.g. Cassandra or Elasticsearch). This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (Kafka).
The only additional information required is to provide the details for accessing the Kafka platform, which is configured in a new `ingester` component:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-streaming
spec:
strategy: streaming
ingester:
options:
kafka: # <1>
topic: jaeger-spans
brokers: my-cluster-kafka-brokers.kafka:9092
ingester:
deadlockInterval: 0 # <2>
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
----
<1> Identifies the kafka configuration used by the collector, to produce the messages, and the ingester to consume the messages
<2> The deadlock interval can be disabled to avoid the ingester being terminated when no messages arrive within the default 1 minute period
== Elasticsearch storage
Under some circumstances, the Jaeger Operator can make use of the link:https://github.com/openshift/elasticsearch-operator[Elasticsearch Operator] to provision a suitable Elasticsearch cluster.
IMPORTANT: this feature is experimental and has been tested only on OpenShift clusters. Elasticsearch also requires the memory setting to be configured like `minishift ssh -- 'sudo sysctl -w vm.max_map_count=262144'`. Spark dependencies are not supported with this feature link:https://github.com/jaegertracing/jaeger-operator/issues/294[#294].
When no `es.server-urls` option is set on a Jaeger `production` instance and `elasticsearch` is set as the storage type, the Jaeger Operator creates an Elasticsearch cluster via the Elasticsearch Operator, by creating a Custom Resource based on the configuration provided in the storage section. The Elasticsearch cluster is meant to be dedicated to a single Jaeger instance.
The self-provisioning of an Elasticsearch cluster can be disabled by setting the flag `--es-provision` to `false`. The default value is `auto`, which makes the Jaeger Operator query Kubernetes for its ability to handle an `Elasticsearch` custom resource. This capability is usually set by the Elasticsearch Operator during its installation process, so, if the Elasticsearch Operator is expected to run *after* the Jaeger Operator, the flag can be set to `true`.
IMPORTANT: At the moment there can be only one Jaeger with self-provisioned Elasticsearch instance per namespace.
== Accessing the UI
=== Kubernetes
The operator creates a Kubernetes link:https://kubernetes.io/docs/concepts/services-networking/ingress/[`ingress`] route, which is the Kubernetes standard for exposing a service to the outside world, but no Ingress provider is included by default. link:https://kubernetes.github.io/ingress-nginx/deploy/#verify-installation[Check the documentation] for the most appropriate way to achieve that on your platform, but the following command should provide a good start on `minikube`:
[source,bash]
----
minikube addons enable ingress
----
Once that is done, the UI can be found by querying the Ingress object:
[source,bash]
----
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
simplest-query * 192.168.122.34 80 3m
----
IMPORTANT: an `Ingress` object is *not* created when the operator is running on OpenShift
In this example, the Jaeger UI is available at http://192.168.122.34
=== OpenShift
When using the `operator-openshift.yaml` resource, the Operator will automatically create a `Route` object for the query services. Check the hostname/port with the following command:
[source,bash]
----
oc get routes
----
NOTE: make sure to use `https` with the hostname/port you get from the command above, otherwise you'll see a message like: "Application is not available".
By default, the Jaeger UI is protected with OpenShift's OAuth service and any valid user is able to login. For development purposes, the user/password combination `developer/developer` can be used. To disable this feature and leave the Jaeger UI unsecured, set the Ingress property `security` to `none`:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: disable-oauth-proxy
spec:
ingress:
security: none
----
== Auto injection of Jaeger Agent sidecars
The operator can also inject Jaeger Agent sidecars in `Deployment` workloads, provided that the deployment has the annotation `sidecar.jaegertracing.io/inject` with a suitable value. The values can be either `"true"` (as string) or the Jaeger instance name, as returned by `kubectl get jaegers`. When `"true"` is used, there should be exactly *one* Jaeger instance in the same namespace as the deployment; otherwise, the operator can't automatically determine which Jaeger instance to use.
The following snippet shows a simple application that will get a sidecar injected, with the Jaeger Agent pointing to the single Jaeger instance available in the same namespace:
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
annotations:
"sidecar.jaegertracing.io/inject": "true" # <1>
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: acme/myapp:myversion
----
<1> Either `"true"` (as string) or the Jaeger instance name
A complete sample deployment is available at link:./deploy/examples/business-application-injected-sidecar.yaml[`deploy/examples/business-application-injected-sidecar.yaml`]
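When more than one Jaeger instance exists in the namespace, the annotation can name the target instance explicitly instead of using `"true"` (a sketch; `my-jaeger` is a hypothetical instance name):

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    "sidecar.jaegertracing.io/inject": "my-jaeger" # the name as returned by kubectl get jaegers
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: acme/myapp:myversion
----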
== Agent as DaemonSet
By default, the Operator expects the agents to be deployed as sidecars to the target applications. This is convenient for several purposes, such as in a multi-tenant scenario or for better load balancing, but there are scenarios where it's desirable to install the agent as a `DaemonSet`. In that case, set the Agent's strategy to `DaemonSet`, as follows:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: my-jaeger
spec:
agent:
strategy: DaemonSet
----
IMPORTANT: if you attempt to install two Jaeger instances on the same cluster with `DaemonSet` as the strategy, only *one* will end up deploying a `DaemonSet`, as the agent is required to bind to well-known ports on the node. Because of that, the second daemon set will fail to bind to those ports.
Your tracer client will then most likely need to be told where the agent is located. This is usually done by setting the env var `JAEGER_AGENT_HOST` to the value of the Kubernetes node's IP, like:
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: acme/myapp:myversion
env:
- name: JAEGER_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
----
== Secrets support
The Operator supports passing secrets to the Collector, Query and All-In-One deployments. This can be used, for example, to pass credentials (username/password) to access the underlying storage backend (e.g. Elasticsearch).
The secrets are available as environment variables in the (Collector/Query/All-In-One) nodes.
[source,yaml]
----
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
secretName: jaeger-secrets
----
The secret itself would be managed outside of the `jaeger-operator` CR.
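For reference, such a secret could be created like the following (a hedged sketch; the key names `ES_USERNAME`/`ES_PASSWORD` and their values are illustrative assumptions, not a documented contract):

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: jaeger-secrets # matches the secretName referenced above
type: Opaque
stringData:
  ES_USERNAME: elastic   # illustrative; keys are exposed as env vars in the pods
  ES_PASSWORD: changeme
----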
== Define sampling strategies
The operator can be used to define sampling strategies that will be supplied to tracers that have been configured
to use a remote sampler:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: with-sampling
spec:
strategy: allInOne
sampling:
options:
default_strategy:
type: probabilistic
param: 50
----
This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being
sampled.
Refer to the Jaeger documentation on link:https://www.jaegertracing.io/docs/latest/sampling/#collector-sampling-configuration[Collector Sampling Configuration] to see how service and endpoint sampling can be configured. The JSON representation described in that documentation can be used in the operator by converting to YAML.
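For example, a per-service strategy from that JSON representation converts to YAML like this (a sketch; the `myapp` service name and the parameter values are illustrative assumptions):

[source,yaml]
----
sampling:
  options:
    default_strategy:
      type: probabilistic
      param: 50
    service_strategies:
      - service: myapp
        type: probabilistic
        param: 80
----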
== Schema migration
=== Cassandra
When the storage type is set to Cassandra, the operator will automatically create a batch job that creates the required schema for Jaeger to run. This batch job will block the Jaeger installation, so that it starts only after the schema is successfully created. The creation of this batch job can be disabled by setting the `enabled` property to `false`:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: cassandra-without-create-schema
spec:
strategy: allInOne
storage:
type: cassandra
cassandraCreateSchema:
enabled: false # <1>
----
<1> Defaults to `true`
Further aspects of the batch job can be configured as well. An example with all the possible options is shown below:
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: cassandra-with-create-schema
spec:
strategy: allInOne # <1>
storage:
type: cassandra
options: # <2>
cassandra:
servers: cassandra
keyspace: jaeger_v1_datacenter3
cassandraCreateSchema: # <3>
datacenter: "datacenter3"
mode: "test"
----
<1> The same works for `production` and `streaming`
<2> These options are for the regular Jaeger components, like `collector` and `query`
<3> The options for the `create-schema` job
NOTE: the default create-schema job uses `MODE=prod`, which implies a replication factor of `2` using `NetworkTopologyStrategy` as the class, effectively meaning that at least 3 nodes are required in the Cassandra cluster. If a `SimpleStrategy` is desired, set the mode to `test`, which then sets a replication factor of `1`. Refer to the link:https://github.com/jaegertracing/jaeger/blob/master/plugin/storage/cassandra/schema/create.sh[create-schema script] for more details.
== Finer grained configuration
The custom resource can be used to define finer grained Kubernetes configuration applied to all Jaeger components or at the individual component level.
When a common definition (for all Jaeger components) is required, it is defined under the `spec` node. When the definition relates to an individual component, it is placed under the `spec/<component>` node.
The types of configuration supported include:
* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[annotations]
* link:https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container[resources] to limit cpu and memory
* link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity[affinity] to determine which nodes a pod can be allocated to
* link:https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[tolerations] in conjunction with `taints` to enable pods to avoid being repelled from a node
* link:https://kubernetes.io/docs/concepts/storage/volumes/[volumes] and volume mounts
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-prod
spec:
strategy: production
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
annotations:
key1: value1
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/e2e-az-name
operator: In
values:
- e2e-az1
- e2e-az2
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
volumeMounts:
- name: config-vol
mountPath: /etc/config
volumes:
- name: config-vol
configMap:
name: log-config
items:
- key: log_level
path: log_level
----
== Removing an instance
To remove an instance, just use the `delete` command with the file used for the instance creation:
[source,bash]
----
kubectl delete -f simplest.yaml
----
Alternatively, you can remove a Jaeger instance by running:
[source,bash]
----
kubectl delete jaeger simplest
----
NOTE: deleting the instance will not remove the data from a permanent storage used with this instance. Data from in-memory instances, however, will be lost.
== Monitoring the operator
The Jaeger Operator starts a Prometheus-compatible endpoint on `0.0.0.0:8383/metrics` with internal metrics that can be used to monitor the process.
NOTE: The Jaeger Operator does not yet publish its own metrics. Rather, it makes available metrics reported by the components it uses, such as the Operator SDK.
== Uninstalling the operator
Similar to the installation, just run:
[source,bash]
----
kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml
----

README.md
[![Build Status][ci-img]][ci] [![Go Report Card][goreport-img]][goreport] [![Code Coverage][cov-img]][cov] [![GoDoc][godoc-img]][godoc] [![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/jaegertracing/jaeger-operator/badge)](https://securityscorecards.dev/viewer/?uri=github.com/jaegertracing/jaeger-operator)
# Jaeger Operator for Kubernetes
The Jaeger Operator is an implementation of a [Kubernetes Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
## Getting started
First, ensure an [ingress controller is deployed](https://kubernetes.github.io/ingress-nginx/deploy/). When using `minikube`, you can use the `ingress` add-on: `minikube start --addons=ingress`
Then follow the Jaeger Operator [installation instructions](https://www.jaegertracing.io/docs/latest/operator/).
Once the `jaeger-operator` deployment in the namespace `observability` is ready, create a Jaeger instance, like:
```
kubectl apply -n observability -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simplest
EOF
```
This will create a Jaeger instance named `simplest`. The Jaeger UI is served via the `Ingress`, like:
```console
$ kubectl get -n observability ingress
NAME HOSTS ADDRESS PORTS AGE
simplest-query * 192.168.122.34 80 3m
```
In this example, the Jaeger UI is available at http://192.168.122.34.
The official documentation for the Jaeger Operator, including all its customization options, are available under the main [Jaeger Documentation](https://www.jaegertracing.io/docs/latest/operator/).
CRD-API documentation can be found [here](./docs/api.md).
## Compatibility matrix
See the compatibility matrix [here](./COMPATIBILITY.md).
### Jaeger Operator vs. Jaeger
The Jaeger Operator follows the same versioning as the operand (Jaeger) up to the minor part of the version. For example, the Jaeger Operator v1.22.2 tracks Jaeger 1.22.0. The patch part of the version indicates the patch level of the operator itself, not that of Jaeger. Whenever a new patch version is released for Jaeger, we'll release a new patch version of the operator.
### Jaeger Operator vs. Kubernetes
We strive to be compatible with the widest possible range of Kubernetes versions, but some changes to Kubernetes itself require us to break compatibility with older Kubernetes versions, be it because of code incompatibilities or in the name of maintainability.
Our promise is that we'll follow what's common practice in the Kubernetes world and support N-2 versions, based on the release date of the Jaeger Operator.
For instance, when we released v1.22.0, the latest Kubernetes version was v1.20.5. As such, the minimum version of Kubernetes we support for Jaeger Operator v1.22.0 is v1.18 and we tested it with up to 1.20.
The Jaeger Operator *might* work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version.
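The N-2 rule above can be sketched as a small shell helper (a purely illustrative sketch; it assumes a plain `major.minor` version string):

```shell
# Hypothetical helper: given the Kubernetes minor version current at release
# time, print the oldest (N-2) minor version the operator supports.
min_supported() {
  local major="${1%%.*}"   # part before the first dot
  local minor="${1#*.}"    # part after the first dot
  echo "${major}.$((minor - 2))"
}
min_supported "1.20"   # prints 1.18, matching the v1.22.0 example above
```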
### Jaeger Operator vs. Strimzi Operator
We maintain compatibility with a set of tested Strimzi operator versions, but some changes in Strimzi operator require us to break compatibility with older versions.
The Jaeger Operator *might* work on other, untested versions of the Strimzi Operator, but when opening new issues, please make sure to test your scenario on a supported version.
## (experimental) Generate Kubernetes manifest file
Sometimes it is preferable to generate plain manifest files instead of running an operator in a cluster. `jaeger-operator generate` generates Kubernetes manifests from a given CR. In this example, we apply the manifest generated from [examples/simplest.yaml](https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/examples/simplest.yaml) to the namespace `jaeger-test`:
```bash
curl https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/examples/simplest.yaml | docker run -i --rm jaegertracing/jaeger-operator:main generate | kubectl apply -n jaeger-test -f -
```
It is recommended to deploy the operator instead of generating a static manifest.
## Jaeger V2 Operator
With the release of Jaeger V2, it has been decided that Jaeger V2 will be deployed on Kubernetes using the [OpenTelemetry Operator](https://github.com/open-telemetry/opentelemetry-operator). This benefits both the users of Jaeger and OpenTelemetry. To use Jaeger V2 with the OpenTelemetry Operator, the steps are as follows:
* Install cert-manager in the existing cluster with the following command:
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml
```
Please verify all the resources (e.g., Pods and Deployments) are in a ready state in the `cert-manager` namespace.
* Install the OpenTelemetry Operator by running:
```bash
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```
Please verify all the resources (e.g., Pods and Deployments) are in a ready state in the `opentelemetry-operator-system` namespace.
### Using Jaeger with in-memory storage
Once all the resources are ready, create a Jaeger instance as follows:
```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: jaeger-inmemory-instance
spec:
  image: jaegertracing/jaeger:latest
  ports:
  - name: jaeger
    port: 16686
  config:
    service:
      extensions: [jaeger_storage, jaeger_query]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [jaeger_storage_exporter]
    extensions:
      jaeger_query:
        storage:
          traces: memstore
      jaeger_storage:
        backends:
          memstore:
            memory:
              max_traces: 100000
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    exporters:
      jaeger_storage_exporter:
        trace_storage: memstore
EOF
```
To access the Jaeger UI for Jaeger V2, port-forward the deployment or the service as follows:
```bash
kubectl port-forward deployment/jaeger-inmemory-instance-collector 8080:16686
```
Or
```bash
kubectl port-forward service/jaeger-inmemory-instance-collector 8080:16686
```
Once done, open `http://localhost:8080` in the browser to interact with the UI.
Note: there is ongoing development in the OpenTelemetry Operator that will allow users to interact with the UI directly.
### Using Jaeger with database to store traces
To use Jaeger V2 with a supported database, the database deployments must be created first and be in a `ready` state [(ref)](https://www.jaegertracing.io/docs/2.0/storage/).
Create a Kubernetes Service that exposes the database pods, enabling communication between the database and the Jaeger pods.
The service can be created in two ways: [manually](https://kubernetes.io/docs/concepts/services-networking/service/), or imperatively:
```bash
kubectl expose pods <pod-name> --port=<port-number> --name=<name-of-the-service>
```
Or
```bash
kubectl expose deployment <deployment-name> --port=<port-number> --name=<name-of-the-service>
```
After the service is created, add its name as an endpoint in the respective storage config as follows:
* [Cassandra DB](https://github.com/jaegertracing/jaeger/blob/main/cmd/jaeger/config-cassandra.yaml):
```yaml
jaeger_storage:
  backends:
    some_storage:
      cassandra:
        connection:
          servers: [<name-of-the-service>]
```
* [ElasticSearch](https://github.com/jaegertracing/jaeger/blob/main/cmd/jaeger/config-elasticsearch.yaml):
```yaml
jaeger_storage:
  backends:
    some_storage:
      elasticsearch:
        servers: [<name-of-the-service>]
```
Use the modified config to create a Jaeger instance with the help of the OpenTelemetry Operator:
```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: jaeger-storage-instance # name of your choice
spec:
  image: jaegertracing/jaeger:latest
  ports:
  - name: jaeger
    port: 16686
  config:
    # modified config
EOF
```
## Contributing and Developing
Please see [CONTRIBUTING.md](CONTRIBUTING.md).
## License
[Apache 2.0 License](./LICENSE).
[ci-img]: https://github.com/jaegertracing/jaeger-operator/workflows/CI%20Workflow/badge.svg
[ci]: https://github.com/jaegertracing/jaeger-operator/actions
[cov-img]: https://codecov.io/gh/jaegertracing/jaeger-operator/branch/main/graph/badge.svg
[cov]: https://codecov.io/github/jaegertracing/jaeger-operator/
[goreport-img]: https://goreportcard.com/badge/github.com/jaegertracing/jaeger-operator
[goreport]: https://goreportcard.com/report/github.com/jaegertracing/jaeger-operator
[godoc-img]: https://godoc.org/github.com/jaegertracing/jaeger-operator?status.svg
[godoc]: https://godoc.org/github.com/jaegertracing/jaeger-operator/apis/v1#JaegerSpec

RELEASE.adoc Normal file

@ -0,0 +1,17 @@
= Releasing the Jaeger Operator for Kubernetes
1. Prepare a changelog and get it merged. A list of commits since the last release (`v1.8.0` in the following example) can be obtained via:
$ git log --format="format:* %s" v1.8.0...HEAD
1. Test!
export BUILD_IMAGE_TEST="${USER}/jaeger-operator:latest"
export BUILD_IMAGE="${BUILD_IMAGE_TEST}"
make all
1. Tag and push
git checkout master ## it's only possible to release from master for now!
git tag release/v1.6.1
git push git@github.com:jaegertracing/jaeger-operator.git release/v1.6.1
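The changelog command from step 1 can be tried out in a throwaway repository (a purely illustrative demo; the temp directory, identity, and commit message are made up):

```shell
# Demonstrate the commit-listing format in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Add feature X"
git log --format="format:* %s"   # prints: * Add feature X
```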


@ -1,72 +0,0 @@
# Releasing the Jaeger Operator for Kubernetes
## Generating the changelog
- Get the `OAUTH_TOKEN` from [Github](https://github.com/settings/tokens/new?description=GitHub%20Changelog%20Generator%20token), select `repo:status` scope.
- Run `OAUTH_TOKEN=... make changelog`
- Remove the commits that are not relevant to users, like:
* CI or testing-specific commits (e2e, unit test, ...)
* bug fixes for problems that are not part of a release yet
* version bumps for internal dependencies
## Releasing
Steps to release a new version of the Jaeger Operator:
1. Change `versions.txt` so that it lists the target version of Jaeger (if required). **Don't touch the operator version**: it will be changed automatically in the next step.
2. Confirm that `MIN_KUBERNETES_VERSION` and `MIN_OPENSHIFT_VERSION` in the `Makefile` are still up-to-date, and update them if required.
3. Run `OPERATOR_VERSION=1.30.0 make prepare-release`, using the operator version that will be released.
4. Run the E2E tests in OpenShift as described in [the CONTRIBUTING.md](CONTRIBUTING.md#an-external-cluster-like-openshift) file. The tests will be executed automatically in Kubernetes by the GitHub Actions CI later.
5. Prepare a changelog since the last release.
6. Update the release manager schedule.
7. Commit the changes and create a pull request:
```sh
git commit -sm "Preparing release v1.30.0"
```
8. Once the changes above are merged and available in `main`, tag it with the desired version, prefixed with `v`, e.g. `v1.30.0`:
```sh
git checkout main
git tag v1.30.0
git push git@github.com:jaegertracing/jaeger-operator.git v1.30.0
```
9. The GitHub Workflow will take it from here, creating a GitHub release and publishing the images.
10. After the release, PRs need to be created against the Operator Hub Community Operators repositories:
* One for the [upstream-community-operators](https://github.com/k8s-operatorhub/community-operators), used by OLM on Kubernetes.
* One for the [community-operators](https://github.com/redhat-openshift-ecosystem/community-operators-prod) used by OpenShift.
This can be done with the following steps:
- Update main `git pull git@github.com:jaegertracing/jaeger-operator.git main`
- Clone both repositories `upstream-community-operators` and `community-operators`
- Run `make operatorhub`
* If you have [`gh`](https://cli.github.com/) installed and configured, it will open the necessary PRs for you automatically.
* If you don't have it, the branches will be pushed to `origin` and you should be able to open the PR from there
## Note
After the PRs have been made, ensure that:
- Images listed in the ClusterServiceVersion (CSV) have a versions tag [#1682](https://github.com/jaegertracing/jaeger-operator/issues/1682)
- No `bundle` folder is included in the release
- No foreign CRs like prometheus are in the manifests
## Release managers
The operator should be released within a week after the [Jaeger release](https://github.com/jaegertracing/jaeger/blob/main/RELEASE.md#release-managers).
| Version | Release Manager |
|---------| -------------------------------------------------------- |
| 1.63.0 | [Benedikt Bongartz](https://github.com/frzifus) |
| 1.64.0 | [Pavol Loffay](https://github.com/pavolloffay) |
| 1.65.0 | [Israel Blancas](https://github.com/iblancasa) |
| 1.66.0 | [Ruben Vargas](https://github.com/rubenvp8510) |


@ -1,20 +0,0 @@
package v1
import (
"github.com/spf13/viper"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
)
// NewJaeger returns a new Jaeger instance with the given name
func NewJaeger(nsn types.NamespacedName) *Jaeger {
return &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Name: nsn.Name,
Namespace: nsn.Namespace,
Labels: map[string]string{
LabelOperatedBy: viper.GetString(ConfigIdentity),
},
},
}
}


@ -1,33 +0,0 @@
package v1
const (
// LabelOperatedBy is used as the key to the label indicating which operator is managing the instance
LabelOperatedBy string = "jaegertracing.io/operated-by"
// ConfigIdentity is the key to the configuration map related to the operator's identity
ConfigIdentity string = "identity"
// ConfigWatchNamespace is the key to the configuration map related to the namespace the operator should watch
ConfigWatchNamespace string = "watch-namespace"
// ConfigEnableNamespaceController is the key to the configuration map related to the boolean, determining whether the namespace controller is enabled
ConfigEnableNamespaceController string = "enable-namespace-controller"
// ConfigOperatorScope is the configuration key holding the scope of the operator
ConfigOperatorScope string = "operator-scope"
// WatchAllNamespaces is the value that the ConfigWatchNamespace holds to represent "all namespaces".
WatchAllNamespaces string = ""
// OperatorScopeCluster signals that the operator's instance is installed cluster-wide
OperatorScopeCluster string = "cluster"
// OperatorScopeNamespace signals that the operator's instance is working on a single namespace
OperatorScopeNamespace string = "namespace"
// BootstrapTracer is the OpenTelemetry tracer name for the bootstrap procedure
BootstrapTracer string = "operator/bootstrap"
// ReconciliationTracer is the OpenTelemetry tracer name for the reconciliation loops
ReconciliationTracer string = "operator/reconciliation"
)


@ -1,44 +0,0 @@
package v1
import (
"errors"
"strings"
)
// DeploymentStrategy represents the possible values for deployment strategies
type DeploymentStrategy string
const (
// DeploymentStrategyDeprecatedAllInOne represents the (deprecated) 'all-in-one' deployment strategy
DeploymentStrategyDeprecatedAllInOne DeploymentStrategy = "all-in-one"
// DeploymentStrategyAllInOne represents the 'allInOne' deployment strategy (default)
DeploymentStrategyAllInOne DeploymentStrategy = "allinone"
// DeploymentStrategyStreaming represents the 'streaming' deployment strategy
DeploymentStrategyStreaming DeploymentStrategy = "streaming"
// DeploymentStrategyProduction represents the 'production' deployment strategy
DeploymentStrategyProduction DeploymentStrategy = "production"
)
// UnmarshalText implements encoding.TextUnmarshaler to ensure that JSON values in the
// strategy field of JSON jaeger specs are interpreted in a case-insensitive manner
func (ds *DeploymentStrategy) UnmarshalText(text []byte) error {
if ds == nil {
return errors.New("DeploymentStrategy: UnmarshalText on nil pointer")
}
switch strings.ToLower(string(text)) {
default:
*ds = DeploymentStrategyAllInOne
case string(DeploymentStrategyDeprecatedAllInOne):
*ds = DeploymentStrategyDeprecatedAllInOne
case string(DeploymentStrategyStreaming):
*ds = DeploymentStrategyStreaming
case string(DeploymentStrategyProduction):
*ds = DeploymentStrategyProduction
}
return nil
}


@ -1,56 +0,0 @@
package v1
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestUnmarshalJSON(t *testing.T) {
tcs := map[string]struct {
json string
expected DeploymentStrategy
}{
"allInOne": {json: `"allInOne"`, expected: DeploymentStrategyAllInOne},
"streaming": {json: `"streaming"`, expected: DeploymentStrategyStreaming},
"production": {json: `"production"`, expected: DeploymentStrategyProduction},
"all-in-one": {json: `"all-in-one"`, expected: DeploymentStrategyDeprecatedAllInOne},
"ALLinONE": {json: `"ALLinONE"`, expected: DeploymentStrategyAllInOne},
"StReAmInG": {json: `"StReAmInG"`, expected: DeploymentStrategyStreaming},
"Production": {json: `"Production"`, expected: DeploymentStrategyProduction},
"All-IN-One": {json: `"All-IN-One"`, expected: DeploymentStrategyDeprecatedAllInOne},
"random value": {json: `"random value"`, expected: DeploymentStrategyAllInOne},
"empty string": {json: `""`, expected: DeploymentStrategyAllInOne},
}
for name, tc := range tcs {
t.Run(name, func(t *testing.T) {
ds := DeploymentStrategy("")
err := json.Unmarshal([]byte(tc.json), &ds)
require.NoError(t, err)
assert.Equal(t, tc.expected, ds)
})
}
}
func TestMarshalJSON(t *testing.T) {
tcs := map[string]struct {
strategy DeploymentStrategy
expected string
}{
"allinone": {strategy: DeploymentStrategyAllInOne, expected: `"allinone"`},
"streaming": {strategy: DeploymentStrategyStreaming, expected: `"streaming"`},
"production": {strategy: DeploymentStrategyProduction, expected: `"production"`},
"all-in-one": {strategy: DeploymentStrategyDeprecatedAllInOne, expected: `"all-in-one"`},
}
for name, tc := range tcs {
t.Run(name, func(t *testing.T) {
data, err := json.Marshal(tc.strategy)
require.NoError(t, err)
assert.Equal(t, tc.expected, string(data))
})
}
}


@ -1,58 +0,0 @@
package v1
import (
"encoding/json"
)
// FreeForm defines a common options parameter that maintains the hierarchical structure of the data, unlike Options which flattens the hierarchy into a key/value map where the hierarchy is converted to '.' separated items in the key.
type FreeForm struct {
json *[]byte `json:"-"`
}
// NewFreeForm builds a new FreeForm object based on the given map
func NewFreeForm(o map[string]interface{}) FreeForm {
freeForm := FreeForm{}
if o != nil {
j, _ := json.Marshal(o)
freeForm.json = &j
}
return freeForm
}
// UnmarshalJSON implements an alternative parser for this field
func (o *FreeForm) UnmarshalJSON(b []byte) error {
o.json = &b
return nil
}
// MarshalJSON specifies how to convert this object into JSON
func (o FreeForm) MarshalJSON() ([]byte, error) {
if nil == o.json {
return []byte("{}"), nil
}
if len(*o.json) == 0 {
return []byte("{}"), nil
}
return *o.json, nil
}
// IsEmpty determines if the freeform options are empty
func (o FreeForm) IsEmpty() bool {
if nil == o.json {
return true
}
return len(*o.json) == 0 || string(*o.json) == "{}"
}
// GetMap returns a map created from json
func (o FreeForm) GetMap() (map[string]interface{}, error) {
m := map[string]interface{}{}
if nil == o.json {
return m, nil
}
if err := json.Unmarshal(*o.json, &m); err != nil {
return nil, err
}
return m, nil
}


@ -1,76 +0,0 @@
package v1
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestFreeForm(t *testing.T) {
uiconfig := `{"es":{"password":"changeme","server-urls":"http://elasticsearch:9200","username":"elastic"}}`
o := NewFreeForm(map[string]interface{}{
"es": map[string]interface{}{
"server-urls": "http://elasticsearch:9200",
"username": "elastic",
"password": "changeme",
},
})
json, err := o.MarshalJSON()
require.NoError(t, err)
assert.NotNil(t, json)
assert.Equal(t, uiconfig, string(*o.json))
}
func TestFreeFormUnmarhalMarshal(t *testing.T) {
uiconfig := `{"es":{"password":"changeme","server-urls":"http://elasticsearch:9200","username":"elastic"}}`
o := NewFreeForm(nil)
o.UnmarshalJSON([]byte(uiconfig))
json, err := o.MarshalJSON()
require.NoError(t, err)
assert.NotNil(t, json)
assert.Equal(t, uiconfig, string(*o.json))
}
func TestFreeFormIsEmptyFalse(t *testing.T) {
o := NewFreeForm(map[string]interface{}{
"es": map[string]interface{}{
"server-urls": "http://elasticsearch:9200",
"username": "elastic",
"password": "changeme",
},
})
assert.False(t, o.IsEmpty())
}
func TestFreeFormIsEmptyTrue(t *testing.T) {
o := NewFreeForm(map[string]interface{}{})
assert.True(t, o.IsEmpty())
}
func TestFreeFormIsEmptyNilTrue(t *testing.T) {
o := NewFreeForm(nil)
assert.True(t, o.IsEmpty())
}
func TestToMap(t *testing.T) {
tests := []struct {
m map[string]interface{}
expected map[string]interface{}
err string
}{
{expected: map[string]interface{}{}},
{m: map[string]interface{}{"foo": "bar$"}, expected: map[string]interface{}{"foo": "bar$"}},
{m: map[string]interface{}{"foo": true}, expected: map[string]interface{}{"foo": true}},
}
for _, test := range tests {
f := NewFreeForm(test.m)
got, err := f.GetMap()
if test.err != "" {
require.EqualError(t, err, test.err)
} else {
require.NoError(t, err)
assert.Equal(t, test.expected, got)
}
}
}


@ -1,20 +0,0 @@
// Package v1 contains API Schema definitions for the jaegertracing.io v1 API group
// +kubebuilder:object:generate=true
// +groupName=jaegertracing.io
package v1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "jaegertracing.io", Version: "v1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)


@ -1,793 +0,0 @@
package v1
import (
esv1 "github.com/openshift/elasticsearch-operator/apis/logging/v1"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// IngressSecurityType represents the possible values for the security type
type IngressSecurityType string
// JaegerPhase represents the current phase of Jaeger instances
type JaegerPhase string
// JaegerStorageType represents the Jaeger storage type
type JaegerStorageType string
const (
// FlagCronJobsVersion represents the version of the Kubernetes CronJob API
FlagCronJobsVersion = "cronjobs-version"
// FlagCronJobsVersionBatchV1 represents the batch/v1 version of the Kubernetes CronJob API, available as of 1.21
FlagCronJobsVersionBatchV1 = "batch/v1"
// FlagCronJobsVersionBatchV1Beta1 represents the batch/v1beta1 version of the Kubernetes CronJob API, no longer available as of 1.25
FlagCronJobsVersionBatchV1Beta1 = "batch/v1beta1"
// FlagAutoscalingVersion represents the version of the Kubernetes Autoscaling API
FlagAutoscalingVersion = "autoscaling-version"
// FlagAutoscalingVersionV2 represents the v2 version of the Kubernetes Autoscaling API, available as of 1.23
FlagAutoscalingVersionV2 = "autoscaling/v2"
// FlagAutoscalingVersionV2Beta2 represents the v2beta2 version of the Kubernetes Autoscaling API, no longer available as of 1.26
FlagAutoscalingVersionV2Beta2 = "autoscaling/v2beta2"
// FlagPlatform represents the flag to set the platform
FlagPlatform = "platform"
// FlagPlatformAutoDetect represents the "auto-detect" value for the platform flag
FlagPlatformAutoDetect = "auto-detect"
// FlagESProvision represents the 'es-provision' flag
FlagESProvision = "es-provision"
// FlagProvisionElasticsearchAuto represents the 'auto' value for the 'es-provision' flag
FlagProvisionElasticsearchAuto = "auto"
// FlagProvisionKafkaAuto represents the 'auto' value for the 'kafka-provision' flag
FlagProvisionKafkaAuto = "auto"
// FlagKafkaProvision represents the 'kafka-provision' flag.
FlagKafkaProvision = "kafka-provision"
// FlagAuthDelegatorAvailability represents the 'auth-delegator-available' flag.
FlagAuthDelegatorAvailability = "auth-delegator-available"
// FlagOpenShiftOauthProxyImage represents the 'openshift-oauth-proxy-image' flag.
FlagOpenShiftOauthProxyImage = "openshift-oauth-proxy-image"
// IngressSecurityNone disables any form of security for ingress objects (default)
IngressSecurityNone IngressSecurityType = ""
// FlagDefaultIngressClass represents the default Ingress class from the cluster
FlagDefaultIngressClass = "default-ingressclass"
// IngressSecurityNoneExplicit used when the user specifically set it to 'none'
IngressSecurityNoneExplicit IngressSecurityType = "none"
// IngressSecurityOAuthProxy represents an OAuth Proxy as security type
IngressSecurityOAuthProxy IngressSecurityType = "oauth-proxy"
// AnnotationProvisionedKafkaKey is a label to be added to Kafkas that have been provisioned by Jaeger
AnnotationProvisionedKafkaKey string = "jaegertracing.io/kafka-provisioned"
// AnnotationProvisionedKafkaValue is a label to be added to Kafkas that have been provisioned by Jaeger
AnnotationProvisionedKafkaValue string = "true"
// JaegerPhaseFailed indicates that the Jaeger instance failed to be provisioned
JaegerPhaseFailed JaegerPhase = "Failed"
// JaegerPhaseRunning indicates that the Jaeger instance is ready and running
JaegerPhaseRunning JaegerPhase = "Running"
// JaegerMemoryStorage indicates that the Jaeger storage type is memory. This is the default storage type.
JaegerMemoryStorage JaegerStorageType = "memory"
// JaegerCassandraStorage indicates that the Jaeger storage type is cassandra
JaegerCassandraStorage JaegerStorageType = "cassandra"
// JaegerESStorage indicates that the Jaeger storage type is elasticsearch
JaegerESStorage JaegerStorageType = "elasticsearch"
// JaegerKafkaStorage indicates that the Jaeger storage type is kafka
JaegerKafkaStorage JaegerStorageType = "kafka"
// JaegerBadgerStorage indicates that the Jaeger storage type is badger
JaegerBadgerStorage JaegerStorageType = "badger"
// JaegerGRPCPluginStorage indicates that the Jaeger storage type is grpc-plugin
JaegerGRPCPluginStorage JaegerStorageType = "grpc-plugin"
)
// ValidStorageTypes returns the list of valid storage types
func ValidStorageTypes() []JaegerStorageType {
return []JaegerStorageType{
JaegerMemoryStorage,
JaegerCassandraStorage,
JaegerESStorage,
JaegerKafkaStorage,
JaegerBadgerStorage,
JaegerGRPCPluginStorage,
}
}
// OptionsPrefix returns the options prefix associated with the storage type
func (storageType JaegerStorageType) OptionsPrefix() string {
if storageType == JaegerESStorage {
return "es"
}
if storageType == JaegerGRPCPluginStorage {
return "grpc-storage-plugin"
}
return string(storageType)
}
// JaegerSpec defines the desired state of Jaeger
type JaegerSpec struct {
// +optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Strategy"
Strategy DeploymentStrategy `json:"strategy,omitempty"`
// +optional
AllInOne JaegerAllInOneSpec `json:"allInOne,omitempty"`
// +optional
Query JaegerQuerySpec `json:"query,omitempty"`
// +optional
Collector JaegerCollectorSpec `json:"collector,omitempty"`
// +optional
Ingester JaegerIngesterSpec `json:"ingester,omitempty"`
// +optional
// +nullable
Agent JaegerAgentSpec `json:"agent,omitempty"`
// +optional
UI JaegerUISpec `json:"ui,omitempty"`
// +optional
Sampling JaegerSamplingSpec `json:"sampling,omitempty"`
// +optional
Storage JaegerStorageSpec `json:"storage,omitempty"`
// +optional
Ingress JaegerIngressSpec `json:"ingress,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
}
// JaegerStatus defines the observed state of Jaeger
type JaegerStatus struct {
// +operator-sdk:csv:customresourcedefinitions:type=status
// +operator-sdk:csv:customresourcedefinitions:displayName="Version"
Version string `json:"version"`
// +operator-sdk:csv:customresourcedefinitions:type=status
// +operator-sdk:csv:customresourcedefinitions:displayName="Phase"
Phase JaegerPhase `json:"phase"`
}
// Jaeger is the Schema for the jaegers API
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +operator-sdk:gen-csv:customresourcedefinitions.displayName="Jaeger"
// +operator-sdk:csv:customresourcedefinitions:resources={{CronJob,v1beta1},{Pod,v1},{Deployment,apps/v1}, {Ingress,networking/v1},{DaemonSets,apps/v1},{StatefulSets,apps/v1},{ConfigMaps,v1},{Service,v1}}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.phase",description="Jaeger instance's status"
// +kubebuilder:printcolumn:name="Version",type="string",JSONPath=".status.version",description="Jaeger Version"
// +kubebuilder:printcolumn:name="Strategy",type="string",JSONPath=".spec.strategy",description="Jaeger deployment strategy"
// +kubebuilder:printcolumn:name="Storage",type="string",JSONPath=".spec.storage.type",description="Jaeger storage type"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
type Jaeger struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec JaegerSpec `json:"spec,omitempty"`
// +optional
Status JaegerStatus `json:"status,omitempty"`
}
// JaegerCommonSpec defines the common elements used in multiple other spec structs
type JaegerCommonSpec struct {
// +optional
// +listType=atomic
Volumes []v1.Volume `json:"volumes,omitempty"`
// +optional
// +listType=atomic
VolumeMounts []v1.VolumeMount `json:"volumeMounts,omitempty"`
// +nullable
// +optional
Annotations map[string]string `json:"annotations,omitempty"`
// +optional
Labels map[string]string `json:"labels,omitempty"`
// +nullable
// +optional
Resources v1.ResourceRequirements `json:"resources,omitempty"`
// +optional
Affinity *v1.Affinity `json:"affinity,omitempty"`
// +optional
// +listType=atomic
Tolerations []v1.Toleration `json:"tolerations,omitempty"`
// +optional
SecurityContext *v1.PodSecurityContext `json:"securityContext,omitempty"`
// +optional
ContainerSecurityContext *v1.SecurityContext `json:"containerSecurityContext,omitempty"`
// +optional
ServiceAccount string `json:"serviceAccount,omitempty"`
// +optional
LivenessProbe *v1.Probe `json:"livenessProbe,omitempty"`
// +optional
// +listType=atomic
ImagePullSecrets []v1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
ImagePullPolicy v1.PullPolicy `json:"imagePullPolicy,omitempty"`
}
// JaegerQuerySpec defines the options to be used when deploying the query
type JaegerQuerySpec struct {
// Replicas represents the number of replicas to create for this service.
// +optional
Replicas *int32 `json:"replicas,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
MetricsStorage JaegerMetricsStorageSpec `json:"metricsStorage,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// ServiceType represents the type of Service to create.
// Valid values include: ClusterIP, NodePort, LoadBalancer, and ExternalName.
// The default, if omitted, is ClusterIP.
// See https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
ServiceType v1.ServiceType `json:"serviceType,omitempty"`
// +optional
// NodePort represents the port at which the NodePort service to allocate
NodePort int32 `json:"nodePort,omitempty"`
// +optional
// NodePort represents the port at which the NodePort service to allocate
GRPCNodePort int32 `json:"grpcNodePort,omitempty"`
// +optional
// TracingEnabled if set to false adds the JAEGER_DISABLED environment flag and removes the injected
// agent container from the query component to disable tracing requests to the query service.
// The default, if omitted, is true
TracingEnabled *bool `json:"tracingEnabled,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Strategy"
Strategy *appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// +optional
// +nullable
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
// JaegerUISpec defines the options to be used to configure the UI
type JaegerUISpec struct {
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options FreeForm `json:"options,omitempty"`
}
// JaegerSamplingSpec defines the options to be used to configure the UI
type JaegerSamplingSpec struct {
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options FreeForm `json:"options,omitempty"`
}
// JaegerIngressSpec defines the options to be used when deploying the query ingress
type JaegerIngressSpec struct {
// +optional
Enabled *bool `json:"enabled,omitempty"`
// +optional
Security IngressSecurityType `json:"security,omitempty"`
// +optional
Openshift JaegerIngressOpenShiftSpec `json:"openshift,omitempty"`
// +optional
// +listType=atomic
Hosts []string `json:"hosts,omitempty"`
// +optional
PathType networkingv1.PathType `json:"pathType,omitempty"`
// +optional
// +listType=atomic
TLS []JaegerIngressTLSSpec `json:"tls,omitempty"`
// Deprecated in favor of the TLS property
// +optional
SecretName string `json:"secretName,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
IngressClassName *string `json:"ingressClassName,omitempty"`
}
// JaegerIngressTLSSpec defines the TLS configuration to be used when deploying the query ingress
type JaegerIngressTLSSpec struct {
// +optional
// +listType=atomic
Hosts []string `json:"hosts,omitempty"`
// +optional
SecretName string `json:"secretName,omitempty"`
}
// JaegerIngressOpenShiftSpec defines the OpenShift-specific options in the context of ingress connections,
// such as options for the OAuth Proxy
type JaegerIngressOpenShiftSpec struct {
// +optional
SAR *string `json:"sar,omitempty"`
// +optional
DelegateUrls string `json:"delegateUrls,omitempty"`
// +optional
HtpasswdFile string `json:"htpasswdFile,omitempty"`
// SkipLogout tells the operator to not automatically add a "Log Out" menu option to the custom Jaeger configuration
// +optional
SkipLogout *bool `json:"skipLogout,omitempty"`
// Timeout defines client timeout from oauth-proxy to jaeger.
// +optional
Timeout *metav1.Duration `json:"timeout,omitempty"`
}
// JaegerAllInOneSpec defines the options to be used when deploying the all-in-one instance
type JaegerAllInOneSpec struct {
// +optional
Image string `json:"image,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Config FreeForm `json:"config,omitempty"`
// +optional
MetricsStorage JaegerMetricsStorageSpec `json:"metricsStorage,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// TracingEnabled if set to false adds the JAEGER_DISABLED environment flag and removes the injected
// agent container from the query component to disable tracing requests to the query service.
// The default, if omitted, is true
TracingEnabled *bool `json:"tracingEnabled,omitempty"`
// +optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Strategy"
Strategy *appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
}
// AutoScaleSpec defines the common elements used to create HPAs
type AutoScaleSpec struct {
// Autoscale turns on/off the autoscale feature. By default, it's enabled if the Replicas field is not set.
// +optional
Autoscale *bool `json:"autoscale,omitempty"`
// MinReplicas sets a lower bound to the autoscaling feature.
// +optional
MinReplicas *int32 `json:"minReplicas,omitempty"`
// MaxReplicas sets an upper bound to the autoscaling feature. When autoscaling is enabled and no value is provided, a default value is used.
// +optional
MaxReplicas *int32 `json:"maxReplicas,omitempty"`
}
// JaegerCollectorSpec defines the options to be used when deploying the collector
type JaegerCollectorSpec struct {
// +optional
AutoScaleSpec `json:",inline,omitempty"`
// Replicas represents the number of replicas to create for this service.
// +optional
Replicas *int32 `json:"replicas,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Config FreeForm `json:"config,omitempty"`
// +optional
// ServiceType represents the type of Service to create.
// Valid values include: ClusterIP, NodePort, LoadBalancer, and ExternalName.
// The default, if omitted, is ClusterIP.
// See https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
ServiceType v1.ServiceType `json:"serviceType,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Strategy"
Strategy *appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// +optional
KafkaSecretName string `json:"kafkaSecretName"`
// +optional
// +nullable
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// +optional
Lifecycle *v1.Lifecycle `json:"lifecycle,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
}
// JaegerIngesterSpec defines the options to be used when deploying the ingester
type JaegerIngesterSpec struct {
// +optional
AutoScaleSpec `json:",inline,omitempty"`
// Replicas represents the number of replicas to create for this service.
// +optional
Replicas *int32 `json:"replicas,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Config FreeForm `json:"config,omitempty"`
// +optional
Strategy *appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// +optional
KafkaSecretName string `json:"kafkaSecretName"`
// +optional
// +nullable
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
// JaegerAgentSpec defines the options to be used when deploying the agent
type JaegerAgentSpec struct {
// Strategy can be either 'DaemonSet' or 'Sidecar' (default)
// +optional
Strategy string `json:"strategy,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Config FreeForm `json:"config,omitempty"`
// +optional
SidecarSecurityContext *v1.SecurityContext `json:"sidecarSecurityContext,omitempty"`
// +optional
HostNetwork *bool `json:"hostNetwork,omitempty"`
// +optional
DNSPolicy v1.DNSPolicy `json:"dnsPolicy,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
}
// JaegerStorageSpec defines the common storage options to be used for the query and collector
type JaegerStorageSpec struct {
// +optional
Type JaegerStorageType `json:"type,omitempty"`
// +optional
SecretName string `json:"secretName,omitempty"`
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Options Options `json:"options,omitempty"`
// +optional
CassandraCreateSchema JaegerCassandraCreateSchemaSpec `json:"cassandraCreateSchema,omitempty"`
// +optional
Dependencies JaegerDependenciesSpec `json:"dependencies,omitempty"`
// +optional
EsIndexCleaner JaegerEsIndexCleanerSpec `json:"esIndexCleaner,omitempty"`
// +optional
EsRollover JaegerEsRolloverSpec `json:"esRollover,omitempty"`
// +optional
Elasticsearch ElasticsearchSpec `json:"elasticsearch,omitempty"`
// +optional
GRPCPlugin GRPCPluginSpec `json:"grpcPlugin,omitempty"`
}
// JaegerMetricsStorageSpec defines the Metrics storage options to be used for the query and collector.
type JaegerMetricsStorageSpec struct {
// +optional
Type JaegerStorageType `json:"type,omitempty"`
// +optional
ServerUrl string `json:"server-url,omitempty"`
}
// ElasticsearchSpec represents the ES configuration options that we pass down to the OpenShift Elasticsearch operator.
type ElasticsearchSpec struct {
// Name of the OpenShift Elasticsearch instance. Defaults to elasticsearch.
// +optional
Name string `json:"name,omitempty"`
// Whether Elasticsearch should be provisioned or not.
// +optional
DoNotProvision bool `json:"doNotProvision,omitempty"`
// Whether Elasticsearch cert management feature should be used.
// This is a preferred setting for new Jaeger deployments on OCP versions newer than 4.6.
// The cert management feature was added to Red Hat OpenShift Logging 5.2 in OCP 4.7.
// +optional
UseCertManagement *bool `json:"useCertManagement,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
Resources *v1.ResourceRequirements `json:"resources,omitempty"`
// +optional
NodeCount int32 `json:"nodeCount,omitempty"`
// +optional
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// +optional
Storage esv1.ElasticsearchStorageSpec `json:"storage,omitempty"`
// +optional
RedundancyPolicy esv1.RedundancyPolicyType `json:"redundancyPolicy,omitempty"`
// +optional
// +listType=atomic
Tolerations []v1.Toleration `json:"tolerations,omitempty"`
// +optional
ProxyResources *v1.ResourceRequirements `json:"proxyResources,omitempty"`
}
// JaegerCassandraCreateSchemaSpec holds the options related to the create-schema batch job
type JaegerCassandraCreateSchemaSpec struct {
// +optional
Enabled *bool `json:"enabled,omitempty"`
// Image specifies the container image to use to create the cassandra schema.
// The image is used by a Kubernetes Job and defaults to the image provided through the CLI flag "jaeger-cassandra-schema-image" (default: jaegertracing/jaeger-cassandra-schema).
// See here for the jaeger-provided image: https://github.com/jaegertracing/jaeger/tree/main/plugin/storage/cassandra
// +optional
Image string `json:"image,omitempty"`
// Datacenter is a collection of racks in the cassandra topology.
// Defaults to "test".
// +optional
Datacenter string `json:"datacenter,omitempty"`
// Mode controls the replication factor of your cassandra schema.
// Set it to "prod" (which is the default) to use the NetworkTopologyStrategy with a replication factor of 2, effectively meaning
// that at least 3 nodes are required in the cassandra cluster.
// When set to "test" the schema uses the SimpleStrategy with a replication factor of 1. You never want to do this in a production setup.
// +optional
Mode string `json:"mode,omitempty"`
// TraceTTL sets the TTL for your trace data
// +optional
TraceTTL string `json:"traceTTL,omitempty"`
// Timeout controls the Job deadline; it defaults to 1 day.
// Specify it with a value that can be parsed by time.ParseDuration, e.g. 24h or 120m.
// If the job does not succeed within that duration it transitions into a permanent error state.
// See https://github.com/jaegertracing/jaeger-kubernetes/issues/32 and
// https://github.com/jaegertracing/jaeger-kubernetes/pull/125
// +optional
Timeout string `json:"timeout,omitempty"`
// +optional
Affinity *v1.Affinity `json:"affinity,omitempty"`
// +optional
TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`
}
// GRPCPluginSpec represents the grpc-plugin configuration options.
type GRPCPluginSpec struct {
// This image is used as an init-container to copy plugin binary into /plugin directory.
// +optional
Image string `json:"image,omitempty"`
}
// JaegerDependenciesSpec defines the options to be used when running spark-dependencies.
type JaegerDependenciesSpec struct {
// +optional
Enabled *bool `json:"enabled,omitempty"`
// +optional
SparkMaster string `json:"sparkMaster,omitempty"`
// +optional
Schedule string `json:"schedule,omitempty"`
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
JavaOpts string `json:"javaOpts,omitempty"`
// +optional
CassandraClientAuthEnabled bool `json:"cassandraClientAuthEnabled,omitempty"`
// +optional
ElasticsearchClientNodeOnly *bool `json:"elasticsearchClientNodeOnly,omitempty"`
// +optional
ElasticsearchNodesWanOnly *bool `json:"elasticsearchNodesWanOnly,omitempty"`
// +optional
ElasticsearchTimeRange string `json:"elasticsearchTimeRange,omitempty"`
// +optional
TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`
// BackoffLimit sets the Kubernetes back-off limit
// +optional
BackoffLimit *int32 `json:"backoffLimit,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
}
// JaegerEsIndexCleanerSpec holds the options related to es-index-cleaner
type JaegerEsIndexCleanerSpec struct {
// +optional
Enabled *bool `json:"enabled,omitempty"`
// +optional
NumberOfDays *int `json:"numberOfDays,omitempty"`
// +optional
Schedule string `json:"schedule,omitempty"`
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
// +optional
Image string `json:"image,omitempty"`
// +optional
TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`
// BackoffLimit sets the Kubernetes back-off limit
// +optional
BackoffLimit *int32 `json:"backoffLimit,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
}
// JaegerEsRolloverSpec holds the options related to es-rollover
type JaegerEsRolloverSpec struct {
// +optional
Image string `json:"image,omitempty"`
// +optional
Schedule string `json:"schedule,omitempty"`
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
// +optional
Conditions string `json:"conditions,omitempty"`
// +optional
TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`
// BackoffLimit sets the Kubernetes back-off limit
// +optional
BackoffLimit *int32 `json:"backoffLimit,omitempty"`
// we parse it with time.ParseDuration
// +optional
ReadTTL string `json:"readTTL,omitempty"`
// +optional
JaegerCommonSpec `json:",inline,omitempty"`
}
//+kubebuilder:object:root=true
// JaegerList contains a list of Jaeger
type JaegerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Jaeger `json:"items"`
}
func init() {
SchemeBuilder.Register(&Jaeger{}, &JaegerList{})
}


@ -1,27 +0,0 @@
package v1
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestDefaultPrefix(t *testing.T) {
assert.Equal(t, "anystorage", JaegerStorageType("anystorage").OptionsPrefix())
}
func TestElasticsearchPrefix(t *testing.T) {
assert.Equal(t, "es", JaegerESStorage.OptionsPrefix())
}
func TestValidTypes(t *testing.T) {
assert.ElementsMatch(t, ValidStorageTypes(),
[]JaegerStorageType{
JaegerMemoryStorage,
JaegerCassandraStorage,
JaegerESStorage,
JaegerKafkaStorage,
JaegerBadgerStorage,
JaegerGRPCPluginStorage,
})
}


@ -1,164 +0,0 @@
package v1
import (
"context"
"fmt"
"regexp"
esv1 "github.com/openshift/elasticsearch-operator/apis/logging/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)
const (
defaultElasticsearchName = "elasticsearch"
)
// log is for logging in this package.
var (
jaegerlog = logf.Log.WithName("jaeger-resource")
cl client.Client
)
// SetupWebhookWithManager adds the Jaeger webhook to the manager.
func (j *Jaeger) SetupWebhookWithManager(mgr ctrl.Manager) error {
cl = mgr.GetClient()
return ctrl.NewWebhookManagedBy(mgr).
For(j).
Complete()
}
//+kubebuilder:webhook:path=/mutate-jaegertracing-io-v1-jaeger,mutating=true,failurePolicy=fail,sideEffects=None,groups=jaegertracing.io,resources=jaegers,verbs=create;update,versions=v1,name=mjaeger.kb.io,admissionReviewVersions={v1}
func (j *Jaeger) objsWithOptions() []*Options {
return []*Options{
&j.Spec.AllInOne.Options, &j.Spec.Query.Options, &j.Spec.Collector.Options,
&j.Spec.Ingester.Options, &j.Spec.Agent.Options, &j.Spec.Storage.Options,
}
}
// Default implements webhook.Defaulter so a webhook will be registered for the type
func (j *Jaeger) Default() {
jaegerlog.Info("default", "name", j.Name)
jaegerlog.Info("WARNING jaeger-agent is deprecated and will be removed in v1.55.0. See https://github.com/jaegertracing/jaeger/issues/4739", "component", "agent")
if j.Spec.Storage.Elasticsearch.Name == "" {
j.Spec.Storage.Elasticsearch.Name = defaultElasticsearchName
}
if ShouldInjectOpenShiftElasticsearchConfiguration(j.Spec.Storage) && j.Spec.Storage.Elasticsearch.DoNotProvision {
// check if ES instance exists
es := &esv1.Elasticsearch{}
err := cl.Get(context.Background(), types.NamespacedName{
Namespace: j.Namespace,
Name: j.Spec.Storage.Elasticsearch.Name,
}, es)
if errors.IsNotFound(err) {
return
}
j.Spec.Storage.Elasticsearch.NodeCount = OpenShiftElasticsearchNodeCount(es.Spec)
}
for _, opt := range j.objsWithOptions() {
optCopy := opt.DeepCopy()
if f := getAdditionalTLSFlags(optCopy.ToArgs()); f != nil {
newOpts := optCopy.GenericMap()
for k, v := range f {
newOpts[k] = v
}
if err := opt.parse(newOpts); err != nil {
jaegerlog.Error(err, "failed to parse options", "name", j.Name, "method", "Option.Parse")
}
}
}
}
// TODO(user): change verbs to "verbs=create;update;delete" if you want to enable deletion validation.
//+kubebuilder:webhook:path=/validate-jaegertracing-io-v1-jaeger,mutating=false,failurePolicy=fail,sideEffects=None,groups=jaegertracing.io,resources=jaegers,verbs=create;update,versions=v1,name=vjaeger.kb.io,admissionReviewVersions={v1}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (j *Jaeger) ValidateCreate() (admission.Warnings, error) {
jaegerlog.Info("validate create", "name", j.Name)
return j.ValidateUpdate(nil)
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (j *Jaeger) ValidateUpdate(_ runtime.Object) (admission.Warnings, error) {
jaegerlog.Info("validate update", "name", j.Name)
if ShouldInjectOpenShiftElasticsearchConfiguration(j.Spec.Storage) && j.Spec.Storage.Elasticsearch.DoNotProvision {
// check if ES instance exists
es := &esv1.Elasticsearch{}
err := cl.Get(context.Background(), types.NamespacedName{
Namespace: j.Namespace,
Name: j.Spec.Storage.Elasticsearch.Name,
}, es)
if errors.IsNotFound(err) {
return nil, fmt.Errorf("elasticsearch instance not found: %w", err)
}
}
for _, opt := range j.objsWithOptions() {
got := opt.DeepCopy().ToArgs()
if f := getAdditionalTLSFlags(got); f != nil {
return nil, fmt.Errorf("tls flags incomplete, got: %v", got)
}
}
return nil, nil
}
// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (j *Jaeger) ValidateDelete() (admission.Warnings, error) {
jaegerlog.Info("validate delete", "name", j.Name)
return nil, nil
}
// OpenShiftElasticsearchNodeCount returns total node count of Elasticsearch nodes.
func OpenShiftElasticsearchNodeCount(spec esv1.ElasticsearchSpec) int32 {
nodes := int32(0)
for i := 0; i < len(spec.Nodes); i++ {
nodes += spec.Nodes[i].NodeCount
}
return nodes
}
// ShouldInjectOpenShiftElasticsearchConfiguration returns true if OpenShift Elasticsearch is used and its configuration should be used.
func ShouldInjectOpenShiftElasticsearchConfiguration(s JaegerStorageSpec) bool {
if s.Type != JaegerESStorage {
return false
}
_, ok := s.Options.Map()["es.server-urls"]
return !ok
}
var (
tlsFlag = regexp.MustCompile("--.*tls.*=")
tlsFlagIdx = regexp.MustCompile("--.*tls")
tlsEnabledExists = regexp.MustCompile("--.*tls.enabled")
)
// getAdditionalTLSFlags returns additional tls arguments based on the argument
// list. If no additional argument is needed, nil is returned.
func getAdditionalTLSFlags(args []string) map[string]interface{} {
var res map[string]interface{}
for _, arg := range args {
a := []byte(arg)
if tlsEnabledExists.Match(a) {
// NOTE: if flag exists, we are done.
return nil
}
if tlsFlag.Match(a) && res == nil {
idx := tlsFlagIdx.FindIndex(a)
res = make(map[string]interface{})
res[arg[idx[0]+2:idx[1]]+".enabled"] = "true"
}
}
return res
}
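The flag inference above can be exercised in isolation. The sketch below copies the three regular expressions and the loop so it runs standalone; the kafka argument values are illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same expressions as the webhook code above.
var (
	tlsFlag          = regexp.MustCompile("--.*tls.*=")
	tlsFlagIdx       = regexp.MustCompile("--.*tls")
	tlsEnabledExists = regexp.MustCompile("--.*tls.enabled")
)

// additionalTLSFlags mirrors getAdditionalTLSFlags: when a --<prefix>.tls.*
// flag is present but no --<prefix>.tls.enabled flag is, it returns a map
// that switches TLS on for the first matching prefix.
func additionalTLSFlags(args []string) map[string]interface{} {
	var res map[string]interface{}
	for _, arg := range args {
		a := []byte(arg)
		if tlsEnabledExists.Match(a) {
			return nil // an explicit tls.enabled flag always wins
		}
		if tlsFlag.Match(a) && res == nil {
			idx := tlsFlagIdx.FindIndex(a)
			res = map[string]interface{}{arg[idx[0]+2:idx[1]] + ".enabled": "true"}
		}
	}
	return res
}

func main() {
	// Inferred: kafka.consumer.tls.enabled=true (argument values are illustrative).
	fmt.Println(additionalTLSFlags([]string{"--kafka.consumer.tls.ca=/certs/ca.crt"}))
	// Nothing inferred; the flag is already explicit.
	fmt.Println(additionalTLSFlags([]string{"--kafka.consumer.tls.enabled=true"}))
}
```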


@ -1,369 +0,0 @@
package v1
import (
"fmt"
"testing"
"github.com/google/go-cmp/cmp"
esv1 "github.com/openshift/elasticsearch-operator/apis/logging/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
var (
_ webhook.Defaulter = &Jaeger{}
_ webhook.Validator = &Jaeger{}
)
func TestDefault(t *testing.T) {
tests := []struct {
name string
objs []runtime.Object
j *Jaeger
expected *Jaeger
}{
{
name: "set missing ES name",
j: &Jaeger{
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Elasticsearch: ElasticsearchSpec{
Name: "",
},
},
},
},
expected: &Jaeger{
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Elasticsearch: ElasticsearchSpec{
Name: "elasticsearch",
},
},
},
},
},
{
name: "set ES node count",
objs: []runtime.Object{
&corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: "project1",
},
},
&esv1.Elasticsearch{
ObjectMeta: metav1.ObjectMeta{
Name: "my-es",
Namespace: "project1",
},
Spec: esv1.ElasticsearchSpec{
Nodes: []esv1.ElasticsearchNode{
{
NodeCount: 3,
},
},
},
},
},
j: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
DoNotProvision: true,
},
},
},
},
expected: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
NodeCount: 3,
DoNotProvision: true,
},
},
},
},
},
{
name: "do not set ES node count",
j: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
DoNotProvision: false,
NodeCount: 1,
},
},
},
},
expected: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
NodeCount: 1,
DoNotProvision: false,
},
},
},
},
},
{
name: "missing tls enable flag",
j: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: JaegerMemoryStorage,
Options: NewOptions(map[string]interface{}{"stuff.tls.test": "something"}),
},
},
},
expected: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: JaegerMemoryStorage,
Options: NewOptions(
map[string]interface{}{
"stuff.tls.test": "something",
"stuff.tls.enabled": "true",
},
),
Elasticsearch: ElasticsearchSpec{
Name: defaultElasticsearchName,
},
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
require.NoError(t, esv1.AddToScheme(scheme.Scheme))
require.NoError(t, AddToScheme(scheme.Scheme))
fakeCl := fake.NewClientBuilder().WithRuntimeObjects(test.objs...).Build()
cl = fakeCl
test.j.Default()
assert.Equal(t, test.expected, test.j)
})
}
}
func TestValidateDelete(t *testing.T) {
warnings, err := new(Jaeger).ValidateDelete()
assert.Nil(t, warnings)
require.NoError(t, err)
}
func TestValidate(t *testing.T) {
tests := []struct {
name string
objsToCreate []runtime.Object
current *Jaeger
err string
}{
{
name: "ES instance exists",
objsToCreate: []runtime.Object{
&corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: "project1",
},
},
&esv1.Elasticsearch{
ObjectMeta: metav1.ObjectMeta{
Name: "my-es",
Namespace: "project1",
},
Spec: esv1.ElasticsearchSpec{
Nodes: []esv1.ElasticsearchNode{
{
NodeCount: 3,
},
},
},
},
},
current: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
DoNotProvision: true,
},
},
},
},
},
{
name: "ES instance does not exist",
objsToCreate: []runtime.Object{
&corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: "project1",
},
},
},
current: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Type: "elasticsearch",
Elasticsearch: ElasticsearchSpec{
Name: "my-es",
DoNotProvision: true,
},
},
},
},
err: `elasticsearch instance not found: elasticsearchs.logging.openshift.io "my-es" not found`,
},
{
name: "missing tls options",
current: &Jaeger{
ObjectMeta: metav1.ObjectMeta{
Namespace: "project1",
},
Spec: JaegerSpec{
Storage: JaegerStorageSpec{
Options: NewOptions(map[string]interface{}{
"something.tls.else": "fails",
}),
Type: JaegerMemoryStorage,
},
},
},
err: `tls flags incomplete, got: [--something.tls.else=fails]`,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
require.NoError(t, esv1.AddToScheme(scheme.Scheme))
require.NoError(t, AddToScheme(scheme.Scheme))
fakeCl := fake.NewClientBuilder().WithRuntimeObjects(test.objsToCreate...).Build()
cl = fakeCl
warnings, err := test.current.ValidateCreate()
if test.err != "" {
require.Error(t, err)
assert.Equal(t, test.err, err.Error())
} else {
require.NoError(t, err)
}
assert.Nil(t, warnings)
})
}
}
func TestShouldDeployElasticsearch(t *testing.T) {
tests := []struct {
j JaegerStorageSpec
expected bool
}{
{j: JaegerStorageSpec{}},
{j: JaegerStorageSpec{Type: JaegerCassandraStorage}},
{j: JaegerStorageSpec{Type: JaegerESStorage, Options: NewOptions(map[string]interface{}{"es.server-urls": "foo"})}},
{j: JaegerStorageSpec{Type: JaegerESStorage}, expected: true},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
assert.Equal(t, test.expected, ShouldInjectOpenShiftElasticsearchConfiguration(test.j))
})
}
}
func TestGetAdditionalTLSFlags(t *testing.T) {
tt := []struct {
name string
args []string
expect map[string]interface{}
}{
{
name: "no tls flag",
args: []string{"--something.else"},
expect: nil,
},
{
name: "already enabled",
args: []string{"--something.tls.enabled=true", "--something.tls.else=abc"},
expect: nil,
},
{
name: "is disabled",
args: []string{"--tls.enabled=false", "--something.else", "--something.tls.else=abc"},
expect: nil,
},
{
name: "must be enabled",
args: []string{"--something.tls.else=abc"},
expect: map[string]interface{}{
"something.tls.enabled": "true",
},
},
{
// NOTE: we want to avoid something like:
// --kafka.consumer.authentication=tls.enabled=true
name: "enable consumer tls",
args: []string{
"--es.server-urls=http://elasticsearch:9200",
"--kafka.consumer.authentication=tls",
"--kafka.consumer.brokers=my-cluster-kafka-bootstrap:9093",
"--kafka.consumer.tls.ca=/var/run/secrets/cluster-ca/ca.crt",
"--kafka.consumer.tls.cert=/var/run/secrets/kafkauser/user.crt",
"--kafka.consumer.tls.key=/var/run/secrets/kafkauser/user.key",
},
expect: map[string]interface{}{
"kafka.consumer.tls.enabled": "true",
},
},
}
for _, tc := range tt {
t.Run(tc.name, func(t *testing.T) {
got := getAdditionalTLSFlags(tc.args)
if !cmp.Equal(tc.expect, got) {
t.Error("err:", cmp.Diff(tc.expect, got))
}
})
}
}


@ -1,14 +0,0 @@
package v1
import (
"github.com/go-logr/logr"
logf "sigs.k8s.io/controller-runtime/pkg/log"
)
// Logger returns a logger filled with context-related fields, such as Name and Namespace
func (j *Jaeger) Logger() logr.Logger {
return logf.Log.WithValues(
"instance", j.Name,
"namespace", j.Namespace,
)
}


@ -1,175 +0,0 @@
package v1
import (
"bytes"
"encoding/json"
"fmt"
"strings"
)
// Values holds a map with string keys, where each value is either a string or a slice of strings
type Values map[string]interface{}
// DeepCopy returns a deep copy of the Values type
func (v *Values) DeepCopy() *Values {
out := make(Values, len(*v))
for key, val := range *v {
switch val := val.(type) {
case string:
out[key] = val
case []string:
out[key] = append([]string(nil), val...)
}
}
return &out
}
// Options defines a common options parameter to the different structs
type Options struct {
opts Values `json:"-"`
json *[]byte `json:"-"`
}
// NewOptions builds a new Options object based on the given map
func NewOptions(o map[string]interface{}) Options {
options := Options{}
options.parse(o)
return options
}
// Filter creates a new Options object with just the elements identified by the supplied prefix
func (o *Options) Filter(prefix string) Options {
options := Options{}
options.opts = make(map[string]interface{})
archivePrefix := prefix + "-archive."
prefix += "."
for k, v := range o.opts {
if strings.HasPrefix(k, prefix) || strings.HasPrefix(k, archivePrefix) {
options.opts[k] = v
}
}
return options
}
// UnmarshalJSON implements an alternative parser for this field
func (o *Options) UnmarshalJSON(b []byte) error {
var entries map[string]interface{}
d := json.NewDecoder(bytes.NewReader(b))
d.UseNumber()
if err := d.Decode(&entries); err != nil {
return err
}
if err := o.parse(entries); err != nil {
return err
}
o.json = &b
return nil
}
// MarshalJSON specifies how to convert this object into JSON
func (o Options) MarshalJSON() ([]byte, error) {
if o.json != nil {
return *o.json, nil
}
if len(o.opts) == 0 {
return []byte("{}"), nil
}
return json.Marshal(o.opts)
}
func (o *Options) parse(entries map[string]interface{}) error {
o.json = nil
o.opts = make(map[string]interface{})
var err error
for k, v := range entries {
o.opts, err = entry(o.opts, k, v)
if err != nil {
return err
}
}
return nil
}
func entry(entries map[string]interface{}, key string, value interface{}) (map[string]interface{}, error) {
switch val := value.(type) {
case map[string]interface{}:
var err error
for k, v := range val {
entries, err = entry(entries, fmt.Sprintf("%s.%v", key, k), v)
if err != nil {
return nil, err
}
}
case []interface{}: // NOTE: content of the argument list is not returned as []string when decoding json.
values := make([]string, 0, len(val))
for _, v := range val {
str, ok := v.(string)
if !ok {
return nil, fmt.Errorf("invalid option type, expect: string, got: %T", v)
}
values = append(values, str)
}
entries[key] = values
case interface{}:
entries[key] = fmt.Sprintf("%v", value)
}
return entries, nil
}
// ToArgs converts the options to a value suitable for the Container.Args field
func (o *Options) ToArgs() []string {
if len(o.opts) > 0 {
args := make([]string, 0, len(o.opts))
for k, v := range o.opts {
switch v := v.(type) {
case string:
args = append(args, fmt.Sprintf("--%s=%v", k, v))
case []string:
for _, vv := range v {
args = append(args, fmt.Sprintf("--%s=%v", k, vv))
}
}
}
return args
}
return nil
}
// Map returns a map representing the option entries. Items are flattened, with dots as separators. For instance
// an option "cassandra" with a nested "servers" object becomes an entry with the key "cassandra.servers"
func (o *Options) Map() map[string]interface{} {
return o.opts
}
// StringMap returns a map representing the option entries, excluding entries that have multiple values.
// Items are flattened, with dots as separators in the same way as Map does.
func (o *Options) StringMap() map[string]string {
smap := make(map[string]string)
for k, v := range o.opts {
switch v := v.(type) {
case string:
smap[k] = v
}
}
return smap
}
// GenericMap returns the map representing the option entries as interface{}, suitable for usage with NewOptions()
func (o *Options) GenericMap() map[string]interface{} {
out := make(map[string]interface{})
for k, v := range o.opts {
out[k] = v
}
return out
}
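The parse/entry pair above flattens nested JSON objects into dotted keys before they are rendered as --key=value container args. A standalone sketch of that flattening technique (flatten is a hypothetical helper, not the operator's API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// flatten walks a decoded JSON object and emits dotted keys, the same shape
// Options.parse/entry build ({"memory":{"max-traces":10000}} becomes
// "memory.max-traces"). Scalars are stringified with %v, as entry does.
func flatten(prefix string, v interface{}, out map[string]string) {
	switch val := v.(type) {
	case map[string]interface{}:
		for k, child := range val {
			key := k
			if prefix != "" {
				key = prefix + "." + k
			}
			flatten(key, child, out)
		}
	default:
		out[prefix] = fmt.Sprintf("%v", val)
	}
}

func main() {
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(`{"log-level":"debug","memory":{"max-traces":10000}}`), &doc); err != nil {
		panic(err)
	}
	out := map[string]string{}
	flatten("", doc, out)
	args := make([]string, 0, len(out))
	for k, v := range out {
		args = append(args, "--"+k+"="+v)
	}
	sort.Strings(args)
	fmt.Println(args) // [--log-level=debug --memory.max-traces=10000]
}
```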


@ -1,189 +0,0 @@
package v1
import (
"encoding/json"
"sort"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSimpleOption(t *testing.T) {
o := Options{}
o.UnmarshalJSON([]byte(`{"key": "value"}`))
args := o.ToArgs()
assert.Equal(t, "--key=value", args[0])
}
func TestNoOptions(t *testing.T) {
o := Options{}
assert.Empty(t, o.ToArgs())
}
func TestNestedOption(t *testing.T) {
o := NewOptions(nil)
o.UnmarshalJSON([]byte(`{"log-level": "debug", "memory": {"max-traces": 10000}}`))
args := o.ToArgs()
assert.Len(t, args, 2)
sort.Strings(args)
assert.Equal(t, "--log-level=debug", args[0])
assert.Equal(t, "--memory.max-traces=10000", args[1])
}
func TestMarshalling(t *testing.T) {
o := NewOptions(map[string]interface{}{
"es.server-urls": "http://elasticsearch.default.svc:9200",
"es.username": "elastic",
"es.password": "changeme",
})
b, err := json.Marshal(o)
require.NoError(t, err)
s := string(b)
assert.Contains(t, s, `"es.password":"changeme"`)
assert.Contains(t, s, `"es.server-urls":"http://elasticsearch.default.svc:9200"`)
assert.Contains(t, s, `"es.username":"elastic"`)
}
func TestMarshallingWithFilter(t *testing.T) {
o := NewOptions(map[string]interface{}{
"es.server-urls": "http://elasticsearch.default.svc:9200",
"memory.max-traces": "50000",
})
o = o.Filter("memory")
args := o.ToArgs()
assert.Len(t, args, 1)
assert.Equal(t, "50000", o.Map()["memory.max-traces"])
}
func TestMultipleSubValues(t *testing.T) {
o := NewOptions(nil)
o.UnmarshalJSON([]byte(`{"es": {"server-urls": "http://elasticsearch:9200", "username": "elastic", "password": "changeme"}}`))
args := o.ToArgs()
assert.Len(t, args, 3)
}
func TestUnmarshalToArgs(t *testing.T) {
tests := []struct {
in string
args []string
err string
}{
{in: `^`, err: "invalid character '^' looking for beginning of value"},
{
in: `{"a": 5000000000, "b": 15.222, "c":true, "d": "foo"}`,
args: []string{"--a=5000000000", "--b=15.222", "--c=true", "--d=foo"},
},
{
in: `{"a": {"b": {"c": [{"d": "e", "f": {"g": {"h": "i"}}}]}}}`,
err: "invalid option type, expect: string, got: map[string]interface {}",
},
}
for _, test := range tests {
opts := Options{}
err := opts.UnmarshalJSON([]byte(test.in))
if test.err != "" {
require.EqualError(t, err, test.err)
} else {
require.NoError(t, err)
args := opts.ToArgs()
sort.SliceStable(args, func(i, j int) bool {
return args[i] < args[j]
})
assert.Equal(t, test.args, args)
}
}
}
func TestMultipleSubValuesWithFilter(t *testing.T) {
o := NewOptions(nil)
o.UnmarshalJSON([]byte(`{"memory": {"max-traces": "50000"}, "es": {"server-urls": "http://elasticsearch:9200", "username": "elastic", "password": "changeme"}}`))
o = o.Filter("memory")
args := o.ToArgs()
assert.Len(t, args, 1)
assert.Equal(t, "50000", o.Map()["memory.max-traces"])
}
func TestMultipleSubValuesWithFilterWithArchive(t *testing.T) {
o := NewOptions(nil)
o.UnmarshalJSON([]byte(`{"memory": {"max-traces": "50000"}, "es": {"server-urls": "http://elasticsearch:9200", "username": "elastic", "password": "changeme"}, "es-archive": {"server-urls": "http://elasticsearch2:9200"}}`))
o = o.Filter("es")
args := o.ToArgs()
assert.Len(t, args, 4)
assert.Equal(t, "http://elasticsearch:9200", o.Map()["es.server-urls"])
assert.Equal(t, "http://elasticsearch2:9200", o.Map()["es-archive.server-urls"])
assert.Equal(t, "elastic", o.Map()["es.username"])
assert.Equal(t, "changeme", o.Map()["es.password"])
}
func TestExposedMap(t *testing.T) {
o := NewOptions(nil)
o.UnmarshalJSON([]byte(`{"cassandra": {"servers": "cassandra:9042"}}`))
assert.Equal(t, "cassandra:9042", o.Map()["cassandra.servers"])
}
func TestMarshallRaw(t *testing.T) {
json := []byte(`{"cassandra": {"servers": "cassandra:9042"}}`)
o := NewOptions(nil)
o.json = &json
bytes, err := o.MarshalJSON()
require.NoError(t, err)
assert.Equal(t, bytes, json)
}
func TestMarshallEmpty(t *testing.T) {
o := NewOptions(nil)
json := []byte(`{}`)
bytes, err := o.MarshalJSON()
require.NoError(t, err)
assert.Equal(t, bytes, json)
}
func TestUpdate(t *testing.T) {
// prepare
o := NewOptions(map[string]interface{}{
"key": "original",
})
// test
o.Map()["key"] = "new"
// verify
assert.Equal(t, "new", o.opts["key"])
}
func TestStringMap(t *testing.T) {
o := NewOptions(nil)
err := o.UnmarshalJSON([]byte(`{"firstsarg":"v1", "additional-headers":["whatever:thing", "access-control-allow-origin:blerg"]}`))
require.NoError(t, err)
expected := map[string]string{"firstsarg": "v1"}
strMap := o.StringMap()
assert.Len(t, strMap, 1)
assert.Equal(t, expected, strMap)
}
func TestDeepCopy(t *testing.T) {
o1 := NewOptions(nil)
err := o1.UnmarshalJSON([]byte(`{"firstsarg":"v1", "additional-headers":["whatever:thing", "access-control-allow-origin:blerg"]}`))
require.NoError(t, err)
copy := o1.opts.DeepCopy()
assert.Equal(t, &(o1.opts), copy)
}
func TestRepetitiveArguments(t *testing.T) {
o := NewOptions(nil)
err := o.UnmarshalJSON([]byte(`{"firstsarg":"v1", "additional-headers":["whatever:thing", "access-control-allow-origin:blerg"]}`))
require.NoError(t, err)
expected := []string{"--additional-headers=access-control-allow-origin:blerg", "--additional-headers=whatever:thing", "--firstsarg=v1"}
args := o.ToArgs()
sort.SliceStable(args, func(i, j int) bool {
return args[i] < args[j]
})
assert.Len(t, args, 3)
assert.Equal(t, expected, args)
}

@@ -1,821 +0,0 @@
//go:build !ignore_autogenerated

// Code generated by controller-gen. DO NOT EDIT.

package v1
import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoScaleSpec) DeepCopyInto(out *AutoScaleSpec) {
*out = *in
if in.Autoscale != nil {
in, out := &in.Autoscale, &out.Autoscale
*out = new(bool)
**out = **in
}
if in.MinReplicas != nil {
in, out := &in.MinReplicas, &out.MinReplicas
*out = new(int32)
**out = **in
}
if in.MaxReplicas != nil {
in, out := &in.MaxReplicas, &out.MaxReplicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoScaleSpec.
func (in *AutoScaleSpec) DeepCopy() *AutoScaleSpec {
if in == nil {
return nil
}
out := new(AutoScaleSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ElasticsearchSpec) DeepCopyInto(out *ElasticsearchSpec) {
*out = *in
if in.UseCertManagement != nil {
in, out := &in.UseCertManagement, &out.UseCertManagement
*out = new(bool)
**out = **in
}
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = new(corev1.ResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
in.Storage.DeepCopyInto(&out.Storage)
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]corev1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.ProxyResources != nil {
in, out := &in.ProxyResources, &out.ProxyResources
*out = new(corev1.ResourceRequirements)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ElasticsearchSpec.
func (in *ElasticsearchSpec) DeepCopy() *ElasticsearchSpec {
if in == nil {
return nil
}
out := new(ElasticsearchSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FreeForm) DeepCopyInto(out *FreeForm) {
*out = *in
if in.json != nil {
in, out := &in.json, &out.json
*out = new([]byte)
if **in != nil {
in, out := *in, *out
*out = make([]byte, len(*in))
copy(*out, *in)
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FreeForm.
func (in *FreeForm) DeepCopy() *FreeForm {
if in == nil {
return nil
}
out := new(FreeForm)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GRPCPluginSpec) DeepCopyInto(out *GRPCPluginSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GRPCPluginSpec.
func (in *GRPCPluginSpec) DeepCopy() *GRPCPluginSpec {
if in == nil {
return nil
}
out := new(GRPCPluginSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Jaeger) DeepCopyInto(out *Jaeger) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Jaeger.
func (in *Jaeger) DeepCopy() *Jaeger {
if in == nil {
return nil
}
out := new(Jaeger)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Jaeger) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerAgentSpec) DeepCopyInto(out *JaegerAgentSpec) {
*out = *in
in.Options.DeepCopyInto(&out.Options)
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
in.Config.DeepCopyInto(&out.Config)
if in.SidecarSecurityContext != nil {
in, out := &in.SidecarSecurityContext, &out.SidecarSecurityContext
*out = new(corev1.SecurityContext)
(*in).DeepCopyInto(*out)
}
if in.HostNetwork != nil {
in, out := &in.HostNetwork, &out.HostNetwork
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerAgentSpec.
func (in *JaegerAgentSpec) DeepCopy() *JaegerAgentSpec {
if in == nil {
return nil
}
out := new(JaegerAgentSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerAllInOneSpec) DeepCopyInto(out *JaegerAllInOneSpec) {
*out = *in
in.Options.DeepCopyInto(&out.Options)
in.Config.DeepCopyInto(&out.Config)
out.MetricsStorage = in.MetricsStorage
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
if in.TracingEnabled != nil {
in, out := &in.TracingEnabled, &out.TracingEnabled
*out = new(bool)
**out = **in
}
if in.Strategy != nil {
in, out := &in.Strategy, &out.Strategy
*out = new(appsv1.DeploymentStrategy)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerAllInOneSpec.
func (in *JaegerAllInOneSpec) DeepCopy() *JaegerAllInOneSpec {
if in == nil {
return nil
}
out := new(JaegerAllInOneSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerCassandraCreateSchemaSpec) DeepCopyInto(out *JaegerCassandraCreateSchemaSpec) {
*out = *in
if in.Enabled != nil {
in, out := &in.Enabled, &out.Enabled
*out = new(bool)
**out = **in
}
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(corev1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.TTLSecondsAfterFinished != nil {
in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerCassandraCreateSchemaSpec.
func (in *JaegerCassandraCreateSchemaSpec) DeepCopy() *JaegerCassandraCreateSchemaSpec {
if in == nil {
return nil
}
out := new(JaegerCassandraCreateSchemaSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerCollectorSpec) DeepCopyInto(out *JaegerCollectorSpec) {
*out = *in
in.AutoScaleSpec.DeepCopyInto(&out.AutoScaleSpec)
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int32)
**out = **in
}
in.Options.DeepCopyInto(&out.Options)
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
in.Config.DeepCopyInto(&out.Config)
if in.Strategy != nil {
in, out := &in.Strategy, &out.Strategy
*out = new(appsv1.DeploymentStrategy)
(*in).DeepCopyInto(*out)
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Lifecycle != nil {
in, out := &in.Lifecycle, &out.Lifecycle
*out = new(corev1.Lifecycle)
(*in).DeepCopyInto(*out)
}
if in.TerminationGracePeriodSeconds != nil {
in, out := &in.TerminationGracePeriodSeconds, &out.TerminationGracePeriodSeconds
*out = new(int64)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerCollectorSpec.
func (in *JaegerCollectorSpec) DeepCopy() *JaegerCollectorSpec {
if in == nil {
return nil
}
out := new(JaegerCollectorSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerCommonSpec) DeepCopyInto(out *JaegerCommonSpec) {
*out = *in
if in.Volumes != nil {
in, out := &in.Volumes, &out.Volumes
*out = make([]corev1.Volume, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts
*out = make([]corev1.VolumeMount, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
in.Resources.DeepCopyInto(&out.Resources)
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(corev1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]corev1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.SecurityContext != nil {
in, out := &in.SecurityContext, &out.SecurityContext
*out = new(corev1.PodSecurityContext)
(*in).DeepCopyInto(*out)
}
if in.ContainerSecurityContext != nil {
in, out := &in.ContainerSecurityContext, &out.ContainerSecurityContext
*out = new(corev1.SecurityContext)
(*in).DeepCopyInto(*out)
}
if in.LivenessProbe != nil {
in, out := &in.LivenessProbe, &out.LivenessProbe
*out = new(corev1.Probe)
(*in).DeepCopyInto(*out)
}
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]corev1.LocalObjectReference, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerCommonSpec.
func (in *JaegerCommonSpec) DeepCopy() *JaegerCommonSpec {
if in == nil {
return nil
}
out := new(JaegerCommonSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerDependenciesSpec) DeepCopyInto(out *JaegerDependenciesSpec) {
*out = *in
if in.Enabled != nil {
in, out := &in.Enabled, &out.Enabled
*out = new(bool)
**out = **in
}
if in.SuccessfulJobsHistoryLimit != nil {
in, out := &in.SuccessfulJobsHistoryLimit, &out.SuccessfulJobsHistoryLimit
*out = new(int32)
**out = **in
}
if in.ElasticsearchClientNodeOnly != nil {
in, out := &in.ElasticsearchClientNodeOnly, &out.ElasticsearchClientNodeOnly
*out = new(bool)
**out = **in
}
if in.ElasticsearchNodesWanOnly != nil {
in, out := &in.ElasticsearchNodesWanOnly, &out.ElasticsearchNodesWanOnly
*out = new(bool)
**out = **in
}
if in.TTLSecondsAfterFinished != nil {
in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished
*out = new(int32)
**out = **in
}
if in.BackoffLimit != nil {
in, out := &in.BackoffLimit, &out.BackoffLimit
*out = new(int32)
**out = **in
}
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerDependenciesSpec.
func (in *JaegerDependenciesSpec) DeepCopy() *JaegerDependenciesSpec {
if in == nil {
return nil
}
out := new(JaegerDependenciesSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerEsIndexCleanerSpec) DeepCopyInto(out *JaegerEsIndexCleanerSpec) {
*out = *in
if in.Enabled != nil {
in, out := &in.Enabled, &out.Enabled
*out = new(bool)
**out = **in
}
if in.NumberOfDays != nil {
in, out := &in.NumberOfDays, &out.NumberOfDays
*out = new(int)
**out = **in
}
if in.SuccessfulJobsHistoryLimit != nil {
in, out := &in.SuccessfulJobsHistoryLimit, &out.SuccessfulJobsHistoryLimit
*out = new(int32)
**out = **in
}
if in.TTLSecondsAfterFinished != nil {
in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished
*out = new(int32)
**out = **in
}
if in.BackoffLimit != nil {
in, out := &in.BackoffLimit, &out.BackoffLimit
*out = new(int32)
**out = **in
}
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerEsIndexCleanerSpec.
func (in *JaegerEsIndexCleanerSpec) DeepCopy() *JaegerEsIndexCleanerSpec {
if in == nil {
return nil
}
out := new(JaegerEsIndexCleanerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerEsRolloverSpec) DeepCopyInto(out *JaegerEsRolloverSpec) {
*out = *in
if in.SuccessfulJobsHistoryLimit != nil {
in, out := &in.SuccessfulJobsHistoryLimit, &out.SuccessfulJobsHistoryLimit
*out = new(int32)
**out = **in
}
if in.TTLSecondsAfterFinished != nil {
in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished
*out = new(int32)
**out = **in
}
if in.BackoffLimit != nil {
in, out := &in.BackoffLimit, &out.BackoffLimit
*out = new(int32)
**out = **in
}
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerEsRolloverSpec.
func (in *JaegerEsRolloverSpec) DeepCopy() *JaegerEsRolloverSpec {
if in == nil {
return nil
}
out := new(JaegerEsRolloverSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerIngesterSpec) DeepCopyInto(out *JaegerIngesterSpec) {
*out = *in
in.AutoScaleSpec.DeepCopyInto(&out.AutoScaleSpec)
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int32)
**out = **in
}
in.Options.DeepCopyInto(&out.Options)
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
in.Config.DeepCopyInto(&out.Config)
if in.Strategy != nil {
in, out := &in.Strategy, &out.Strategy
*out = new(appsv1.DeploymentStrategy)
(*in).DeepCopyInto(*out)
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerIngesterSpec.
func (in *JaegerIngesterSpec) DeepCopy() *JaegerIngesterSpec {
if in == nil {
return nil
}
out := new(JaegerIngesterSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerIngressOpenShiftSpec) DeepCopyInto(out *JaegerIngressOpenShiftSpec) {
*out = *in
if in.SAR != nil {
in, out := &in.SAR, &out.SAR
*out = new(string)
**out = **in
}
if in.SkipLogout != nil {
in, out := &in.SkipLogout, &out.SkipLogout
*out = new(bool)
**out = **in
}
if in.Timeout != nil {
in, out := &in.Timeout, &out.Timeout
*out = new(metav1.Duration)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerIngressOpenShiftSpec.
func (in *JaegerIngressOpenShiftSpec) DeepCopy() *JaegerIngressOpenShiftSpec {
if in == nil {
return nil
}
out := new(JaegerIngressOpenShiftSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerIngressSpec) DeepCopyInto(out *JaegerIngressSpec) {
*out = *in
if in.Enabled != nil {
in, out := &in.Enabled, &out.Enabled
*out = new(bool)
**out = **in
}
in.Openshift.DeepCopyInto(&out.Openshift)
if in.Hosts != nil {
in, out := &in.Hosts, &out.Hosts
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.TLS != nil {
in, out := &in.TLS, &out.TLS
*out = make([]JaegerIngressTLSSpec, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
in.Options.DeepCopyInto(&out.Options)
if in.IngressClassName != nil {
in, out := &in.IngressClassName, &out.IngressClassName
*out = new(string)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerIngressSpec.
func (in *JaegerIngressSpec) DeepCopy() *JaegerIngressSpec {
if in == nil {
return nil
}
out := new(JaegerIngressSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerIngressTLSSpec) DeepCopyInto(out *JaegerIngressTLSSpec) {
*out = *in
if in.Hosts != nil {
in, out := &in.Hosts, &out.Hosts
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerIngressTLSSpec.
func (in *JaegerIngressTLSSpec) DeepCopy() *JaegerIngressTLSSpec {
if in == nil {
return nil
}
out := new(JaegerIngressTLSSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerList) DeepCopyInto(out *JaegerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Jaeger, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerList.
func (in *JaegerList) DeepCopy() *JaegerList {
if in == nil {
return nil
}
out := new(JaegerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *JaegerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerMetricsStorageSpec) DeepCopyInto(out *JaegerMetricsStorageSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerMetricsStorageSpec.
func (in *JaegerMetricsStorageSpec) DeepCopy() *JaegerMetricsStorageSpec {
if in == nil {
return nil
}
out := new(JaegerMetricsStorageSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerQuerySpec) DeepCopyInto(out *JaegerQuerySpec) {
*out = *in
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int32)
**out = **in
}
in.Options.DeepCopyInto(&out.Options)
out.MetricsStorage = in.MetricsStorage
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
if in.TracingEnabled != nil {
in, out := &in.TracingEnabled, &out.TracingEnabled
*out = new(bool)
**out = **in
}
if in.Strategy != nil {
in, out := &in.Strategy, &out.Strategy
*out = new(appsv1.DeploymentStrategy)
(*in).DeepCopyInto(*out)
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerQuerySpec.
func (in *JaegerQuerySpec) DeepCopy() *JaegerQuerySpec {
if in == nil {
return nil
}
out := new(JaegerQuerySpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerSamplingSpec) DeepCopyInto(out *JaegerSamplingSpec) {
*out = *in
in.Options.DeepCopyInto(&out.Options)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerSamplingSpec.
func (in *JaegerSamplingSpec) DeepCopy() *JaegerSamplingSpec {
if in == nil {
return nil
}
out := new(JaegerSamplingSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerSpec) DeepCopyInto(out *JaegerSpec) {
*out = *in
in.AllInOne.DeepCopyInto(&out.AllInOne)
in.Query.DeepCopyInto(&out.Query)
in.Collector.DeepCopyInto(&out.Collector)
in.Ingester.DeepCopyInto(&out.Ingester)
in.Agent.DeepCopyInto(&out.Agent)
in.UI.DeepCopyInto(&out.UI)
in.Sampling.DeepCopyInto(&out.Sampling)
in.Storage.DeepCopyInto(&out.Storage)
in.Ingress.DeepCopyInto(&out.Ingress)
in.JaegerCommonSpec.DeepCopyInto(&out.JaegerCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerSpec.
func (in *JaegerSpec) DeepCopy() *JaegerSpec {
if in == nil {
return nil
}
out := new(JaegerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerStatus) DeepCopyInto(out *JaegerStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerStatus.
func (in *JaegerStatus) DeepCopy() *JaegerStatus {
if in == nil {
return nil
}
out := new(JaegerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerStorageSpec) DeepCopyInto(out *JaegerStorageSpec) {
*out = *in
in.Options.DeepCopyInto(&out.Options)
in.CassandraCreateSchema.DeepCopyInto(&out.CassandraCreateSchema)
in.Dependencies.DeepCopyInto(&out.Dependencies)
in.EsIndexCleaner.DeepCopyInto(&out.EsIndexCleaner)
in.EsRollover.DeepCopyInto(&out.EsRollover)
in.Elasticsearch.DeepCopyInto(&out.Elasticsearch)
out.GRPCPlugin = in.GRPCPlugin
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerStorageSpec.
func (in *JaegerStorageSpec) DeepCopy() *JaegerStorageSpec {
if in == nil {
return nil
}
out := new(JaegerStorageSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JaegerUISpec) DeepCopyInto(out *JaegerUISpec) {
*out = *in
in.Options.DeepCopyInto(&out.Options)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JaegerUISpec.
func (in *JaegerUISpec) DeepCopy() *JaegerUISpec {
if in == nil {
return nil
}
out := new(JaegerUISpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Options) DeepCopyInto(out *Options) {
*out = *in
in.opts.DeepCopyInto(&out.opts)
if in.json != nil {
in, out := &in.json, &out.json
*out = new([]byte)
if **in != nil {
in, out := *in, *out
*out = make([]byte, len(*in))
copy(*out, *in)
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Options.
func (in *Options) DeepCopy() *Options {
if in == nil {
return nil
}
out := new(Options)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in Values) DeepCopyInto(out *Values) {
{
in := &in
clone := in.DeepCopy()
*out = *clone
}
}

build/Dockerfile (new file)

@@ -0,0 +1,17 @@
FROM centos
RUN INSTALL_PKGS=" \
openssl \
" && \
yum install -y $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all && \
mkdir /tmp/_working_dir && \
chmod og+w /tmp/_working_dir
COPY scripts/* /scripts/
USER nobody
ADD build/_output/bin/jaeger-operator /usr/local/bin/jaeger-operator
ENTRYPOINT ["/usr/local/bin/jaeger-operator"]

@@ -1,19 +0,0 @@
FROM scratch
# Core bundle labels.
LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
LABEL operators.operatorframework.io.bundle.package.v1=jaeger
LABEL operators.operatorframework.io.bundle.channels.v1=stable
LABEL operators.operatorframework.io.bundle.channel.default.v1=stable
LABEL operators.operatorframework.io.metrics.builder=operator-sdk-v1.13.0+git
LABEL operators.operatorframework.io.metrics.mediatype.v1=metrics+v1
LABEL operators.operatorframework.io.metrics.project_layout=go.kubebuilder.io/v3
# OpenShift specific labels.
LABEL com.redhat.openshift.versions=v4.12
# Copy files to locations specified by labels.
COPY bundle/manifests /manifests/
COPY bundle/metadata /metadata/

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
name: jaeger-operator
name: jaeger-operator-metrics-reader
rules:
- nonResourceURLs:
- /metrics
verbs:
- get

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: metrics
name: jaeger-operator
name: jaeger-operator-metrics
spec:
ports:
- name: https
port: 8443
protocol: TCP
targetPort: https
selector:
name: jaeger-operator
status:
loadBalancer: {}

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
name: jaeger-operator
name: jaeger-operator-webhook-service
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
name: jaeger-operator
status:
loadBalancer: {}

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

@@ -1,21 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
name: jaeger-operator
name: prometheus
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
verbs:
- get
- list
- watch

@@ -1,18 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
name: jaeger-operator
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus-k8s
namespace: openshift-monitoring

@@ -1,14 +0,0 @@
annotations:
# Core bundle annotations.
operators.operatorframework.io.bundle.mediatype.v1: registry+v1
operators.operatorframework.io.bundle.manifests.v1: manifests/
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: jaeger
operators.operatorframework.io.bundle.channels.v1: stable
operators.operatorframework.io.bundle.channel.default.v1: stable
operators.operatorframework.io.metrics.builder: operator-sdk-v1.13.0+git
operators.operatorframework.io.metrics.mediatype.v1: metrics+v1
operators.operatorframework.io.metrics.project_layout: go.kubebuilder.io/v3
# OpenShift annotations
com.redhat.openshift.versions: v4.12

@@ -1,70 +0,0 @@
apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
name: config
stages:
- parallel: false
tests:
- entrypoint:
- scorecard-test
- basic-check-spec
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: basic
test: basic-check-spec-test
storage:
spec:
mountPath: {}
- entrypoint:
- scorecard-test
- olm-bundle-validation
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: olm
test: olm-bundle-validation-test
storage:
spec:
mountPath: {}
- entrypoint:
- scorecard-test
- olm-crds-have-validation
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: olm
test: olm-crds-have-validation-test
storage:
spec:
mountPath: {}
- entrypoint:
- scorecard-test
- olm-crds-have-resources
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: olm
test: olm-crds-have-resources-test
storage:
spec:
mountPath: {}
- entrypoint:
- scorecard-test
- olm-spec-descriptors
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: olm
test: olm-spec-descriptors-test
storage:
spec:
mountPath: {}
- entrypoint:
- scorecard-test
- olm-status-descriptors
image: quay.io/operator-framework/scorecard-test:v1.32.0
labels:
suite: olm
test: olm-status-descriptors-test
storage:
spec:
mountPath: {}
storage:
spec:
mountPath: {}

cmd/manager/main.go (new file)

@@ -0,0 +1,13 @@
package main
import "github.com/jaegertracing/jaeger-operator/cmd"
func main() {
// Note that this file should be identical to the main.go at the root of the project
// It would really be nice if this one here wouldn't be required, but the Operator SDK
// requires it...
// https://github.com/operator-framework/operator-sdk/blob/master/doc/migration/v0.1.0-migration-guide.md#copy-changes-from-maingo
// > operator-sdk now expects cmd/manager/main.go to be present in Go operator projects.
// > Go project-specific commands, ex. add [api, controller], will error if main.go is not found in its expected path.
cmd.Execute()
}

@@ -8,7 +8,6 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/jaegertracing/jaeger-operator/pkg/cmd/generate"
"github.com/jaegertracing/jaeger-operator/pkg/cmd/start"
"github.com/jaegertracing/jaeger-operator/pkg/cmd/version"
)
@ -38,7 +37,6 @@ func init() {
RootCmd.AddCommand(start.NewStartCommand())
RootCmd.AddCommand(version.NewVersionCommand())
RootCmd.AddCommand(generate.NewGenerateCommand())
}
// initConfig reads in config file and ENV variables if set.


@ -1,28 +0,0 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager v1.0. Check https://cert-manager.io/docs/installation/upgrading/ for breaking changes.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml
namespace: system
spec:
# $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
issuerRef:
kind: Issuer
name: selfsigned-issuer
secretName: jaeger-operator-service-cert # this secret will not be prefixed, since it's not managed by kustomize
subject:
organizationalUnits:
- "jaeger-operator"


@ -1,7 +0,0 @@
resources:
- certificate.yaml
namePrefix: jaeger-operator-
configurations:
- kustomizeconfig.yaml


@ -1,16 +0,0 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames

File diff suppressed because it is too large.


@ -1,23 +0,0 @@
# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/default
resources:
- bases/jaegertracing.io_jaegers.yaml
#+kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_jaegers.yaml
#- patches/webhook_in_kafkas.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
- patches/cainjection_in_jaegers.yaml
#- patches/cainjection_in_kafkas.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml


@ -1,19 +0,0 @@
# This file is for teaching kustomize how to substitute name and namespace reference in CRD
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: CustomResourceDefinition
version: v1
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/name
namespace:
- kind: CustomResourceDefinition
version: v1
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/namespace
create: false
varReference:
- path: metadata/annotations


@ -1,7 +0,0 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: jaegers.jaegertracing.io


@ -1,16 +0,0 @@
# The following patch enables a conversion webhook for the CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: jaegers.jaegertracing.io
spec:
conversion:
strategy: Webhook
webhook:
clientConfig:
service:
namespace: system
name: jaeger-operator-webhook-service
path: /convert
conversionReviewVersions:
- v1


@ -1,69 +0,0 @@
# Adds namespace to all resources.
namespace: observability
# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
# The prefix is not used here because the manager's deployment name is jaeger-operator
# which means that the manifest would have to contain an empty name which is not allowed.
#namePrefix: jaeger-operator-
# Labels to add to all resources and selectors.
# https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
commonLabels:
name: jaeger-operator
bases:
- ../crd
- ../rbac
- ../manager
- ../webhook
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
#- manager_config_patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service
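
For orientation, a sketch of what the var substitution above produces in the cert-manager Certificate's dnsNames — illustrative only, assuming `$(SERVICE_NAME)` resolves to `webhook-service` and `$(SERVICE_NAMESPACE)` to the `observability` namespace set at the top of this kustomization:

```yaml
# Illustrative result of kustomize var substitution (not generated output):
dnsNames:
  - webhook-service.observability.svc
  - webhook-service.observability.svc.cluster.local
```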


@ -1,33 +0,0 @@
# This patch inject a sidecar container which is a HTTP proxy for the
# controller manager, it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-operator
spec:
template:
spec:
containers:
- name: kube-rbac-proxy
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8383/"
- "--logtostderr=true"
- "--v=0"
ports:
- containerPort: 8443
protocol: TCP
name: https
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 5m
memory: 64Mi
- name: jaeger-operator
args:
- "start"
- "--health-probe-bind-address=:8081"
- "--leader-elect"
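
In effect, the patch above puts the manager's metrics behind the proxy: a scrape of the pod reaches kube-rbac-proxy on 8443 over TLS, the proxy authorizes the caller against the Kubernetes API, and only then forwards to the manager's plain-HTTP metrics listener. A rough sketch of the traffic path, with ports taken from the args above:

```yaml
# client (e.g. Prometheus) --TLS--------------------> kube-rbac-proxy :8443
# kube-rbac-proxy ----SubjectAccessReview----------> Kubernetes API server
# kube-rbac-proxy (if authorized) --HTTP-----------> manager metrics at 127.0.0.1:8383
```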


@ -1,19 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-operator
spec:
template:
spec:
containers:
- name: manager
args:
- "--config=controller_manager_config.yaml"
volumeMounts:
- name: manager-config
mountPath: /controller_manager_config.yaml
subPath: controller_manager_config.yaml
volumes:
- name: manager-config
configMap:
name: manager-config


@ -1,22 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-operator
spec:
template:
spec:
containers:
- name: jaeger-operator
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: jaeger-operator-service-cert


@ -1,15 +0,0 @@
# This patch add annotation to admission webhook config and
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)


@ -1,11 +0,0 @@
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
healthProbeBindAddress: :8081
metrics:
bindAddress: 127.0.0.1:8080
webhook:
port: 9443
leaderElection:
leaderElect: true
resourceName: 31e04290.jaegertracing.io


@ -1,8 +0,0 @@
resources:
- manager.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: quay.io/jaegertracing/jaeger-operator
newTag: 1.65.0
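
The images transformer above matches the placeholder image name `controller` used in manager.yaml and rewrites it at build time; roughly:

```yaml
# In config/manager/manager.yaml:
#   image: controller:latest
# After `kustomize build config/manager`:
#   image: quay.io/jaegertracing/jaeger-operator:1.65.0
```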


@ -1,83 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-operator
labels:
spec:
selector:
matchLabels:
strategy: {}
replicas: 1
template:
metadata:
labels:
spec:
securityContext:
runAsNonRoot: true
containers:
- command:
- /jaeger-operator
args:
- start
- --leader-elect
image: controller:latest
name: jaeger-operator
securityContext:
allowPrivilegeEscalation: false
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 128Mi
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.annotations['olm.targetNamespaces']
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPERATOR_NAME
value: "jaeger-operator"
- name: LOG-LEVEL
value: DEBUG
- name: KAFKA-PROVISIONING-MINIMAL
value: "true"
serviceAccountName: jaeger-operator
terminationGracePeriodSeconds: 10

File diff suppressed because one or more lines are too long


@ -1,27 +0,0 @@
# These resources constitute the fully configured set of manifests
# used to generate the 'manifests/' directory in a bundle.
resources:
- bases/jaeger-operator.clusterserviceversion.yaml
- ../default
- ../samples
#- ../scorecard
# [WEBHOOK] To enable webhooks, uncomment all the sections with [WEBHOOK] prefix.
# Do NOT uncomment sections with prefix [CERTMANAGER], as OLM does not support cert-manager.
# These patches remove the unnecessary "cert" volume and its manager container volumeMount.
#patchesJson6902:
#- target:
# group: apps
# version: v1
# kind: Deployment
# name: controller-manager
# namespace: system
# patch: |-
# # Remove the manager container's "cert" volumeMount, since OLM will create and mount a set of certs.
# # Update the indices in this path if adding or removing containers/volumeMounts in the manager's Deployment.
# - op: remove
# path: /spec/template/spec/containers/1/volumeMounts/0
# # Remove the "cert" volume, since OLM will create and mount a set of certs.
# # Update the indices in this path if adding or removing volumes in the manager's Deployment.
# - op: remove
# path: /spec/template/spec/volumes/0


@ -1,8 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../default
components:
- ./patch


@ -1,40 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- patch: |-
$patch: delete
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jaeger-operator-metrics-reader
- patch: |
- op: replace
path: /kind
value: Role
target:
group: rbac.authorization.k8s.io
kind: ClusterRole
- patch: |
- op: replace
path: /roleRef/kind
value: Role
target:
group: rbac.authorization.k8s.io
kind: ClusterRoleBinding
- patch: |
- op: replace
path: /kind
value: RoleBinding
target:
group: rbac.authorization.k8s.io
kind: ClusterRoleBinding
- target:
group: apps
version: v1
name: jaeger-operator
kind: Deployment
patch: |-
- op: replace
path: /spec/template/spec/containers/0/env/0/valueFrom/fieldRef/fieldPath
value: metadata.namespace
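
Taken together, the patches in this component delete the metrics-reader ClusterRole and demote the remaining cluster-scoped RBAC to namespaced equivalents; in sketch form:

```yaml
# ClusterRole        -> Role           (kind replaced)
# ClusterRoleBinding -> RoleBinding    (kind and roleRef.kind replaced)
# WATCH_NAMESPACE    -> metadata.namespace (operator watches only its own namespace)
```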


@ -1,2 +0,0 @@
resources:
- monitor.yaml


@ -1,22 +0,0 @@
# Prometheus Monitor Service (Metrics)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
name: jaeger-operator
name: jaeger-operator-metrics-monitor
spec:
endpoints:
- path: /metrics
targetPort: 8443
scheme: https
interval: 30s
scrapeTimeout: 10s
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
name: jaeger-operator
app.kubernetes.io/component: metrics


@ -1,9 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jaeger-operator-metrics-reader
rules:
- nonResourceURLs:
- "/metrics"
verbs:
- get


@ -1,17 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: proxy-role
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create


@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jaeger-operator-proxy-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: proxy-role
subjects:
- kind: ServiceAccount
name: jaeger-operator


@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
name: jaeger-operator
app.kubernetes.io/component: metrics
name: jaeger-operator-metrics
spec:
ports:
- name: https
port: 8443
protocol: TCP
targetPort: https
selector:
name: jaeger-operator

Some files were not shown because too many files have changed in this diff.