Remove ocagent and rename unisvc to otelsvc (#21)

Keep occollector for testing until migration to otelsvc
Steve Flanders 2019-06-19 08:51:26 -07:00 committed by GitHub
parent 559835ceee
commit d41326dc56
16 changed files with 125 additions and 568 deletions

.gitignore

@@ -16,8 +16,8 @@ bin/
# Binaries are copied (as needed) to the same location as their respective Dockerfile in
# order to simplify docker build commands. Ignore these files if proper clean up fails.
cmd/ocagent/ocagent_linux
cmd/occollector/occollector_linux
cmd/otelsvc/otelsvc_linux
# Coverage
coverage.txt

Makefile

@@ -81,17 +81,13 @@ vet:
install-tools:
go install golang.org/x/lint/golint
.PHONY: agent
agent:
GO111MODULE=on CGO_ENABLED=0 go build -o ./bin/ocagent_$(GOOS) $(BUILD_INFO) ./cmd/ocagent
.PHONY: collector
collector:
GO111MODULE=on CGO_ENABLED=0 go build -o ./bin/occollector_$(GOOS) $(BUILD_INFO) ./cmd/occollector
.PHONY: unisvc
unisvc:
GO111MODULE=on CGO_ENABLED=0 go build -o ./bin/unisvc_$(GOOS) $(BUILD_INFO) ./cmd/unisvc
.PHONY: otelsvc
otelsvc:
GO111MODULE=on CGO_ENABLED=0 go build -o ./bin/otelsvc_$(GOOS) $(BUILD_INFO) ./cmd/otelsvc
.PHONY: docker-component # Not intended to be used directly
docker-component: check-component
@@ -106,20 +102,16 @@ ifndef COMPONENT
$(error COMPONENT variable was not defined)
endif
.PHONY: docker-agent
docker-agent:
COMPONENT=agent $(MAKE) docker-component
.PHONY: docker-collector
docker-collector:
COMPONENT=collector $(MAKE) docker-component
.PHONY: docker-unisvc
docker-unisvc:
COMPONENT=unisvc $(MAKE) docker-component
.PHONY: docker-otelsvc
docker-otelsvc:
COMPONENT=otelsvc $(MAKE) docker-component
.PHONY: binaries
binaries: agent collector unisvc
binaries: collector otelsvc
.PHONY: binaries-all-sys
binaries-all-sys:

README.md

@@ -19,19 +19,16 @@ For now, please use the [OpenCensus Service](https://github.com/open-telemetry/o
- [Receivers](#config-receivers)
- [Exporters](#config-exporters)
- [Diagnostics](#config-diagnostics)
- [OpenTelemetry Agent](#opentelemetry-agent)
- [Usage](#agent-usage)
- [OpenTelemetry Collector](#opentelemetry-collector)
- [Global Attributes](#global-attributes)
- [Intelligent Sampling](#tail-sampling)
- [Usage](#collector-usage)
- [Usage](#usage)
## Introduction
The OpenTelemetry Service is a component that can collect traces and metrics
from processes instrumented by OpenTelemetry or other monitoring/tracing
libraries (Jaeger, Prometheus, etc.), do aggregation and smart sampling, and
export traces and metrics to one or more monitoring/tracing backends.
The OpenTelemetry Service can collect traces and metrics from processes
instrumented by OpenTelemetry or other monitoring/tracing libraries (Jaeger,
Prometheus, etc.), handle aggregation and smart sampling, and export traces
and metrics to one or more monitoring/tracing backends.
Some frameworks and ecosystems are now providing out-of-the-box instrumentation
by using OpenTelemetry, but the user is still expected to register an exporter
@@ -50,18 +47,26 @@ is just configure and deploy the OpenTelemetry Service separately. The
OpenTelemetry Service will then automatically collect traces and metrics and
export to any backend of users' choice.
Currently the OpenTelemetry Service consists of two components, [OpenTelemetry
Agent](#opentelemetry-agent) and [OpenTelemetry Collector](#opentelemetry-collector).
Currently the OpenTelemetry Service consists of a single binary and two
deployment methods:
1. [Agent](#opentelemetry-agent) running with the application or on the same host as the application
2. [Collector](#opentelemetry-collector) running as a standalone application
For the detailed design specs, please see [DESIGN.md](DESIGN.md).
## <a name="deploy"></a>Deployment
The OpenTelemetry Service can be deployed in a variety of different ways. The
OpenTelemetry Agent can be deployed with the application either as a separate
process, as a sidecar, or via a Kubernetes daemonset. Typically, the
OpenTelemetry Collector is deployed separately as either a Docker container,
The OpenTelemetry Service can be deployed in a variety of different ways
depending on requirements. The Agent can be deployed with the application
either as a separate process, as a sidecar, or via a Kubernetes daemonset. The
Collector is deployed as a separate application as either a Docker container,
VM, or Kubernetes pod.
While the Agent and Collector share the same binary, the configuration between
the two may differ depending on requirements (e.g. queue size and feature-set
enabled).
![deployment-models](images/opentelemetry-service-deployment-models.png)
## <a name="getting-started"></a>Getting Started
@@ -81,26 +86,26 @@ $ kubectl apply -f example/k8s.yaml
### <a name="getting-started-standalone"></a>Standalone
Create an Agent [configuration](#config) file based on the options described
below. Please note the Agent requires the `opentelemetry` receiver be enabled. By
default, the Agent has no exporters configured.
below. By default, the Agent has the `opencensus` receiver enabled, but no
exporters configured.
Build the Agent, see [Usage](#agent-usage),
and start it:
```shell
$ ./bin/ocagent_$(go env GOOS)
$ ./bin/otelsvc_$(go env GOOS)
$ 2018/10/08 21:38:00 Running OpenTelemetry receiver as a gRPC service at "127.0.0.1:55678"
```
Create a Collector [configuration](#config) file based on the options
described below. By default, the Collector has the `opentelemetry` receiver
enabled, but no exporters.
described below. By default, the Collector has the `opencensus` receiver
enabled, but no exporters configured.
Build the Collector and start it:
```shell
$ make collector
$ ./bin/occollector_$($GOOS)
$ make otelsvc
$ ./bin/otelsvc_$($GOOS)
```
Run the demo application:
@@ -131,7 +136,7 @@ README.md](receiver/README.md).
```yaml
receivers:
opentelemetry:
opencensus:
address: "127.0.0.1:55678"
zipkin:
@@ -162,7 +167,7 @@ README.md](exporter/README.md).
```yaml
exporters:
opentelemetry:
opencensus:
headers: {"X-test-header": "test-header"}
compression: "gzip"
cert-pem-file: "server_ca_public.pem" # optional to enable TLS
@@ -202,83 +207,11 @@ zpages:
disabled: true
```
## OpenTelemetry Agent
### <a name="agent-usage"></a>Usage
> It is recommended that you use the latest [release](https://github.com/open-telemetry/opentelemetry-service/releases).
The ocagent can be run directly from sources, binary, or a Docker image. If you
are planning to run from sources or build on your machine start by cloning the
repo using `go get -d github.com/open-telemetry/opentelemetry-service`.
The minimum Go version required for this project is Go 1.12.5. In addition, you
must manually install
[Bazaar](https://github.com/open-telemetry/opentelemetry-service/blob/master/CONTRIBUTING.md#required-tools)
1. Run from sources:
```shell
$ GO111MODULE=on go run github.com/open-telemetry/opentelemetry-service/cmd/ocagent --help
```
2. Run from binary (from the root of your repo):
```shell
$ make agent
```
3. Build a Docker scratch image and use the appropriate Docker command for your scenario
(note: additional ports may be required depending on your receiver configuration):
A Docker scratch image can be built with make by targeting `docker-agent`.
```shell
$ make docker-agent
$ docker run \
--rm \
--interactive \
--tty \
--publish 55678:55678 --publish 55679:55679 \
--volume $(pwd)/ocagent-config.yaml:/conf/ocagent-config.yaml \
ocagent \
--config=/conf/ocagent-config.yaml
```
## OpenTelemetry Collector
The OpenTelemetry Collector is a component that runs “nearby” (e.g. in the same
VPC, AZ, etc.) a user's application components and receives trace spans and
metrics emitted by the OpenTelemetry Agent or tasks instrumented with
OpenTelemetry instrumentation (or other supported protocols/libraries). The
received spans and metrics could be emitted directly by clients in instrumented
tasks, or potentially routed via intermediate proxy sidecar/daemon agents (such
as the OpenTelemetry Agent). The collector provides a central egress point for
exporting traces and metrics to one or more tracing and metrics backends, with
buffering and retries as well as advanced aggregation, filtering and annotation
capabilities.
The collector is extensible, enabling it to support a range of out-of-the-box
(and custom) capabilities such as:
* Retroactive (tail-based) sampling of traces
* Cluster-wide z-pages
* Filtering of traces and metrics
* Aggregation of traces and metrics
* Decoration with meta-data from infrastructure provider (e.g. k8s master)
* much more ...
The collector also serves as a control plane for agents/clients by supplying
them updated configuration (e.g. trace sampling policies), and reporting
agent/client health information/inventory metadata to downstream exporters.
### <a name="receivers-configuration"></a> Receivers Configuration
For detailed information about configuring receivers for the collector refer to the [receivers README.md](receiver/README.md).
### <a name="global-attributes"></a> Global Attributes
The collector also takes some global configurations that modify its behavior for all receivers / exporters.
The OpenTelemetry Service also takes some global configurations that modify its
behavior for all receivers / exporters. This configuration is typically applied
on the Collector, but could also be added to the Agent.
1. Add Attributes to all spans passing through this collector. These additional
attributes can be configured to either overwrite existing keys if they
@@ -308,8 +241,11 @@ global:
keep: true # keep the attribute with the original key
```
### <a name="tail-sampling"></a>Intelligent Sampling
### <a name="sampling"></a>Sampling
Sampling can also be configured on the OpenTelemetry Service. Tail sampling
must be configured on the Collector as it requires all spans for a given trace
to make a sampling decision.
```yaml
sampling:
mode: tail
@@ -341,57 +277,57 @@ sampling:
> Note that an exporter can only have a single sampling policy today.
### <a name="collector-usage"></a>Usage
## <a name="collector-usage"></a>Usage
> It is recommended that you use the latest [release](https://github.com/open-telemetry/opentelemetry-service/releases).
The collector can be run directly from sources, binary, or a Docker image. If
you are planning to run from sources or build on your machine start by cloning
the repo using `go get -d
The OpenTelemetry Service can be run directly from sources, binary, or a Docker
image. If you are planning to run from sources or build on your machine start
by cloning the repo using `go get -d
github.com/open-telemetry/opentelemetry-service`.
The minimum Go version required for this project is Go 1.12.5.
1. Run from sources:
```shell
$ GO111MODULE=on go run github.com/open-telemetry/opentelemetry-service/cmd/occollector --help
$ GO111MODULE=on go run github.com/open-telemetry/opentelemetry-service/cmd/otelsvc --help
```
2. Run from binary (from the root of your repo):
```shell
$ make collector
$ ./bin/occollector_$($GOOS)
$ make otelsvc
$ ./bin/otelsvc_$($GOOS)
```
3. Build a Docker scratch image and use the appropriate Docker command for your
scenario (note: additional ports may be required depending on your receiver
configuration):
```shell
$ make docker-collector
$ make docker-otelsvc
$ docker run \
--rm \
--interactive \
--tty \
--publish 55678:55678 --publish 8888:8888 \
--volume $(pwd)/occollector-config.yaml:/conf/occollector-config.yaml \
occollector \
--config=/conf/occollector-config.yaml
--publish 55678:55678 --publish 55679:55679 --publish 8888:8888 \
--volume $(pwd)/otelsvc-config.yaml:/conf/otelsvc-config.yaml \
otelsvc \
--config=/conf/otelsvc-config.yaml
```
It can be configured via command-line or config file:
```
OpenTelemetry Collector
OpenTelemetry Service
Usage:
occollector [flags]
otelsvc [flags]
Flags:
--config string Path to the config file
--health-check-http-port uint Port on which to run the healthcheck http server. (default 13133)
-h, --help help for occollector
-h, --help help for otelsvc
--http-pprof-port uint Port to be used by golang net/http/pprof (Performance Profiler), the profiler is disabled if no port or 0 is specified.
--log-level string Output level of logs (DEBUG, INFO, WARN, ERROR, FATAL) (default "INFO")
--logging-exporter Flag to add a logging exporter (combine with log level DEBUG to log incoming spans)
--metrics-level string Output level of telemetry metrics (NONE, BASIC, NORMAL, DETAILED) (default "BASIC")
--metrics-port uint Port exposing collector telemetry. (default 8888)
--metrics-port uint Port exposing telemetry. (default 8888)
--receive-jaeger Flag to run the Jaeger receiver (i.e.: Jaeger Collector), default settings: {ThriftTChannelPort:14267 ThriftHTTPPort:14268}
--receive-oc-trace Flag to run the OpenTelemetry trace receiver, default settings: {Port:55678} (default true)
--receive-zipkin Flag to run the Zipkin receiver, default settings: {Port:9411}
@@ -404,7 +340,7 @@ Sample configuration file:
log-level: DEBUG
receivers:
opentelemetry: {} # Runs OpenTelemetry receiver with default configuration (default behavior).
opencensus: {} # Runs OpenCensus receiver with default configuration (default behavior).
queued-exporters:
jaeger-sender-test: # A friendly name for the exporter


@@ -1,13 +0,0 @@
# Use a helper to create an empty configuration since the agent requires such a file
FROM alpine:3.7 as helper
RUN apk update \
&& apk add --no-cache ca-certificates \
&& update-ca-certificates \
&& touch ./config.yaml
FROM scratch
COPY ocagent_linux /
COPY --from=helper ./config.yaml config.yaml
COPY --from=helper /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT ["/ocagent_linux"]
EXPOSE 55678 55679


@@ -1,17 +0,0 @@
receivers:
opencensus:
address: "127.0.0.1:55678"
zipkin:
address: "127.0.0.1:9411"
exporters:
stackdriver:
project: "project-id" # Optional if on GCP, defaults to agent project
enable_tracing: true
zipkin:
endpoint: "http://127.0.0.1:9411/api/v2/spans"
zpages:
port: 55679


@@ -1,349 +0,0 @@
// Copyright 2019, OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Program ocagent collects OpenCensus stats and traces
// to export to a configured backend.
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"go.opencensus.io/stats/view"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"github.com/open-telemetry/opentelemetry-service/consumer"
"github.com/open-telemetry/opentelemetry-service/internal/config"
"github.com/open-telemetry/opentelemetry-service/internal/config/viperutils"
"github.com/open-telemetry/opentelemetry-service/internal/pprofserver"
"github.com/open-telemetry/opentelemetry-service/internal/version"
"github.com/open-telemetry/opentelemetry-service/internal/zpagesserver"
"github.com/open-telemetry/opentelemetry-service/observability"
"github.com/open-telemetry/opentelemetry-service/processor/multiconsumer"
"github.com/open-telemetry/opentelemetry-service/receiver/jaegerreceiver"
"github.com/open-telemetry/opentelemetry-service/receiver/opencensusreceiver"
"github.com/open-telemetry/opentelemetry-service/receiver/prometheusreceiver"
"github.com/open-telemetry/opentelemetry-service/receiver/vmmetricsreceiver"
"github.com/open-telemetry/opentelemetry-service/receiver/zipkinreceiver"
"github.com/open-telemetry/opentelemetry-service/receiver/zipkinreceiver/zipkinscribereceiver"
)
var rootCmd = &cobra.Command{
Use: "ocagent",
Short: "ocagent runs the OpenCensus service",
Run: func(cmd *cobra.Command, args []string) {
runOCAgent()
},
}
var viperCfg = viper.New()
var configYAMLFile string
func init() {
var versionCmd = &cobra.Command{
Use: "version",
Short: "Print the version information for ocagent",
Run: func(cmd *cobra.Command, args []string) {
fmt.Print(version.Info())
},
}
rootCmd.AddCommand(versionCmd)
rootCmd.PersistentFlags().StringVarP(&configYAMLFile, "config", "c", "config.yaml", "The YAML file with the configurations for the agent and various exporters")
viperutils.AddFlags(viperCfg, rootCmd, pprofserver.AddFlags)
}
func main() {
if err := rootCmd.Execute(); err != nil {
log.Fatal(err)
}
}
func runOCAgent() {
viperCfg.SetConfigFile(configYAMLFile)
err := viperCfg.ReadInConfig()
if err != nil {
log.Fatalf("Cannot read the YAML file %v error: %v", configYAMLFile, err)
}
var agentConfig config.Config
err = viperCfg.Unmarshal(&agentConfig)
if err != nil {
log.Fatalf("Error unmarshalling yaml config file %v: %v", configYAMLFile, err)
}
// Ensure that we check and catch any logical errors with the
// configuration e.g. if a receiver shares the same address
// as an exporter which would cause a self DOS and waste resources.
if err := agentConfig.CheckLogicalConflicts(); err != nil {
log.Fatalf("Configuration logical error: %v", err)
}
// TODO: don't hardcode info level logging
conf := zap.NewProductionConfig()
conf.Level.SetLevel(zapcore.InfoLevel)
logger, err := conf.Build()
if err != nil {
log.Fatalf("Could not instantiate logger: %v", err)
}
var asyncErrorChan = make(chan error)
err = pprofserver.SetupFromViper(asyncErrorChan, viperCfg, logger)
if err != nil {
log.Fatalf("Failed to start net/http/pprof: %v", err)
}
traceExporters, metricsExporters, closeFns, err := config.ExportersFromViperConfig(logger, viperCfg)
if err != nil {
log.Fatalf("Config: failed to create exporters from YAML: %v", err)
}
commonSpanSink := multiconsumer.NewTraceProcessor(traceExporters)
commonMetricsSink := multiconsumer.NewMetricsProcessor(metricsExporters)
// Add other receivers here as they are implemented
ocReceiverDoneFn, err := runOCReceiver(logger, &agentConfig, commonSpanSink, commonMetricsSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, ocReceiverDoneFn)
// If zPages are enabled, run them
zPagesPort, zPagesEnabled := agentConfig.ZPagesPort()
if zPagesEnabled {
zCloseFn, err := zpagesserver.Run(asyncErrorChan, zPagesPort)
if err != nil {
log.Fatal(err)
}
log.Printf("Running zPages on port %d", zPagesPort)
closeFns = append(closeFns, zCloseFn)
}
// TODO: Generalize the startup of these receivers when unifying them w/ collector
// If the Zipkin receiver is enabled, then run it
if agentConfig.ZipkinReceiverEnabled() {
zipkinReceiverAddr := agentConfig.ZipkinReceiverAddress()
zipkinReceiverDoneFn, err := runZipkinReceiver(zipkinReceiverAddr, commonSpanSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, zipkinReceiverDoneFn)
}
if agentConfig.ZipkinScribeReceiverEnabled() {
zipkinScribeDoneFn, err := runZipkinScribeReceiver(agentConfig.ZipkinScribeConfig(), commonSpanSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, zipkinScribeDoneFn)
}
if agentConfig.JaegerReceiverEnabled() {
collectorHTTPPort, collectorThriftPort := agentConfig.JaegerReceiverPorts()
jaegerDoneFn, err := runJaegerReceiver(collectorThriftPort, collectorHTTPPort, commonSpanSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, jaegerDoneFn)
}
// If the Prometheus receiver is enabled, then run it.
if agentConfig.PrometheusReceiverEnabled() {
promDoneFn, err := runPrometheusReceiver(viperCfg, commonMetricsSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, promDoneFn)
}
// If the VMMetrics receiver is enabled, then run it.
if agentConfig.VMMetricsReceiverEnabled() {
vmmDoneFn, err := runVMMetricsReceiver(viperCfg, commonMetricsSink, asyncErrorChan)
if err != nil {
log.Fatal(err)
}
closeFns = append(closeFns, vmmDoneFn)
}
// Always cleanup finally
defer func() {
for _, closeFn := range closeFns {
if closeFn != nil {
closeFn()
}
}
}()
signalsChan := make(chan os.Signal, 1)
signal.Notify(signalsChan, os.Interrupt, syscall.SIGTERM)
select {
case err = <-asyncErrorChan:
log.Fatalf("Asynchronous error %q, terminating process", err)
case s := <-signalsChan:
log.Printf("Received %q signal from OS, terminating process", s)
}
}
func runOCReceiver(logger *zap.Logger, acfg *config.Config, tc consumer.TraceConsumer, mc consumer.MetricsConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
tlsCredsOption, hasTLSCreds, err := acfg.OpenCensusReceiverTLSCredentialsServerOption()
if err != nil {
return nil, fmt.Errorf("OpenCensus receiver TLS Credentials: %v", err)
}
addr := acfg.OpenCensusReceiverAddress()
corsOrigins := acfg.OpenCensusReceiverCorsAllowedOrigins()
ocr, err := opencensusreceiver.New(addr,
tc,
mc,
tlsCredsOption,
opencensusreceiver.WithCorsOrigins(corsOrigins))
if err != nil {
return nil, fmt.Errorf("failed to create the OpenCensus receiver on address %q: error %v", addr, err)
}
if err := view.Register(observability.AllViews...); err != nil {
return nil, fmt.Errorf("failed to register internal.AllViews: %v", err)
}
// Temporarily disabling the grpc metrics since they do not provide good data at this moment,
// See https://github.com/census-instrumentation/opencensus-service/issues/287
// if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
// return nil, fmt.Errorf("Failed to register ocgrpc.DefaultServerViews: %v", err)
// }
ctx := context.Background()
switch {
case acfg.CanRunOpenCensusTraceReceiver() && acfg.CanRunOpenCensusMetricsReceiver():
if err := ocr.Start(ctx); err != nil {
return nil, fmt.Errorf("failed to start Trace and Metrics Receivers: %v", err)
}
log.Printf("Running OpenCensus Trace and Metrics receivers as a gRPC service at %q", addr)
case acfg.CanRunOpenCensusTraceReceiver():
if err := ocr.StartTraceReception(ctx, asyncErrorChan); err != nil {
return nil, fmt.Errorf("failed to start TraceReceiver: %v", err)
}
log.Printf("Running OpenCensus Trace receiver as a gRPC service at %q", addr)
case acfg.CanRunOpenCensusMetricsReceiver():
if err := ocr.StartMetricsReception(ctx, asyncErrorChan); err != nil {
return nil, fmt.Errorf("failed to start MetricsReceiver: %v", err)
}
log.Printf("Running OpenCensus Metrics receiver as a gRPC service at %q", addr)
}
if hasTLSCreds {
tlsCreds := acfg.OpenCensusReceiverTLSServerCredentials()
logger.Info("OpenCensus receiver with TLS Credentials",
zap.String("cert_file", tlsCreds.CertFile),
zap.String("key_file", tlsCreds.KeyFile))
}
doneFn = ocr.Stop
return doneFn, nil
}
func runJaegerReceiver(collectorThriftPort, collectorHTTPPort int, next consumer.TraceConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
config := &jaegerreceiver.Configuration{
CollectorThriftPort: collectorThriftPort,
CollectorHTTPPort: collectorHTTPPort,
// TODO: (@odeke-em, @pjanotti) send a change
// to dynamically retrieve the Jaeger Agent's ports
// and not use their defaults of 5778, 6831, 6832
}
jtr, err := jaegerreceiver.New(context.Background(), config, next)
if err != nil {
return nil, fmt.Errorf("failed to create new Jaeger receiver: %v", err)
}
if err := jtr.StartTraceReception(context.Background(), asyncErrorChan); err != nil {
return nil, fmt.Errorf("failed to start Jaeger receiver: %v", err)
}
doneFn = func() error {
return jtr.StopTraceReception(context.Background())
}
log.Printf("Running Jaeger receiver with CollectorThriftPort %d CollectHTTPPort %d", collectorThriftPort, collectorHTTPPort)
return doneFn, nil
}
func runZipkinReceiver(addr string, next consumer.TraceConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
zi, err := zipkinreceiver.New(addr, next)
if err != nil {
return nil, fmt.Errorf("failed to create the Zipkin receiver: %v", err)
}
if err := zi.StartTraceReception(context.Background(), asyncErrorChan); err != nil {
return nil, fmt.Errorf("cannot start Zipkin receiver with address %q: %v", addr, err)
}
doneFn = func() error {
return zi.StopTraceReception(context.Background())
}
log.Printf("Running Zipkin receiver with address %q", addr)
return doneFn, nil
}
func runZipkinScribeReceiver(config *config.ScribeReceiverConfig, next consumer.TraceConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
zs, err := zipkinscribereceiver.NewReceiver(config.Address, config.Port, config.Category, next)
if err != nil {
return nil, fmt.Errorf("failed to create the Zipkin Scribe receiver: %v", err)
}
if err := zs.StartTraceReception(context.Background(), asyncErrorChan); err != nil {
return nil, fmt.Errorf("cannot start Zipkin Scribe receiver with %v: %v", config, err)
}
doneFn = func() error {
return zs.StopTraceReception(context.Background())
}
log.Printf("Running Zipkin Scribe receiver with %+v", *config)
return doneFn, nil
}
func runPrometheusReceiver(v *viper.Viper, next consumer.MetricsConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
pmr, err := prometheusreceiver.New(v.Sub("receivers.prometheus"), next)
if err != nil {
return nil, err
}
if err := pmr.StartMetricsReception(context.Background(), asyncErrorChan); err != nil {
return nil, err
}
doneFn = func() error {
return pmr.StopMetricsReception(context.Background())
}
log.Print("Running Prometheus receiver")
return doneFn, nil
}
func runVMMetricsReceiver(v *viper.Viper, next consumer.MetricsConsumer, asyncErrorChan chan<- error) (doneFn func() error, err error) {
vmr, err := vmmetricsreceiver.New(v.Sub("receivers.vmmetrics"), next)
if err != nil {
return nil, err
}
if err := vmr.StartMetricsReception(context.Background(), asyncErrorChan); err != nil {
return nil, err
}
doneFn = func() error {
return vmr.StopMetricsReception(context.Background())
}
log.Print("Running VMMetrics receiver")
return doneFn, nil
}


@@ -293,7 +293,7 @@ func (app *Application) executeUnified() {
// given by the user.
func (app *Application) StartUnified() error {
rootCmd := &cobra.Command{
Use: "unisvc",
Use: "otelsvc",
Long: "OpenTelemetry Service",
Run: func(cmd *cobra.Command, args []string) {
app.init()


@@ -80,7 +80,7 @@ func TestApplication_StartUnified(t *testing.T) {
App.v.Set(portArg[i], port)
}
App.v.Set("config", "testdata/unisvc-config.yaml")
App.v.Set("config", "testdata/otelsvc-config.yaml")
appDone := make(chan struct{})
go func() {

cmd/otelsvc/Dockerfile

@@ -0,0 +1,8 @@
FROM alpine:3.9 as certs
RUN apk --update add ca-certificates
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY otelsvc_linux /
ENTRYPOINT ["/otelsvc_linux"]
EXPOSE 55678 55679


@@ -12,12 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Program unisvc is the Open Telemetry Service that collects stats
// Program otelsvc is the Open Telemetry Service that collects stats
// and traces and exports to a configured backend.
package main
import "github.com/open-telemetry/opentelemetry-service/unisvc"
import "github.com/open-telemetry/opentelemetry-service/otelsvc"
func main() {
unisvc.Run()
otelsvc.Run()
}


@@ -15,11 +15,11 @@ services:
- "9411:9411"
# Collector
oc-collector:
image: occollector:latest
command: ["--config=/etc/oc-collector-config.yaml", "--http-pprof-port=1777"]
otelsvc-collector:
image: otelsvc:latest
command: ["--config=/etc/otelsvc-collector-config.yaml", "--http-pprof-port=1777"]
volumes:
- ./oc-collector-config.yaml:/etc/oc-collector-config.yaml
- ./otelsvc-collector-config.yaml:/etc/otelsvc-collector-config.yaml
ports:
- "55678"
- "55680:55679"
@@ -30,24 +30,24 @@ services:
- zipkin-all-in-one
# Agent
oc-agent:
image: ocagent:latest
command: ["--config=/etc/oc-agent-config.yaml", "--http-pprof-port=1888"]
otelsvc-agent:
image: otelsvc:latest
command: ["--config=/etc/otelsvc-agent-config.yaml", "--http-pprof-port=1888"]
volumes:
- ./oc-agent-config.yaml:/etc/oc-agent-config.yaml
- ./otelsvc-agent-config.yaml:/etc/otelsvc-agent-config.yaml
ports:
- "1888:1888"
- "14268"
- "55678"
- "55679:55679"
depends_on:
- oc-collector
- otelsvc-collector
# Synthetic load generator
synthetic-load-generator:
image: omnition/synthetic-load-generator:1.0.25
environment:
- JAEGER_COLLECTOR_URL=http://oc-agent:14268
- JAEGER_COLLECTOR_URL=http://otelsvc-agent:14268
depends_on:
- oc-agent
- otelsvc-agent


@@ -2,40 +2,40 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: oc-agent-conf
name: otelsvc-agent-conf
labels:
app: opencensus
component: oc-agent-conf
app: opentelemetry
component: otelsvc-agent-conf
data:
oc-agent-config: |
otelsvc-agent-config: |
receivers:
opencensus: {}
# jaeger: {}
# zipkin: {}
exporters:
opencensus:
endpoint: "oc-collector.default:55678" # TODO: Update me
endpoint: "otelsvc-collector.default:55678" # TODO: Update me
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: oc-agent
name: otelsvc-agent
labels:
app: opencensus
component: oc-agent
app: opentelemetry
component: otelsvc-agent
spec:
template:
metadata:
labels:
app: opencensus
component: oc-agent
app: opentelemetry
component: otelsvc-agent
spec:
containers:
- command:
- "/ocagent_linux"
- "--config=/conf/oc-agent-config.yaml"
- "--config=/conf/otelsvc-agent-config.yaml"
image: omnition/opencensus-agent:0.1.6
name: oc-agent
name: otelsvc-agent
resources:
limits:
cpu: 500m
@@ -50,25 +50,25 @@ spec:
# - containerPort: 14268
# - containerPort: 9411
volumeMounts:
- name: oc-agent-config-vol
- name: otelsvc-agent-config-vol
mountPath: /conf
volumes:
- configMap:
name: oc-agent-conf
name: otelsvc-agent-conf
items:
- key: oc-agent-config
path: oc-agent-config.yaml
name: oc-agent-config-vol
- key: otelsvc-agent-config
path: otelsvc-agent-config.yaml
name: otelsvc-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
name: oc-collector-conf
name: otelsvc-collector-conf
labels:
app: opencensus
component: oc-collector-conf
app: opentelemetry
component: otelsvc-collector-conf
data:
oc-collector-config: |
otelsvc-collector-config: |
receivers:
opencensus:
# keepalive settings can help load balancing, see receiver/README.md for more info.
@@ -103,10 +103,10 @@ data:
apiVersion: v1
kind: Service
metadata:
name: oc-collector
name: otelsvc-collector
labels:
app: opencensus
component: oc-collector
component: otelsvc-collector
spec:
ports:
- name: opencensus
@@ -120,15 +120,15 @@ spec:
# - name: zipkin
# port: 9411
selector:
component: oc-collector
component: otelsvc-collector
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oc-collector
name: otelsvc-collector
labels:
app: opencensus
component: oc-collector
app: opentelemetry
component: otelsvc-collector
spec:
minReadySeconds: 5
progressDeadlineSeconds: 120
@@ -140,18 +140,18 @@ spec:
prometheus.io/port: "8888"
prometheus.io/scrape: "true"
labels:
app: opencensus
component: oc-collector
app: opentelemetry
component: otelsvc-collector
spec:
containers:
- command:
- "/occollector_linux"
- "--config=/conf/oc-collector-config.yaml"
- "--config=/conf/otelsvc-collector-config.yaml"
env:
- name: GOGC
value: "80"
image: omnition/opencensus-collector:0.1.6
name: oc-collector
name: otelsvc-collector
resources:
limits:
cpu: 1
@@ -165,9 +165,9 @@ spec:
# - containerPort: 14268
# - containerPort: 9411
volumeMounts:
- name: oc-collector-config-vol
- name: otelsvc-collector-config-vol
mountPath: /conf
# - name: oc-collector-secrets
# - name: otelsvc-collector-secrets
# mountPath: /secrets
livenessProbe:
httpGet:
@@ -179,13 +179,13 @@ spec:
port: 13133
volumes:
- configMap:
name: oc-collector-conf
name: otelsvc-collector-conf
items:
- key: oc-collector-config
path: oc-collector-config.yaml
name: oc-collector-config-vol
- key: otelsvc-collector-config
path: otelsvc-collector-config.yaml
name: otelsvc-collector-config-vol
# - secret:
# name: oc-collector-secrets
# name: otelsvc-collector-secrets
# items:
# - key: cert.pem
# path: cert.pem

otelsvc/empty_test.go

@@ -0,0 +1 @@
package otelsvc


@@ -12,9 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package unisvc implements Open Telemetry Service that collects stats
// Package otelsvc implements Open Telemetry Service that collects stats
// and traces and exports to a configured backend.
package unisvc
package otelsvc
import (
"log"


@@ -1 +0,0 @@
package unisvc