- Remove unnecessary whitespace
- Other minor cleanup
Steve Flanders 2020-11-12 09:07:54 -05:00 committed by GitHub
parent 1d5d158609
commit 1dedaeee84
13 changed files with 84 additions and 67 deletions

View File

@@ -26,4 +26,4 @@ A clear and concise description of any alternative solutions or features you've
**Additional context**
Add any other context or screenshots about the feature request here.
-_Plesae delete paragraphs that you did not use before submitting._
+_Please delete paragraphs that you did not use before submitting._

View File

@@ -276,7 +276,7 @@ See [release](docs/release.md) for details.
## Common Issues
-Build fails due to depenedency issues, e.g.
+Build fails due to dependency issues, e.g.
```sh
go: github.com/golangci/golangci-lint@v1.31.0 requires

View File

@@ -216,7 +216,7 @@ dedicated port for Agent, while there could be multiple instrumented processes.
were sent in a subsequent message. Identifier is no longer needed once the
streams are established.
3. On Sender side, if connection to Collector failed, Sender should retry
-indefintely if possible, subject to available/configured memory buffer size.
+indefinitely if possible, subject to available/configured memory buffer size.
(Reason: consider environments where the running applications are already
instrumented with OpenTelemetry Library but Collector is not deployed yet.
Sometime in the future, we can simply roll out the Collector to those

View File

@@ -24,4 +24,4 @@ Support more application-specific metric collection (e.g. Kafka, Hadoop, etc)
| |
**Other Features**|
Graceful shutdown (pipeline draining)| |[#483](https://github.com/open-telemetry/opentelemetry-collector/issues/483)
-Deprecate queue retry processor and enable queueing per exporter by default||[#1721](https://github.com/open-telemetry/opentelemetry-collector/issues/1721)
+Deprecate queue retry processor and enable queuing per exporter by default||[#1721](https://github.com/open-telemetry/opentelemetry-collector/issues/1721)

View File

@@ -27,7 +27,7 @@ by passing the `--metrics-addr` flag to the `otelcol` process. See `--help` for
more details.
```bash
-# otelcol --metrics-addr 0.0.0.0:8888
+$ otelcol --metrics-addr 0.0.0.0:8888
```
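For context, these internal metrics can also be scraped by the Collector itself through a Prometheus receiver; a minimal sketch, assuming the `0.0.0.0:8888` address shown above (the job name and interval are illustrative):
```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # Scrape the Collector's own telemetry exposed on --metrics-addr.
        - job_name: otel-collector-self
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"]
```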
A grafana dashboard for these metrics can be found

examples/README.md Normal file
View File

@@ -0,0 +1,5 @@
+# Examples
+
+Information on how the examples can be used can be found in the [Getting
+Started
+documentation](https://opentelemetry.io/docs/collector/getting-started/).

View File

@@ -23,6 +23,6 @@ The following configuration options can be modified:
- `requests_per_second` is the average number of requests per seconds.
- `resource_to_telemetry_conversion`
- `enabled` (default = false): If `enabled` is `true`, all the resource attributes will be converted to metric labels by default.
-- `timeout` (defult = 5s): Time to wait per individual attempt to send data to a backend.
+- `timeout` (default = 5s): Time to wait per individual attempt to send data to a backend.
The full list of settings exposed for this helper exporter are documented [here](factory.go).
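As a rough illustration of where these helper settings live, a hedged sketch of an exporter configuration; the `otlp` exporter name, the endpoint, and whether a given exporter exposes each option are assumptions, not part of this change:
```yaml
exporters:
  otlp:
    # Hypothetical backend endpoint.
    endpoint: example-backend:55680
    # Time to wait per individual attempt to send data to the backend.
    timeout: 10s
    # Convert resource attributes to metric labels (off by default).
    resource_to_telemetry_conversion:
      enabled: true
```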

View File

@@ -1,16 +1,25 @@
# General Information
-Extensions provide capabilities on top of the primary functionality of the collector.
-Generally, extensions are used for implementing components that can be added to the Collector, but which do not require direct access to telemetry data and are not part of the pipelines (like receivers, processors or exporters). Example extensions are: Health Check extension that responds to health check requests or PProf extension that allows fetching Collector's performance profile.
+Extensions provide capabilities on top of the primary functionality of the
+collector. Generally, extensions are used for implementing components that can
+be added to the Collector, but which do not require direct access to telemetry
+data and are not part of the pipelines (like receivers, processors or
+exporters). Example extensions are: Health Check extension that responds to
+health check requests or PProf extension that allows fetching Collector's
+performance profile.
Supported service extensions (sorted alphabetically):
- [Health Check](healthcheckextension/README.md)
- [Performance Profiler](pprofextension/README.md)
- [zPages](zpagesextension/README.md)
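For illustration, a minimal sketch of enabling these extensions and wiring them into the service section (the ordering shown is only an example):
```yaml
extensions:
  health_check:
  pprof:
  zpages:

service:
  # Extensions are started in this order and shut down in reverse order.
  extensions: [health_check, pprof, zpages]
```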
-The [contributors repository](https://github.com/open-telemetry/opentelemetry-collector-contrib)
-may have more extensions that can be added to custom builds of the Collector.
+The [contributors
+repository](https://github.com/open-telemetry/opentelemetry-collector-contrib)
+may have more extensions that can be added to custom builds of the Collector.
## Ordering Extensions
The order extensions are specified for the service is important as this is the
order in which each extension will be started and the reverse order in which they
will be shutdown. The ordering is determined in the `extensions` tag under the
@@ -45,6 +54,7 @@ The full list of settings exposed for this exporter is documented [here](healthc
with detailed sample configurations [here](healthcheckextension/testdata/config.yaml).
## <a name="pprof"></a>Performance Profiler
Performance Profiler extension enables the golang `net/http/pprof` endpoint.
This is typically used by developers to collect performance profiles and
investigate issues with the service.
@@ -66,8 +76,8 @@ The following settings can be optionally configured:
Collector starts and is saved to the file when the Collector is terminated.
Example:
-```yaml
+```yaml
extensions:
pprof:
```
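A slightly fuller sketch using the optional settings described above; the endpoint and file path are illustrative assumptions:
```yaml
extensions:
  pprof:
    # Address where the pprof endpoint is served.
    endpoint: localhost:1777
    # CPU profile starts with the Collector and is written here on shutdown.
    save_to_file: /tmp/otelcol-cpu.pprof
```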
@@ -76,9 +86,10 @@ The full list of settings exposed for this exporter are documented [here](pprofe
with detailed sample configurations [here](pprofextension/testdata/config.yaml).
## <a name="zpages"></a>zPages
Enables an extension that serves zPages, an HTTP endpoint that provides live
data for debugging different components that were properly instrumented for such.
-All core exporters and receivers provide some zPage instrumentation.
+All core exporters and receivers provide some zPages instrumentation.
The following settings are required:
@@ -86,6 +97,7 @@ The following settings are required:
zPages.
Example:
```yaml
extensions:
zpages:

View File

@@ -40,12 +40,12 @@ processor documentation for more information.
2. *any sampling processors*
3. [batch](batchprocessor/README.md)
4. *any other processors*
5. [queued_retry](queuedprocessor/README.md)
### Metrics
1. [memory_limiter](memorylimiter/README.md)
2. [batch](batchprocessor/README.md)
3. *any other processors*
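A sketch of this recommended ordering expressed as pipeline definitions (receiver and exporter names are placeholders):
```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      # memory_limiter first, then batch, with other processors placed as recommended above.
      processors: [memory_limiter, batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
```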
## <a name="data-ownership"></a>Data Ownership

View File

@@ -46,7 +46,7 @@ allocated by the process heap. Note that typically the total memory usage of
process will be about 50MiB higher than this value.
- `spike_limit_mib` (default = 0): Maximum spike expected between the
measurements of memory usage. The value must be less than `limit_mib`.
-- `limit_percentage` (default = 0): Maximum amount of total memory, in percents, targeted to be
+- `limit_percentage` (default = 0): Maximum amount of total memory targeted to be
allocated by the process heap. This configuration is supported on Linux systems with cgroups
and it's intended to be used in dynamic platforms like docker.
This option is used to calculate `memory_limit` from the total available memory.
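A hedged configuration sketch combining these settings; the numbers are illustrative only:
```yaml
processors:
  memory_limiter:
    check_interval: 1s
    # Absolute limit on heap allocation, with the expected spike headroom below it.
    limit_mib: 4000
    spike_limit_mib: 500
    # Alternatively, on cgroup-based platforms, a percentage of total memory:
    # limit_percentage: 75
```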

View File

@@ -256,7 +256,7 @@ type MetricsData struct {
The scrape page as whole also can be fit into the above `MetricsData` data
structure, and all the metrics data points can be stored with the `Metrics`
-array. We will explain the mappings of individual metirc types in the following
+array. We will explain the mappings of individual metric types in the following
couple sections
### Metric Value Mapping