Spelling fixes for blog posts (#2734)

Co-authored-by: Phillip Carter <pcarter@fastmail.com>
Severin Neumann 2023-05-18 18:33:53 +02:00 committed by GitHub
parent 1e1fe5c93f
commit e9e11143ad
17 changed files with 96 additions and 80 deletions

View File

@@ -4,6 +4,7 @@ linkTitle: Governance Committee Explained
date: 2019-11-06
canonical_url: https://medium.com/opentelemetry/opentelemetry-governance-committee-explained-860353baba0
author: '[Sergey Kanzhelev](https://github.com/SergeyKanzhelev)'
spelling: cSpell:ignore Sergey Kanzhelev
---
This article describes the functions and responsibilities of the OpenTelemetry
@@ -74,6 +75,6 @@ Thanks [Sarah Novotny](https://twitter.com/sarahnovotny) for review and
feedback!
_A version of this article was [originally posted][] on
[medium.com/opentelemetry.](https://medium.com/opentelemetry)._
[medium.com/opentelemetry](https://medium.com/opentelemetry)._
[originally posted]: {{% param canonical_url %}}

View File

@@ -7,7 +7,7 @@ canonical_url: https://medium.com/opentelemetry/trace-based-testing-with-opentel
spelling: cSpell:ignore Malabi
---
This article introduces you to to Malabi, a new open-source tool for trace-based
This article introduces you to to Malabi, a new open source tool for trace-based
testing. For all the details, see the [original post][].
[original post]: {{% param canonical_url %}}

View File

@@ -4,7 +4,8 @@ linkTitle: Apache APISIX-Opentelemetry Integration
date: 2022-03-26
author: '[Haochao Zhuang](https://github.com/dmsolr), Fei Han'
canonical_url: https://apisix.apache.org/blog/2022/02/28/apisix-integration-opentelemetry-plugin/
spelling: cSpell:ignore APISIX
spelling: cSpell:ignore APISIX Haochao Zhuang roundrobin httpbin openzipkin
spelling: cSpell:ignore pprof zpages
---
This article introduces the Apache APISIX's `opentelemetry` plugin concept and

View File

@@ -6,6 +6,8 @@ author: >-
[Kumar Pratyush](https://github.com/kpratyus), [Sanket
Mehta](https://github.com/sanketmehta28), [Severin
Neumann](https://github.com/svrnm) (Cisco)
spelling: cSpell:ignore Kumar Pratyush Sanket Mehta Neumann nginx webserver
spelling: cSpell:ignore WORKDIR linux proto distro traceparent tracestate xvfz
---
OpenTelemetry is here to help us find the root cause of issues in our software
@@ -23,13 +25,13 @@ issues.
### Describe the bug
For the blog post [Learn how to instrument nginx with OpenTelemetry][] we
created a small sample app that had a frontend application in Node.JS, that
created a small sample app that had a frontend application in Node.js, that
called an nginx, which acted as a reverse proxy for a backend application in
python.
Our goal was to create a re-usable `docker-compose` that would not only show
people how to instrument nginx with OpenTelemetry, but also how a distributed
trace crossing the webserver would look like.
trace crossing the web server would look like.
While Jaeger showed us a trace flowing from the frontend application down to the
nginx, the connection between nginx and python app was not visible: we had two
@@ -142,12 +144,12 @@ to the frontend with `curl localhost:8000`
### What did you expect to see?
In the jaeger UI at [localhost:16686][] you would expect to see traces going
In the Jaeger UI at [localhost:16686][] you would expect to see traces going
from the `frontend` through nginx down to the `python-app`.
### What did you see instead?
In the jaeger UI at [localhost:16686][] you will see two traces, one going from
In the Jaeger UI at [localhost:16686][] you will see two traces, one going from
the `frontend` down to nginx, and another one only for the `python-app`.
## The solution
@@ -159,12 +161,12 @@ problem was either caused by the python application or by the combination of the
nginx instrumentation and the python application.
We could quickly rule out that the python application alone was the issue:
trying out a simple Node.JS application as backend, we got the same result: two
traces, one from frontend to nginx, another one for the Node.JS application
trying out a simple Node.js application as backend, we got the same result: two
traces, one from frontend to nginx, another one for the Node.js application
alone.
With that, we knew that we had a propagation issue: the trace context was not
transferred successfully from nginx down to the python and Node.JS application.
transferred successfully from nginx down to the python and Node.js application.
### The analysis

View File

@@ -6,7 +6,7 @@ author: '[Reese Lee](https://github.com/reese-lee)'
---
Since July, end users have been getting together to discuss their OpenTelemetry
adoption and implementation practices in a vendor-netural space known as the
adoption and implementation practices in a vendor-neutral space known as the
[Monthly Discussion Groups](/community/end-user/discussion-group/) (also
referenced as the End User Discussion Groups).

View File

@@ -4,6 +4,7 @@ linkTitle: Exponential Histograms
date: 2022-08-24
author: '[Jack Berg](https://github.com/jack-berg)'
canonical_url: https://newrelic.com/blog/best-practices/opentelemetry-histograms
spelling: cSpell:ignore proto
---
Histograms are a powerful tool in the observability tool belt. OpenTelemetry
@@ -280,7 +281,6 @@ _A version of this article was [originally posted][] on the New Relic blog._
[originally posted]: {{% param canonical_url %}}
[percentiles]: https://en.wikipedia.org/wiki/Percentile
[api]: /docs/specs/otel/metrics/api/
[sdk]: /docs/specs/otel/metrics/sdk/
[meter provider]: /docs/specs/otel/metrics/api/#meterprovider

View File

@@ -152,7 +152,7 @@ We now have our todo app ready and instrumented. It's time to utilize
OpenTelemetry to its full potential. Our ability to visualize traces is where
the true troubleshooting power of this technology comes into play.
For visualization, we'll be using the open-source Jaeger Tracing.
For visualization, we'll be using the open source Jaeger Tracing.
## Visualization with Jaeger
@@ -160,8 +160,8 @@ For visualization, we'll be using the open-source Jaeger Tracing.
[Jaeger Tracing](https://www.aspecto.io/blog/jaeger-tracing-the-ultimate-guide/)
is a suite of open source projects managing the entire distributed tracing
“stack”: client, collector, and UI. Jaeger UI is the most commonly used
open-source to visualize traces.
“stack”: client, collector, and UI. Jaeger UI is the most commonly used open
source to visualize traces.
Here's what the setup looks like:
@@ -402,7 +402,7 @@ should see your trace on the right:
![Jaeger UI displays opentelemetry traces in go for our todo-service](jaeger-otel-todo.png)
Jaeger UI displays opentelemetry traces in go for our todo-service By clicking
Jaeger UI displays OpenTelemetry traces in go for our todo-service By clicking
the trace, you can drill down and see more details about it that allow you to
further investigate on your own:

View File

@@ -2,8 +2,8 @@
title: Learn how to instrument Apache Http Server with OpenTelemetry
linkTitle: Instrument Apache Http Server
date: 2022-05-27
spelling:
cSpell:ignore Centos centos7 Debajit debuggability libmod OLTP uncompress
spelling: cSpell:ignore Centos centos7 Debajit debuggability libmod OLTP
spelling: cSpell:ignore uncompress webserver linux
author: '[Debajit Das](https://github.com/DebajitDas) (Cisco)'
---
@@ -223,7 +223,7 @@ writing this blog, support for other architectures is not provided.
In the case of Apache 2.2, `libmod_apache_otel22.so` needs to be used
instead of `libmod_apache_otel.so`
- The following directive should be ON for the openTelemetry module to be
- The following directive should be ON for the OpenTelemetry module to be
enabled, else it would be disabled.
![enabled](enabled.png)
@@ -249,7 +249,7 @@ writing this blog, support for other architectures is not provided.
![verify-module](verify-module.png)
- Now, restart the apache module and open telemetry module should be
- Now, restart the apache module and OpenTelemetry module should be
instrumented.
[docker-compose.yml]:

View File

@@ -5,7 +5,7 @@ date: 2022-06-29
author: '[Ruben Vargas](https://github.com/rubenvp8510)'
spelling:
cSpell:ignore k8sattributes k8sattributesprocessor K8sattributes k8sprocessor
cSpell:ignore K8sprocessor KUBE
cSpell:ignore K8sprocessor KUBE resourcedetection replicaset
---
Attaching Kubernetes resource metadata to OpenTelemetry traces is useful to
@@ -19,14 +19,14 @@ use the [k8sattributesprocessor][] in different scenarios.
Details of the OpenTelemetry collector pipeline won't be covered in this post.
For those details, refer to the [collector documentation](/docs/collector/).
## How k8s attributes are attached
## How K8s attributes are attached
At a high level, k8s attributes are attached to traces as
At a high level, K8s attributes are attached to traces as
[resources](/docs/concepts/glossary/#resource). This is for two reasons:
1. K8s attributes fit the definition of what a resource is: an entity for which
telemetry is recorded
2. It centralizes this metadata, which is relevant for any generated span.
1. K8s attributes fit the definition of what a resource is: an entity for which
telemetry is recorded
2. It centralizes this metadata, which is relevant for any generated span.
Let's dive in and see how to do it!
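To make the hunk above more concrete, here is a minimal sketch of a collector pipeline that attaches K8s attributes as resources with the `k8sattributes` processor. The receiver, exporter, and metadata keys shown are illustrative assumptions, not the exact configuration from the post:

```yaml
# Sketch only: receiver and exporter choices are assumptions.
receivers:
  otlp:
    protocols:
      grpc: {}
processors:
  k8sattributes:
    extract:
      metadata: # pod metadata attached to each span as resource attributes
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.pod.name
exporters:
  jaeger:
    endpoint: jaeger-collector:14250 # assumed in-cluster Jaeger endpoint
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [jaeger]
```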
@@ -170,7 +170,7 @@ reducing the scope of the collector service account to a single namespace.
As of [recently][pr#832], the [OpenTelemetry operator][] sets the
`OTEL_RESOURCE_ATTRIBUTES` environment variable on the collector container with
the k8s pod attributes. This lets you to use the resource detector processor,
the K8s pod attributes. This lets you to use the resource detector processor,
which attaches the environment variable values to the spans. This only works
when the collector is deployed in sidecar mode.
@@ -217,7 +217,7 @@ spec:
exporters: [jaeger]
```
And then deploy the [vert.x app example][], you can see the
And then deploy the [vert.x example app][], you can see the
`OTEL_RESOURCE_ATTRIBUTES` environment variable gets injected with some values
in the sidecar container. Some of them use the Kubernetes downward API to get
the attribute values.
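The sidecar scenario described above hinges on the resource detection processor's `env` detector, which reads the `OTEL_RESOURCE_ATTRIBUTES` variable injected by the operator. A minimal sketch follows; everything except the `resourcedetection` block is an assumption added for illustration:

```yaml
# Sketch only: receiver and exporter are placeholders.
receivers:
  otlp:
    protocols:
      grpc: {}
processors:
  resourcedetection:
    detectors: [env] # picks up OTEL_RESOURCE_ATTRIBUTES set on the sidecar container
exporters:
  logging: {} # placeholder exporter for the sketch
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [logging]
```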

View File

@@ -5,13 +5,15 @@ date: 2022-09-08
author: '[Benedikt Bongartz](https://github.com/frzifus)'
spelling:
cSpell:ignore k8sattributes k8sattributesprocessor K8sattributes k8sprocessor
cSpell:ignore K8sprocessor KUBE
cSpell:ignore K8sprocessor KUBE Benedikt Bongartz OIDC Juraci Paixão Kröhling
cSpell:ignore Keycloak dXNlci0xOjEyMzQK nginx basicauth htpasswd llczt
cSpell:ignore letsencrypt kubernetes frzifus oidc rolebinding
---
Exposing an [OpenTelemetry Collector](/docs/collector/) currently requires a
number of configuration steps. The goal of this blog post is to demonstrate
`how to establish a secure communication` between two collectors in different
kubernetes clusters.
Kubernetes clusters.
Details of CRDs and dependency installations are not covered by this post.
@@ -25,7 +27,7 @@ services from sending data.
The OpenTelemetry Collector supports different authentication methods. The most
used are probably:
1. TLS Authentification
1. TLS Authentication
2. OpenID Connect (OIDC-Authentication)
3. HTTP Basic Authentication
@@ -56,7 +58,7 @@ The HTTP Basic Authentication mechanism is quite simple. An HTTP user agent
request. Transmitted credentials are included in the HTTP header by the key
`Authorization` when the connection is established. As a value the
authentication method `basic` is mentioned first, followed by the encoded
crendentials. Note that the credential form is `username:password`.
credentials. Note that the credential form is `username:password`.
In the following example, `dXNlci0xOjEyMzQK` is the encoding for a combination
of `username=user-1` and `password=1234`. Note to encode or decode base64
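As a rough sketch of how the server side of HTTP Basic Authentication described above is usually wired up in a collector: the inline credentials mirror the `user-1`/`1234` example from the post, while the receiver, exporter, and pipeline shown here are assumptions rather than the post's exact configuration.

```yaml
# Sketch only: a collector that rejects OTLP/HTTP requests lacking valid credentials.
extensions:
  basicauth/server:
    htpasswd:
      inline: |
        user-1:1234
receivers:
  otlp:
    protocols:
      http:
        auth:
          authenticator: basicauth/server # requests must carry the Authorization header
exporters:
  logging: {} # placeholder exporter for the sketch
service:
  extensions: [basicauth/server]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
```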
@@ -141,7 +143,7 @@ not contain the
extension. This extension was configured with the name `basicauth/server` and
registered in `otlp/basicauth`. As
[otlp exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/v0.58.0/exporter/otlpexporter)
endpoint the jaeger inmemory service was configured.
endpoint the Jaeger in-memory service was configured.
```yaml
apiVersion: opentelemetry.io/v1alpha1
@@ -204,7 +206,7 @@ otel-collector-app-collector-monitoring ClusterIP 10.245.116.38 <none>
```
Finally, cert-manager is configured to automatically request TLS certificates
from [lets encrypt](https://letsencrypt.org/) and make it available to the
from [Lets Encrypt](https://letsencrypt.org/) and make it available to the
Ingress TLS configuration. The following `ClusterIssuer` and `Ingress` entries
expose the `otel-collector-app-collector` service. Note that you'll need to
replace values for the `email` and `host` fields.
@@ -257,10 +259,10 @@ spec:
In order to be able to determine the origin of the transmitted traces, the
span-tags are extended by identifying metadata with the help of the
[k8sattributes processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.58.0/processor/k8sattributesprocessor).
K8sattribute processor is available in the OpenTelemetry Collector contrib
version. In the next step we create a service account with the necessary
permissions. If you want to learn more about the k8s metadata, you can read this
post "[Improved troubleshooting using k8s metadata](/blog/2022/k8s-metadata)".
It is available in the OpenTelemetry Collector contrib version. In the next step
we create a service account with the necessary permissions. If you want to learn
more about the K8s metadata, you can read this post
"[Improved troubleshooting using K8s metadata](/blog/2022/k8s-metadata)".
```yaml
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -4,6 +4,7 @@ linkTitle: TroublesShooting Node.js Tracing Issues
date: 2022-02-22
canonical_url: https://www.aspecto.io/blog/checklist-for-troubleshooting-opentelemetry-nodejs-tracing-issues
author: '[Amir Blum](https://github.com/blumamir) (Aspecto)'
spelling: cSpell:ignore Parentfor proto bootcamp Preconfigured
---
I'll try to make this one short and to the point. You are probably here because
@@ -67,11 +68,11 @@ trace
To use an auto instrumentation library in your service, you'll need to:
1. Install it: `npm install @opentelemetry/instrumentation-foo`. You can search
the OpenTelemetry Registry to find available instrumentations
2. Create the instrumentation object: `new FooInstrumentation(config)`
3. Make sure instrumentation is enabled: call `registerInstrumentations(...)`
4. Verify you are using the right TracerProvider
1. Install it: `npm install @opentelemetry/instrumentation-foo`. You can search
the OpenTelemetry Registry to find available instrumentations
2. Create the instrumentation object: `new FooInstrumentation(config)`
3. Make sure instrumentation is enabled: call `registerInstrumentations(...)`
4. Verify you are using the right TracerProvider
For most users, the following should cover it:
@@ -276,12 +277,12 @@ A a few common configuration errors are covered in the following subsections.
support.
- **Path** — If you set http collector endpoint (via config in code or
environment variables), **you must also set the path**:
“http://my-collector-host:4318/v1/traces”. If you forget the path, the export
`http://my-collector-host:4318/v1/traces`. If you forget the path, the export
will fail. In gRPC, you must not add path: “grpc://localhost:4317”. This can
be a bit confusing to get right at first.
- **Secure Connection** — Check if your collector expects a secure or insecure
connection. In http, this is determined by the URL scheme (`http:` /
`https:`). In grpc, the scheme has no effect and the connection security is
`https:`). In gRPC, the scheme has no effect and the connection security is
set exclusively by the credentials parameter: `grpc.credentials.createSsl()`,
`grpc.credentials.createInsecure()`, etc. The default security for both HTTP
and gRPC is **Insecure**.
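To make the path rule above concrete, here is a hypothetical compose-style fragment showing the equivalent environment-variable configuration; the service names and `my-collector-host` are assumptions, not values from the post:

```yaml
# Hypothetical fragment: host and service names are assumptions.
services:
  app-using-otlp-http:
    environment:
      # HTTP exporter: the signal-specific endpoint must include the /v1/traces path
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://my-collector-host:4318/v1/traces
  app-using-otlp-grpc:
    environment:
      # gRPC exporter: host and port only, no path
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://my-collector-host:4317
```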
@@ -368,11 +369,11 @@ channels:
- [Opentelemetry-js GitHub repo](https://github.com/open-telemetry/opentelemetry-js)
- [The OpenTelemetry Bootcamp](https://www.aspecto.io/opentelemetry-bootcamp/)
- [Opentelemetry docs](/docs/)
- [OpenTelemetry docs](/docs/)
### Should I Use a Vendor?
Another alternative is to use a vendor's distribution of opentelemetry. These
Another alternative is to use a vendor's distribution of OpenTelemetry. These
distributions can save you time and effort:
- Technical support

View File

@@ -4,6 +4,13 @@ linkTitle: eBay OpenTelemetry
date: 2022-12-19
author: '[Vijay Samuel](https://github.com/vjsamuel) (eBay)'
canonical_url: https://tech.ebayinc.com/engineering/why-and-how-ebay-pivoted-to-opentelemetry/
spelling: cSpell:ignore Vijay Metricbeat sharded Filebeat autodiscover
spelling: cSpell:ignore Dropwizard kube Auditbeat metricbeat statefulset
spelling: cSpell:ignore clusterlocal filereloadreceiver Premendra Aishwarya
spelling: cSpell:ignore Yandapalli Santanu Bhattacharya Feldmeier Rami Charif
spelling: cSpell:ignore Sarbu Golubenco Ruflin Steffen Siering Pérez Aradros
spelling: cSpell:ignore Kroh Christos Markou Soriano Tigran Nigaryan Bogdan
spelling: cSpell:ignore Drutu Ashpole Mirabella Juraci Paixão Kröhling Teoh
---
eBay makes a crucial pivot to OpenTelemetry to better align with industry
@@ -22,10 +29,10 @@ order to pivot to using it. eBay's observability platform Sherlock.io provides
developers and Site Reliability Engineers (SREs) with a robust set of
cloud-native offerings to observe the various applications that power the eBay
ecosystem. Sherlock.io supports the three pillars of observability — metrics,
logs and traces. The platform's metricstore is a clustered and sharded
logs and traces. The platform's metric store is a clustered and sharded
implementation of the Prometheus storage engine. We use the Metricbeat agent to
scrape around 1.5 million Prometheus endpoints every minute, which are ingested
into the metricstores. These endpoints along with recording rules result in
into the metric stores. These endpoints along with recording rules result in
ingesting around 40 million samples per second. The ingested samples result in 3
billion active series being stored on Prometheus. As a result, eBay's
observability platform operates at an uncommonly massive scale, which brings
@@ -44,7 +51,7 @@ cluster. However, an experiment performed during an internal hack week provided
some surprising conclusions and led to us reconsidering our usage of DaemonSets.
In this blog post, we discuss some of the problems we ran into, especially for
metrics scraping, and how we evolved our own solution. We will also discuss in
detail about how we have been navigating the evolving open-source landscape with
detail about how we have been navigating the evolving open source landscape with
regards to licensing and how we intend to align with OpenTelemetry as an
initiative.
@@ -88,7 +95,7 @@ Kubernetes API server to deliver to the agent information such as:
like an SSL certificate?
With more and more complex discovery patterns required, we worked with the Beats
open-source community to enhance the power of autodiscover for our specific
open source community to enhance the power of autodiscover for our specific
needs. Some of the features we contributed include:
- [Discovering multiple sets of configurations](https://github.com/elastic/beats/pull/18883):
@@ -269,19 +276,19 @@ include:
- [Align sanitizing labels and metric names that start with “\_” with Prometheus](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/7112)
- [Ability to disable label sanitization](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/8270)
- [Correctly handle metric names starting with “:”](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/14158)
- [Ability to extract pod labels using regexes](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/9525)
- [Ability to extract pod labels using regular expressions](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/9525)
These issues proved difficult to catch, and sometimes only surfaced when we
attempted to upgrade a Kubernetes cluster to use OpenTelemetry Collector. Once
we hit such an issue, rollback was the only option and we were forced to go back
to the drawing board. One partial solution involved writing a comparison script
that can scrape an endpoint using Metricbeat and OpenTelemetry Collector,
simultaneously ingest them nto the metricstore and compare the metricname and
simultaneously ingest them to the metric store and compare the metric name and
labels to ensure that the scrapes are on par with each other. This greatly
improved our confidence in moving forward.
Sometimes moving forward simply means dropping support for certain features. We
did just that with support for dropwizard metrics and had users migrate away
did just that with support for Dropwizard metrics and had users migrate away
from the same. Outside of semantic differences, we are also actively working on
adding features that we feel are critical for the project, like
[supporting Exemplars](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/14132).
@@ -325,7 +332,7 @@ the many thought leaders who have been involved in these activities:
- [Rami El-Charif](https://www.linkedin.com/in/ramielcharif)
We are extremely grateful to both the Elastic Beats community of the past and
the present Open Telemetry community for supporting and working with us as we
the present OpenTelemetry community for supporting and working with us as we
strive to build world-class Observability offerings for our eBay developer
community.
@@ -340,7 +347,7 @@ Elastic community:
- [Christos Markou](https://www.linkedin.com/in/christos-markou-a6542ab4)
- [Jaime Soriano Pastor](https://www.linkedin.com/in/jaimesorianopastor/)
Open Telemetry Collector community:
OpenTelemetry Collector community:
- [Tigran Nigaryan](https://github.com/tigrannajaryan)
- [Bogdan Drutu](https://github.com/bogdandrutu)

View File

@@ -18,7 +18,7 @@ single open schema that is maintained by OpenTelemetry, so that OpenTelemetry
Semantic Conventions truly is a successor of the Elastic Common Schema.
OpenTelemetry shares the same interest of improving the convergence of
observability and security in this space. We believe this schema merge brings
huge value to the open-source community because:
huge value to the open source community because:
- ECS has years of proven success in the logs, metrics, traces and security
events schema, providing great coverage of the common problem domains.

View File

@@ -4,6 +4,8 @@ linkTitle: End-User Discussions Mar 2023
date: 2023-03-30
author: '[Reese Lee](https://github.com/reese-lee) (New Relic)'
body_class: otel-with-contributions-from
spelling: cSpell:ignore Rexed Hausenblas Rynn Mancuso Villela Pranay Prateek
spelling: cSpell:ignore EMEA APAC distro firehosing distros telecommand endusers
---
With contributions from [Henrik Rexed](https://github.com/henrikrexed)
@@ -195,7 +197,7 @@ agent to the host metrics receiver for infrastructure monitoring.
[Node.js](/docs/instrumentation/js/libraries/#node-autoinstrumentation-package).
- If you're using Kubernetes, they can use the
[OTel operator](https://github.com/open-telemetry/opentelemetry-operator),
which takes care of instrumentations for applications deployed on k8s. The
which takes care of instrumentations for applications deployed on K8s. The
OTel Operator also supports injecting and configuring auto-instrumentation
libraries where available (see point above).
- If you're using AWS lambda, you should check out the
@@ -268,7 +270,7 @@ For a deeper dive into the above topics, check out the following:
- [APAC](https://docs.google.com/document/d/1eDYC97LfvE428cpIf3A_hSGirdNzglPurlxgKCmw8o4)
meeting notes
## Join us!
## Join us
If you have a story to share about how you use OpenTelemetry at your
organization, we'd love to hear from you! Ways to share:

View File

@@ -49,7 +49,7 @@ J also shared:
J's company has a diverse tech ecosystem, ranging from on-premise old-school
mainframes, to AWS Cloud and Azure Cloud, where they run both Windows and Linux
servers. They also use a number of different languages, including
[Node.JS](https://nodejs.org/en/), [.NET](https://dotnet.microsoft.com/en-us/),
[Node.js](https://nodejs.org/en/), [.NET](https://dotnet.microsoft.com/en-us/),
[Java](https://www.java.com/en/), C, C++, and
[PL/I](https://en.wikipedia.org/wiki/PL/I) (mainframe).
@@ -66,7 +66,7 @@ was to use a standard, vendor-neutral way to emit telemetry: OpenTelemetry.
Another reason that his team took to using OpenTelemetry was
[GraphQL](https://graphql.org), which they had been using for four years.
GraphQL
[is an open-source language used to query and manipulate APIs](https://en.wikipedia.org/wiki/GraphQL).
[is an open source language used to query and manipulate APIs](https://en.wikipedia.org/wiki/GraphQL).
With GraphQL, everything is held in the body of data: request, response and
errors, and as a result everything returns an
[HTTP status of 200](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200),
@@ -77,7 +77,7 @@ They pass a lot of data into a GraphQL response, because they have a main
gateway that brings all of the different GraphQL endpoints into a single one, so
it all looks like one massive query. OpenTelemetry exposed massive amounts of
data from their GraphQL systems, with traces as large as **three to four
thousand** spans! Instrumentation has been done around Node.JS GraphQL systems,
thousand** spans! Instrumentation has been done around Node.js GraphQL systems,
and instrumentation has also started for their .NET GraphQL systems.
Another black box that they are still facing is around AWS, and they are looking
@@ -96,7 +96,7 @@ manage deployments. The GitLab custom pipelines deploy Kubernetes YAML files
The team is currently in the early stages of planning to use Amazon's
[cdk8s](https://aws.amazon.com/about-aws/whats-new/2021/10/cdk-kubernetes-cdk8s-available/)
to deploy to Kubernetes, and Flagger to manage those deployments (including
[Canary deploymentss](https://martinfowler.com/bliki/CanaryRelease.html)).
[Canary deployments](https://martinfowler.com/bliki/CanaryRelease.html)).
### How are queries built in GraphQL?
@@ -117,9 +117,9 @@ analysis.
### How do you generate traces?
To instrument their code, they configure the
[Node.JS SDK](/docs/instrumentation/js/getting-started/nodejs/) and use a number
[Node.js SDK](/docs/instrumentation/js/getting-started/nodejs/) and use a number
of
[Node.JS auto-instrumentation plug-ins](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node).
[Node.js auto-instrumentation plug-ins](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node).
While the team is currently only using
[auto-instrumentation](/docs/specs/otel/glossary/#automatic-instrumentation) to
generate traces and [spans](/docs/concepts/observability-primer/#spans), they do
@@ -138,7 +138,7 @@ is installed on all of their nodes.
### Besides traces, do you use other signals?
The team has implemented a custom Node.JS plugin for getting certain
The team has implemented a custom Node.js plugin for getting certain
[metrics](/docs/concepts/signals/metrics/) data about GraphQL, such as
deprecated field usage and overall query usage, which is something that they
can't get from their traces. These metrics are being sent to the observability
@@ -164,7 +164,7 @@ logs to their observability back-end. The ultimate goal is to have
[traces](/docs/concepts/signals/traces/) under one roof.
They have currently been able to automatically link traces to logs in ELK using
[Node.JS Bunyan](https://nodejs.org/en/blog/module/service-logging-in-json-with-bunyan/).
[Node.js Bunyan](https://nodejs.org/en/blog/module/service-logging-in-json-with-bunyan/).
They are hoping to leverage
[OpenTelemetry's Exemplars](/docs/specs/otel/metrics/data-model/#exemplars) to
link traces and metrics.
@@ -215,13 +215,13 @@ organization.
### Are you seeing the benefits of using OpenTelemetry with GraphQL in your production environments?
Using the
[GraphQL OpenTelemetry plugin-for Node.JS](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-graphql)
[GraphQL OpenTelemetry plugin-for Node.js](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-graphql)
made it super easy to identify an issue with a GraphQL resolver that was acting
up in production.
### Were the outputs produced by the instrumentation libraries that you used meaningful to you, or did you have to make any adjustments?
On the Node.JS side, the team used auto-instrumentation for
On the Node.js side, the team used auto-instrumentation for
[HTTP](https://www.npmjs.com/package/@opentelemetry/instrumentation-http),
[Express](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-express),
[GraphQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-graphql),
@@ -277,19 +277,19 @@ There is also a huge focus on
and as part of that effort, maintainers plan to go through the existing
instrumentation packages and to make sure that they're all up to date with the
latest conventions. While it's very well-maintained for certain languages, such
as Java, that is not the case for other languages, such as Node.JS.
as Java, that is not the case for other languages, such as Node.js.
JavaScript environments are akin to the Wild West of Development due to:
- Multiple facets: web side vs server side
- Multiple languages: JavaScript, TypeScript, Elm
- Two similar, but different server-side runtimes: Node.JS and
- Two similar, but different server-side runtimes: Node.js and
[Deno](https://deno.land)
One of J's suggestions is to treat OTel Javascript as a hierarchy, which starts
with a Core JavaScript team that splits into two subgroups: front-end web group,
and back-end group. Front-end and back-end would in turn split. For example, for
the back-end, have a separate Deno and Node.JS group.
the back-end, have a separate Deno and Node.js group.
Another suggestion is to have a contrib maintainers group, separate from core
SDK and API maintainers group.
@@ -309,7 +309,7 @@ contributors.
J and his team have also experienced some challenges with documentation, noting
that there are some gaps in the online docs:
- Under metrics for JavaScript, there is no mention of the Observable Guage at
- Under metrics for JavaScript, there is no mention of the Observable Gauge at
all. J had to go into the code to find it.
- There are some short, very high-level metric API examples. Those examples
currently don't show which libraries you need to bring in. It also doesn't

View File

@@ -6,11 +6,11 @@ author: '[Przemek Delewski](https://github.com/pdelewski/) (Sumo Logic)'
---
Automatic Instrumentation is a process of adding tracing capabilities into user
application without modyfing its source code. There are several techniques to do
that, but all of them more or less work in the same way by injecting additional
code into original one during compile time, link time, run-time or by extending
the operating system in case of [ebpf](https://ebpf.io/). This blogpost presents
method used by OpenTelemetry PHP auto-instrumentation.
application without modifying its source code. There are several techniques to
do that, but all of them more or less work in the same way by injecting
additional code into original one during compile time, link time, run-time or by
extending the operating system in case of [eBPF](https://ebpf.io/). This blog
post presents method used by OpenTelemetry PHP auto-instrumentation.
## Prerequisites
@@ -128,7 +128,7 @@ The final step is to run your application with `run-with-otel-instrumentation`:
The run-with-otel-instrumentation isn't magic: everything it does can be done by
hand by setting environment variables and running your application normally. It
is a convenience tool for rapidly testing out open-telemetry against an
is a convenience tool for rapidly testing out OpenTelemetry against an
application with a working default configuration.
```sh

View File

@@ -89,7 +89,7 @@ One exception to this is the
[^spec-next-release]:
The
[OpenCensus Compatability specification](/docs/specs/otel/compatibility/opencensus/)
[OpenCensus Compatibility specification](/docs/specs/otel/compatibility/opencensus/)
is marked stable for the next specification release.
[^shim-support]: