diff --git a/.markdown_link_check_config.json b/.markdown_link_check_config.json index ea53f8a19..368aba40a 100644 --- a/.markdown_link_check_config.json +++ b/.markdown_link_check_config.json @@ -15,8 +15,5 @@ } ], "retryOn429": true, - "aliveStatusCodes": [ - 200, - 403 - ] + "aliveStatusCodes": [200, 403] } diff --git a/.prettierignore b/.prettierignore new file mode 100644 index 000000000..aa4edb375 --- /dev/null +++ b/.prettierignore @@ -0,0 +1,8 @@ +/.github + +# for the first iteration, we only reformat /docs/cloud* and will add the rest in individual PRs +/docs/** +!/docs/cloud* +!/docs/cloud*/** +/model +/schemas diff --git a/CHANGELOG.md b/CHANGELOG.md index 6580b21cd..a3eeafab7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,9 +12,9 @@ release. Note: This is the first release of Semantic Conventions separate from the Specification. - Add GCP Bare Metal Solution as a cloud platform - ([#64](https://github.com/open-telemetry/semantic-conventions/pull/64)) + ([#64](https://github.com/open-telemetry/semantic-conventions/pull/64)) - Clarify the scope of the HTTP client span. - ([#3290](https://github.com/open-telemetry/opentelemetry-specification/pull/3290)) + ([#3290](https://github.com/open-telemetry/opentelemetry-specification/pull/3290)) - Add moratorium on relying on schema transformations for telemetry stability ([#3380](https://github.com/open-telemetry/opentelemetry-specification/pull/3380)) - Mark "Instrumentation Units" and "Instrumentation Types" sections of the general @@ -34,15 +34,15 @@ Note: This is the first release of Semantic Conventions separate from the Specif ([#3443](https://github.com/open-telemetry/opentelemetry-specification/pull/3443)) - Rename `net.peer.*`, `net.host.*`, and `net.sock.*` attributes to align with ECS ([#3402](https://github.com/open-telemetry/opentelemetry-specification/pull/3402)) - BREAKING: rename `net.peer.name` to `server.address` on client side and to `client.address` on server side, - `net.peer.port` to `server.port` on client side and to `client.port` on server side, - `net.host.name` and `net.host.port` to `server.address` and `server.port` (since `net.host.*` attributes only applied to server instrumentation), - `net.sock.peer.addr` to `server.socket.address` on client side and to `client.socket.address` on server side, - `net.sock.peer.port` to `server.socket.port` on client side and to `client.socket.port` on server side, - `net.sock.peer.name` to `server.socket.domain` (since `net.sock.peer.name` only applied to client instrumentation), - `net.sock.host.addr` to `server.socket.address` (since `net.sock.host.*` only applied to server instrumentation), - `net.sock.host.port` to `server.socket.port` (similarly since `net.sock.host.*` only applied to server instrumentation), - `http.client_ip` to `client.address` + BREAKING: rename `net.peer.name` to `server.address` on client side and to `client.address` on server side, + `net.peer.port` to `server.port` on client side and to `client.port` on server side, + `net.host.name` and `net.host.port` to `server.address` and `server.port` (since `net.host.*` attributes only applied to server instrumentation), + `net.sock.peer.addr` to `server.socket.address` on client side and to `client.socket.address` on server side, + `net.sock.peer.port` to `server.socket.port` on client side and to `client.socket.port` on server side, + `net.sock.peer.name` to `server.socket.domain` (since `net.sock.peer.name` only applied to client instrumentation), + `net.sock.host.addr` to 
`server.socket.address` (since `net.sock.host.*` only applied to server instrumentation), + `net.sock.host.port` to `server.socket.port` (similarly since `net.sock.host.*` only applied to server instrumentation), + `http.client_ip` to `client.address` - BREAKING: Introduce `network.transport` defined as [OSI Transport Layer](https://osi-model.com/transport-layer/) or [Inter-process Communication method](https://en.wikipedia.org/wiki/Inter-process_communication). @@ -349,7 +349,7 @@ This and earlier versions were released as part of [the Specification](https://g ([#2508](https://github.com/open-telemetry/opentelemetry-specification/pull/2508)). - Refactor jvm classes semantic conventions ([#2550](https://github.com/open-telemetry/opentelemetry-specification/pull/2550)). -- Add browser.* attributes +- Add browser.\* attributes ([#2353](https://github.com/open-telemetry/opentelemetry-specification/pull/2353)). - Change JVM runtime metric `process.runtime.jvm.memory.max` to `process.runtime.jvm.memory.limit` @@ -487,13 +487,13 @@ This and earlier versions were released as part of [the Specification](https://g - Add `arch` to `host` semantic conventions ([#1483](https://github.com/open-telemetry/opentelemetry-specification/pull/1483)) - Add `runtime` to `container` semantic conventions ([#1482](https://github.com/open-telemetry/opentelemetry-specification/pull/1482)) - Rename `gcp_gke` to `gcp_kubernetes_engine` to have consistency with other -Google products under `cloud.infrastructure_service` ([#1496](https://github.com/open-telemetry/opentelemetry-specification/pull/1496)) + Google products under `cloud.infrastructure_service` ([#1496](https://github.com/open-telemetry/opentelemetry-specification/pull/1496)) - `http.url` MUST NOT contain credentials ([#1502](https://github.com/open-telemetry/opentelemetry-specification/pull/1502)) - Add `aws.eks.cluster.arn` to EKS specific semantic conventions ([#1484](https://github.com/open-telemetry/opentelemetry-specification/pull/1484)) - Rename `zone` to `availability_zone` in `cloud` semantic conventions ([#1495](https://github.com/open-telemetry/opentelemetry-specification/pull/1495)) - Rename `cloud.infrastructure_service` to `cloud.platform` ([#1530](https://github.com/open-telemetry/opentelemetry-specification/pull/1530)) - Add section describing that libraries and the collector should autogenerate -the semantic convention keys. ([#1515](https://github.com/open-telemetry/opentelemetry-specification/pull/1515)) + the semantic convention keys. ([#1515](https://github.com/open-telemetry/opentelemetry-specification/pull/1515)) ## v1.0.1 (2021-02-11) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 158f74ee1..bc249e082 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -22,7 +22,7 @@ The Specification has a number of tools it uses to check things like style, spelling and link validity. Before using the tools: - Install the latest LTS release of **[Node](https://nodejs.org/)**. -For example, using **[nvm][]** under Linux run: + For example, using **[nvm][]** under Linux run: ```bash nvm install --lts @@ -41,7 +41,7 @@ make check ``` Note: This can take a long time as it checks all links. You should use this -prior to submitting a PR to ensure validity. However, you can run individual +prior to submitting a PR to ensure validity. However, you can run individual checks directly. 
See: @@ -49,6 +49,7 @@ See: - [MarkdownStyle](#markdown-style) - [Misspell Check](#misspell-check) - Markdown link checking (docs TODO) +- Prettier formatting ### YAML to Markdown @@ -125,7 +126,7 @@ make misspell-correction ## Making a Release -- Ensure the referenced specification version is up to date. Use +- Ensure the referenced specification version is up to date. Use [tooling to update the spec](#updating-the-referenced-specification-version) if needed. - Create a staging branch for the release. diff --git a/Makefile b/Makefile index 2e928e04f..4d04b73d8 100644 --- a/Makefile +++ b/Makefile @@ -102,14 +102,22 @@ table-check: schema-check: $(TOOLS_DIR)/schema_check.sh +.PHONY: check-format +check-format: + npm run check:format + +.PHONY: fix-format +fix-format: + npm run fix:format + # Run all checks in order of speed / likely failure. .PHONY: check -check: misspell markdownlint markdown-link-check +check: misspell markdownlint markdown-link-check check-format @echo "All checks complete" # Attempt to fix issues / regenerate tables. .PHONY: fix -fix: table-generation misspell-correction +fix: table-generation misspell-correction fix-format @echo "All autofixes complete" .PHONY: install-tools diff --git a/README.md b/README.md index 9967f6fb6..d568e1955 100644 --- a/README.md +++ b/README.md @@ -25,7 +25,7 @@ Approvers ([@open-telemetry/specs-semconv-approvers](https://github.com/orgs/ope - [Sean Marciniak](https://github.com/MovieStoreGuy), Atlassian - [Ted Young](https://github.com/tedsuo), Lightstep -*Find more about the approver role in [community repository](https://github.com/open-telemetry/community/blob/master/community-membership.md#approver).* +_Find more about the approver role in [community repository](https://github.com/open-telemetry/community/blob/master/community-membership.md#approver)._ Maintainers ([@open-telemetry/specs-semconv-maintainers](https://github.com/orgs/open-telemetry/teams/specs-semconv-maintainers)): @@ -33,6 +33,6 @@ Maintainers ([@open-telemetry/specs-semconv-maintainers](https://github.com/orgs - [Armin Ruech](https://github.com/arminru) - [Reiley Yang](https://github.com/reyang) -*Find more about the maintainer role in [community repository](https://github.com/open-telemetry/community/blob/master/community-membership.md#maintainer).* +_Find more about the maintainer role in [community repository](https://github.com/open-telemetry/community/blob/master/community-membership.md#maintainer)._ [SpecificationVersion]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0 diff --git a/docs/cloud-providers/README.md b/docs/cloud-providers/README.md index c50f4993c..d9d43acce 100644 --- a/docs/cloud-providers/README.md +++ b/docs/cloud-providers/README.md @@ -13,6 +13,6 @@ This document defines semantic conventions for cloud provider SDK spans, metrics Semantic conventions exist for the following cloud provider SDKs: -* [AWS SDK](aws-sdk.md): Semantic Conventions for the *AWS SDK*. +- [AWS SDK](aws-sdk.md): Semantic Conventions for the _AWS SDK_. [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/cloud-providers/aws-sdk.md b/docs/cloud-providers/aws-sdk.md index 27856a3ac..2c4d0ae01 100644 --- a/docs/cloud-providers/aws-sdk.md +++ b/docs/cloud-providers/aws-sdk.md @@ -23,6 +23,7 @@ The span name MUST be of the format `Service.Operation` as per the AWS HTTP API, `S3.ListBuckets`. 
This is equivalent to concatenating `rpc.service` and `rpc.method` with `.` and consistent with the naming guidelines for RPC client spans. + | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| @@ -35,12 +36,13 @@ with the naming guidelines for RPC client spans. **[2]:** This is the logical name of the service from the RPC interface perspective, which can be different from the name of any implementing class. The `code.namespace` attribute may be used to store the latter (despite the attribute name, it may include a class name; e.g., class with method actually executing the call on the server side, RPC client stub class on the client side). + ## AWS service specific attributes The following Semantic Conventions extend the general AWS SDK attributes for specific AWS services: -* [AWS DynamoDB](/docs/database/dynamodb.md): Semantic Conventions for *AWS DynamoDB*. -* [AWS S3](/docs/object-stores/s3.md): Semantic Conventions for *AWS S3*. +- [AWS DynamoDB](/docs/database/dynamodb.md): Semantic Conventions for _AWS DynamoDB_. +- [AWS S3](/docs/object-stores/s3.md): Semantic Conventions for _AWS S3_. [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/cloudevents/README.md b/docs/cloudevents/README.md index 4ba6cde6b..b6d66070c 100644 --- a/docs/cloudevents/README.md +++ b/docs/cloudevents/README.md @@ -13,6 +13,6 @@ This document defines semantic conventions for the [CloudEvents specification](h Semantic conventions for CloudEvents are defined for the following signals: -* [CloudEvents Spans](cloudevents-spans.md): Semantic Conventions for modeling CloudEvents as *spans*. +- [CloudEvents Spans](cloudevents-spans.md): Semantic Conventions for modeling CloudEvents as _spans_. [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/cloudevents/cloudevents-spans.md b/docs/cloudevents/cloudevents-spans.md index d2e94c847..d13a8d530 100644 --- a/docs/cloudevents/cloudevents-spans.md +++ b/docs/cloudevents/cloudevents-spans.md @@ -6,7 +6,7 @@ linkTitle: CloudEvents Spans **Status**: [Experimental][DocumentStatus] - + @@ -20,14 +20,15 @@ linkTitle: CloudEvents Spans + + ## Definitions - From the - [CloudEvents specification](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#overview): +From the +[CloudEvents specification](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#overview): > CloudEvents is a specification for describing event data in common formats -to provide interoperability across services, platforms and systems. -> +> to provide interoperability across services, platforms and systems. For more information on the concepts, terminology and background of CloudEvents consult the @@ -39,7 +40,7 @@ document. A CloudEvent can be sent directly from producer to consumer. For such a scenario, the traditional parent-child trace model works well. However, CloudEvents are also used in distributed systems where an event -can go through many [hops](https://en.wikipedia.org/wiki/Hop_(networking)) +can go through many [hops](<https://en.wikipedia.org/wiki/Hop_(networking)>) until it reaches a consumer. In this scenario, the traditional parent-child trace model is not sufficient to produce a meaningful trace. 
@@ -49,13 +50,13 @@ Consider the following scenario: +----------+ +--------------+ | Producer | ---------------> | Intermediary | +----------+ +--------------+ - | - | - | - v -+----------+ +----------+ -| Consumer | <----------------- | Queue | -+----------+ +----------+ + | + | + | + v ++----------+ +----------+ +| Consumer | <----------------- | Queue | ++----------+ +----------+ ``` With the traditional parent-child trace model, the above scenario would produce @@ -159,12 +160,12 @@ Instrumentation SHOULD create a new span and populate the on the event. This applies when: - A CloudEvent is created by the instrumented library. -It may be impossible or impractical to create the Span during event -creation (e.g. inside the constructor or in a factory method), -so instrumentation MAY create the Span later, when passing the event to the transport layer. + It may be impossible or impractical to create the Span during event + creation (e.g. inside the constructor or in a factory method), + so instrumentation MAY create the Span later, when passing the event to the transport layer. - A CloudEvent is created outside of the instrumented library -(e.g. directly constructed by the application owner, without calling a constructor or factory method), -and passed without the Distributed Tracing Extension populated. + (e.g. directly constructed by the application owner, without calling a constructor or factory method), + and passed without the Distributed Tracing Extension populated. In case a CloudEvent is passed to the instrumented library with the Distributed Tracing Extension already populated, instrumentation MUST NOT create @@ -175,7 +176,7 @@ a span and MUST NOT modify the Distributed Tracing Extension on the event. - Span kind: PRODUCER - Span attributes: Instrumentation MUST add the required attributes defined -in the [table below](#attributes). + in the [table below](#attributes). #### Processing @@ -191,12 +192,13 @@ Instrumentation MAY add attributes to the link to further describe it. - Span kind: CONSUMER - Span attributes: Instrumentation MUST add the required attributes defined -in the [table below](#attributes). + in the [table below](#attributes). ### Attributes The following attributes are applicable to creation and processing Spans. + | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| @@ -206,5 +208,6 @@ The following attributes are applicable to creation and processing Spans. | `cloudevents.event_type` | string | The [event_type](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#type) contains a value describing the type of event related to the originating occurrence. | `com.github.pull_request.opened`; `com.example.object.deleted.v2` | Recommended | | `cloudevents.event_subject` | string | The [subject](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#subject) of the event in the context of the event producer (identified by source). 
| `mynewfile.jpg` | Recommended | + [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/package.json b/package.json index 0bbc27cb3..d51304272 100644 --- a/package.json +++ b/package.json @@ -1,4 +1,13 @@ { + "scripts": { + "_check:format": "npx prettier --check .", + "_check:format:any": "npx prettier --check --ignore-path ''", + "check": "make check", + "check:format": "npm run _check:format || (echo '[help] Run: npm run format'; exit 1)", + "fix": "make fix", + "fix:format": "npm run _check:format -- --write", + "test": "npm run check" + }, "devDependencies": { "gulp": "^4.0.2", "js-yaml": "^4.1.0", diff --git a/supplementary-guidelines/compatibility/aws.md b/supplementary-guidelines/compatibility/aws.md index db8086066..3777cb22a 100644 --- a/supplementary-guidelines/compatibility/aws.md +++ b/supplementary-guidelines/compatibility/aws.md @@ -29,11 +29,11 @@ Propagation headers must be added before the signature is calculated to prevent errors on signed requests. If injecting into the request itself (not just adding additional HTTP headers), additional considerations may apply (for example, the .NET AWS SDK calculates a hash of the attributes it sends and compares it with -the `MD5OfMessageAttributes` that it receives). +the `MD5OfMessageAttributes` that it receives). The following formats are currently natively supported by AWS services for propagation: -* [AWS X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html) +- [AWS X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html) AWS service-supported context propagation is necessary to allow context propagation through AWS managed services, for example: `S3 -> SNS -> SQS -> Lambda`.
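
Below is a minimal sketch of how a contributor might exercise the formatting tooling wired up by this change, assuming the latest Node LTS is installed and the repository's npm dev dependencies are available (prettier itself is invoked via `npx`, which may fetch it on demand if it is not already installed). The `make` targets and `npm` scripts referenced here are the ones added in the `Makefile` and `package.json` hunks above.

```bash
# Check formatting only. Under the hood this runs `npx prettier --check .`,
# scoped by .prettierignore (for this first iteration, only /docs/cloud*
# is formatted; /.github, /model, /schemas and the rest of /docs are skipped).
make check-format        # or: npm run check:format

# Rewrite files that prettier flags (same prettier invocation with --write).
make fix-format          # or: npm run fix:format

# The aggregate targets now include the formatting steps as well:
make check               # misspell, markdownlint, markdown-link-check, check-format
make fix                 # table-generation, misspell-correction, fix-format
```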