Generated code / vendoring

justinsb 2023-10-14 15:09:40 -04:00
parent d8c449a4f8
commit e0a79e1bd1
129 changed files with 25919 additions and 2 deletions


@ -1122,8 +1122,8 @@ go.opentelemetry.io/otel v1.19.0 h1:MuS/TNf4/j4IXsZuJegVzI1cwut7Qc00344rgH7p8bs=
go.opentelemetry.io/otel v1.19.0/go.mod h1:i0QyjOq3UPoTzff0PJB2N66fb4S0+rSbSB15/oyH9fY=
go.opentelemetry.io/otel/metric v1.19.0 h1:aTzpGtV0ar9wlV4Sna9sdJyII5jTVJEvKETPiOKwvpE=
go.opentelemetry.io/otel/metric v1.19.0/go.mod h1:L5rUsV9kM1IxCj1MmSdS+JQAcVm319EUrDVLrt7jqt8=
-go.opentelemetry.io/otel/sdk v1.16.0 h1:Z1Ok1YsijYL0CSJpHt4cS3wDDh7p572grzNrBMiMWgE=
-go.opentelemetry.io/otel/sdk v1.16.0/go.mod h1:tMsIuKXuuIWPBAOrH+eHtvhTL+SntFtXF9QD68aP6p4=
+go.opentelemetry.io/otel/sdk v1.19.0 h1:6USY6zH+L8uMH8L3t1enZPR3WFEmSTADlqldyHtJi3o=
+go.opentelemetry.io/otel/sdk v1.19.0/go.mod h1:NedEbbS4w3C6zElbLdPJKOpJQOrGUJ+GfzpjUvI0v1A=
go.opentelemetry.io/otel/trace v1.19.0 h1:DFVQmlVbfVeOuBRrwdtaehRrWiL1JoVs9CPIQ1Dzxpg=
go.opentelemetry.io/otel/trace v1.19.0/go.mod h1:mfaSyvGyEJEI0nyV2I4qhNQnbBOUUmYZpYojqMnX2vo=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=

0
vendor/github.com/felixge/httpsnoop/.gitignore generated vendored Normal file

6
vendor/github.com/felixge/httpsnoop/.travis.yml generated vendored Normal file

@ -0,0 +1,6 @@
language: go
go:
- 1.6
- 1.7
- 1.8

19
vendor/github.com/felixge/httpsnoop/LICENSE.txt generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2016 Felix Geisendörfer (felix@debuggable.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

10
vendor/github.com/felixge/httpsnoop/Makefile generated vendored Normal file

@ -0,0 +1,10 @@
.PHONY: ci generate clean
ci: clean generate
go test -v ./...
generate:
go generate .
clean:
rm -rf *_generated*.go

95
vendor/github.com/felixge/httpsnoop/README.md generated vendored Normal file

@ -0,0 +1,95 @@
# httpsnoop
Package httpsnoop provides an easy way to capture http related metrics (i.e.
response time, bytes written, and http status code) from your application's
http.Handlers.
Doing this requires non-trivial wrapping of the http.ResponseWriter interface,
which is also exposed for users interested in a more low-level API.
[![GoDoc](https://godoc.org/github.com/felixge/httpsnoop?status.svg)](https://godoc.org/github.com/felixge/httpsnoop)
[![Build Status](https://travis-ci.org/felixge/httpsnoop.svg?branch=master)](https://travis-ci.org/felixge/httpsnoop)
## Usage Example
```go
// myH is your app's http handler, perhaps a http.ServeMux or similar.
var myH http.Handler
// wrappedH wraps myH in order to log every request.
wrappedH := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
m := httpsnoop.CaptureMetrics(myH, w, r)
log.Printf(
"%s %s (code=%d dt=%s written=%d)",
r.Method,
r.URL,
m.Code,
m.Duration,
m.Written,
)
})
http.ListenAndServe(":8080", wrappedH)
```
## Why this package exists
Instrumenting an application's http.Handler is surprisingly difficult.
However, if you google for e.g. "capture ResponseWriter status code" you'll find
lots of advice and code examples that suggest it is a fairly trivial
undertaking. Unfortunately, everything I've seen so far has a high chance of
breaking your application.
The main problem is that a `http.ResponseWriter` often implements additional
interfaces such as `http.Flusher`, `http.CloseNotifier`, `http.Hijacker`, `http.Pusher`, and
`io.ReaderFrom`. So the naive approach of just wrapping `http.ResponseWriter`
in your own struct that also implements the `http.ResponseWriter` interface
will hide the additional interfaces mentioned above. This has a high chance of
introducing subtle bugs into any non-trivial application.
Another approach I've seen people take is to return a struct that implements
all of the interfaces above. However, that's also problematic, because it's
difficult to fake some of these interfaces' behaviors when the underlying
`http.ResponseWriter` doesn't have an implementation. It's also dangerous,
because an application may choose to operate differently, merely because it
detects the presence of these additional interfaces.
This package solves this problem by checking which additional interfaces a
`http.ResponseWriter` implements, returning a wrapped version implementing the
exact same set of interfaces.
Additionally this package properly handles edge cases such as `WriteHeader` not
being called, or called more than once, as well as concurrent calls to
`http.ResponseWriter` methods, and even calls happening after the wrapped
`ServeHTTP` has already returned.
Unfortunately this package is not perfect either. It's possible that it is
still missing some interfaces provided by the go core (let me know if you find
one), and it won't work for applications adding their own interfaces into the
mix. You can however use `httpsnoop.Unwrap(w)` to access the underlying
`http.ResponseWriter` and type-assert the result to its other interfaces.
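For example, a minimal sketch of that escape hatch (`myCustomWriter` here is a
hypothetical interface added by your own middleware, not something httpsnoop
provides):

```go
// Inside a handler that receives the httpsnoop-wrapped writer w:
if cw, ok := httpsnoop.Unwrap(w).(myCustomWriter); ok {
	cw.Reset() // reach the interface httpsnoop doesn't know about
}
```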
However, hopefully the explanation above has sufficiently scared you of rolling
your own solution to this problem. httpsnoop may still break your application,
but at least it tries to avoid it as much as possible.
Anyway, the real problem here is that smuggling additional interfaces inside
`http.ResponseWriter` is a problematic design choice, but it probably goes as
deep as the Go language specification itself. But that's okay, I still prefer
Go over the alternatives ;).
## Performance
```
BenchmarkBaseline-8 20000 94912 ns/op
BenchmarkCaptureMetrics-8 20000 95461 ns/op
```
As you can see, using `CaptureMetrics` on a vanilla http.Handler introduces an
overhead of ~500 ns per http request on my machine. However, the margin of
error appears to be larger than that, so it should be reasonable to
assume that the overhead introduced by `CaptureMetrics` is
negligible.
## License
MIT

86
vendor/github.com/felixge/httpsnoop/capture_metrics.go generated vendored Normal file

@ -0,0 +1,86 @@
package httpsnoop
import (
"io"
"net/http"
"time"
)
// Metrics holds metrics captured from CaptureMetrics.
type Metrics struct {
// Code is the first http response code passed to the WriteHeader func of
// the ResponseWriter. If no such call is made, a default code of 200 is
// assumed instead.
Code int
// Duration is the time it took to execute the handler.
Duration time.Duration
// Written is the number of bytes successfully written by the Write or
// ReadFrom function of the ResponseWriter. ResponseWriters may also write
// data to their underlying connection directly (e.g. headers), but those
// are not tracked. Therefore the number of Written bytes will usually match
// the size of the response body.
Written int64
}
// CaptureMetrics wraps the given hnd, executes it with the given w and r, and
// returns the metrics it captured from it.
func CaptureMetrics(hnd http.Handler, w http.ResponseWriter, r *http.Request) Metrics {
return CaptureMetricsFn(w, func(ww http.ResponseWriter) {
hnd.ServeHTTP(ww, r)
})
}
// CaptureMetricsFn wraps w and calls fn with the wrapped w and returns the
// resulting metrics. This is very similar to CaptureMetrics (which is just
// sugar on top of this func), but is a more usable interface if your
// application doesn't use the Go http.Handler interface.
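//
// A minimal illustrative sketch (handleRequest is a hypothetical helper that
// writes the response to ww):
//
//	m := httpsnoop.CaptureMetricsFn(w, func(ww http.ResponseWriter) {
//		handleRequest(ww, r)
//	})
//	log.Printf("code=%d duration=%s written=%d", m.Code, m.Duration, m.Written)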
func CaptureMetricsFn(w http.ResponseWriter, fn func(http.ResponseWriter)) Metrics {
m := Metrics{Code: http.StatusOK}
m.CaptureMetrics(w, fn)
return m
}
// CaptureMetrics wraps w and calls fn with the wrapped w and updates
// Metrics m with the resulting metrics. This is similar to CaptureMetricsFn,
// but allows one to customize the starting Metrics object.
func (m *Metrics) CaptureMetrics(w http.ResponseWriter, fn func(http.ResponseWriter)) {
var (
start = time.Now()
headerWritten bool
hooks = Hooks{
WriteHeader: func(next WriteHeaderFunc) WriteHeaderFunc {
return func(code int) {
next(code)
if !headerWritten {
m.Code = code
headerWritten = true
}
}
},
Write: func(next WriteFunc) WriteFunc {
return func(p []byte) (int, error) {
n, err := next(p)
m.Written += int64(n)
headerWritten = true
return n, err
}
},
ReadFrom: func(next ReadFromFunc) ReadFromFunc {
return func(src io.Reader) (int64, error) {
n, err := next(src)
headerWritten = true
m.Written += n
return n, err
}
},
}
)
fn(Wrap(w, hooks))
m.Duration += time.Since(start)
}

10
vendor/github.com/felixge/httpsnoop/docs.go generated vendored Normal file

@ -0,0 +1,10 @@
// Package httpsnoop provides an easy way to capture http related metrics (i.e.
// response time, bytes written, and http status code) from your application's
// http.Handlers.
//
// Doing this requires non-trivial wrapping of the http.ResponseWriter
// interface, which is also exposed for users interested in a more low-level
// API.
package httpsnoop
//go:generate go run codegen/main.go


@ -0,0 +1,436 @@
// +build go1.8
// Code generated by "httpsnoop/codegen"; DO NOT EDIT
package httpsnoop
import (
"bufio"
"io"
"net"
"net/http"
)
// HeaderFunc is part of the http.ResponseWriter interface.
type HeaderFunc func() http.Header
// WriteHeaderFunc is part of the http.ResponseWriter interface.
type WriteHeaderFunc func(code int)
// WriteFunc is part of the http.ResponseWriter interface.
type WriteFunc func(b []byte) (int, error)
// FlushFunc is part of the http.Flusher interface.
type FlushFunc func()
// CloseNotifyFunc is part of the http.CloseNotifier interface.
type CloseNotifyFunc func() <-chan bool
// HijackFunc is part of the http.Hijacker interface.
type HijackFunc func() (net.Conn, *bufio.ReadWriter, error)
// ReadFromFunc is part of the io.ReaderFrom interface.
type ReadFromFunc func(src io.Reader) (int64, error)
// PushFunc is part of the http.Pusher interface.
type PushFunc func(target string, opts *http.PushOptions) error
// Hooks defines a set of method interceptors for methods included in
// http.ResponseWriter as well as some others. You can think of them as
// middleware for the function calls they target. See Wrap for more details.
type Hooks struct {
Header func(HeaderFunc) HeaderFunc
WriteHeader func(WriteHeaderFunc) WriteHeaderFunc
Write func(WriteFunc) WriteFunc
Flush func(FlushFunc) FlushFunc
CloseNotify func(CloseNotifyFunc) CloseNotifyFunc
Hijack func(HijackFunc) HijackFunc
ReadFrom func(ReadFromFunc) ReadFromFunc
Push func(PushFunc) PushFunc
}
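// A minimal illustrative sketch: a Hooks value that counts the bytes written
// through the wrapped writer (w is assumed to be the original
// http.ResponseWriter handed to your handler):
//
//	var written int64
//	ww := Wrap(w, Hooks{
//		Write: func(next WriteFunc) WriteFunc {
//			return func(p []byte) (int, error) {
//				n, err := next(p)
//				written += int64(n)
//				return n, err
//			}
//		},
//	})
//	// ww exposes the same optional interfaces as w; use it in place of w.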
// Wrap returns a wrapped version of w that provides the exact same interface
// as w. Specifically if w implements any combination of:
//
// - http.Flusher
// - http.CloseNotifier
// - http.Hijacker
// - io.ReaderFrom
// - http.Pusher
//
// The wrapped version will implement the exact same combination. If no hooks
// are set, the wrapped version also behaves exactly as w. Hooks targeting
// methods not supported by w are ignored. Any other hooks will intercept the
// method they target and may modify the call's arguments and/or return values.
// The CaptureMetrics implementation serves as a working example for how the
// hooks can be used.
func Wrap(w http.ResponseWriter, hooks Hooks) http.ResponseWriter {
rw := &rw{w: w, h: hooks}
_, i0 := w.(http.Flusher)
_, i1 := w.(http.CloseNotifier)
_, i2 := w.(http.Hijacker)
_, i3 := w.(io.ReaderFrom)
_, i4 := w.(http.Pusher)
switch {
// combination 1/32
case !i0 && !i1 && !i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
}{rw, rw}
// combination 2/32
case !i0 && !i1 && !i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Pusher
}{rw, rw, rw}
// combination 3/32
case !i0 && !i1 && !i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
io.ReaderFrom
}{rw, rw, rw}
// combination 4/32
case !i0 && !i1 && !i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw}
// combination 5/32
case !i0 && !i1 && i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
}{rw, rw, rw}
// combination 6/32
case !i0 && !i1 && i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
http.Pusher
}{rw, rw, rw, rw}
// combination 7/32
case !i0 && !i1 && i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 8/32
case !i0 && !i1 && i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 9/32
case !i0 && i1 && !i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
}{rw, rw, rw}
// combination 10/32
case !i0 && i1 && !i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Pusher
}{rw, rw, rw, rw}
// combination 11/32
case !i0 && i1 && !i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 12/32
case !i0 && i1 && !i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 13/32
case !i0 && i1 && i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
}{rw, rw, rw, rw}
// combination 14/32
case !i0 && i1 && i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 15/32
case !i0 && i1 && i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 16/32
case !i0 && i1 && i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw, rw}
// combination 17/32
case i0 && !i1 && !i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
}{rw, rw, rw}
// combination 18/32
case i0 && !i1 && !i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Pusher
}{rw, rw, rw, rw}
// combination 19/32
case i0 && !i1 && !i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 20/32
case i0 && !i1 && !i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 21/32
case i0 && !i1 && i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
}{rw, rw, rw, rw}
// combination 22/32
case i0 && !i1 && i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 23/32
case i0 && !i1 && i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 24/32
case i0 && !i1 && i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw, rw}
// combination 25/32
case i0 && i1 && !i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
}{rw, rw, rw, rw}
// combination 26/32
case i0 && i1 && !i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Pusher
}{rw, rw, rw, rw, rw}
// combination 27/32
case i0 && i1 && !i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 28/32
case i0 && i1 && !i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw, rw}
// combination 29/32
case i0 && i1 && i2 && !i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
}{rw, rw, rw, rw, rw}
// combination 30/32
case i0 && i1 && i2 && !i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
http.Pusher
}{rw, rw, rw, rw, rw, rw}
// combination 31/32
case i0 && i1 && i2 && i3 && !i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw, rw}
// combination 32/32
case i0 && i1 && i2 && i3 && i4:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
io.ReaderFrom
http.Pusher
}{rw, rw, rw, rw, rw, rw, rw}
}
panic("unreachable")
}
type rw struct {
w http.ResponseWriter
h Hooks
}
func (w *rw) Unwrap() http.ResponseWriter {
return w.w
}
func (w *rw) Header() http.Header {
f := w.w.(http.ResponseWriter).Header
if w.h.Header != nil {
f = w.h.Header(f)
}
return f()
}
func (w *rw) WriteHeader(code int) {
f := w.w.(http.ResponseWriter).WriteHeader
if w.h.WriteHeader != nil {
f = w.h.WriteHeader(f)
}
f(code)
}
func (w *rw) Write(b []byte) (int, error) {
f := w.w.(http.ResponseWriter).Write
if w.h.Write != nil {
f = w.h.Write(f)
}
return f(b)
}
func (w *rw) Flush() {
f := w.w.(http.Flusher).Flush
if w.h.Flush != nil {
f = w.h.Flush(f)
}
f()
}
func (w *rw) CloseNotify() <-chan bool {
f := w.w.(http.CloseNotifier).CloseNotify
if w.h.CloseNotify != nil {
f = w.h.CloseNotify(f)
}
return f()
}
func (w *rw) Hijack() (net.Conn, *bufio.ReadWriter, error) {
f := w.w.(http.Hijacker).Hijack
if w.h.Hijack != nil {
f = w.h.Hijack(f)
}
return f()
}
func (w *rw) ReadFrom(src io.Reader) (int64, error) {
f := w.w.(io.ReaderFrom).ReadFrom
if w.h.ReadFrom != nil {
f = w.h.ReadFrom(f)
}
return f(src)
}
func (w *rw) Push(target string, opts *http.PushOptions) error {
f := w.w.(http.Pusher).Push
if w.h.Push != nil {
f = w.h.Push(f)
}
return f(target, opts)
}
type Unwrapper interface {
Unwrap() http.ResponseWriter
}
// Unwrap returns the underlying http.ResponseWriter from within zero or more
// layers of httpsnoop wrappers.
func Unwrap(w http.ResponseWriter) http.ResponseWriter {
if rw, ok := w.(Unwrapper); ok {
// recurse until rw.Unwrap() returns a non-Unwrapper
return Unwrap(rw.Unwrap())
} else {
return w
}
}


@ -0,0 +1,278 @@
// +build !go1.8
// Code generated by "httpsnoop/codegen"; DO NOT EDIT
package httpsnoop
import (
"bufio"
"io"
"net"
"net/http"
)
// HeaderFunc is part of the http.ResponseWriter interface.
type HeaderFunc func() http.Header
// WriteHeaderFunc is part of the http.ResponseWriter interface.
type WriteHeaderFunc func(code int)
// WriteFunc is part of the http.ResponseWriter interface.
type WriteFunc func(b []byte) (int, error)
// FlushFunc is part of the http.Flusher interface.
type FlushFunc func()
// CloseNotifyFunc is part of the http.CloseNotifier interface.
type CloseNotifyFunc func() <-chan bool
// HijackFunc is part of the http.Hijacker interface.
type HijackFunc func() (net.Conn, *bufio.ReadWriter, error)
// ReadFromFunc is part of the io.ReaderFrom interface.
type ReadFromFunc func(src io.Reader) (int64, error)
// Hooks defines a set of method interceptors for methods included in
// http.ResponseWriter as well as some others. You can think of them as
// middleware for the function calls they target. See Wrap for more details.
type Hooks struct {
Header func(HeaderFunc) HeaderFunc
WriteHeader func(WriteHeaderFunc) WriteHeaderFunc
Write func(WriteFunc) WriteFunc
Flush func(FlushFunc) FlushFunc
CloseNotify func(CloseNotifyFunc) CloseNotifyFunc
Hijack func(HijackFunc) HijackFunc
ReadFrom func(ReadFromFunc) ReadFromFunc
}
// Wrap returns a wrapped version of w that provides the exact same interface
// as w. Specifically if w implements any combination of:
//
// - http.Flusher
// - http.CloseNotifier
// - http.Hijacker
// - io.ReaderFrom
//
// The wrapped version will implement the exact same combination. If no hooks
// are set, the wrapped version also behaves exactly as w. Hooks targeting
// methods not supported by w are ignored. Any other hooks will intercept the
// method they target and may modify the call's arguments and/or return values.
// The CaptureMetrics implementation serves as a working example for how the
// hooks can be used.
func Wrap(w http.ResponseWriter, hooks Hooks) http.ResponseWriter {
rw := &rw{w: w, h: hooks}
_, i0 := w.(http.Flusher)
_, i1 := w.(http.CloseNotifier)
_, i2 := w.(http.Hijacker)
_, i3 := w.(io.ReaderFrom)
switch {
// combination 1/16
case !i0 && !i1 && !i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
}{rw, rw}
// combination 2/16
case !i0 && !i1 && !i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
io.ReaderFrom
}{rw, rw, rw}
// combination 3/16
case !i0 && !i1 && i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
}{rw, rw, rw}
// combination 4/16
case !i0 && !i1 && i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 5/16
case !i0 && i1 && !i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
}{rw, rw, rw}
// combination 6/16
case !i0 && i1 && !i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 7/16
case !i0 && i1 && i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
}{rw, rw, rw, rw}
// combination 8/16
case !i0 && i1 && i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.CloseNotifier
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 9/16
case i0 && !i1 && !i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
}{rw, rw, rw}
// combination 10/16
case i0 && !i1 && !i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
io.ReaderFrom
}{rw, rw, rw, rw}
// combination 11/16
case i0 && !i1 && i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
}{rw, rw, rw, rw}
// combination 12/16
case i0 && !i1 && i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 13/16
case i0 && i1 && !i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
}{rw, rw, rw, rw}
// combination 14/16
case i0 && i1 && !i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
io.ReaderFrom
}{rw, rw, rw, rw, rw}
// combination 15/16
case i0 && i1 && i2 && !i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
}{rw, rw, rw, rw, rw}
// combination 16/16
case i0 && i1 && i2 && i3:
return struct {
Unwrapper
http.ResponseWriter
http.Flusher
http.CloseNotifier
http.Hijacker
io.ReaderFrom
}{rw, rw, rw, rw, rw, rw}
}
panic("unreachable")
}
type rw struct {
w http.ResponseWriter
h Hooks
}
func (w *rw) Unwrap() http.ResponseWriter {
return w.w
}
func (w *rw) Header() http.Header {
f := w.w.(http.ResponseWriter).Header
if w.h.Header != nil {
f = w.h.Header(f)
}
return f()
}
func (w *rw) WriteHeader(code int) {
f := w.w.(http.ResponseWriter).WriteHeader
if w.h.WriteHeader != nil {
f = w.h.WriteHeader(f)
}
f(code)
}
func (w *rw) Write(b []byte) (int, error) {
f := w.w.(http.ResponseWriter).Write
if w.h.Write != nil {
f = w.h.Write(f)
}
return f(b)
}
func (w *rw) Flush() {
f := w.w.(http.Flusher).Flush
if w.h.Flush != nil {
f = w.h.Flush(f)
}
f()
}
func (w *rw) CloseNotify() <-chan bool {
f := w.w.(http.CloseNotifier).CloseNotify
if w.h.CloseNotify != nil {
f = w.h.CloseNotify(f)
}
return f()
}
func (w *rw) Hijack() (net.Conn, *bufio.ReadWriter, error) {
f := w.w.(http.Hijacker).Hijack
if w.h.Hijack != nil {
f = w.h.Hijack(f)
}
return f()
}
func (w *rw) ReadFrom(src io.Reader) (int64, error) {
f := w.w.(io.ReaderFrom).ReadFrom
if w.h.ReadFrom != nil {
f = w.h.ReadFrom(f)
}
return f(src)
}
type Unwrapper interface {
Unwrap() http.ResponseWriter
}
// Unwrap returns the underlying http.ResponseWriter from within zero or more
// layers of httpsnoop wrappers.
func Unwrap(w http.ResponseWriter) http.ResponseWriter {
if rw, ok := w.(Unwrapper); ok {
// recurse until rw.Unwrap() returns a non-Unwrapper
return Unwrap(rw.Unwrap())
} else {
return w
}
}


@ -0,0 +1,27 @@
Copyright (c) 2015, Gengo, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Gengo, Inc. nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -0,0 +1,35 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
package(default_visibility = ["//visibility:public"])
go_library(
name = "httprule",
srcs = [
"compile.go",
"parse.go",
"types.go",
],
importpath = "github.com/grpc-ecosystem/grpc-gateway/v2/internal/httprule",
deps = ["//utilities"],
)
go_test(
name = "httprule_test",
size = "small",
srcs = [
"compile_test.go",
"parse_test.go",
"types_test.go",
],
embed = [":httprule"],
deps = [
"//utilities",
"@com_github_golang_glog//:glog",
],
)
alias(
name = "go_default_library",
actual = ":httprule",
visibility = ["//:__subpackages__"],
)


@ -0,0 +1,121 @@
package httprule
import (
"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
)
const (
opcodeVersion = 1
)
// Template is a compiled representation of path templates.
type Template struct {
// Version is the version number of the format.
Version int
// OpCodes is a sequence of operations.
OpCodes []int
// Pool is a constant pool
Pool []string
// Verb is a VERB part in the template.
Verb string
// Fields is a list of field paths bound in this template.
Fields []string
// Original template (example: /v1/a_bit_of_everything)
Template string
}
// Compiler compiles the utilities representation of path templates into marshallable operations.
// They can be unmarshalled by runtime.NewPattern.
type Compiler interface {
Compile() Template
}
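// A minimal illustrative sketch of the pipeline, using an example template:
//
//	c, err := Parse("/v1/{name=messages/*}")
//	if err != nil {
//		// handle the invalid template
//	}
//	tmpl := c.Compile()
//	// tmpl.OpCodes, tmpl.Pool and tmpl.Verb can then be passed to runtime.NewPattern.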
type op struct {
// code is the opcode of the operation
code utilities.OpCode
// str is a string operand of the code.
// num is ignored if str is not empty.
str string
// num is a numeric operand of the code.
num int
}
func (w wildcard) compile() []op {
return []op{
{code: utilities.OpPush},
}
}
func (w deepWildcard) compile() []op {
return []op{
{code: utilities.OpPushM},
}
}
func (l literal) compile() []op {
return []op{
{
code: utilities.OpLitPush,
str: string(l),
},
}
}
func (v variable) compile() []op {
var ops []op
for _, s := range v.segments {
ops = append(ops, s.compile()...)
}
ops = append(ops, op{
code: utilities.OpConcatN,
num: len(v.segments),
}, op{
code: utilities.OpCapture,
str: v.path,
})
return ops
}
func (t template) Compile() Template {
var rawOps []op
for _, s := range t.segments {
rawOps = append(rawOps, s.compile()...)
}
var (
ops []int
pool []string
fields []string
)
consts := make(map[string]int)
for _, op := range rawOps {
ops = append(ops, int(op.code))
if op.str == "" {
ops = append(ops, op.num)
} else {
// eof segment literal represents the "/" path pattern
if op.str == eof {
op.str = ""
}
if _, ok := consts[op.str]; !ok {
consts[op.str] = len(pool)
pool = append(pool, op.str)
}
ops = append(ops, consts[op.str])
}
if op.code == utilities.OpCapture {
fields = append(fields, op.str)
}
}
return Template{
Version: opcodeVersion,
OpCodes: ops,
Pool: pool,
Verb: t.verb,
Fields: fields,
Template: t.template,
}
}


@ -0,0 +1,11 @@
//go:build gofuzz
// +build gofuzz
package httprule
func Fuzz(data []byte) int {
if _, err := Parse(string(data)); err != nil {
return 0
}
return 0
}


@ -0,0 +1,368 @@
package httprule
import (
"errors"
"fmt"
"strings"
)
// InvalidTemplateError indicates that the path template is not valid.
type InvalidTemplateError struct {
tmpl string
msg string
}
func (e InvalidTemplateError) Error() string {
return fmt.Sprintf("%s: %s", e.msg, e.tmpl)
}
// Parse parses the string representation of a path template.
func Parse(tmpl string) (Compiler, error) {
if !strings.HasPrefix(tmpl, "/") {
return template{}, InvalidTemplateError{tmpl: tmpl, msg: "no leading /"}
}
tokens, verb := tokenize(tmpl[1:])
p := parser{tokens: tokens}
segs, err := p.topLevelSegments()
if err != nil {
return template{}, InvalidTemplateError{tmpl: tmpl, msg: err.Error()}
}
return template{
segments: segs,
verb: verb,
template: tmpl,
}, nil
}
func tokenize(path string) (tokens []string, verb string) {
if path == "" {
return []string{eof}, ""
}
const (
init = iota
field
nested
)
st := init
for path != "" {
var idx int
switch st {
case init:
idx = strings.IndexAny(path, "/{")
case field:
idx = strings.IndexAny(path, ".=}")
case nested:
idx = strings.IndexAny(path, "/}")
}
if idx < 0 {
tokens = append(tokens, path)
break
}
switch r := path[idx]; r {
case '/', '.':
case '{':
st = field
case '=':
st = nested
case '}':
st = init
}
if idx == 0 {
tokens = append(tokens, path[idx:idx+1])
} else {
tokens = append(tokens, path[:idx], path[idx:idx+1])
}
path = path[idx+1:]
}
l := len(tokens)
// See
// https://github.com/grpc-ecosystem/grpc-gateway/pull/1947#issuecomment-774523693 ;
// although normal and backwards-compat logic here is to use the last index
// of a colon, if the final segment is a variable followed by a colon, the
// part following the colon must be a verb. Hence if the previous token is
// an end var marker, we switch the index we're looking for to Index instead
// of LastIndex, so that we correctly grab the remaining part of the path as
// the verb.
var penultimateTokenIsEndVar bool
switch l {
case 0, 1:
// Not enough to be variable so skip this logic and don't result in an
// invalid index
default:
penultimateTokenIsEndVar = tokens[l-2] == "}"
}
t := tokens[l-1]
var idx int
if penultimateTokenIsEndVar {
idx = strings.Index(t, ":")
} else {
idx = strings.LastIndex(t, ":")
}
if idx == 0 {
tokens, verb = tokens[:l-1], t[1:]
} else if idx > 0 {
tokens[l-1], verb = t[:idx], t[idx+1:]
}
tokens = append(tokens, eof)
return tokens, verb
}
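// For illustration, tokenize("v1/{name=messages/*}:cancel") returns the tokens
// ["v1", "/", "{", "name", "=", "messages", "/", "*", "}", eof] and the verb
// "cancel".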
// parser is a parser of the template syntax defined in github.com/googleapis/googleapis/google/api/http.proto.
type parser struct {
tokens []string
accepted []string
}
// topLevelSegments is the target of this parser.
func (p *parser) topLevelSegments() ([]segment, error) {
if _, err := p.accept(typeEOF); err == nil {
p.tokens = p.tokens[:0]
return []segment{literal(eof)}, nil
}
segs, err := p.segments()
if err != nil {
return nil, err
}
if _, err := p.accept(typeEOF); err != nil {
return nil, fmt.Errorf("unexpected token %q after segments %q", p.tokens[0], strings.Join(p.accepted, ""))
}
return segs, nil
}
func (p *parser) segments() ([]segment, error) {
s, err := p.segment()
if err != nil {
return nil, err
}
segs := []segment{s}
for {
if _, err := p.accept("/"); err != nil {
return segs, nil
}
s, err := p.segment()
if err != nil {
return segs, err
}
segs = append(segs, s)
}
}
func (p *parser) segment() (segment, error) {
if _, err := p.accept("*"); err == nil {
return wildcard{}, nil
}
if _, err := p.accept("**"); err == nil {
return deepWildcard{}, nil
}
if l, err := p.literal(); err == nil {
return l, nil
}
v, err := p.variable()
if err != nil {
return nil, fmt.Errorf("segment neither wildcards, literal or variable: %w", err)
}
return v, nil
}
func (p *parser) literal() (segment, error) {
lit, err := p.accept(typeLiteral)
if err != nil {
return nil, err
}
return literal(lit), nil
}
func (p *parser) variable() (segment, error) {
if _, err := p.accept("{"); err != nil {
return nil, err
}
path, err := p.fieldPath()
if err != nil {
return nil, err
}
var segs []segment
if _, err := p.accept("="); err == nil {
segs, err = p.segments()
if err != nil {
return nil, fmt.Errorf("invalid segment in variable %q: %w", path, err)
}
} else {
segs = []segment{wildcard{}}
}
if _, err := p.accept("}"); err != nil {
return nil, fmt.Errorf("unterminated variable segment: %s", path)
}
return variable{
path: path,
segments: segs,
}, nil
}
func (p *parser) fieldPath() (string, error) {
c, err := p.accept(typeIdent)
if err != nil {
return "", err
}
components := []string{c}
for {
if _, err := p.accept("."); err != nil {
return strings.Join(components, "."), nil
}
c, err := p.accept(typeIdent)
if err != nil {
return "", fmt.Errorf("invalid field path component: %w", err)
}
components = append(components, c)
}
}
// A termType is a type of terminal symbols.
type termType string
// These constants define some of the valid values of termType.
// They improve readability of parse functions.
//
// You can also use "/", "*", "**", "." or "=" as valid values.
const (
typeIdent = termType("ident")
typeLiteral = termType("literal")
typeEOF = termType("$")
)
// eof is the terminal symbol which always appears at the end of token sequence.
const eof = "\u0000"
// accept tries to accept a token in "p".
// This function consumes a token and returns it if it matches the specified "term".
// If it doesn't match, the function does not consume any tokens and returns an error.
func (p *parser) accept(term termType) (string, error) {
t := p.tokens[0]
switch term {
case "/", "*", "**", ".", "=", "{", "}":
if t != string(term) && t != "/" {
return "", fmt.Errorf("expected %q but got %q", term, t)
}
case typeEOF:
if t != eof {
return "", fmt.Errorf("expected EOF but got %q", t)
}
case typeIdent:
if err := expectIdent(t); err != nil {
return "", err
}
case typeLiteral:
if err := expectPChars(t); err != nil {
return "", err
}
default:
return "", fmt.Errorf("unknown termType %q", term)
}
p.tokens = p.tokens[1:]
p.accepted = append(p.accepted, t)
return t, nil
}
// expectPChars determines if "t" consists of only pchars defined in RFC3986.
//
// https://www.ietf.org/rfc/rfc3986.txt, P.49
//
// pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
// unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
// sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
// / "*" / "+" / "," / ";" / "="
// pct-encoded = "%" HEXDIG HEXDIG
func expectPChars(t string) error {
const (
init = iota
pct1
pct2
)
st := init
for _, r := range t {
if st != init {
if !isHexDigit(r) {
return fmt.Errorf("invalid hexdigit: %c(%U)", r, r)
}
switch st {
case pct1:
st = pct2
case pct2:
st = init
}
continue
}
// unreserved
switch {
case 'A' <= r && r <= 'Z':
continue
case 'a' <= r && r <= 'z':
continue
case '0' <= r && r <= '9':
continue
}
switch r {
case '-', '.', '_', '~':
// unreserved
case '!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=':
// sub-delims
case ':', '@':
// rest of pchar
case '%':
// pct-encoded
st = pct1
default:
return fmt.Errorf("invalid character in path segment: %q(%U)", r, r)
}
}
if st != init {
return fmt.Errorf("invalid percent-encoding in %q", t)
}
return nil
}
// expectIdent determines if "ident" is a valid identifier in .proto schema ([[:alpha:]_][[:alphanum:]_]*).
func expectIdent(ident string) error {
if ident == "" {
return errors.New("empty identifier")
}
for pos, r := range ident {
switch {
case '0' <= r && r <= '9':
if pos == 0 {
return fmt.Errorf("identifier starting with digit: %s", ident)
}
continue
case 'A' <= r && r <= 'Z':
continue
case 'a' <= r && r <= 'z':
continue
case r == '_':
continue
default:
return fmt.Errorf("invalid character %q(%U) in identifier: %s", r, r, ident)
}
}
return nil
}
func isHexDigit(r rune) bool {
switch {
case '0' <= r && r <= '9':
return true
case 'A' <= r && r <= 'F':
return true
case 'a' <= r && r <= 'f':
return true
}
return false
}


@ -0,0 +1,60 @@
package httprule
import (
"fmt"
"strings"
)
type template struct {
segments []segment
verb string
template string
}
type segment interface {
fmt.Stringer
compile() (ops []op)
}
type wildcard struct{}
type deepWildcard struct{}
type literal string
type variable struct {
path string
segments []segment
}
func (wildcard) String() string {
return "*"
}
func (deepWildcard) String() string {
return "**"
}
func (l literal) String() string {
return string(l)
}
func (v variable) String() string {
var segs []string
for _, s := range v.segments {
segs = append(segs, s.String())
}
return fmt.Sprintf("{%s=%s}", v.path, strings.Join(segs, "/"))
}
func (t template) String() string {
var segs []string
for _, s := range t.segments {
segs = append(segs, s.String())
}
str := strings.Join(segs, "/")
if t.verb != "" {
str = fmt.Sprintf("%s:%s", str, t.verb)
}
return "/" + str
}


@ -0,0 +1,97 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
package(default_visibility = ["//visibility:public"])
go_library(
name = "runtime",
srcs = [
"context.go",
"convert.go",
"doc.go",
"errors.go",
"fieldmask.go",
"handler.go",
"marshal_httpbodyproto.go",
"marshal_json.go",
"marshal_jsonpb.go",
"marshal_proto.go",
"marshaler.go",
"marshaler_registry.go",
"mux.go",
"pattern.go",
"proto2_convert.go",
"query.go",
],
importpath = "github.com/grpc-ecosystem/grpc-gateway/v2/runtime",
deps = [
"//internal/httprule",
"//utilities",
"@go_googleapis//google/api:httpbody_go_proto",
"@org_golang_google_grpc//codes",
"@org_golang_google_grpc//grpclog",
"@org_golang_google_grpc//health/grpc_health_v1",
"@org_golang_google_grpc//metadata",
"@org_golang_google_grpc//status",
"@org_golang_google_protobuf//encoding/protojson",
"@org_golang_google_protobuf//proto",
"@org_golang_google_protobuf//reflect/protoreflect",
"@org_golang_google_protobuf//reflect/protoregistry",
"@org_golang_google_protobuf//types/known/durationpb",
"@org_golang_google_protobuf//types/known/fieldmaskpb",
"@org_golang_google_protobuf//types/known/structpb",
"@org_golang_google_protobuf//types/known/timestamppb",
"@org_golang_google_protobuf//types/known/wrapperspb",
],
)
go_test(
name = "runtime_test",
size = "small",
srcs = [
"context_test.go",
"convert_test.go",
"errors_test.go",
"fieldmask_test.go",
"handler_test.go",
"marshal_httpbodyproto_test.go",
"marshal_json_test.go",
"marshal_jsonpb_test.go",
"marshal_proto_test.go",
"marshaler_registry_test.go",
"mux_internal_test.go",
"mux_test.go",
"pattern_test.go",
"query_fuzz_test.go",
"query_test.go",
],
embed = [":runtime"],
deps = [
"//runtime/internal/examplepb",
"//utilities",
"@com_github_google_go_cmp//cmp",
"@com_github_google_go_cmp//cmp/cmpopts",
"@go_googleapis//google/api:httpbody_go_proto",
"@go_googleapis//google/rpc:errdetails_go_proto",
"@go_googleapis//google/rpc:status_go_proto",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes",
"@org_golang_google_grpc//health/grpc_health_v1",
"@org_golang_google_grpc//metadata",
"@org_golang_google_grpc//status",
"@org_golang_google_protobuf//encoding/protojson",
"@org_golang_google_protobuf//proto",
"@org_golang_google_protobuf//testing/protocmp",
"@org_golang_google_protobuf//types/known/durationpb",
"@org_golang_google_protobuf//types/known/emptypb",
"@org_golang_google_protobuf//types/known/fieldmaskpb",
"@org_golang_google_protobuf//types/known/structpb",
"@org_golang_google_protobuf//types/known/timestamppb",
"@org_golang_google_protobuf//types/known/wrapperspb",
],
)
alias(
name = "go_default_library",
actual = ":runtime",
visibility = ["//visibility:public"],
)


@ -0,0 +1,401 @@
package runtime
import (
"context"
"encoding/base64"
"fmt"
"net"
"net/http"
"net/textproto"
"strconv"
"strings"
"sync"
"time"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
)
// MetadataHeaderPrefix is the http prefix that represents custom metadata
// parameters to or from a gRPC call.
const MetadataHeaderPrefix = "Grpc-Metadata-"
// MetadataPrefix is prepended to permanent HTTP header keys (as specified
// by the IANA) when added to the gRPC context.
const MetadataPrefix = "grpcgateway-"
// MetadataTrailerPrefix is prepended to gRPC metadata as it is converted to
// HTTP headers in a response handled by grpc-gateway
const MetadataTrailerPrefix = "Grpc-Trailer-"
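// For example, with the gateway's default header matcher (DefaultHeaderMatcher
// in mux.go), an incoming HTTP header "Grpc-Metadata-Foo: bar" reaches the gRPC
// server as metadata key "foo", while a permanent header such as "Cookie" is
// forwarded under the "grpcgateway-" prefix.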
const metadataGrpcTimeout = "Grpc-Timeout"
const metadataHeaderBinarySuffix = "-Bin"
const xForwardedFor = "X-Forwarded-For"
const xForwardedHost = "X-Forwarded-Host"
// DefaultContextTimeout is used for gRPC call context.WithTimeout whenever a Grpc-Timeout inbound
// header isn't present. If the value is 0 the sent `context` will not have a timeout.
var DefaultContextTimeout = 0 * time.Second
// malformedHTTPHeaders lists the headers that the gRPC server may reject outright as malformed.
// See https://github.com/grpc/grpc-go/pull/4803#issuecomment-986093310 for more context.
var malformedHTTPHeaders = map[string]struct{}{
"connection": {},
}
type (
rpcMethodKey struct{}
httpPathPatternKey struct{}
AnnotateContextOption func(ctx context.Context) context.Context
)
func WithHTTPPathPattern(pattern string) AnnotateContextOption {
return func(ctx context.Context) context.Context {
return withHTTPPathPattern(ctx, pattern)
}
}
func decodeBinHeader(v string) ([]byte, error) {
if len(v)%4 == 0 {
// Input was padded, or padding was not necessary.
return base64.StdEncoding.DecodeString(v)
}
return base64.RawStdEncoding.DecodeString(v)
}
/*
AnnotateContext adds context information such as metadata from the request.
At a minimum, the RemoteAddr is included in the fashion of "X-Forwarded-For",
except that the forwarded destination is not another HTTP service but rather
a gRPC service.
*/
func AnnotateContext(ctx context.Context, mux *ServeMux, req *http.Request, rpcMethodName string, options ...AnnotateContextOption) (context.Context, error) {
ctx, md, err := annotateContext(ctx, mux, req, rpcMethodName, options...)
if err != nil {
return nil, err
}
if md == nil {
return ctx, nil
}
return metadata.NewOutgoingContext(ctx, md), nil
}
// AnnotateIncomingContext adds context information such as metadata from the request.
// Attach metadata as incoming context.
func AnnotateIncomingContext(ctx context.Context, mux *ServeMux, req *http.Request, rpcMethodName string, options ...AnnotateContextOption) (context.Context, error) {
ctx, md, err := annotateContext(ctx, mux, req, rpcMethodName, options...)
if err != nil {
return nil, err
}
if md == nil {
return ctx, nil
}
return metadata.NewIncomingContext(ctx, md), nil
}
func isValidGRPCMetadataKey(key string) bool {
// Must be a valid gRPC "Header-Name" as defined here:
// https://github.com/grpc/grpc/blob/4b05dc88b724214d0c725c8e7442cbc7a61b1374/doc/PROTOCOL-HTTP2.md
// This means 0-9 a-z _ - .
// Only lowercase letters are valid in the wire protocol, but the client library will normalize
// uppercase ASCII to lowercase, so uppercase ASCII is also acceptable.
bytes := []byte(key) // gRPC validates strings on the byte level, not Unicode.
for _, ch := range bytes {
validLowercaseLetter := ch >= 'a' && ch <= 'z'
validUppercaseLetter := ch >= 'A' && ch <= 'Z'
validDigit := ch >= '0' && ch <= '9'
validOther := ch == '.' || ch == '-' || ch == '_'
if !validLowercaseLetter && !validUppercaseLetter && !validDigit && !validOther {
return false
}
}
return true
}
func isValidGRPCMetadataTextValue(textValue string) bool {
// Must be a valid gRPC "ASCII-Value" as defined here:
// https://github.com/grpc/grpc/blob/4b05dc88b724214d0c725c8e7442cbc7a61b1374/doc/PROTOCOL-HTTP2.md
// This means printable ASCII (including spaces); 0x20 to 0x7E inclusive.
bytes := []byte(textValue) // gRPC validates strings on the byte level, not Unicode.
for _, ch := range bytes {
if ch < 0x20 || ch > 0x7E {
return false
}
}
return true
}
func annotateContext(ctx context.Context, mux *ServeMux, req *http.Request, rpcMethodName string, options ...AnnotateContextOption) (context.Context, metadata.MD, error) {
ctx = withRPCMethod(ctx, rpcMethodName)
for _, o := range options {
ctx = o(ctx)
}
timeout := DefaultContextTimeout
if tm := req.Header.Get(metadataGrpcTimeout); tm != "" {
var err error
timeout, err = timeoutDecode(tm)
if err != nil {
return nil, nil, status.Errorf(codes.InvalidArgument, "invalid grpc-timeout: %s", tm)
}
}
var pairs []string
for key, vals := range req.Header {
key = textproto.CanonicalMIMEHeaderKey(key)
for _, val := range vals {
// For backwards-compatibility, pass through 'authorization' header with no prefix.
if key == "Authorization" {
pairs = append(pairs, "authorization", val)
}
if h, ok := mux.incomingHeaderMatcher(key); ok {
if !isValidGRPCMetadataKey(h) {
grpclog.Errorf("HTTP header name %q is not valid as gRPC metadata key; skipping", h)
continue
}
// Handles "-bin" metadata in grpc, since grpc will do another base64
// encode before sending to server, we need to decode it first.
if strings.HasSuffix(key, metadataHeaderBinarySuffix) {
b, err := decodeBinHeader(val)
if err != nil {
return nil, nil, status.Errorf(codes.InvalidArgument, "invalid binary header %s: %s", key, err)
}
val = string(b)
} else if !isValidGRPCMetadataTextValue(val) {
grpclog.Errorf("Value of HTTP header %q contains non-ASCII value (not valid as gRPC metadata): skipping", h)
continue
}
pairs = append(pairs, h, val)
}
}
}
if host := req.Header.Get(xForwardedHost); host != "" {
pairs = append(pairs, strings.ToLower(xForwardedHost), host)
} else if req.Host != "" {
pairs = append(pairs, strings.ToLower(xForwardedHost), req.Host)
}
if addr := req.RemoteAddr; addr != "" {
if remoteIP, _, err := net.SplitHostPort(addr); err == nil {
if fwd := req.Header.Get(xForwardedFor); fwd == "" {
pairs = append(pairs, strings.ToLower(xForwardedFor), remoteIP)
} else {
pairs = append(pairs, strings.ToLower(xForwardedFor), fmt.Sprintf("%s, %s", fwd, remoteIP))
}
}
}
if timeout != 0 {
//nolint:govet // The context outlives this function
ctx, _ = context.WithTimeout(ctx, timeout)
}
if len(pairs) == 0 {
return ctx, nil, nil
}
md := metadata.Pairs(pairs...)
for _, mda := range mux.metadataAnnotators {
md = metadata.Join(md, mda(ctx, req))
}
return ctx, md, nil
}
// ServerMetadata consists of metadata sent from gRPC server.
type ServerMetadata struct {
HeaderMD metadata.MD
TrailerMD metadata.MD
}
type serverMetadataKey struct{}
// NewServerMetadataContext creates a new context with ServerMetadata
func NewServerMetadataContext(ctx context.Context, md ServerMetadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
return context.WithValue(ctx, serverMetadataKey{}, md)
}
// ServerMetadataFromContext returns the ServerMetadata in ctx
func ServerMetadataFromContext(ctx context.Context) (md ServerMetadata, ok bool) {
if ctx == nil {
return md, false
}
md, ok = ctx.Value(serverMetadataKey{}).(ServerMetadata)
return
}
// ServerTransportStream implements grpc.ServerTransportStream.
// It should only be used by the generated files to support grpc.SendHeader
// outside of gRPC server use.
type ServerTransportStream struct {
mu sync.Mutex
header metadata.MD
trailer metadata.MD
}
// Method returns the method for the stream.
func (s *ServerTransportStream) Method() string {
return ""
}
// Header returns the header metadata of the stream.
func (s *ServerTransportStream) Header() metadata.MD {
s.mu.Lock()
defer s.mu.Unlock()
return s.header.Copy()
}
// SetHeader sets the header metadata.
func (s *ServerTransportStream) SetHeader(md metadata.MD) error {
if md.Len() == 0 {
return nil
}
s.mu.Lock()
s.header = metadata.Join(s.header, md)
s.mu.Unlock()
return nil
}
// SendHeader sets the header metadata.
func (s *ServerTransportStream) SendHeader(md metadata.MD) error {
return s.SetHeader(md)
}
// Trailer returns the cached trailer metadata.
func (s *ServerTransportStream) Trailer() metadata.MD {
s.mu.Lock()
defer s.mu.Unlock()
return s.trailer.Copy()
}
// SetTrailer sets the trailer metadata.
func (s *ServerTransportStream) SetTrailer(md metadata.MD) error {
if md.Len() == 0 {
return nil
}
s.mu.Lock()
s.trailer = metadata.Join(s.trailer, md)
s.mu.Unlock()
return nil
}
func timeoutDecode(s string) (time.Duration, error) {
size := len(s)
if size < 2 {
return 0, fmt.Errorf("timeout string is too short: %q", s)
}
d, ok := timeoutUnitToDuration(s[size-1])
if !ok {
return 0, fmt.Errorf("timeout unit is not recognized: %q", s)
}
t, err := strconv.ParseInt(s[:size-1], 10, 64)
if err != nil {
return 0, err
}
return d * time.Duration(t), nil
}
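// For illustration, timeoutDecode("100m") yields 100 * time.Millisecond and
// timeoutDecode("7S") yields 7 * time.Second, per the gRPC Grpc-Timeout wire format.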
func timeoutUnitToDuration(u uint8) (d time.Duration, ok bool) {
switch u {
case 'H':
return time.Hour, true
case 'M':
return time.Minute, true
case 'S':
return time.Second, true
case 'm':
return time.Millisecond, true
case 'u':
return time.Microsecond, true
case 'n':
return time.Nanosecond, true
default:
return
}
}
// isPermanentHTTPHeader checks whether hdr belongs to the list of
// permanent request headers maintained by IANA.
// http://www.iana.org/assignments/message-headers/message-headers.xml
func isPermanentHTTPHeader(hdr string) bool {
switch hdr {
case
"Accept",
"Accept-Charset",
"Accept-Language",
"Accept-Ranges",
"Authorization",
"Cache-Control",
"Content-Type",
"Cookie",
"Date",
"Expect",
"From",
"Host",
"If-Match",
"If-Modified-Since",
"If-None-Match",
"If-Schedule-Tag-Match",
"If-Unmodified-Since",
"Max-Forwards",
"Origin",
"Pragma",
"Referer",
"User-Agent",
"Via",
"Warning":
return true
}
return false
}
// isMalformedHTTPHeader checks whether header belongs to the list of
// "malformed headers" and would be rejected by the gRPC server.
func isMalformedHTTPHeader(header string) bool {
_, isMalformed := malformedHTTPHeaders[strings.ToLower(header)]
return isMalformed
}
// RPCMethod returns the method string for the server context. The returned
// string is in the format of "/package.service/method".
func RPCMethod(ctx context.Context) (string, bool) {
m := ctx.Value(rpcMethodKey{})
if m == nil {
return "", false
}
ms, ok := m.(string)
if !ok {
return "", false
}
return ms, true
}
func withRPCMethod(ctx context.Context, rpcMethodName string) context.Context {
return context.WithValue(ctx, rpcMethodKey{}, rpcMethodName)
}
// HTTPPathPattern returns the HTTP path pattern string relating to the HTTP handler, if one exists.
// The format of the returned string is defined by the google.api.http path template type.
func HTTPPathPattern(ctx context.Context) (string, bool) {
m := ctx.Value(httpPathPatternKey{})
if m == nil {
return "", false
}
ms, ok := m.(string)
if !ok {
return "", false
}
return ms, true
}
func withHTTPPathPattern(ctx context.Context, httpPathPattern string) context.Context {
return context.WithValue(ctx, httpPathPatternKey{}, httpPathPattern)
}


@ -0,0 +1,318 @@
package runtime
import (
"encoding/base64"
"fmt"
"strconv"
"strings"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/types/known/durationpb"
"google.golang.org/protobuf/types/known/timestamppb"
"google.golang.org/protobuf/types/known/wrapperspb"
)
// String just returns the given string.
// It is just for compatibility to other types.
func String(val string) (string, error) {
return val, nil
}
// StringSlice converts 'val' where individual strings are separated by
// 'sep' into a string slice.
func StringSlice(val, sep string) ([]string, error) {
return strings.Split(val, sep), nil
}
// Bool converts the given string representation of a boolean value into bool.
func Bool(val string) (bool, error) {
return strconv.ParseBool(val)
}
// BoolSlice converts 'val' where individual booleans are separated by
// 'sep' into a bool slice.
func BoolSlice(val, sep string) ([]bool, error) {
s := strings.Split(val, sep)
values := make([]bool, len(s))
for i, v := range s {
value, err := Bool(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Float64 converts the given string representation of a floating point number into float64.
func Float64(val string) (float64, error) {
return strconv.ParseFloat(val, 64)
}
// Float64Slice converts 'val' where individual floating point numbers are separated by
// 'sep' into a float64 slice.
func Float64Slice(val, sep string) ([]float64, error) {
s := strings.Split(val, sep)
values := make([]float64, len(s))
for i, v := range s {
value, err := Float64(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Float32 converts the given string representation of a floating point number into float32.
func Float32(val string) (float32, error) {
f, err := strconv.ParseFloat(val, 32)
if err != nil {
return 0, err
}
return float32(f), nil
}
// Float32Slice converts 'val' where individual floating point numbers are separated by
// 'sep' into a float32 slice.
func Float32Slice(val, sep string) ([]float32, error) {
s := strings.Split(val, sep)
values := make([]float32, len(s))
for i, v := range s {
value, err := Float32(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Int64 converts the given string representation of an integer into int64.
func Int64(val string) (int64, error) {
return strconv.ParseInt(val, 0, 64)
}
// Int64Slice converts 'val' where individual integers are separated by
// 'sep' into an int64 slice.
func Int64Slice(val, sep string) ([]int64, error) {
s := strings.Split(val, sep)
values := make([]int64, len(s))
for i, v := range s {
value, err := Int64(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Int32 converts the given string representation of an integer into int32.
func Int32(val string) (int32, error) {
i, err := strconv.ParseInt(val, 0, 32)
if err != nil {
return 0, err
}
return int32(i), nil
}
// Int32Slice converts 'val' where individual integers are separated by
// 'sep' into an int32 slice.
func Int32Slice(val, sep string) ([]int32, error) {
s := strings.Split(val, sep)
values := make([]int32, len(s))
for i, v := range s {
value, err := Int32(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Uint64 converts the given string representation of an integer into uint64.
func Uint64(val string) (uint64, error) {
return strconv.ParseUint(val, 0, 64)
}
// Uint64Slice converts 'val' where individual integers are separated by
// 'sep' into a uint64 slice.
func Uint64Slice(val, sep string) ([]uint64, error) {
s := strings.Split(val, sep)
values := make([]uint64, len(s))
for i, v := range s {
value, err := Uint64(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Uint32 converts the given string representation of an integer into uint32.
func Uint32(val string) (uint32, error) {
i, err := strconv.ParseUint(val, 0, 32)
if err != nil {
return 0, err
}
return uint32(i), nil
}
// Uint32Slice converts 'val' where individual integers are separated by
// 'sep' into a uint32 slice.
func Uint32Slice(val, sep string) ([]uint32, error) {
s := strings.Split(val, sep)
values := make([]uint32, len(s))
for i, v := range s {
value, err := Uint32(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Bytes converts the given string representation of a byte sequence into a slice of bytes.
// The sequence may be encoded in standard or URL-safe base64.
func Bytes(val string) ([]byte, error) {
b, err := base64.StdEncoding.DecodeString(val)
if err != nil {
b, err = base64.URLEncoding.DecodeString(val)
if err != nil {
return nil, err
}
}
return b, nil
}
// BytesSlice converts 'val' where individual byte sequences, encoded in standard or
// URL-safe base64, are separated by 'sep' into a slice of byte slices.
func BytesSlice(val, sep string) ([][]byte, error) {
s := strings.Split(val, sep)
values := make([][]byte, len(s))
for i, v := range s {
value, err := Bytes(v)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Timestamp converts the given RFC 3339 formatted string into a timestamppb.Timestamp.
func Timestamp(val string) (*timestamppb.Timestamp, error) {
var r timestamppb.Timestamp
val = strconv.Quote(strings.Trim(val, `"`))
unmarshaler := &protojson.UnmarshalOptions{}
if err := unmarshaler.Unmarshal([]byte(val), &r); err != nil {
return nil, err
}
return &r, nil
}
// Duration converts the given string into a durationpb.Duration.
func Duration(val string) (*durationpb.Duration, error) {
var r durationpb.Duration
val = strconv.Quote(strings.Trim(val, `"`))
unmarshaler := &protojson.UnmarshalOptions{}
if err := unmarshaler.Unmarshal([]byte(val), &r); err != nil {
return nil, err
}
return &r, nil
}
// Enum converts the given string into an int32 that should be converted to the
// correct enum proto type.
func Enum(val string, enumValMap map[string]int32) (int32, error) {
e, ok := enumValMap[val]
if ok {
return e, nil
}
i, err := Int32(val)
if err != nil {
return 0, fmt.Errorf("%s is not valid", val)
}
for _, v := range enumValMap {
if v == i {
return i, nil
}
}
return 0, fmt.Errorf("%s is not valid", val)
}
// EnumSlice converts 'val' where individual enums are separated by 'sep'
// into an int32 slice. Each individual int32 should be converted to the
// correct enum proto type.
func EnumSlice(val, sep string, enumValMap map[string]int32) ([]int32, error) {
s := strings.Split(val, sep)
values := make([]int32, len(s))
for i, v := range s {
value, err := Enum(v, enumValMap)
if err != nil {
return nil, err
}
values[i] = value
}
return values, nil
}
// Support for google.protobuf.wrappers on top of primitive types
// StringValue well-known type support as wrapper around string type
func StringValue(val string) (*wrapperspb.StringValue, error) {
return wrapperspb.String(val), nil
}
// FloatValue well-known type support as wrapper around float32 type
func FloatValue(val string) (*wrapperspb.FloatValue, error) {
parsedVal, err := Float32(val)
return wrapperspb.Float(parsedVal), err
}
// DoubleValue well-known type support as wrapper around float64 type
func DoubleValue(val string) (*wrapperspb.DoubleValue, error) {
parsedVal, err := Float64(val)
return wrapperspb.Double(parsedVal), err
}
// BoolValue well-known type support as wrapper around bool type
func BoolValue(val string) (*wrapperspb.BoolValue, error) {
parsedVal, err := Bool(val)
return wrapperspb.Bool(parsedVal), err
}
// Int32Value well-known type support as wrapper around int32 type
func Int32Value(val string) (*wrapperspb.Int32Value, error) {
parsedVal, err := Int32(val)
return wrapperspb.Int32(parsedVal), err
}
// UInt32Value well-known type support as wrapper around uint32 type
func UInt32Value(val string) (*wrapperspb.UInt32Value, error) {
parsedVal, err := Uint32(val)
return wrapperspb.UInt32(parsedVal), err
}
// Int64Value well-known type support as wrapper around int64 type
func Int64Value(val string) (*wrapperspb.Int64Value, error) {
parsedVal, err := Int64(val)
return wrapperspb.Int64(parsedVal), err
}
// UInt64Value well-known type support as wrapper around uint64 type
func UInt64Value(val string) (*wrapperspb.UInt64Value, error) {
parsedVal, err := Uint64(val)
return wrapperspb.UInt64(parsedVal), err
}
// BytesValue well-known type support as wrapper around the []byte type
func BytesValue(val string) (*wrapperspb.BytesValue, error) {
parsedVal, err := Bytes(val)
return wrapperspb.Bytes(parsedVal), err
}
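
The converters above are what the generated gateway stubs call when decoding path and query parameters. As a quick illustration (not part of the vendored file), here is a minimal sketch of using two of them directly; the import path follows the vendored module layout and the literal values are made up for the example.

package main

import (
	"fmt"
	"log"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	// Repeated query parameters arrive as one separator-delimited string.
	ids, err := runtime.Int64Slice("1,2,3", ",")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ids) // [1 2 3]

	// RFC 3339 strings become *timestamppb.Timestamp values.
	ts, err := runtime.Timestamp("2023-10-14T15:09:40Z")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ts.AsTime().UTC())
}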

View File

@ -0,0 +1,5 @@
/*
Package runtime contains runtime helper functions used by
servers which protoc-gen-grpc-gateway generates.
*/
package runtime

View File

@ -0,0 +1,181 @@
package runtime
import (
"context"
"errors"
"io"
"net/http"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/status"
)
// ErrorHandlerFunc is the signature used to configure error handling.
type ErrorHandlerFunc func(context.Context, *ServeMux, Marshaler, http.ResponseWriter, *http.Request, error)
// StreamErrorHandlerFunc is the signature used to configure stream error handling.
type StreamErrorHandlerFunc func(context.Context, error) *status.Status
// RoutingErrorHandlerFunc is the signature used to configure error handling for routing errors.
type RoutingErrorHandlerFunc func(context.Context, *ServeMux, Marshaler, http.ResponseWriter, *http.Request, int)
// HTTPStatusError is the error to use when needing to provide a different HTTP status code for an error
// passed to the DefaultRoutingErrorHandler.
type HTTPStatusError struct {
HTTPStatus int
Err error
}
func (e *HTTPStatusError) Error() string {
return e.Err.Error()
}
// HTTPStatusFromCode converts a gRPC error code into the corresponding HTTP response status.
// See: https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
func HTTPStatusFromCode(code codes.Code) int {
switch code {
case codes.OK:
return http.StatusOK
case codes.Canceled:
return 499
case codes.Unknown:
return http.StatusInternalServerError
case codes.InvalidArgument:
return http.StatusBadRequest
case codes.DeadlineExceeded:
return http.StatusGatewayTimeout
case codes.NotFound:
return http.StatusNotFound
case codes.AlreadyExists:
return http.StatusConflict
case codes.PermissionDenied:
return http.StatusForbidden
case codes.Unauthenticated:
return http.StatusUnauthorized
case codes.ResourceExhausted:
return http.StatusTooManyRequests
case codes.FailedPrecondition:
// Note, this deliberately doesn't translate to the similarly named '412 Precondition Failed' HTTP response status.
return http.StatusBadRequest
case codes.Aborted:
return http.StatusConflict
case codes.OutOfRange:
return http.StatusBadRequest
case codes.Unimplemented:
return http.StatusNotImplemented
case codes.Internal:
return http.StatusInternalServerError
case codes.Unavailable:
return http.StatusServiceUnavailable
case codes.DataLoss:
return http.StatusInternalServerError
default:
grpclog.Infof("Unknown gRPC error code: %v", code)
return http.StatusInternalServerError
}
}
// HTTPError uses the mux-configured error handler.
func HTTPError(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, r *http.Request, err error) {
mux.errorHandler(ctx, mux, marshaler, w, r, err)
}
// DefaultHTTPErrorHandler is the default error handler.
// If "err" is a gRPC Status, the function replies with the status code mapped by HTTPStatusFromCode.
// If "err" is an HTTPStatusError, the function replies with the status code provided by that struct. This is
// intended to allow passing through of specific statuses via the function set via WithRoutingErrorHandler
// for the ServeMux constructor to handle edge cases which the standard mappings in HTTPStatusFromCode
// are insufficient for.
// Otherwise, it replies with http.StatusInternalServerError.
//
// The response body written by this function is a Status message marshaled by the Marshaler.
func DefaultHTTPErrorHandler(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, r *http.Request, err error) {
// return Internal when Marshal failed
const fallback = `{"code": 13, "message": "failed to marshal error message"}`
var customStatus *HTTPStatusError
if errors.As(err, &customStatus) {
err = customStatus.Err
}
s := status.Convert(err)
pb := s.Proto()
w.Header().Del("Trailer")
w.Header().Del("Transfer-Encoding")
contentType := marshaler.ContentType(pb)
w.Header().Set("Content-Type", contentType)
if s.Code() == codes.Unauthenticated {
w.Header().Set("WWW-Authenticate", s.Message())
}
buf, merr := marshaler.Marshal(pb)
if merr != nil {
grpclog.Infof("Failed to marshal error message %q: %v", s, merr)
w.WriteHeader(http.StatusInternalServerError)
if _, err := io.WriteString(w, fallback); err != nil {
grpclog.Infof("Failed to write response: %v", err)
}
return
}
md, ok := ServerMetadataFromContext(ctx)
if !ok {
grpclog.Infof("Failed to extract ServerMetadata from context")
}
handleForwardResponseServerMetadata(w, mux, md)
// RFC 7230 https://tools.ietf.org/html/rfc7230#section-4.1.2
// Unless the request includes a TE header field indicating "trailers"
// is acceptable, as described in Section 4.3, a server SHOULD NOT
// generate trailer fields that it believes are necessary for the user
// agent to receive.
doForwardTrailers := requestAcceptsTrailers(r)
if doForwardTrailers {
handleForwardResponseTrailerHeader(w, md)
w.Header().Set("Transfer-Encoding", "chunked")
}
st := HTTPStatusFromCode(s.Code())
if customStatus != nil {
st = customStatus.HTTPStatus
}
w.WriteHeader(st)
if _, err := w.Write(buf); err != nil {
grpclog.Infof("Failed to write response: %v", err)
}
if doForwardTrailers {
handleForwardResponseTrailer(w, md)
}
}
// DefaultStreamErrorHandler converts the error returned by the handler into a
// *status.Status, to be embedded in the final message of a streaming response.
func DefaultStreamErrorHandler(_ context.Context, err error) *status.Status {
return status.Convert(err)
}
// DefaultRoutingErrorHandler is our default handler for routing errors.
// By default, HTTP status codes are mapped to gRPC codes as follows:
//
// StatusNotFound -> codes.NotFound
// StatusBadRequest -> codes.InvalidArgument
// StatusMethodNotAllowed -> codes.Unimplemented
// Other -> codes.Internal; the method is not expected to be called for anything else
func DefaultRoutingErrorHandler(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, r *http.Request, httpStatus int) {
sterr := status.Error(codes.Internal, "Unexpected routing error")
switch httpStatus {
case http.StatusBadRequest:
sterr = status.Error(codes.InvalidArgument, http.StatusText(httpStatus))
case http.StatusMethodNotAllowed:
sterr = status.Error(codes.Unimplemented, http.StatusText(httpStatus))
case http.StatusNotFound:
sterr = status.Error(codes.NotFound, http.StatusText(httpStatus))
}
mux.errorHandler(ctx, mux, marshaler, w, r, sterr)
}
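
To customize the behavior above, an application can wrap DefaultHTTPErrorHandler and register the wrapper with WithErrorHandler (defined later in mux.go). A minimal sketch, not part of the vendored file; the X-Error-Source header name is hypothetical.

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

// annotateError tags the response, then delegates to the default
// status-code mapping implemented by DefaultHTTPErrorHandler.
func annotateError(ctx context.Context, mux *runtime.ServeMux, m runtime.Marshaler, w http.ResponseWriter, r *http.Request, err error) {
	w.Header().Set("X-Error-Source", "grpc-gateway") // hypothetical header
	runtime.DefaultHTTPErrorHandler(ctx, mux, m, w, r, err)
}

func main() {
	mux := runtime.NewServeMux(runtime.WithErrorHandler(annotateError))
	log.Fatal(http.ListenAndServe(":8080", mux))
}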

View File

@ -0,0 +1,166 @@
package runtime
import (
"encoding/json"
"errors"
"fmt"
"io"
"sort"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
field_mask "google.golang.org/protobuf/types/known/fieldmaskpb"
)
func getFieldByName(fields protoreflect.FieldDescriptors, name string) protoreflect.FieldDescriptor {
fd := fields.ByName(protoreflect.Name(name))
if fd != nil {
return fd
}
return fields.ByJSONName(name)
}
// FieldMaskFromRequestBody creates a FieldMask listing all complete paths found in the JSON body.
func FieldMaskFromRequestBody(r io.Reader, msg proto.Message) (*field_mask.FieldMask, error) {
fm := &field_mask.FieldMask{}
var root interface{}
if err := json.NewDecoder(r).Decode(&root); err != nil {
if err == io.EOF {
return fm, nil
}
return nil, err
}
queue := []fieldMaskPathItem{{node: root, msg: msg.ProtoReflect()}}
for len(queue) > 0 {
// dequeue an item
item := queue[0]
queue = queue[1:]
m, ok := item.node.(map[string]interface{})
switch {
case ok:
// if the item is an object, then enqueue all of its children
for k, v := range m {
if item.msg == nil {
return nil, errors.New("JSON structure did not match request type")
}
fd := getFieldByName(item.msg.Descriptor().Fields(), k)
if fd == nil {
return nil, fmt.Errorf("could not find field %q in %q", k, item.msg.Descriptor().FullName())
}
if isDynamicProtoMessage(fd.Message()) {
for _, p := range buildPathsBlindly(string(fd.FullName().Name()), v) {
newPath := p
if item.path != "" {
newPath = item.path + "." + newPath
}
queue = append(queue, fieldMaskPathItem{path: newPath})
}
continue
}
if isProtobufAnyMessage(fd.Message()) && !fd.IsList() {
_, hasTypeField := v.(map[string]interface{})["@type"]
if hasTypeField {
queue = append(queue, fieldMaskPathItem{path: k})
continue
} else {
return nil, fmt.Errorf("could not find field @type in %q in message %q", k, item.msg.Descriptor().FullName())
}
}
child := fieldMaskPathItem{
node: v,
}
if item.path == "" {
child.path = string(fd.FullName().Name())
} else {
child.path = item.path + "." + string(fd.FullName().Name())
}
switch {
case fd.IsList(), fd.IsMap():
// As per: https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/field_mask.proto#L85-L86
// Do not recurse into repeated fields. The repeated field goes on the end of the path and we stop.
fm.Paths = append(fm.Paths, child.path)
case fd.Message() != nil:
child.msg = item.msg.Get(fd).Message()
fallthrough
default:
queue = append(queue, child)
}
}
case len(item.path) > 0:
// otherwise, it's a leaf node so print its path
fm.Paths = append(fm.Paths, item.path)
}
}
// Sort for deterministic output in the presence
// of repeated fields.
sort.Strings(fm.Paths)
return fm, nil
}
func isProtobufAnyMessage(md protoreflect.MessageDescriptor) bool {
return md != nil && (md.FullName() == "google.protobuf.Any")
}
func isDynamicProtoMessage(md protoreflect.MessageDescriptor) bool {
return md != nil && (md.FullName() == "google.protobuf.Struct" || md.FullName() == "google.protobuf.Value")
}
// buildPathsBlindly does not attempt to match proto field names to the
// JSON value keys. Instead it relies entirely on the structure of
// the unmarshalled JSON contained within 'in'.
// It returns a slice containing all subpaths rooted at the
// passed-in name and JSON value.
func buildPathsBlindly(name string, in interface{}) []string {
m, ok := in.(map[string]interface{})
if !ok {
return []string{name}
}
var paths []string
queue := []fieldMaskPathItem{{path: name, node: m}}
for len(queue) > 0 {
cur := queue[0]
queue = queue[1:]
m, ok := cur.node.(map[string]interface{})
if !ok {
// This should never happen since we should always check that we only add
// nodes of type map[string]interface{} to the queue.
continue
}
for k, v := range m {
if mi, ok := v.(map[string]interface{}); ok {
queue = append(queue, fieldMaskPathItem{path: cur.path + "." + k, node: mi})
} else {
// This is not a struct, so there are no more levels to descend.
curPath := cur.path + "." + k
paths = append(paths, curPath)
}
}
}
return paths
}
// fieldMaskPathItem stores an in-progress deconstruction of a path for a fieldmask
type fieldMaskPathItem struct {
// the list of prior fields leading up to node connected by dots
path string
// the generic decoded JSON object for the current item, inspected for further path extraction
node interface{}
// parent message
msg protoreflect.Message
}
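
As an illustration of FieldMaskFromRequestBody (not part of the vendored file), the sketch below derives a field mask from a JSON body; timestamppb.Timestamp stands in for a real resource message purely to keep the example self-contained.

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// The decoded JSON keys are matched against the message's field
	// descriptors, producing one path per populated leaf field.
	body := strings.NewReader(`{"seconds": 1697310580, "nanos": 0}`)

	fm, err := runtime.FieldMaskFromRequestBody(body, &timestamppb.Timestamp{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(fm.GetPaths()) // [nanos seconds]
}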

View File

@ -0,0 +1,227 @@
package runtime
import (
"context"
"fmt"
"io"
"net/http"
"net/textproto"
"strings"
"google.golang.org/genproto/googleapis/api/httpbody"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/proto"
)
// ForwardResponseStream forwards the stream from gRPC server to REST client.
func ForwardResponseStream(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, req *http.Request, recv func() (proto.Message, error), opts ...func(context.Context, http.ResponseWriter, proto.Message) error) {
f, ok := w.(http.Flusher)
if !ok {
grpclog.Infof("Flush not supported in %T", w)
http.Error(w, "unexpected type of web server", http.StatusInternalServerError)
return
}
md, ok := ServerMetadataFromContext(ctx)
if !ok {
grpclog.Infof("Failed to extract ServerMetadata from context")
http.Error(w, "unexpected error", http.StatusInternalServerError)
return
}
handleForwardResponseServerMetadata(w, mux, md)
w.Header().Set("Transfer-Encoding", "chunked")
if err := handleForwardResponseOptions(ctx, w, nil, opts); err != nil {
HTTPError(ctx, mux, marshaler, w, req, err)
return
}
var delimiter []byte
if d, ok := marshaler.(Delimited); ok {
delimiter = d.Delimiter()
} else {
delimiter = []byte("\n")
}
var wroteHeader bool
for {
resp, err := recv()
if err == io.EOF {
return
}
if err != nil {
handleForwardResponseStreamError(ctx, wroteHeader, marshaler, w, req, mux, err, delimiter)
return
}
if err := handleForwardResponseOptions(ctx, w, resp, opts); err != nil {
handleForwardResponseStreamError(ctx, wroteHeader, marshaler, w, req, mux, err, delimiter)
return
}
if !wroteHeader {
w.Header().Set("Content-Type", marshaler.ContentType(resp))
}
var buf []byte
httpBody, isHTTPBody := resp.(*httpbody.HttpBody)
switch {
case resp == nil:
buf, err = marshaler.Marshal(errorChunk(status.New(codes.Internal, "empty response")))
case isHTTPBody:
buf = httpBody.GetData()
default:
result := map[string]interface{}{"result": resp}
if rb, ok := resp.(responseBody); ok {
result["result"] = rb.XXX_ResponseBody()
}
buf, err = marshaler.Marshal(result)
}
if err != nil {
grpclog.Infof("Failed to marshal response chunk: %v", err)
handleForwardResponseStreamError(ctx, wroteHeader, marshaler, w, req, mux, err, delimiter)
return
}
if _, err := w.Write(buf); err != nil {
grpclog.Infof("Failed to send response chunk: %v", err)
return
}
wroteHeader = true
if _, err := w.Write(delimiter); err != nil {
grpclog.Infof("Failed to send delimiter chunk: %v", err)
return
}
f.Flush()
}
}
func handleForwardResponseServerMetadata(w http.ResponseWriter, mux *ServeMux, md ServerMetadata) {
for k, vs := range md.HeaderMD {
if h, ok := mux.outgoingHeaderMatcher(k); ok {
for _, v := range vs {
w.Header().Add(h, v)
}
}
}
}
func handleForwardResponseTrailerHeader(w http.ResponseWriter, md ServerMetadata) {
for k := range md.TrailerMD {
tKey := textproto.CanonicalMIMEHeaderKey(fmt.Sprintf("%s%s", MetadataTrailerPrefix, k))
w.Header().Add("Trailer", tKey)
}
}
func handleForwardResponseTrailer(w http.ResponseWriter, md ServerMetadata) {
for k, vs := range md.TrailerMD {
tKey := fmt.Sprintf("%s%s", MetadataTrailerPrefix, k)
for _, v := range vs {
w.Header().Add(tKey, v)
}
}
}
// responseBody is the interface containing the method used to get the field that is
// marshaled to the response body. This method is generated for the response struct
// from the value of `response_body` in the `google.api.HttpRule`.
type responseBody interface {
XXX_ResponseBody() interface{}
}
// ForwardResponseMessage forwards the message "resp" from gRPC server to REST client.
func ForwardResponseMessage(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, req *http.Request, resp proto.Message, opts ...func(context.Context, http.ResponseWriter, proto.Message) error) {
md, ok := ServerMetadataFromContext(ctx)
if !ok {
grpclog.Infof("Failed to extract ServerMetadata from context")
}
handleForwardResponseServerMetadata(w, mux, md)
// RFC 7230 https://tools.ietf.org/html/rfc7230#section-4.1.2
// Unless the request includes a TE header field indicating "trailers"
// is acceptable, as described in Section 4.3, a server SHOULD NOT
// generate trailer fields that it believes are necessary for the user
// agent to receive.
doForwardTrailers := requestAcceptsTrailers(req)
if doForwardTrailers {
handleForwardResponseTrailerHeader(w, md)
w.Header().Set("Transfer-Encoding", "chunked")
}
handleForwardResponseTrailerHeader(w, md)
contentType := marshaler.ContentType(resp)
w.Header().Set("Content-Type", contentType)
if err := handleForwardResponseOptions(ctx, w, resp, opts); err != nil {
HTTPError(ctx, mux, marshaler, w, req, err)
return
}
var buf []byte
var err error
if rb, ok := resp.(responseBody); ok {
buf, err = marshaler.Marshal(rb.XXX_ResponseBody())
} else {
buf, err = marshaler.Marshal(resp)
}
if err != nil {
grpclog.Infof("Marshal error: %v", err)
HTTPError(ctx, mux, marshaler, w, req, err)
return
}
if _, err = w.Write(buf); err != nil {
grpclog.Infof("Failed to write response: %v", err)
}
if doForwardTrailers {
handleForwardResponseTrailer(w, md)
}
}
func requestAcceptsTrailers(req *http.Request) bool {
te := req.Header.Get("TE")
return strings.Contains(strings.ToLower(te), "trailers")
}
func handleForwardResponseOptions(ctx context.Context, w http.ResponseWriter, resp proto.Message, opts []func(context.Context, http.ResponseWriter, proto.Message) error) error {
if len(opts) == 0 {
return nil
}
for _, opt := range opts {
if err := opt(ctx, w, resp); err != nil {
grpclog.Infof("Error handling ForwardResponseOptions: %v", err)
return err
}
}
return nil
}
func handleForwardResponseStreamError(ctx context.Context, wroteHeader bool, marshaler Marshaler, w http.ResponseWriter, req *http.Request, mux *ServeMux, err error, delimiter []byte) {
st := mux.streamErrorHandler(ctx, err)
msg := errorChunk(st)
if !wroteHeader {
w.Header().Set("Content-Type", marshaler.ContentType(msg))
w.WriteHeader(HTTPStatusFromCode(st.Code()))
}
buf, err := marshaler.Marshal(msg)
if err != nil {
grpclog.Infof("Failed to marshal an error: %v", err)
return
}
if _, err := w.Write(buf); err != nil {
grpclog.Infof("Failed to notify error to client: %v", err)
return
}
if _, err := w.Write(delimiter); err != nil {
grpclog.Infof("Failed to send delimiter chunk: %v", err)
return
}
}
func errorChunk(st *status.Status) map[string]proto.Message {
return map[string]proto.Message{"error": st.Proto()}
}
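
ForwardResponseMessage and ForwardResponseStream are invoked by the generated handlers; application code usually only supplies the optional per-response callbacks through WithForwardResponseOption (defined later in mux.go). A minimal sketch of such a callback, not part of the vendored file:

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/protobuf/proto"
)

// disableCaching matches the option signature accepted by
// ForwardResponseMessage/ForwardResponseStream and runs before the body is written.
func disableCaching(ctx context.Context, w http.ResponseWriter, _ proto.Message) error {
	w.Header().Set("Cache-Control", "no-store")
	return nil
}

func main() {
	mux := runtime.NewServeMux(runtime.WithForwardResponseOption(disableCaching))
	log.Fatal(http.ListenAndServe(":8080", mux))
}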

View File

@ -0,0 +1,32 @@
package runtime
import (
"google.golang.org/genproto/googleapis/api/httpbody"
)
// HTTPBodyMarshaler is a Marshaler which supports marshaling of a
// google.api.HttpBody message as the full response body if it is
// the actual message used as the response. If not, then this will
// simply fall back to the Marshaler specified as its default Marshaler.
type HTTPBodyMarshaler struct {
Marshaler
}
// ContentType returns its specified content type in case v is a
// google.api.HttpBody message, otherwise it will fall back to the default
// Marshaler's content type.
func (h *HTTPBodyMarshaler) ContentType(v interface{}) string {
if httpBody, ok := v.(*httpbody.HttpBody); ok {
return httpBody.GetContentType()
}
return h.Marshaler.ContentType(v)
}
// Marshal marshals "v" by returning the body bytes if v is a
// google.api.HttpBody message, otherwise it falls back to the default Marshaler.
func (h *HTTPBodyMarshaler) Marshal(v interface{}) ([]byte, error) {
if httpBody, ok := v.(*httpbody.HttpBody); ok {
return httpBody.Data, nil
}
return h.Marshaler.Marshal(v)
}
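
A small sketch (not part of the vendored file) of the pass-through behavior described above, constructing an HTTPBodyMarshaler directly:

package main

import (
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/genproto/googleapis/api/httpbody"
)

func main() {
	m := &runtime.HTTPBodyMarshaler{Marshaler: &runtime.JSONPb{}}

	// google.api.HttpBody: content type and raw bytes pass straight through.
	body := &httpbody.HttpBody{ContentType: "text/html", Data: []byte("<h1>hi</h1>")}
	fmt.Println(m.ContentType(body)) // text/html
	b, _ := m.Marshal(body)
	fmt.Println(string(b)) // <h1>hi</h1>

	// Anything else falls back to the wrapped JSONPb marshaler.
	fmt.Println(m.ContentType("plain value")) // application/json
}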

View File

@ -0,0 +1,45 @@
package runtime
import (
"encoding/json"
"io"
)
// JSONBuiltin is a Marshaler which marshals/unmarshals into/from JSON
// with the standard "encoding/json" package of Golang.
// Although it is generally faster for simple proto messages than JSONPb,
// it does not support advanced features of protobuf, e.g. maps, oneofs, etc.
//
// The NewEncoder and NewDecoder methods return *json.Encoder and
// *json.Decoder respectively.
type JSONBuiltin struct{}
// ContentType always returns "application/json".
func (*JSONBuiltin) ContentType(_ interface{}) string {
return "application/json"
}
// Marshal marshals "v" into JSON
func (j *JSONBuiltin) Marshal(v interface{}) ([]byte, error) {
return json.Marshal(v)
}
// Unmarshal unmarshals JSON data into "v".
func (j *JSONBuiltin) Unmarshal(data []byte, v interface{}) error {
return json.Unmarshal(data, v)
}
// NewDecoder returns a Decoder which reads JSON stream from "r".
func (j *JSONBuiltin) NewDecoder(r io.Reader) Decoder {
return json.NewDecoder(r)
}
// NewEncoder returns an Encoder which writes JSON stream into "w".
func (j *JSONBuiltin) NewEncoder(w io.Writer) Encoder {
return json.NewEncoder(w)
}
// Delimiter for newline encoded JSON streams.
func (j *JSONBuiltin) Delimiter() []byte {
return []byte("\n")
}
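
For contrast with JSONPb below, a minimal sketch (not part of the vendored file) of using JSONBuiltin directly on a plain Go value:

package main

import (
	"bytes"
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	var m runtime.JSONBuiltin

	// Plain values go through encoding/json; the protobuf JSON mapping
	// (enums as names, 64-bit ints as strings, ...) is not applied.
	b, _ := m.Marshal(map[string]int{"answer": 42})
	fmt.Println(string(b)) // {"answer":42}

	var out map[string]int
	_ = m.NewDecoder(bytes.NewReader(b)).Decode(&out)
	fmt.Println(out["answer"]) // 42
}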

View File

@ -0,0 +1,348 @@
package runtime
import (
"bytes"
"encoding/json"
"fmt"
"io"
"reflect"
"strconv"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
// JSONPb is a Marshaler which marshals/unmarshals into/from JSON
// with the "google.golang.org/protobuf/encoding/protojson" marshaler.
// It supports the full functionality of protobuf unlike JSONBuiltin.
//
// The NewDecoder method returns a DecoderWrapper, so the underlying
// *json.Decoder methods can be used.
type JSONPb struct {
protojson.MarshalOptions
protojson.UnmarshalOptions
}
// ContentType always returns "application/json".
func (*JSONPb) ContentType(_ interface{}) string {
return "application/json"
}
// Marshal marshals "v" into JSON.
func (j *JSONPb) Marshal(v interface{}) ([]byte, error) {
if _, ok := v.(proto.Message); !ok {
return j.marshalNonProtoField(v)
}
var buf bytes.Buffer
if err := j.marshalTo(&buf, v); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (j *JSONPb) marshalTo(w io.Writer, v interface{}) error {
p, ok := v.(proto.Message)
if !ok {
buf, err := j.marshalNonProtoField(v)
if err != nil {
return err
}
_, err = w.Write(buf)
return err
}
b, err := j.MarshalOptions.Marshal(p)
if err != nil {
return err
}
_, err = w.Write(b)
return err
}
var (
// protoMessageType is stored to prevent constant lookup of the same type at runtime.
protoMessageType = reflect.TypeOf((*proto.Message)(nil)).Elem()
)
// marshalNonProtoField marshals a non-message field of a protobuf message.
// This function does not correctly marshal arbitrary data structures into JSON,
// it is only capable of marshaling non-message field values of protobuf,
// i.e. primitive types, enums; pointers to primitives or enums; maps from
// integer/string types to primitives/enums/pointers to messages.
func (j *JSONPb) marshalNonProtoField(v interface{}) ([]byte, error) {
if v == nil {
return []byte("null"), nil
}
rv := reflect.ValueOf(v)
for rv.Kind() == reflect.Ptr {
if rv.IsNil() {
return []byte("null"), nil
}
rv = rv.Elem()
}
if rv.Kind() == reflect.Slice {
if rv.IsNil() {
if j.EmitUnpopulated {
return []byte("[]"), nil
}
return []byte("null"), nil
}
if rv.Type().Elem().Implements(protoMessageType) {
var buf bytes.Buffer
if err := buf.WriteByte('['); err != nil {
return nil, err
}
for i := 0; i < rv.Len(); i++ {
if i != 0 {
if err := buf.WriteByte(','); err != nil {
return nil, err
}
}
if err := j.marshalTo(&buf, rv.Index(i).Interface().(proto.Message)); err != nil {
return nil, err
}
}
if err := buf.WriteByte(']'); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
if rv.Type().Elem().Implements(typeProtoEnum) {
var buf bytes.Buffer
if err := buf.WriteByte('['); err != nil {
return nil, err
}
for i := 0; i < rv.Len(); i++ {
if i != 0 {
if err := buf.WriteByte(','); err != nil {
return nil, err
}
}
var err error
if j.UseEnumNumbers {
_, err = buf.WriteString(strconv.FormatInt(rv.Index(i).Int(), 10))
} else {
_, err = buf.WriteString("\"" + rv.Index(i).Interface().(protoEnum).String() + "\"")
}
if err != nil {
return nil, err
}
}
if err := buf.WriteByte(']'); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
}
if rv.Kind() == reflect.Map {
m := make(map[string]*json.RawMessage)
for _, k := range rv.MapKeys() {
buf, err := j.Marshal(rv.MapIndex(k).Interface())
if err != nil {
return nil, err
}
m[fmt.Sprintf("%v", k.Interface())] = (*json.RawMessage)(&buf)
}
if j.Indent != "" {
return json.MarshalIndent(m, "", j.Indent)
}
return json.Marshal(m)
}
if enum, ok := rv.Interface().(protoEnum); ok && !j.UseEnumNumbers {
return json.Marshal(enum.String())
}
return json.Marshal(rv.Interface())
}
// Unmarshal unmarshals JSON "data" into "v"
func (j *JSONPb) Unmarshal(data []byte, v interface{}) error {
return unmarshalJSONPb(data, j.UnmarshalOptions, v)
}
// NewDecoder returns a Decoder which reads JSON stream from "r".
func (j *JSONPb) NewDecoder(r io.Reader) Decoder {
d := json.NewDecoder(r)
return DecoderWrapper{
Decoder: d,
UnmarshalOptions: j.UnmarshalOptions,
}
}
// DecoderWrapper is a wrapper around a *json.Decoder that adds
// support for protos to the Decode method.
type DecoderWrapper struct {
*json.Decoder
protojson.UnmarshalOptions
}
// Decode wraps the embedded decoder's Decode method to support
// protos using protojson.UnmarshalOptions.
func (d DecoderWrapper) Decode(v interface{}) error {
return decodeJSONPb(d.Decoder, d.UnmarshalOptions, v)
}
// NewEncoder returns an Encoder which writes JSON stream into "w".
func (j *JSONPb) NewEncoder(w io.Writer) Encoder {
return EncoderFunc(func(v interface{}) error {
if err := j.marshalTo(w, v); err != nil {
return err
}
// mimic json.Encoder by adding a newline (makes output
// easier to read when it contains multiple encoded items)
_, err := w.Write(j.Delimiter())
return err
})
}
func unmarshalJSONPb(data []byte, unmarshaler protojson.UnmarshalOptions, v interface{}) error {
d := json.NewDecoder(bytes.NewReader(data))
return decodeJSONPb(d, unmarshaler, v)
}
func decodeJSONPb(d *json.Decoder, unmarshaler protojson.UnmarshalOptions, v interface{}) error {
p, ok := v.(proto.Message)
if !ok {
return decodeNonProtoField(d, unmarshaler, v)
}
// Decode into raw JSON bytes, then unmarshal into the proto message
var b json.RawMessage
if err := d.Decode(&b); err != nil {
return err
}
return unmarshaler.Unmarshal([]byte(b), p)
}
func decodeNonProtoField(d *json.Decoder, unmarshaler protojson.UnmarshalOptions, v interface{}) error {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return fmt.Errorf("%T is not a pointer", v)
}
for rv.Kind() == reflect.Ptr {
if rv.IsNil() {
rv.Set(reflect.New(rv.Type().Elem()))
}
if rv.Type().ConvertibleTo(typeProtoMessage) {
// Decode into raw JSON bytes, then unmarshal into the proto message
var b json.RawMessage
if err := d.Decode(&b); err != nil {
return err
}
return unmarshaler.Unmarshal([]byte(b), rv.Interface().(proto.Message))
}
rv = rv.Elem()
}
if rv.Kind() == reflect.Map {
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
conv, ok := convFromType[rv.Type().Key().Kind()]
if !ok {
return fmt.Errorf("unsupported type of map field key: %v", rv.Type().Key())
}
m := make(map[string]*json.RawMessage)
if err := d.Decode(&m); err != nil {
return err
}
for k, v := range m {
result := conv.Call([]reflect.Value{reflect.ValueOf(k)})
if err := result[1].Interface(); err != nil {
return err.(error)
}
bk := result[0]
bv := reflect.New(rv.Type().Elem())
if v == nil {
null := json.RawMessage("null")
v = &null
}
if err := unmarshalJSONPb([]byte(*v), unmarshaler, bv.Interface()); err != nil {
return err
}
rv.SetMapIndex(bk, bv.Elem())
}
return nil
}
if rv.Kind() == reflect.Slice {
if rv.Type().Elem().Kind() == reflect.Uint8 {
var sl []byte
if err := d.Decode(&sl); err != nil {
return err
}
if sl != nil {
rv.SetBytes(sl)
}
return nil
}
var sl []json.RawMessage
if err := d.Decode(&sl); err != nil {
return err
}
if sl != nil {
rv.Set(reflect.MakeSlice(rv.Type(), 0, 0))
}
for _, item := range sl {
bv := reflect.New(rv.Type().Elem())
if err := unmarshalJSONPb([]byte(item), unmarshaler, bv.Interface()); err != nil {
return err
}
rv.Set(reflect.Append(rv, bv.Elem()))
}
return nil
}
if _, ok := rv.Interface().(protoEnum); ok {
var repr interface{}
if err := d.Decode(&repr); err != nil {
return err
}
switch v := repr.(type) {
case string:
// TODO(yugui) Should use proto.StructProperties?
return fmt.Errorf("unmarshaling of symbolic enum %q not supported: %T", repr, rv.Interface())
case float64:
rv.Set(reflect.ValueOf(int32(v)).Convert(rv.Type()))
return nil
default:
return fmt.Errorf("cannot assign %#v into Go type %T", repr, rv.Interface())
}
}
return d.Decode(v)
}
type protoEnum interface {
fmt.Stringer
EnumDescriptor() ([]byte, []int)
}
var typeProtoEnum = reflect.TypeOf((*protoEnum)(nil)).Elem()
var typeProtoMessage = reflect.TypeOf((*proto.Message)(nil)).Elem()
// Delimiter for newline encoded JSON streams.
func (j *JSONPb) Delimiter() []byte {
return []byte("\n")
}
var (
convFromType = map[reflect.Kind]reflect.Value{
reflect.String: reflect.ValueOf(String),
reflect.Bool: reflect.ValueOf(Bool),
reflect.Float64: reflect.ValueOf(Float64),
reflect.Float32: reflect.ValueOf(Float32),
reflect.Int64: reflect.ValueOf(Int64),
reflect.Int32: reflect.ValueOf(Int32),
reflect.Uint64: reflect.ValueOf(Uint64),
reflect.Uint32: reflect.ValueOf(Uint32),
reflect.Slice: reflect.ValueOf(Bytes),
}
)
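
A common way to use JSONPb is to register a tuned instance as the wildcard marshaler via WithMarshalerOption (defined later in this package); a minimal sketch, not part of the vendored file, with the protojson option values chosen arbitrarily for illustration:

package main

import (
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/protobuf/encoding/protojson"
)

func main() {
	mux := runtime.NewServeMux(
		runtime.WithMarshalerOption(runtime.MIMEWildcard, &runtime.JSONPb{
			MarshalOptions:   protojson.MarshalOptions{UseProtoNames: true, EmitUnpopulated: true},
			UnmarshalOptions: protojson.UnmarshalOptions{DiscardUnknown: true},
		}),
	)
	log.Fatal(http.ListenAndServe(":8080", mux))
}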

View File

@ -0,0 +1,60 @@
package runtime
import (
"errors"
"io"
"google.golang.org/protobuf/proto"
)
// ProtoMarshaller is a Marshaller which marshals/unmarshals into/from serialized proto bytes
type ProtoMarshaller struct{}
// ContentType always returns "application/octet-stream".
func (*ProtoMarshaller) ContentType(_ interface{}) string {
return "application/octet-stream"
}
// Marshal marshals "value" into Proto
func (*ProtoMarshaller) Marshal(value interface{}) ([]byte, error) {
message, ok := value.(proto.Message)
if !ok {
return nil, errors.New("unable to marshal non proto field")
}
return proto.Marshal(message)
}
// Unmarshal unmarshals proto "data" into "value"
func (*ProtoMarshaller) Unmarshal(data []byte, value interface{}) error {
message, ok := value.(proto.Message)
if !ok {
return errors.New("unable to unmarshal non proto field")
}
return proto.Unmarshal(data, message)
}
// NewDecoder returns a Decoder which reads proto stream from "reader".
func (marshaller *ProtoMarshaller) NewDecoder(reader io.Reader) Decoder {
return DecoderFunc(func(value interface{}) error {
buffer, err := io.ReadAll(reader)
if err != nil {
return err
}
return marshaller.Unmarshal(buffer, value)
})
}
// NewEncoder returns an Encoder which writes proto stream into "writer".
func (marshaller *ProtoMarshaller) NewEncoder(writer io.Writer) Encoder {
return EncoderFunc(func(value interface{}) error {
buffer, err := marshaller.Marshal(value)
if err != nil {
return err
}
if _, err := writer.Write(buffer); err != nil {
return err
}
return nil
})
}
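
A round-trip sketch of ProtoMarshaller (not part of the vendored file), using a well-known wrapper message so no generated types are needed:

package main

import (
	"fmt"
	"log"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	var m runtime.ProtoMarshaller

	b, err := m.Marshal(wrapperspb.String("hello"))
	if err != nil {
		log.Fatal(err)
	}

	var out wrapperspb.StringValue
	if err := m.Unmarshal(b, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.GetValue()) // hello
}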

View File

@ -0,0 +1,50 @@
package runtime
import (
"io"
)
// Marshaler defines a conversion between byte sequence and gRPC payloads / fields.
type Marshaler interface {
// Marshal marshals "v" into byte sequence.
Marshal(v interface{}) ([]byte, error)
// Unmarshal unmarshals "data" into "v".
// "v" must be a pointer value.
Unmarshal(data []byte, v interface{}) error
// NewDecoder returns a Decoder which reads byte sequence from "r".
NewDecoder(r io.Reader) Decoder
// NewEncoder returns an Encoder which writes a byte sequence into "w".
NewEncoder(w io.Writer) Encoder
// ContentType returns the Content-Type which this marshaler is responsible for.
// The parameter describes the type which is being marshalled, which can sometimes
// affect the content type returned.
ContentType(v interface{}) string
}
// Decoder decodes a byte sequence
type Decoder interface {
Decode(v interface{}) error
}
// Encoder encodes gRPC payloads / fields into byte sequence.
type Encoder interface {
Encode(v interface{}) error
}
// DecoderFunc adapts a decoder function into Decoder.
type DecoderFunc func(v interface{}) error
// Decode delegates invocations to the underlying function itself.
func (f DecoderFunc) Decode(v interface{}) error { return f(v) }
// EncoderFunc adapts an encoder function into Encoder.
type EncoderFunc func(v interface{}) error
// Encode delegates invocations to the underlying function itself.
func (f EncoderFunc) Encode(v interface{}) error { return f(v) }
// Delimited defines the streaming delimiter.
type Delimited interface {
// Delimiter returns the record separator for the stream.
Delimiter() []byte
}
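
Any type implementing the five methods above can be plugged into the registry. A minimal sketch (not part of the vendored file) of a custom Marshaler, a hypothetical prettyJSON type that embeds JSONPb and only overrides the reported content type:

package main

import (
	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

// prettyJSON delegates Marshal/Unmarshal/NewDecoder/NewEncoder to the embedded
// JSONPb and overrides ContentType; hypothetical type, for illustration only.
type prettyJSON struct {
	runtime.JSONPb
}

func (p *prettyJSON) ContentType(_ interface{}) string {
	return "application/json; charset=utf-8"
}

// Compile-time check that *prettyJSON satisfies the Marshaler interface.
var _ runtime.Marshaler = (*prettyJSON)(nil)

func main() {
	_ = runtime.NewServeMux(runtime.WithMarshalerOption(runtime.MIMEWildcard, &prettyJSON{}))
}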

View File

@ -0,0 +1,109 @@
package runtime
import (
"errors"
"mime"
"net/http"
"google.golang.org/grpc/grpclog"
"google.golang.org/protobuf/encoding/protojson"
)
// MIMEWildcard is the fallback MIME type used for requests which do not match
// a registered MIME type.
const MIMEWildcard = "*"
var (
acceptHeader = http.CanonicalHeaderKey("Accept")
contentTypeHeader = http.CanonicalHeaderKey("Content-Type")
defaultMarshaler = &HTTPBodyMarshaler{
Marshaler: &JSONPb{
MarshalOptions: protojson.MarshalOptions{
EmitUnpopulated: true,
},
UnmarshalOptions: protojson.UnmarshalOptions{
DiscardUnknown: true,
},
},
}
)
// MarshalerForRequest returns the inbound/outbound marshalers for this request.
// It checks the registry on the ServeMux for the MIME types set by the Accept
// (outbound) and Content-Type (inbound) headers, choosing the first header value
// that exactly matches a registered MIME type. If no Content-Type value matches,
// the inbound marshaler falls back to the "*" entry; if no Accept value matches,
// the outbound marshaler falls back to the inbound one.
func MarshalerForRequest(mux *ServeMux, r *http.Request) (inbound Marshaler, outbound Marshaler) {
for _, acceptVal := range r.Header[acceptHeader] {
if m, ok := mux.marshalers.mimeMap[acceptVal]; ok {
outbound = m
break
}
}
for _, contentTypeVal := range r.Header[contentTypeHeader] {
contentType, _, err := mime.ParseMediaType(contentTypeVal)
if err != nil {
grpclog.Infof("Failed to parse Content-Type %s: %v", contentTypeVal, err)
continue
}
if m, ok := mux.marshalers.mimeMap[contentType]; ok {
inbound = m
break
}
}
if inbound == nil {
inbound = mux.marshalers.mimeMap[MIMEWildcard]
}
if outbound == nil {
outbound = inbound
}
return inbound, outbound
}
// marshalerRegistry is a mapping from MIME types to Marshalers.
type marshalerRegistry struct {
mimeMap map[string]Marshaler
}
// add adds a marshaler for a case-sensitive MIME type string ("*" to match any
// MIME type).
func (m marshalerRegistry) add(mime string, marshaler Marshaler) error {
if len(mime) == 0 {
return errors.New("empty MIME type")
}
m.mimeMap[mime] = marshaler
return nil
}
// makeMarshalerMIMERegistry returns a new registry of marshalers.
// It allows for a mapping of case-sensitive Content-Type MIME type string to runtime.Marshaler interfaces.
//
// For example, you could allow the client to specify the use of the runtime.JSONPb marshaler
// with an "application/jsonpb" Content-Type and the use of the runtime.JSONBuiltin marshaler
// with an "application/json" Content-Type.
// "*" can be used to match any Content-Type.
// This can be attached to a ServeMux with the marshaler option.
func makeMarshalerMIMERegistry() marshalerRegistry {
return marshalerRegistry{
mimeMap: map[string]Marshaler{
MIMEWildcard: defaultMarshaler,
},
}
}
// WithMarshalerOption returns a ServeMuxOption which associates inbound and outbound
// Marshalers to a MIME type in mux.
func WithMarshalerOption(mime string, marshaler Marshaler) ServeMuxOption {
return func(mux *ServeMux) {
if err := mux.marshalers.add(mime, marshaler); err != nil {
panic(err)
}
}
}
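
A sketch of content negotiation against the registry (not part of the vendored file): register a second marshaler under a specific Content-Type and observe which marshalers MarshalerForRequest selects; "application/x-protobuf" is an arbitrary MIME type chosen for the example.

package main

import (
	"fmt"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	mux := runtime.NewServeMux(
		runtime.WithMarshalerOption("application/x-protobuf", &runtime.ProtoMarshaller{}),
	)

	req, _ := http.NewRequest(http.MethodPost, "/v1/things", nil)
	req.Header.Set("Content-Type", "application/x-protobuf")

	inbound, outbound := runtime.MarshalerForRequest(mux, req)
	// With no matching Accept header the outbound marshaler falls back to the
	// inbound one, so both are *runtime.ProtoMarshaller here.
	fmt.Printf("%T %T\n", inbound, outbound)
}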

View File

@ -0,0 +1,466 @@
package runtime
import (
"context"
"errors"
"fmt"
"net/http"
"net/textproto"
"regexp"
"strings"
"github.com/grpc-ecosystem/grpc-gateway/v2/internal/httprule"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/health/grpc_health_v1"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/proto"
)
// UnescapingMode defines the behavior of ServeMux when unescaping path parameters.
type UnescapingMode int
const (
// UnescapingModeLegacy is the default V2 behavior, which escapes the entire
// path string before doing any routing.
UnescapingModeLegacy UnescapingMode = iota
// UnescapingModeAllExceptReserved unescapes all path parameters except RFC 6570
// reserved characters.
UnescapingModeAllExceptReserved
// UnescapingModeAllExceptSlash unescapes URL path parameters except path
// separators, which will be left as "%2F".
UnescapingModeAllExceptSlash
// UnescapingModeAllCharacters unescapes all URL path parameters.
UnescapingModeAllCharacters
// UnescapingModeDefault is the default escaping type.
// TODO(v3): default this to UnescapingModeAllExceptReserved per grpc-httpjson-transcoding's
// reference implementation
UnescapingModeDefault = UnescapingModeLegacy
)
var encodedPathSplitter = regexp.MustCompile("(/|%2F)")
// A HandlerFunc handles a specific pair of path pattern and HTTP method.
type HandlerFunc func(w http.ResponseWriter, r *http.Request, pathParams map[string]string)
// ServeMux is a request multiplexer for grpc-gateway.
// It matches http requests to patterns and invokes the corresponding handler.
type ServeMux struct {
// handlers maps HTTP method to a list of handlers.
handlers map[string][]handler
forwardResponseOptions []func(context.Context, http.ResponseWriter, proto.Message) error
marshalers marshalerRegistry
incomingHeaderMatcher HeaderMatcherFunc
outgoingHeaderMatcher HeaderMatcherFunc
metadataAnnotators []func(context.Context, *http.Request) metadata.MD
errorHandler ErrorHandlerFunc
streamErrorHandler StreamErrorHandlerFunc
routingErrorHandler RoutingErrorHandlerFunc
disablePathLengthFallback bool
unescapingMode UnescapingMode
}
// ServeMuxOption is an option that can be given to a ServeMux on construction.
type ServeMuxOption func(*ServeMux)
// WithForwardResponseOption returns a ServeMuxOption representing the forwardResponseOption.
//
// forwardResponseOption is an option that will be called on the relevant context.Context,
// http.ResponseWriter, and proto.Message before every forwarded response.
//
// The message may be nil in the case where just a header is being sent.
func WithForwardResponseOption(forwardResponseOption func(context.Context, http.ResponseWriter, proto.Message) error) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.forwardResponseOptions = append(serveMux.forwardResponseOptions, forwardResponseOption)
}
}
// WithUnescapingMode sets the escaping type. See the definitions of UnescapingMode
// for more information.
func WithUnescapingMode(mode UnescapingMode) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.unescapingMode = mode
}
}
// SetQueryParameterParser sets the query parameter parser, used to populate a message from query parameters.
// Configuring this will mean the generated OpenAPI output is no longer correct, and it should be
// done with careful consideration.
func SetQueryParameterParser(queryParameterParser QueryParameterParser) ServeMuxOption {
return func(serveMux *ServeMux) {
currentQueryParser = queryParameterParser
}
}
// HeaderMatcherFunc checks whether a header key should be forwarded to/from gRPC context.
type HeaderMatcherFunc func(string) (string, bool)
// DefaultHeaderMatcher is used to pass http request headers to/from gRPC context. This adds permanent HTTP header
// keys (as specified by the IANA, e.g. Accept, Cookie, Host) to the gRPC metadata with the grpcgateway- prefix.
// If you want to know which headers are considered permanent, you can view the isPermanentHTTPHeader function.
// HTTP headers that start with 'Grpc-Metadata-' are mapped to gRPC metadata after removing the prefix 'Grpc-Metadata-'.
// Other headers are not added to the gRPC metadata.
func DefaultHeaderMatcher(key string) (string, bool) {
switch key = textproto.CanonicalMIMEHeaderKey(key); {
case isPermanentHTTPHeader(key):
return MetadataPrefix + key, true
case strings.HasPrefix(key, MetadataHeaderPrefix):
return key[len(MetadataHeaderPrefix):], true
}
return "", false
}
// WithIncomingHeaderMatcher returns a ServeMuxOption representing a headerMatcher for incoming requests to the gateway.
//
// This matcher will be called with each header in http.Request. If the matcher returns true, that header will be
// passed to the gRPC context. To transform the header before passing it to the gRPC context, the matcher should return the modified header.
func WithIncomingHeaderMatcher(fn HeaderMatcherFunc) ServeMuxOption {
for _, header := range fn.matchedMalformedHeaders() {
grpclog.Warningf("The configured forwarding filter would allow %q to be sent to the gRPC server, which will likely cause errors. See https://github.com/grpc/grpc-go/pull/4803#issuecomment-986093310 for more information.", header)
}
return func(mux *ServeMux) {
mux.incomingHeaderMatcher = fn
}
}
// matchedMalformedHeaders returns the malformed headers that would be forwarded to gRPC server.
func (fn HeaderMatcherFunc) matchedMalformedHeaders() []string {
if fn == nil {
return nil
}
headers := make([]string, 0)
for header := range malformedHTTPHeaders {
out, accept := fn(header)
if accept && isMalformedHTTPHeader(out) {
headers = append(headers, out)
}
}
return headers
}
// WithOutgoingHeaderMatcher returns a ServeMuxOption representing a headerMatcher for outgoing responses from the gateway.
//
// This matcher will be called with each header in the response header metadata. If the matcher returns true, that header will be
// passed to the http response returned from the gateway. To transform the header before passing it to the response,
// the matcher should return the modified header.
func WithOutgoingHeaderMatcher(fn HeaderMatcherFunc) ServeMuxOption {
return func(mux *ServeMux) {
mux.outgoingHeaderMatcher = fn
}
}
// WithMetadata returns a ServeMuxOption for passing metadata to a gRPC context.
//
// This can be used by services that need to read from http.Request and modify gRPC context. A common use case
// is reading token from cookie and adding it in gRPC context.
func WithMetadata(annotator func(context.Context, *http.Request) metadata.MD) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.metadataAnnotators = append(serveMux.metadataAnnotators, annotator)
}
}
// WithErrorHandler returns a ServeMuxOption for configuring a custom error handler.
//
// This can be used to configure a custom error response.
func WithErrorHandler(fn ErrorHandlerFunc) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.errorHandler = fn
}
}
// WithStreamErrorHandler returns a ServeMuxOption that will use the given custom stream
// error handler, which allows for customizing the error trailer for server-streaming
// calls.
//
// For stream errors that occur before any response has been written, the mux's
// ErrorHandler will be invoked. However, once data has been written, the errors must
// be handled differently: they must be included in the response body. The response body's
// final message will include the error details returned by the stream error handler.
func WithStreamErrorHandler(fn StreamErrorHandlerFunc) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.streamErrorHandler = fn
}
}
// WithRoutingErrorHandler returns a ServeMuxOption for configuring a custom error handler to handle http routing errors.
//
// The handler is called for errors which can happen before a gRPC route is selected or executed,
// with one of the following HTTP status codes: StatusMethodNotAllowed, StatusNotFound, StatusBadRequest.
func WithRoutingErrorHandler(fn RoutingErrorHandlerFunc) ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.routingErrorHandler = fn
}
}
// WithDisablePathLengthFallback returns a ServeMuxOption for disabling the path length fallback.
func WithDisablePathLengthFallback() ServeMuxOption {
return func(serveMux *ServeMux) {
serveMux.disablePathLengthFallback = true
}
}
// WithHealthEndpointAt returns a ServeMuxOption that will add an endpoint to the created ServeMux at the path specified by endpointPath.
// When called, the handler will forward the request to the upstream grpc service health check (defined in the
// gRPC Health Checking Protocol).
//
// See https://grpc-ecosystem.github.io/grpc-gateway/docs/operations/health_check/ for more information on how
// to set up the protocol in the grpc server.
//
// If you define a service as a query parameter, it will also be forwarded as the service in the HealthCheckRequest.
func WithHealthEndpointAt(healthCheckClient grpc_health_v1.HealthClient, endpointPath string) ServeMuxOption {
return func(s *ServeMux) {
// error can be ignored since pattern is definitely valid
_ = s.HandlePath(
http.MethodGet, endpointPath, func(w http.ResponseWriter, r *http.Request, _ map[string]string,
) {
_, outboundMarshaler := MarshalerForRequest(s, r)
resp, err := healthCheckClient.Check(r.Context(), &grpc_health_v1.HealthCheckRequest{
Service: r.URL.Query().Get("service"),
})
if err != nil {
s.errorHandler(r.Context(), s, outboundMarshaler, w, r, err)
return
}
w.Header().Set("Content-Type", "application/json")
if resp.GetStatus() != grpc_health_v1.HealthCheckResponse_SERVING {
switch resp.GetStatus() {
case grpc_health_v1.HealthCheckResponse_NOT_SERVING, grpc_health_v1.HealthCheckResponse_UNKNOWN:
err = status.Error(codes.Unavailable, resp.String())
case grpc_health_v1.HealthCheckResponse_SERVICE_UNKNOWN:
err = status.Error(codes.NotFound, resp.String())
}
s.errorHandler(r.Context(), s, outboundMarshaler, w, r, err)
return
}
_ = outboundMarshaler.NewEncoder(w).Encode(resp)
})
}
}
// WithHealthzEndpoint returns a ServeMuxOption that will add a /healthz endpoint to the created ServeMux.
//
// See WithHealthEndpointAt for the general implementation.
func WithHealthzEndpoint(healthCheckClient grpc_health_v1.HealthClient) ServeMuxOption {
return WithHealthEndpointAt(healthCheckClient, "/healthz")
}
// NewServeMux returns a new ServeMux whose internal mapping is empty.
func NewServeMux(opts ...ServeMuxOption) *ServeMux {
serveMux := &ServeMux{
handlers: make(map[string][]handler),
forwardResponseOptions: make([]func(context.Context, http.ResponseWriter, proto.Message) error, 0),
marshalers: makeMarshalerMIMERegistry(),
errorHandler: DefaultHTTPErrorHandler,
streamErrorHandler: DefaultStreamErrorHandler,
routingErrorHandler: DefaultRoutingErrorHandler,
unescapingMode: UnescapingModeDefault,
}
for _, opt := range opts {
opt(serveMux)
}
if serveMux.incomingHeaderMatcher == nil {
serveMux.incomingHeaderMatcher = DefaultHeaderMatcher
}
if serveMux.outgoingHeaderMatcher == nil {
serveMux.outgoingHeaderMatcher = func(key string) (string, bool) {
return fmt.Sprintf("%s%s", MetadataHeaderPrefix, key), true
}
}
return serveMux
}
// Handle associates "h" to the pair of HTTP method and path pattern.
func (s *ServeMux) Handle(meth string, pat Pattern, h HandlerFunc) {
s.handlers[meth] = append([]handler{{pat: pat, h: h}}, s.handlers[meth]...)
}
// HandlePath allows users to configure custom path handlers.
// See: https://grpc-ecosystem.github.io/grpc-gateway/docs/operations/inject_router/
func (s *ServeMux) HandlePath(meth string, pathPattern string, h HandlerFunc) error {
compiler, err := httprule.Parse(pathPattern)
if err != nil {
return fmt.Errorf("parsing path pattern: %w", err)
}
tp := compiler.Compile()
pattern, err := NewPattern(tp.Version, tp.OpCodes, tp.Pool, tp.Verb)
if err != nil {
return fmt.Errorf("creating new pattern: %w", err)
}
s.Handle(meth, pattern, h)
return nil
}
// ServeHTTP dispatches the request to the first handler whose pattern matches to r.Method and r.URL.Path.
func (s *ServeMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
path := r.URL.Path
if !strings.HasPrefix(path, "/") {
_, outboundMarshaler := MarshalerForRequest(s, r)
s.routingErrorHandler(ctx, s, outboundMarshaler, w, r, http.StatusBadRequest)
return
}
// TODO(v3): remove UnescapingModeLegacy
if s.unescapingMode != UnescapingModeLegacy && r.URL.RawPath != "" {
path = r.URL.RawPath
}
if override := r.Header.Get("X-HTTP-Method-Override"); override != "" && s.isPathLengthFallback(r) {
r.Method = strings.ToUpper(override)
if err := r.ParseForm(); err != nil {
_, outboundMarshaler := MarshalerForRequest(s, r)
sterr := status.Error(codes.InvalidArgument, err.Error())
s.errorHandler(ctx, s, outboundMarshaler, w, r, sterr)
return
}
}
var pathComponents []string
// In UnescapingModeLegacy the URL has already been fully unescaped, so also splitting on "%2F"
// in that mode would amount to double unescaping. In UnescapingModeAllCharacters we still split on "%2F"
// because the path is the RawPath (i.e. still escaped). This does mean that the default behavior of this
// function will change when UnescapingModeDefault is switched from UnescapingModeLegacy to UnescapingModeAllExceptReserved.
if s.unescapingMode == UnescapingModeAllCharacters {
pathComponents = encodedPathSplitter.Split(path[1:], -1)
} else {
pathComponents = strings.Split(path[1:], "/")
}
lastPathComponent := pathComponents[len(pathComponents)-1]
for _, h := range s.handlers[r.Method] {
// If the pattern has a verb, explicitly look for a suffix in the last
// component that matches a colon plus the verb. This allows us to
// handle some cases that otherwise can't be correctly handled by the
// former LastIndex case, such as when the verb literal itself contains
// a colon. This should work for all cases that have run through the
// parser because we know what verb we're looking for, however, there
// are still some cases that the parser itself cannot disambiguate. See
// the comment there if interested.
var verb string
patVerb := h.pat.Verb()
idx := -1
if patVerb != "" && strings.HasSuffix(lastPathComponent, ":"+patVerb) {
idx = len(lastPathComponent) - len(patVerb) - 1
}
if idx == 0 {
_, outboundMarshaler := MarshalerForRequest(s, r)
s.routingErrorHandler(ctx, s, outboundMarshaler, w, r, http.StatusNotFound)
return
}
comps := make([]string, len(pathComponents))
copy(comps, pathComponents)
if idx > 0 {
comps[len(comps)-1], verb = lastPathComponent[:idx], lastPathComponent[idx+1:]
}
pathParams, err := h.pat.MatchAndEscape(comps, verb, s.unescapingMode)
if err != nil {
var mse MalformedSequenceError
if ok := errors.As(err, &mse); ok {
_, outboundMarshaler := MarshalerForRequest(s, r)
s.errorHandler(ctx, s, outboundMarshaler, w, r, &HTTPStatusError{
HTTPStatus: http.StatusBadRequest,
Err: mse,
})
}
continue
}
h.h(w, r, pathParams)
return
}
// If no handler was found for the request, look at the other methods to
// handle the POST -> GET fallback when the request is subject to path
// length fallback.
// Note we are not eagerly checking the request here as we want to return the
// right HTTP status code, and we need to process the fallback candidates in
// order to do that.
for m, handlers := range s.handlers {
if m == r.Method {
continue
}
for _, h := range handlers {
var verb string
patVerb := h.pat.Verb()
idx := -1
if patVerb != "" && strings.HasSuffix(lastPathComponent, ":"+patVerb) {
idx = len(lastPathComponent) - len(patVerb) - 1
}
comps := make([]string, len(pathComponents))
copy(comps, pathComponents)
if idx > 0 {
comps[len(comps)-1], verb = lastPathComponent[:idx], lastPathComponent[idx+1:]
}
pathParams, err := h.pat.MatchAndEscape(comps, verb, s.unescapingMode)
if err != nil {
var mse MalformedSequenceError
if ok := errors.As(err, &mse); ok {
_, outboundMarshaler := MarshalerForRequest(s, r)
s.errorHandler(ctx, s, outboundMarshaler, w, r, &HTTPStatusError{
HTTPStatus: http.StatusBadRequest,
Err: mse,
})
}
continue
}
// X-HTTP-Method-Override is optional. Always allow fallback to POST.
// Also, only consider POST -> GET fallbacks, and avoid falling back to
// potentially dangerous operations like DELETE.
if s.isPathLengthFallback(r) && m == http.MethodGet {
if err := r.ParseForm(); err != nil {
_, outboundMarshaler := MarshalerForRequest(s, r)
sterr := status.Error(codes.InvalidArgument, err.Error())
s.errorHandler(ctx, s, outboundMarshaler, w, r, sterr)
return
}
h.h(w, r, pathParams)
return
}
_, outboundMarshaler := MarshalerForRequest(s, r)
s.routingErrorHandler(ctx, s, outboundMarshaler, w, r, http.StatusMethodNotAllowed)
return
}
}
_, outboundMarshaler := MarshalerForRequest(s, r)
s.routingErrorHandler(ctx, s, outboundMarshaler, w, r, http.StatusNotFound)
}
// GetForwardResponseOptions returns the ForwardResponseOptions associated with this ServeMux.
func (s *ServeMux) GetForwardResponseOptions() []func(context.Context, http.ResponseWriter, proto.Message) error {
return s.forwardResponseOptions
}
func (s *ServeMux) isPathLengthFallback(r *http.Request) bool {
return !s.disablePathLengthFallback && r.Method == "POST" && r.Header.Get("Content-Type") == "application/x-www-form-urlencoded"
}
type handler struct {
pat Pattern
h HandlerFunc
}
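
Putting the pieces together, a minimal sketch (not part of the vendored file) of constructing a ServeMux with one of the options above and wiring a plain HTTP handler through HandlePath; the "/hello/{name}" route and handler body are made up for the example, and generated service registration is omitted.

package main

import (
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	mux := runtime.NewServeMux(
		runtime.WithUnescapingMode(runtime.UnescapingModeAllExceptReserved),
	)

	// HandlePath routes through the same pattern matcher the generated stubs
	// use; "{name}" is captured into pathParams.
	err := mux.HandlePath(http.MethodGet, "/hello/{name}", func(w http.ResponseWriter, r *http.Request, pathParams map[string]string) {
		_, _ = w.Write([]byte("hello " + pathParams["name"]))
	})
	if err != nil {
		log.Fatal(err)
	}

	log.Fatal(http.ListenAndServe(":8080", mux))
}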

View File

@ -0,0 +1,381 @@
package runtime
import (
"errors"
"fmt"
"strconv"
"strings"
"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
"google.golang.org/grpc/grpclog"
)
var (
// ErrNotMatch indicates that the given HTTP request path does not match the pattern.
ErrNotMatch = errors.New("not match to the path pattern")
// ErrInvalidPattern indicates that the given definition of Pattern is not valid.
ErrInvalidPattern = errors.New("invalid pattern")
)
type MalformedSequenceError string
func (e MalformedSequenceError) Error() string {
return "malformed path escape " + strconv.Quote(string(e))
}
type op struct {
code utilities.OpCode
operand int
}
// Pattern is a template pattern of http request paths defined in
// https://github.com/googleapis/googleapis/blob/master/google/api/http.proto
type Pattern struct {
// ops is a list of operations
ops []op
// pool is a constant pool indexed by the operands or vars.
pool []string
// vars is a list of variable names to be bound by this pattern
vars []string
// stacksize is the max depth of the stack
stacksize int
// tailLen is the length of the fixed-size segments after a deep wildcard
tailLen int
// verb is the VERB part of the path pattern. It is empty if the pattern does not have VERB part.
verb string
}
// NewPattern returns a new Pattern from the given definition values.
// "ops" is a sequence of op codes. "pool" is a constant pool.
// "verb" is the verb part of the pattern. It is empty if the pattern does not have the part.
// "version" must be 1 for now.
// It returns an error if the given definition is invalid.
func NewPattern(version int, ops []int, pool []string, verb string) (Pattern, error) {
if version != 1 {
grpclog.Infof("unsupported version: %d", version)
return Pattern{}, ErrInvalidPattern
}
l := len(ops)
if l%2 != 0 {
grpclog.Infof("odd number of ops codes: %d", l)
return Pattern{}, ErrInvalidPattern
}
var (
typedOps []op
stack, maxstack int
tailLen int
pushMSeen bool
vars []string
)
for i := 0; i < l; i += 2 {
op := op{code: utilities.OpCode(ops[i]), operand: ops[i+1]}
switch op.code {
case utilities.OpNop:
continue
case utilities.OpPush:
if pushMSeen {
tailLen++
}
stack++
case utilities.OpPushM:
if pushMSeen {
grpclog.Infof("pushM appears twice")
return Pattern{}, ErrInvalidPattern
}
pushMSeen = true
stack++
case utilities.OpLitPush:
if op.operand < 0 || len(pool) <= op.operand {
grpclog.Infof("negative literal index: %d", op.operand)
return Pattern{}, ErrInvalidPattern
}
if pushMSeen {
tailLen++
}
stack++
case utilities.OpConcatN:
if op.operand <= 0 {
grpclog.Infof("negative concat size: %d", op.operand)
return Pattern{}, ErrInvalidPattern
}
stack -= op.operand
if stack < 0 {
grpclog.Info("stack underflow")
return Pattern{}, ErrInvalidPattern
}
stack++
case utilities.OpCapture:
if op.operand < 0 || len(pool) <= op.operand {
grpclog.Infof("variable name index out of bound: %d", op.operand)
return Pattern{}, ErrInvalidPattern
}
v := pool[op.operand]
op.operand = len(vars)
vars = append(vars, v)
stack--
if stack < 0 {
grpclog.Infof("stack underflow")
return Pattern{}, ErrInvalidPattern
}
default:
grpclog.Infof("invalid opcode: %d", op.code)
return Pattern{}, ErrInvalidPattern
}
if maxstack < stack {
maxstack = stack
}
typedOps = append(typedOps, op)
}
return Pattern{
ops: typedOps,
pool: pool,
vars: vars,
stacksize: maxstack,
tailLen: tailLen,
verb: verb,
}, nil
}
// MustPattern is a helper function which makes it easier to call NewPattern in variable initialization.
func MustPattern(p Pattern, err error) Pattern {
if err != nil {
grpclog.Fatalf("Pattern initialization failed: %v", err)
}
return p
}
// MatchAndEscape examines components to determine if they match to a Pattern.
// MatchAndEscape will return an error if no Patterns matched or if a pattern
// matched but contained malformed escape sequences. If successful, the function
// returns a mapping from field paths to their captured values.
func (p Pattern) MatchAndEscape(components []string, verb string, unescapingMode UnescapingMode) (map[string]string, error) {
if p.verb != verb {
if p.verb != "" {
return nil, ErrNotMatch
}
if len(components) == 0 {
components = []string{":" + verb}
} else {
components = append([]string{}, components...)
components[len(components)-1] += ":" + verb
}
}
var pos int
stack := make([]string, 0, p.stacksize)
captured := make([]string, len(p.vars))
l := len(components)
for _, op := range p.ops {
var err error
switch op.code {
case utilities.OpNop:
continue
case utilities.OpPush, utilities.OpLitPush:
if pos >= l {
return nil, ErrNotMatch
}
c := components[pos]
if op.code == utilities.OpLitPush {
if lit := p.pool[op.operand]; c != lit {
return nil, ErrNotMatch
}
} else if op.code == utilities.OpPush {
if c, err = unescape(c, unescapingMode, false); err != nil {
return nil, err
}
}
stack = append(stack, c)
pos++
case utilities.OpPushM:
end := len(components)
if end < pos+p.tailLen {
return nil, ErrNotMatch
}
end -= p.tailLen
c := strings.Join(components[pos:end], "/")
if c, err = unescape(c, unescapingMode, true); err != nil {
return nil, err
}
stack = append(stack, c)
pos = end
case utilities.OpConcatN:
n := op.operand
l := len(stack) - n
stack = append(stack[:l], strings.Join(stack[l:], "/"))
case utilities.OpCapture:
n := len(stack) - 1
captured[op.operand] = stack[n]
stack = stack[:n]
}
}
if pos < l {
return nil, ErrNotMatch
}
bindings := make(map[string]string)
for i, val := range captured {
bindings[p.vars[i]] = val
}
return bindings, nil
}
// Match examines components to determine if they match to a Pattern.
// It will never perform per-component unescaping (see: UnescapingModeLegacy).
// Match will return an error if no Patterns matched. If successful,
// the function returns a mapping from field paths to their captured values.
//
// Deprecated: Use MatchAndEscape.
func (p Pattern) Match(components []string, verb string) (map[string]string, error) {
return p.MatchAndEscape(components, verb, UnescapingModeDefault)
}
// Verb returns the verb part of the Pattern.
func (p Pattern) Verb() string { return p.verb }
func (p Pattern) String() string {
var stack []string
for _, op := range p.ops {
switch op.code {
case utilities.OpNop:
continue
case utilities.OpPush:
stack = append(stack, "*")
case utilities.OpLitPush:
stack = append(stack, p.pool[op.operand])
case utilities.OpPushM:
stack = append(stack, "**")
case utilities.OpConcatN:
n := op.operand
l := len(stack) - n
stack = append(stack[:l], strings.Join(stack[l:], "/"))
case utilities.OpCapture:
n := len(stack) - 1
stack[n] = fmt.Sprintf("{%s=%s}", p.vars[op.operand], stack[n])
}
}
segs := strings.Join(stack, "/")
if p.verb != "" {
return fmt.Sprintf("/%s:%s", segs, p.verb)
}
return "/" + segs
}
/*
* The following code is adopted and modified from Go's standard library
* and carries the attached license.
*
* Copyright 2009 The Go Authors. All rights reserved.
* Use of this source code is governed by a BSD-style
* license that can be found in the LICENSE file.
*/
// ishex returns whether or not the given byte is a valid hex character
func ishex(c byte) bool {
switch {
case '0' <= c && c <= '9':
return true
case 'a' <= c && c <= 'f':
return true
case 'A' <= c && c <= 'F':
return true
}
return false
}
func isRFC6570Reserved(c byte) bool {
switch c {
case '!', '#', '$', '&', '\'', '(', ')', '*',
'+', ',', '/', ':', ';', '=', '?', '@', '[', ']':
return true
default:
return false
}
}
// unhex converts a hex character to its numeric value
func unhex(c byte) byte {
switch {
case '0' <= c && c <= '9':
return c - '0'
case 'a' <= c && c <= 'f':
return c - 'a' + 10
case 'A' <= c && c <= 'F':
return c - 'A' + 10
}
return 0
}
// shouldUnescapeWithMode returns true if the character is escapable with the
// given mode
func shouldUnescapeWithMode(c byte, mode UnescapingMode) bool {
switch mode {
case UnescapingModeAllExceptReserved:
if isRFC6570Reserved(c) {
return false
}
case UnescapingModeAllExceptSlash:
if c == '/' {
return false
}
case UnescapingModeAllCharacters:
return true
}
return true
}
// unescape unescapes a path string using the provided mode
func unescape(s string, mode UnescapingMode, multisegment bool) (string, error) {
// TODO(v3): remove UnescapingModeLegacy
if mode == UnescapingModeLegacy {
return s, nil
}
if !multisegment {
mode = UnescapingModeAllCharacters
}
// Count %, check that they're well-formed.
n := 0
for i := 0; i < len(s); {
if s[i] == '%' {
n++
if i+2 >= len(s) || !ishex(s[i+1]) || !ishex(s[i+2]) {
s = s[i:]
if len(s) > 3 {
s = s[:3]
}
return "", MalformedSequenceError(s)
}
i += 3
} else {
i++
}
}
if n == 0 {
return s, nil
}
var t strings.Builder
t.Grow(len(s))
for i := 0; i < len(s); i++ {
switch s[i] {
case '%':
c := unhex(s[i+1])<<4 | unhex(s[i+2])
if shouldUnescapeWithMode(c, mode) {
t.WriteByte(c)
i += 2
continue
}
fallthrough
default:
t.WriteByte(s[i])
}
}
return t.String(), nil
}
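As a rough sketch of how the opcodes above fit together, the following hand-written program compiles the template "/v1/{name}" into a Pattern and matches a request path against it; generated gateway stubs do the same thing, but the opcode sequence here is written out by hand purely for illustration.

package main

import (
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
)

// pattern encodes the template "/v1/{name}".
var pattern = runtime.MustPattern(runtime.NewPattern(
	1, // version
	[]int{
		int(utilities.OpLitPush), 0, // push the literal "v1"
		int(utilities.OpPush), 0, // push one free path component
		int(utilities.OpConcatN), 1, // collapse it into a single segment
		int(utilities.OpCapture), 1, // bind that segment to the variable "name"
	},
	[]string{"v1", "name"}, // constant pool
	"",                     // no verb suffix
))

func main() {
	vars, err := pattern.MatchAndEscape([]string{"v1", "books"}, "", runtime.UnescapingModeAllExceptReserved)
	fmt.Println(vars, err) // map[name:books] <nil>
}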

View File

@ -0,0 +1,80 @@
package runtime
import (
"google.golang.org/protobuf/proto"
)
// StringP returns a pointer to a string whose pointee is the same as the given string value.
func StringP(val string) (*string, error) {
return proto.String(val), nil
}
// BoolP parses the given string representation of a boolean value,
// and returns a pointer to a bool whose value is the same as the parsed value.
func BoolP(val string) (*bool, error) {
b, err := Bool(val)
if err != nil {
return nil, err
}
return proto.Bool(b), nil
}
// Float64P parses the given string representation of a floating point number,
// and returns a pointer to a float64 whose value is the same as the parsed number.
func Float64P(val string) (*float64, error) {
f, err := Float64(val)
if err != nil {
return nil, err
}
return proto.Float64(f), nil
}
// Float32P parses the given string representation of a floating point number,
// and returns a pointer to a float32 whose value is the same as the parsed number.
func Float32P(val string) (*float32, error) {
f, err := Float32(val)
if err != nil {
return nil, err
}
return proto.Float32(f), nil
}
// Int64P parses the given string representation of an integer
// and returns a pointer to an int64 whose value is the same as the parsed integer.
func Int64P(val string) (*int64, error) {
i, err := Int64(val)
if err != nil {
return nil, err
}
return proto.Int64(i), nil
}
// Int32P parses the given string representation of an integer
// and returns a pointer to an int32 whose value is the same as the parsed integer.
func Int32P(val string) (*int32, error) {
i, err := Int32(val)
if err != nil {
return nil, err
}
return proto.Int32(i), err
}
// Uint64P parses the given string representation of an integer
// and returns a pointer to a uint64 whose value is the same as the parsed integer.
func Uint64P(val string) (*uint64, error) {
i, err := Uint64(val)
if err != nil {
return nil, err
}
return proto.Uint64(i), err
}
// Uint32P parses the given string representation of an integer
// and returns a pointer to a uint32 whose value is the same as the parsed integer.
func Uint32P(val string) (*uint32, error) {
i, err := Uint32(val)
if err != nil {
return nil, err
}
return proto.Uint32(i), err
}
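A tiny sketch of the pointer-returning helpers above; the values are arbitrary, and the non-pointer Bool/Int32 parsers they delegate to live elsewhere in this package.

package main

import (
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	i, err := runtime.Int32P("42") // *int32 pointing at 42
	if err != nil {
		panic(err)
	}
	b, err := runtime.BoolP("true") // *bool pointing at true
	if err != nil {
		panic(err)
	}
	fmt.Println(*i, *b) // 42 true
}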

View File

@ -0,0 +1,338 @@
package runtime
import (
"errors"
"fmt"
"net/url"
"regexp"
"strconv"
"strings"
"time"
"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
"google.golang.org/grpc/grpclog"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
"google.golang.org/protobuf/types/known/durationpb"
field_mask "google.golang.org/protobuf/types/known/fieldmaskpb"
"google.golang.org/protobuf/types/known/structpb"
"google.golang.org/protobuf/types/known/timestamppb"
"google.golang.org/protobuf/types/known/wrapperspb"
)
var valuesKeyRegexp = regexp.MustCompile(`^(.*)\[(.*)\]$`)
var currentQueryParser QueryParameterParser = &DefaultQueryParser{}
// QueryParameterParser defines interface for all query parameter parsers
type QueryParameterParser interface {
Parse(msg proto.Message, values url.Values, filter *utilities.DoubleArray) error
}
// PopulateQueryParameters parses query parameters
// into "msg" using current query parser
func PopulateQueryParameters(msg proto.Message, values url.Values, filter *utilities.DoubleArray) error {
return currentQueryParser.Parse(msg, values, filter)
}
// DefaultQueryParser is a QueryParameterParser which implements the default
// query parameters parsing behavior.
//
// See https://github.com/grpc-ecosystem/grpc-gateway/issues/2632 for more context.
type DefaultQueryParser struct{}
// Parse populates "values" into "msg".
// A value is ignored if its key starts with one of the elements in "filter".
func (*DefaultQueryParser) Parse(msg proto.Message, values url.Values, filter *utilities.DoubleArray) error {
for key, values := range values {
if match := valuesKeyRegexp.FindStringSubmatch(key); len(match) == 3 {
key = match[1]
values = append([]string{match[2]}, values...)
}
fieldPath := strings.Split(key, ".")
if filter.HasCommonPrefix(fieldPath) {
continue
}
if err := populateFieldValueFromPath(msg.ProtoReflect(), fieldPath, values); err != nil {
return err
}
}
return nil
}
// PopulateFieldFromPath sets a value in a nested Protobuf structure.
func PopulateFieldFromPath(msg proto.Message, fieldPathString string, value string) error {
fieldPath := strings.Split(fieldPathString, ".")
return populateFieldValueFromPath(msg.ProtoReflect(), fieldPath, []string{value})
}
func populateFieldValueFromPath(msgValue protoreflect.Message, fieldPath []string, values []string) error {
if len(fieldPath) < 1 {
return errors.New("no field path")
}
if len(values) < 1 {
return errors.New("no value provided")
}
var fieldDescriptor protoreflect.FieldDescriptor
for i, fieldName := range fieldPath {
fields := msgValue.Descriptor().Fields()
// Get field by name
fieldDescriptor = fields.ByName(protoreflect.Name(fieldName))
if fieldDescriptor == nil {
fieldDescriptor = fields.ByJSONName(fieldName)
if fieldDescriptor == nil {
// We're not returning an error here because this could just be
// an extra query parameter that isn't part of the request.
grpclog.Infof("field not found in %q: %q", msgValue.Descriptor().FullName(), strings.Join(fieldPath, "."))
return nil
}
}
// If this is the last element, we're done
if i == len(fieldPath)-1 {
break
}
// Only singular message fields are allowed
if fieldDescriptor.Message() == nil || fieldDescriptor.Cardinality() == protoreflect.Repeated {
return fmt.Errorf("invalid path: %q is not a message", fieldName)
}
// Get the nested message
msgValue = msgValue.Mutable(fieldDescriptor).Message()
}
// Check if oneof already set
if of := fieldDescriptor.ContainingOneof(); of != nil {
if f := msgValue.WhichOneof(of); f != nil {
return fmt.Errorf("field already set for oneof %q", of.FullName().Name())
}
}
switch {
case fieldDescriptor.IsList():
return populateRepeatedField(fieldDescriptor, msgValue.Mutable(fieldDescriptor).List(), values)
case fieldDescriptor.IsMap():
return populateMapField(fieldDescriptor, msgValue.Mutable(fieldDescriptor).Map(), values)
}
if len(values) > 1 {
return fmt.Errorf("too many values for field %q: %s", fieldDescriptor.FullName().Name(), strings.Join(values, ", "))
}
return populateField(fieldDescriptor, msgValue, values[0])
}
func populateField(fieldDescriptor protoreflect.FieldDescriptor, msgValue protoreflect.Message, value string) error {
v, err := parseField(fieldDescriptor, value)
if err != nil {
return fmt.Errorf("parsing field %q: %w", fieldDescriptor.FullName().Name(), err)
}
msgValue.Set(fieldDescriptor, v)
return nil
}
func populateRepeatedField(fieldDescriptor protoreflect.FieldDescriptor, list protoreflect.List, values []string) error {
for _, value := range values {
v, err := parseField(fieldDescriptor, value)
if err != nil {
return fmt.Errorf("parsing list %q: %w", fieldDescriptor.FullName().Name(), err)
}
list.Append(v)
}
return nil
}
func populateMapField(fieldDescriptor protoreflect.FieldDescriptor, mp protoreflect.Map, values []string) error {
if len(values) != 2 {
return fmt.Errorf("more than one value provided for key %q in map %q", values[0], fieldDescriptor.FullName())
}
key, err := parseField(fieldDescriptor.MapKey(), values[0])
if err != nil {
return fmt.Errorf("parsing map key %q: %w", fieldDescriptor.FullName().Name(), err)
}
value, err := parseField(fieldDescriptor.MapValue(), values[1])
if err != nil {
return fmt.Errorf("parsing map value %q: %w", fieldDescriptor.FullName().Name(), err)
}
mp.Set(key.MapKey(), value)
return nil
}
func parseField(fieldDescriptor protoreflect.FieldDescriptor, value string) (protoreflect.Value, error) {
switch fieldDescriptor.Kind() {
case protoreflect.BoolKind:
v, err := strconv.ParseBool(value)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfBool(v), nil
case protoreflect.EnumKind:
enum, err := protoregistry.GlobalTypes.FindEnumByName(fieldDescriptor.Enum().FullName())
if err != nil {
if errors.Is(err, protoregistry.NotFound) {
return protoreflect.Value{}, fmt.Errorf("enum %q is not registered", fieldDescriptor.Enum().FullName())
}
return protoreflect.Value{}, fmt.Errorf("failed to look up enum: %w", err)
}
// Look for enum by name
v := enum.Descriptor().Values().ByName(protoreflect.Name(value))
if v == nil {
i, err := strconv.Atoi(value)
if err != nil {
return protoreflect.Value{}, fmt.Errorf("%q is not a valid value", value)
}
// Look for enum by number
if v = enum.Descriptor().Values().ByNumber(protoreflect.EnumNumber(i)); v == nil {
return protoreflect.Value{}, fmt.Errorf("%q is not a valid value", value)
}
}
return protoreflect.ValueOfEnum(v.Number()), nil
case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind:
v, err := strconv.ParseInt(value, 10, 32)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfInt32(int32(v)), nil
case protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
v, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfInt64(v), nil
case protoreflect.Uint32Kind, protoreflect.Fixed32Kind:
v, err := strconv.ParseUint(value, 10, 32)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfUint32(uint32(v)), nil
case protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
v, err := strconv.ParseUint(value, 10, 64)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfUint64(v), nil
case protoreflect.FloatKind:
v, err := strconv.ParseFloat(value, 32)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfFloat32(float32(v)), nil
case protoreflect.DoubleKind:
v, err := strconv.ParseFloat(value, 64)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfFloat64(v), nil
case protoreflect.StringKind:
return protoreflect.ValueOfString(value), nil
case protoreflect.BytesKind:
v, err := Bytes(value)
if err != nil {
return protoreflect.Value{}, err
}
return protoreflect.ValueOfBytes(v), nil
case protoreflect.MessageKind, protoreflect.GroupKind:
return parseMessage(fieldDescriptor.Message(), value)
default:
panic(fmt.Sprintf("unknown field kind: %v", fieldDescriptor.Kind()))
}
}
func parseMessage(msgDescriptor protoreflect.MessageDescriptor, value string) (protoreflect.Value, error) {
var msg proto.Message
switch msgDescriptor.FullName() {
case "google.protobuf.Timestamp":
t, err := time.Parse(time.RFC3339Nano, value)
if err != nil {
return protoreflect.Value{}, err
}
msg = timestamppb.New(t)
case "google.protobuf.Duration":
d, err := time.ParseDuration(value)
if err != nil {
return protoreflect.Value{}, err
}
msg = durationpb.New(d)
case "google.protobuf.DoubleValue":
v, err := strconv.ParseFloat(value, 64)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Double(v)
case "google.protobuf.FloatValue":
v, err := strconv.ParseFloat(value, 32)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Float(float32(v))
case "google.protobuf.Int64Value":
v, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Int64(v)
case "google.protobuf.Int32Value":
v, err := strconv.ParseInt(value, 10, 32)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Int32(int32(v))
case "google.protobuf.UInt64Value":
v, err := strconv.ParseUint(value, 10, 64)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.UInt64(v)
case "google.protobuf.UInt32Value":
v, err := strconv.ParseUint(value, 10, 32)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.UInt32(uint32(v))
case "google.protobuf.BoolValue":
v, err := strconv.ParseBool(value)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Bool(v)
case "google.protobuf.StringValue":
msg = wrapperspb.String(value)
case "google.protobuf.BytesValue":
v, err := Bytes(value)
if err != nil {
return protoreflect.Value{}, err
}
msg = wrapperspb.Bytes(v)
case "google.protobuf.FieldMask":
fm := &field_mask.FieldMask{}
fm.Paths = append(fm.Paths, strings.Split(value, ",")...)
msg = fm
case "google.protobuf.Value":
var v structpb.Value
if err := protojson.Unmarshal([]byte(value), &v); err != nil {
return protoreflect.Value{}, err
}
msg = &v
case "google.protobuf.Struct":
var v structpb.Struct
if err := protojson.Unmarshal([]byte(value), &v); err != nil {
return protoreflect.Value{}, err
}
msg = &v
default:
return protoreflect.Value{}, fmt.Errorf("unsupported message type: %q", string(msgDescriptor.FullName()))
}
return protoreflect.ValueOfMessage(msg.ProtoReflect()), nil
}
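A minimal sketch of query-parameter population. A generated request message would normally be the target; durationpb.Duration stands in here only because its "seconds" field is addressable by name through protoreflect like any other message field, and the empty DoubleArray means no parameters are filtered out.

package main

import (
	"fmt"
	"net/url"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	msg := &durationpb.Duration{}
	values := url.Values{"seconds": {"30"}}
	if err := runtime.PopulateQueryParameters(msg, values, utilities.NewDoubleArray(nil)); err != nil {
		panic(err)
	}
	fmt.Println(msg.GetSeconds()) // 30
}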

View File

@ -0,0 +1,31 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
package(default_visibility = ["//visibility:public"])
go_library(
name = "utilities",
srcs = [
"doc.go",
"pattern.go",
"readerfactory.go",
"string_array_flag.go",
"trie.go",
],
importpath = "github.com/grpc-ecosystem/grpc-gateway/v2/utilities",
)
go_test(
name = "utilities_test",
size = "small",
srcs = [
"string_array_flag_test.go",
"trie_test.go",
],
deps = [":utilities"],
)
alias(
name = "go_default_library",
actual = ":utilities",
visibility = ["//visibility:public"],
)

View File

@ -0,0 +1,2 @@
// Package utilities provides members for internal use in grpc-gateway.
package utilities

View File

@ -0,0 +1,22 @@
package utilities
// An OpCode is an opcode of compiled path patterns.
type OpCode int
// These constants are the valid values of OpCode.
const (
// OpNop does nothing
OpNop = OpCode(iota)
// OpPush pushes a component to stack
OpPush
// OpLitPush pushes a component to stack if it matches to the literal
OpLitPush
// OpPushM concatenates the remaining components and pushes it to stack
OpPushM
// OpConcatN pops N items from stack, concatenates them and pushes it back to stack
OpConcatN
// OpCapture pops an item and binds it to the variable
OpCapture
// OpEnd is the least positive invalid opcode.
OpEnd
)

View File

@ -0,0 +1,19 @@
package utilities
import (
"bytes"
"io"
)
// IOReaderFactory takes in an io.Reader and returns a function that will allow you to create a new reader that begins
// at the start of the stream
func IOReaderFactory(r io.Reader) (func() io.Reader, error) {
b, err := io.ReadAll(r)
if err != nil {
return nil, err
}
return func() io.Reader {
return bytes.NewReader(b)
}, nil
}
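A short sketch of the factory: the stream is read once up front, and every returned reader replays the same buffered bytes.

package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
)

func main() {
	newReader, err := utilities.IOReaderFactory(strings.NewReader("hello"))
	if err != nil {
		panic(err)
	}
	first, _ := io.ReadAll(newReader())
	second, _ := io.ReadAll(newReader()) // a fresh reader over the same bytes
	fmt.Println(string(first), string(second)) // hello hello
}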

View File

@ -0,0 +1,33 @@
package utilities
import (
"flag"
"strings"
)
// flagInterface is a cut-down interface to `flag`
type flagInterface interface {
Var(value flag.Value, name string, usage string)
}
// StringArrayFlag defines a flag with the specified name and usage string.
// The return value is the address of a `StringArrayFlags` variable that stores the repeated values of the flag.
func StringArrayFlag(f flagInterface, name string, usage string) *StringArrayFlags {
value := &StringArrayFlags{}
f.Var(value, name, usage)
return value
}
// StringArrayFlags is a wrapper of `[]string` to provide an interface for `flag.Var`
type StringArrayFlags []string
// String returns a string representation of `StringArrayFlags`
func (i *StringArrayFlags) String() string {
return strings.Join(*i, ",")
}
// Set appends a value to `StringArrayFlags`
func (i *StringArrayFlags) Set(value string) error {
*i = append(*i, value)
return nil
}
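A small sketch of the repeated-flag helper; the flag name and values are made up.

package main

import (
	"flag"
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
)

func main() {
	fs := flag.NewFlagSet("example", flag.ExitOnError)
	headers := utilities.StringArrayFlag(fs, "header", "header to forward; may be repeated")
	_ = fs.Parse([]string{"--header", "x-a", "--header", "x-b"})
	fmt.Println(*headers) // [x-a x-b]
}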

View File

@ -0,0 +1,174 @@
package utilities
import (
"sort"
)
// DoubleArray is a Double Array implementation of trie on sequences of strings.
type DoubleArray struct {
// Encoding keeps an encoding from string to int
Encoding map[string]int
// Base is the base array of Double Array
Base []int
// Check is the check array of Double Array
Check []int
}
// NewDoubleArray builds a DoubleArray from a set of sequences of strings.
func NewDoubleArray(seqs [][]string) *DoubleArray {
da := &DoubleArray{Encoding: make(map[string]int)}
if len(seqs) == 0 {
return da
}
encoded := registerTokens(da, seqs)
sort.Sort(byLex(encoded))
root := node{row: -1, col: -1, left: 0, right: len(encoded)}
addSeqs(da, encoded, 0, root)
for i := len(da.Base); i > 0; i-- {
if da.Check[i-1] != 0 {
da.Base = da.Base[:i]
da.Check = da.Check[:i]
break
}
}
return da
}
func registerTokens(da *DoubleArray, seqs [][]string) [][]int {
var result [][]int
for _, seq := range seqs {
encoded := make([]int, 0, len(seq))
for _, token := range seq {
if _, ok := da.Encoding[token]; !ok {
da.Encoding[token] = len(da.Encoding)
}
encoded = append(encoded, da.Encoding[token])
}
result = append(result, encoded)
}
for i := range result {
result[i] = append(result[i], len(da.Encoding))
}
return result
}
type node struct {
row, col int
left, right int
}
func (n node) value(seqs [][]int) int {
return seqs[n.row][n.col]
}
func (n node) children(seqs [][]int) []*node {
var result []*node
lastVal := int(-1)
last := new(node)
for i := n.left; i < n.right; i++ {
if lastVal == seqs[i][n.col+1] {
continue
}
// Track the last value seen so that runs of equal values collapse into a
// single child instead of producing duplicate, overlapping children.
lastVal = seqs[i][n.col+1]
last.right = i
last = &node{
row: i,
col: n.col + 1,
left: i,
}
result = append(result, last)
}
last.right = n.right
return result
}
func addSeqs(da *DoubleArray, seqs [][]int, pos int, n node) {
ensureSize(da, pos)
children := n.children(seqs)
var i int
for i = 1; ; i++ {
ok := func() bool {
for _, child := range children {
code := child.value(seqs)
j := i + code
ensureSize(da, j)
if da.Check[j] != 0 {
return false
}
}
return true
}()
if ok {
break
}
}
da.Base[pos] = i
for _, child := range children {
code := child.value(seqs)
j := i + code
da.Check[j] = pos + 1
}
terminator := len(da.Encoding)
for _, child := range children {
code := child.value(seqs)
if code == terminator {
continue
}
j := i + code
addSeqs(da, seqs, j, *child)
}
}
func ensureSize(da *DoubleArray, i int) {
for i >= len(da.Base) {
da.Base = append(da.Base, make([]int, len(da.Base)+1)...)
da.Check = append(da.Check, make([]int, len(da.Check)+1)...)
}
}
type byLex [][]int
func (l byLex) Len() int { return len(l) }
func (l byLex) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
func (l byLex) Less(i, j int) bool {
si := l[i]
sj := l[j]
var k int
for k = 0; k < len(si) && k < len(sj); k++ {
if si[k] < sj[k] {
return true
}
if si[k] > sj[k] {
return false
}
}
return k < len(sj)
}
// HasCommonPrefix determines if any sequence in the DoubleArray is a prefix of the given sequence.
func (da *DoubleArray) HasCommonPrefix(seq []string) bool {
if len(da.Base) == 0 {
return false
}
var i int
for _, t := range seq {
code, ok := da.Encoding[t]
if !ok {
break
}
j := da.Base[i] + code
if len(da.Check) <= j || da.Check[j] != i+1 {
break
}
i = j
}
j := da.Base[i] + len(da.Encoding)
if len(da.Check) <= j || da.Check[j] != i+1 {
return false
}
return true
}
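A short sketch of how the gateway uses this trie: the registered sequences are field paths already bound from the URL path or body, and HasCommonPrefix answers whether an incoming query parameter should be ignored because one of those bound paths is a prefix of it. The field names here are invented.

package main

import (
	"fmt"

	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
)

func main() {
	bound := utilities.NewDoubleArray([][]string{
		{"book", "name"}, // e.g. a field already bound from the path template
	})
	fmt.Println(bound.HasCommonPrefix([]string{"book", "name", "display"})) // true
	fmt.Println(bound.HasCommonPrefix([]string{"book", "author"}))          // false
}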

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,61 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/url"
"strings"
)
// DefaultClient is the default Client and is used by Get, Head, Post and PostForm.
// Please be careful of initialization order - for example, if you change
// the global propagator, the DefaultClient might still be using the old one.
var DefaultClient = &http.Client{Transport: NewTransport(http.DefaultTransport)}
// Get is a convenient replacement for http.Get that adds a span around the request.
func Get(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "GET", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Head is a convenient replacement for http.Head that adds a span around the request.
func Head(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "HEAD", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Post is a convenient replacement for http.Post that adds a span around the request.
func Post(ctx context.Context, targetURL, contentType string, body io.Reader) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "POST", targetURL, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", contentType)
return DefaultClient.Do(req)
}
// PostForm is a convenient replacement for http.PostForm that adds a span around the request.
func PostForm(ctx context.Context, targetURL string, data url.Values) (resp *http.Response, err error) {
return Post(ctx, targetURL, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
}
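A minimal sketch of the instrumented convenience helpers; the URL is illustrative, and spans are only exported if a TracerProvider has been registered globally (or on a custom Transport) beforehand.

package main

import (
	"context"
	"fmt"
	"io"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	// The request goes through DefaultClient, so it is wrapped in a span and
	// the active context is propagated via the configured propagators.
	resp, err := otelhttp.Get(context.Background(), "https://example.com/")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	_, _ = io.Copy(io.Discard, resp.Body)
	fmt.Println(resp.Status)
}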

View File

@ -0,0 +1,46 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"net/http"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Attribute keys that can be added to a span.
const (
ReadBytesKey = attribute.Key("http.read_bytes") // if anything was read from the request body, the total number of bytes read
ReadErrorKey = attribute.Key("http.read_error") // If an error occurred while reading a request, the string of the error (io.EOF is not recorded)
WroteBytesKey = attribute.Key("http.wrote_bytes") // if anything was written to the response writer, the total number of bytes written
WriteErrorKey = attribute.Key("http.write_error") // if an error occurred while writing a reply, the string of the error (io.EOF is not recorded)
)
// Server HTTP metrics.
const (
RequestCount = "http.server.request_count" // Incoming request count total
RequestContentLength = "http.server.request_content_length" // Incoming request bytes total
ResponseContentLength = "http.server.response_content_length" // Incoming response bytes total
ServerLatency = "http.server.duration" // Incoming end to end duration, milliseconds
)
// Filter is a predicate used to determine whether a given http.Request should
// be traced. A Filter must return true if the request should be traced.
type Filter func(*http.Request) bool
func newTracer(tp trace.TracerProvider) trace.Tracer {
return tp.Tracer(instrumentationName, trace.WithInstrumentationVersion(Version()))
}
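A small sketch of a Filter in use: requests to a health-check path are passed straight through without a span. The /healthz path is an assumption about the deployment, not something this package prescribes.

package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// skipHealthChecks reports whether a request should be traced.
var skipHealthChecks otelhttp.Filter = func(r *http.Request) bool {
	return r.URL.Path != "/healthz"
}

func main() {
	handler := otelhttp.NewHandler(http.NotFoundHandler(), "server",
		otelhttp.WithFilter(skipHealthChecks))
	log.Fatal(http.ListenAndServe(":8080", handler))
}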

View File

@ -0,0 +1,208 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"net/http"
"net/http/httptrace"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
const (
instrumentationName = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
// config represents the configuration options available for the http.Handler
// and http.Transport types.
type config struct {
ServerName string
Tracer trace.Tracer
Meter metric.Meter
Propagators propagation.TextMapPropagator
SpanStartOptions []trace.SpanStartOption
PublicEndpoint bool
PublicEndpointFn func(*http.Request) bool
ReadEvent bool
WriteEvent bool
Filters []Filter
SpanNameFormatter func(string, *http.Request) string
ClientTrace func(context.Context) *httptrace.ClientTrace
TracerProvider trace.TracerProvider
MeterProvider metric.MeterProvider
}
// Option interface used for setting optional config properties.
type Option interface {
apply(*config)
}
type optionFunc func(*config)
func (o optionFunc) apply(c *config) {
o(c)
}
// newConfig creates a new config struct and applies opts to it.
func newConfig(opts ...Option) *config {
c := &config{
Propagators: otel.GetTextMapPropagator(),
MeterProvider: otel.GetMeterProvider(),
}
for _, opt := range opts {
opt.apply(c)
}
// Tracer is only initialized if manually specified. Otherwise, it can be passed with the tracing context.
if c.TracerProvider != nil {
c.Tracer = newTracer(c.TracerProvider)
}
c.Meter = c.MeterProvider.Meter(
instrumentationName,
metric.WithInstrumentationVersion(Version()),
)
return c
}
// WithTracerProvider specifies a tracer provider to use for creating a tracer.
// If none is specified, the global provider is used.
func WithTracerProvider(provider trace.TracerProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.TracerProvider = provider
}
})
}
// WithMeterProvider specifies a meter provider to use for creating a meter.
// If none is specified, the global provider is used.
func WithMeterProvider(provider metric.MeterProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.MeterProvider = provider
}
})
}
// WithPublicEndpoint configures the Handler to link the span with an incoming
// span context. If this option is not provided, then the association is a child
// association instead of a link.
func WithPublicEndpoint() Option {
return optionFunc(func(c *config) {
c.PublicEndpoint = true
})
}
// WithPublicEndpointFn runs with every request, and allows conditionally
// configuring the Handler to link the span with an incoming span context. If
// this option is not provided or returns false, then the association is a
// child association instead of a link.
// Note: WithPublicEndpoint takes precedence over WithPublicEndpointFn.
func WithPublicEndpointFn(fn func(*http.Request) bool) Option {
return optionFunc(func(c *config) {
c.PublicEndpointFn = fn
})
}
// WithPropagators configures specific propagators. If this
// option isn't specified, then the global TextMapPropagator is used.
func WithPropagators(ps propagation.TextMapPropagator) Option {
return optionFunc(func(c *config) {
if ps != nil {
c.Propagators = ps
}
})
}
// WithSpanOptions configures an additional set of
// trace.SpanOptions, which are applied to each new span.
func WithSpanOptions(opts ...trace.SpanStartOption) Option {
return optionFunc(func(c *config) {
c.SpanStartOptions = append(c.SpanStartOptions, opts...)
})
}
// WithFilter adds a filter to the list of filters used by the handler.
// If any filter indicates to exclude a request then the request will not be
// traced. All filters must allow a request to be traced for a Span to be created.
// If no filters are provided then all requests are traced.
// Filters will be invoked for each processed request; it is advised to make them
// simple and fast.
func WithFilter(f Filter) Option {
return optionFunc(func(c *config) {
c.Filters = append(c.Filters, f)
})
}
type event int
// Different types of events that can be recorded, see WithMessageEvents.
const (
ReadEvents event = iota
WriteEvents
)
// WithMessageEvents configures the Handler to record the specified events
// (span.AddEvent) on spans. By default only summary attributes are added at the
// end of the request.
//
// Valid events are:
// - ReadEvents: Record the number of bytes read after every http.Request.Body.Read
// using the ReadBytesKey
// - WriteEvents: Record the number of bytes written after every http.ResponseWriter.Write
// using the WriteBytesKey
func WithMessageEvents(events ...event) Option {
return optionFunc(func(c *config) {
for _, e := range events {
switch e {
case ReadEvents:
c.ReadEvent = true
case WriteEvents:
c.WriteEvent = true
}
}
})
}
// WithSpanNameFormatter takes a function that will be called on every
// request and the returned string will become the Span Name.
func WithSpanNameFormatter(f func(operation string, r *http.Request) string) Option {
return optionFunc(func(c *config) {
c.SpanNameFormatter = f
})
}
// WithClientTrace takes a function that returns client trace instance that will be
// applied to the requests sent through the otelhttp Transport.
func WithClientTrace(f func(context.Context) *httptrace.ClientTrace) Option {
return optionFunc(func(c *config) {
c.ClientTrace = f
})
}
// WithServerName returns an Option that sets the name of the (virtual) server
// handling requests.
func WithServerName(server string) Option {
return optionFunc(func(c *config) {
c.ServerName = server
})
}
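A brief sketch of composing the options above: span names are formatted as "<method> <operation>", incoming remote contexts are linked rather than parented, and read/write events are recorded. The operation name "users.list" is purely illustrative.

package main

import (
	"fmt"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func spanName(operation string, r *http.Request) string {
	return fmt.Sprintf("%s %s", r.Method, operation)
}

func main() {
	opts := []otelhttp.Option{
		otelhttp.WithSpanNameFormatter(spanName),
		otelhttp.WithPublicEndpoint(),
		otelhttp.WithMessageEvents(otelhttp.ReadEvents, otelhttp.WriteEvents),
	}
	h := otelhttp.NewHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNoContent)
	}), "users.list", opts...)
	_ = h // plug h into an http.Server or mux as usual
}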

View File

@ -0,0 +1,18 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package otelhttp provides an http.Handler and functions that are intended
// to be used to add tracing by wrapping existing handlers (with NewHandler)
// and routes (with WithRouteTag).
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

View File

@ -0,0 +1,275 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"io"
"net/http"
"time"
"github.com/felixge/httpsnoop"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
"go.opentelemetry.io/otel/trace"
)
// middleware is an http middleware which wraps the next handler in a span.
type middleware struct {
operation string
server string
tracer trace.Tracer
meter metric.Meter
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
readEvent bool
writeEvent bool
filters []Filter
spanNameFormatter func(string, *http.Request) string
counters map[string]metric.Int64Counter
valueRecorders map[string]metric.Float64Histogram
publicEndpoint bool
publicEndpointFn func(*http.Request) bool
}
func defaultHandlerFormatter(operation string, _ *http.Request) string {
return operation
}
// NewHandler wraps the passed handler in a span named after the operation and
// enriches it with metrics.
func NewHandler(handler http.Handler, operation string, opts ...Option) http.Handler {
return NewMiddleware(operation, opts...)(handler)
}
// NewMiddleware returns a tracing and metrics instrumentation middleware.
// The handler returned by the middleware wraps a handler
// in a span named after the operation and enriches it with metrics.
func NewMiddleware(operation string, opts ...Option) func(http.Handler) http.Handler {
h := middleware{
operation: operation,
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindServer)),
WithSpanNameFormatter(defaultHandlerFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
h.configure(c)
h.createMeasures()
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h.serveHTTP(w, r, next)
})
}
}
func (h *middleware) configure(c *config) {
h.tracer = c.Tracer
h.meter = c.Meter
h.propagators = c.Propagators
h.spanStartOptions = c.SpanStartOptions
h.readEvent = c.ReadEvent
h.writeEvent = c.WriteEvent
h.filters = c.Filters
h.spanNameFormatter = c.SpanNameFormatter
h.publicEndpoint = c.PublicEndpoint
h.publicEndpointFn = c.PublicEndpointFn
h.server = c.ServerName
}
func handleErr(err error) {
if err != nil {
otel.Handle(err)
}
}
func (h *middleware) createMeasures() {
h.counters = make(map[string]metric.Int64Counter)
h.valueRecorders = make(map[string]metric.Float64Histogram)
requestBytesCounter, err := h.meter.Int64Counter(RequestContentLength)
handleErr(err)
responseBytesCounter, err := h.meter.Int64Counter(ResponseContentLength)
handleErr(err)
serverLatencyMeasure, err := h.meter.Float64Histogram(ServerLatency)
handleErr(err)
h.counters[RequestContentLength] = requestBytesCounter
h.counters[ResponseContentLength] = responseBytesCounter
h.valueRecorders[ServerLatency] = serverLatencyMeasure
}
// serveHTTP sets up tracing and calls the given next http.Handler with the span
// context injected into the request context.
func (h *middleware) serveHTTP(w http.ResponseWriter, r *http.Request, next http.Handler) {
requestStartTime := time.Now()
for _, f := range h.filters {
if !f(r) {
// Simply pass through to the handler if a filter rejects the request
next.ServeHTTP(w, r)
return
}
}
ctx := h.propagators.Extract(r.Context(), propagation.HeaderCarrier(r.Header))
opts := []trace.SpanStartOption{
trace.WithAttributes(semconvutil.HTTPServerRequest(h.server, r)...),
}
if h.server != "" {
hostAttr := semconv.NetHostName(h.server)
opts = append(opts, trace.WithAttributes(hostAttr))
}
opts = append(opts, h.spanStartOptions...)
if h.publicEndpoint || (h.publicEndpointFn != nil && h.publicEndpointFn(r.WithContext(ctx))) {
opts = append(opts, trace.WithNewRoot())
// Linking incoming span context if any for public endpoint.
if s := trace.SpanContextFromContext(ctx); s.IsValid() && s.IsRemote() {
opts = append(opts, trace.WithLinks(trace.Link{SpanContext: s}))
}
}
tracer := h.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
ctx, span := tracer.Start(ctx, h.spanNameFormatter(h.operation, r), opts...)
defer span.End()
readRecordFunc := func(int64) {}
if h.readEvent {
readRecordFunc = func(n int64) {
span.AddEvent("read", trace.WithAttributes(ReadBytesKey.Int64(n)))
}
}
var bw bodyWrapper
// if request body is nil or NoBody, we don't want to mutate the body as it
// will affect the identity of it in an unforeseeable way because we assert
// ReadCloser fulfills a certain interface and it is indeed nil or NoBody.
if r.Body != nil && r.Body != http.NoBody {
bw.ReadCloser = r.Body
bw.record = readRecordFunc
r.Body = &bw
}
writeRecordFunc := func(int64) {}
if h.writeEvent {
writeRecordFunc = func(n int64) {
span.AddEvent("write", trace.WithAttributes(WroteBytesKey.Int64(n)))
}
}
rww := &respWriterWrapper{
ResponseWriter: w,
record: writeRecordFunc,
ctx: ctx,
props: h.propagators,
statusCode: http.StatusOK, // default status code in case the Handler doesn't write anything
}
// Wrap w to use our ResponseWriter methods while also exposing
// other interfaces that w may implement (http.CloseNotifier,
// http.Flusher, http.Hijacker, http.Pusher, io.ReaderFrom).
w = httpsnoop.Wrap(w, httpsnoop.Hooks{
Header: func(httpsnoop.HeaderFunc) httpsnoop.HeaderFunc {
return rww.Header
},
Write: func(httpsnoop.WriteFunc) httpsnoop.WriteFunc {
return rww.Write
},
WriteHeader: func(httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
return rww.WriteHeader
},
})
labeler := &Labeler{}
ctx = injectLabeler(ctx, labeler)
next.ServeHTTP(w, r.WithContext(ctx))
setAfterServeAttributes(span, bw.read, rww.written, rww.statusCode, bw.err, rww.err)
// Add metrics
attributes := append(labeler.Get(), semconvutil.HTTPServerRequestMetrics(h.server, r)...)
if rww.statusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(rww.statusCode))
}
o := metric.WithAttributes(attributes...)
h.counters[RequestContentLength].Add(ctx, bw.read, o)
h.counters[ResponseContentLength].Add(ctx, rww.written, o)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
h.valueRecorders[ServerLatency].Record(ctx, elapsedTime, o)
}
func setAfterServeAttributes(span trace.Span, read, wrote int64, statusCode int, rerr, werr error) {
attributes := []attribute.KeyValue{}
// TODO: Consider adding an event after each read and write, possibly as an
// option (defaulting to off), so as to not create needlessly verbose spans.
if read > 0 {
attributes = append(attributes, ReadBytesKey.Int64(read))
}
if rerr != nil && rerr != io.EOF {
attributes = append(attributes, ReadErrorKey.String(rerr.Error()))
}
if wrote > 0 {
attributes = append(attributes, WroteBytesKey.Int64(wrote))
}
if statusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(statusCode))
}
span.SetStatus(semconvutil.HTTPServerStatus(statusCode))
if werr != nil && werr != io.EOF {
attributes = append(attributes, WriteErrorKey.String(werr.Error()))
}
span.SetAttributes(attributes...)
}
// WithRouteTag annotates spans and metrics with the provided route name
// with HTTP route attribute.
func WithRouteTag(route string, h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
attr := semconv.HTTPRouteKey.String(route)
span := trace.SpanFromContext(r.Context())
span.SetAttributes(attr)
labeler, _ := LabelerFromContext(r.Context())
labeler.Add(attr)
h.ServeHTTP(w, r)
})
}
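A minimal sketch of server-side wiring for the handler above: the whole mux is wrapped once with NewHandler, and each route is additionally tagged with WithRouteTag so the route attribute lands on the span and the metrics. The path, port, and operation name are illustrative.

package main

import (
	"io"
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.Handle("/users", otelhttp.WithRouteTag("/users", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	})))

	// One span per request plus the request/response size counters and the
	// duration histogram registered in createMeasures.
	log.Fatal(http.ListenAndServe(":8080", otelhttp.NewHandler(mux, "http.server")))
}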

View File

@ -0,0 +1,21 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
// Generate semconvutil package:
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv_test.go.tmpl "--data={}" --out=httpconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv.go.tmpl "--data={}" --out=httpconv.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv_test.go.tmpl "--data={}" --out=netconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv.go.tmpl "--data={}" --out=netconv.go

View File

@ -0,0 +1,552 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/httpconv.go.tmpl
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"fmt"
"net/http"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
)
// HTTPClientResponse returns trace attributes for an HTTP response received by a
// client from a server. It will return the following attributes if the related
// values are defined in resp: "http.status.code",
// "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event,
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. If that is not the case, a complete set of attributes can be
// generated using the request contained in resp. For example:
//
// append(HTTPClientResponse(resp), ClientRequest(resp.Request)...)
func HTTPClientResponse(resp *http.Response) []attribute.KeyValue {
return hc.ClientResponse(resp)
}
// HTTPClientRequest returns trace attributes for an HTTP request made by a client.
// The following attributes are always returned: "http.url", "http.flavor",
// "http.method", "net.peer.name". The following attributes are returned if the
// related values are defined in req: "net.peer.port", "http.user_agent",
// "http.request_content_length", "enduser.id".
func HTTPClientRequest(req *http.Request) []attribute.KeyValue {
return hc.ClientRequest(req)
}
// HTTPClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func HTTPClientStatus(code int) (codes.Code, string) {
return hc.ClientStatus(code)
}
// HTTPServerRequest returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.flavor", "http.target", "net.host.name". The following attributes are
// returned if the related values are defined in req: "net.host.port",
// "net.sock.peer.addr", "net.sock.peer.port", "http.user_agent", "enduser.id",
// "http.client_ip".
func HTTPServerRequest(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequest(server, req)
}
// HTTPServerRequestMetrics returns metric attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.flavor", "net.host.name". The following attributes are
// returned if the related values are defined in req: "net.host.port".
func HTTPServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequestMetrics(server, req)
}
// HTTPServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func HTTPServerStatus(code int) (codes.Code, string) {
return hc.ServerStatus(code)
}
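// For example, server-side instrumentation in this module uses the helpers
// above roughly like this when recording a request (illustrative sketch; span,
// req, and statusCode come from the surrounding handler):
//
//	span.SetAttributes(HTTPServerRequest("", req)...)
//	span.SetStatus(HTTPServerStatus(statusCode))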
// HTTPRequestHeader returns the contents of h as attributes.
//
// Instrumentation should require an explicit configuration of which headers to
// capture and then prune what they pass here. Including all headers can be a
// security risk - explicit configuration helps avoid leaking sensitive
// information.
//
// The User-Agent header is already captured in the http.user_agent attribute
// from ClientRequest and ServerRequest. Instrumentation may provide an option
// to capture that header here even though it is not recommended. Otherwise,
// instrumentation should filter that out of what is passed.
func HTTPRequestHeader(h http.Header) []attribute.KeyValue {
return hc.RequestHeader(h)
}
// HTTPResponseHeader returns the contents of h as attributes.
//
// Instrumentation should require an explicit configuration of which headers to
// capture and then prune what they pass here. Including all headers can be a
// security risk - explicit configuration helps avoid leaking sensitive
// information.
//
// The User-Agent header is already captured in the http.user_agent attribute
// from ClientRequest and ServerRequest. Instrumentation may provide an option
// to capture that header here even though it is not recommended. Otherwise,
// instrumentation should filter that out of what is passed.
func HTTPResponseHeader(h http.Header) []attribute.KeyValue {
return hc.ResponseHeader(h)
}
// httpConv are the HTTP semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type httpConv struct {
NetConv *netConv
EnduserIDKey attribute.Key
HTTPClientIPKey attribute.Key
HTTPFlavorKey attribute.Key
HTTPMethodKey attribute.Key
HTTPRequestContentLengthKey attribute.Key
HTTPResponseContentLengthKey attribute.Key
HTTPRouteKey attribute.Key
HTTPSchemeHTTP attribute.KeyValue
HTTPSchemeHTTPS attribute.KeyValue
HTTPStatusCodeKey attribute.Key
HTTPTargetKey attribute.Key
HTTPURLKey attribute.Key
HTTPUserAgentKey attribute.Key
}
var hc = &httpConv{
NetConv: nc,
EnduserIDKey: semconv.EnduserIDKey,
HTTPClientIPKey: semconv.HTTPClientIPKey,
HTTPFlavorKey: semconv.HTTPFlavorKey,
HTTPMethodKey: semconv.HTTPMethodKey,
HTTPRequestContentLengthKey: semconv.HTTPRequestContentLengthKey,
HTTPResponseContentLengthKey: semconv.HTTPResponseContentLengthKey,
HTTPRouteKey: semconv.HTTPRouteKey,
HTTPSchemeHTTP: semconv.HTTPSchemeHTTP,
HTTPSchemeHTTPS: semconv.HTTPSchemeHTTPS,
HTTPStatusCodeKey: semconv.HTTPStatusCodeKey,
HTTPTargetKey: semconv.HTTPTargetKey,
HTTPURLKey: semconv.HTTPURLKey,
HTTPUserAgentKey: semconv.HTTPUserAgentKey,
}
// ClientResponse returns attributes for an HTTP response received by a client
// from a server. The following attributes are returned if the related values
// are defined in resp: "http.status.code", "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event,
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. If that is not the case, a complete set of attributes can be
// generated using the request contained in resp. For example:
//
// append(ClientResponse(resp), ClientRequest(resp.Request)...)
func (c *httpConv) ClientResponse(resp *http.Response) []attribute.KeyValue {
var n int
if resp.StatusCode > 0 {
n++
}
if resp.ContentLength > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
if resp.StatusCode > 0 {
attrs = append(attrs, c.HTTPStatusCodeKey.Int(resp.StatusCode))
}
if resp.ContentLength > 0 {
attrs = append(attrs, c.HTTPResponseContentLengthKey.Int(int(resp.ContentLength)))
}
return attrs
}
// ClientRequest returns attributes for an HTTP request made by a client. The
// following attributes are always returned: "http.url", "http.flavor",
// "http.method", "net.peer.name". The following attributes are returned if the
// related values are defined in req: "net.peer.port", "http.user_agent",
// "http.request_content_length", "enduser.id".
func (c *httpConv) ClientRequest(req *http.Request) []attribute.KeyValue {
n := 4 // URL, peer name, proto, and method.
var h string
if req.URL != nil {
h = req.URL.Host
}
peer, p := firstHostPort(h, req.Header.Get("Host"))
port := requiredHTTPPort(req.URL != nil && req.URL.Scheme == "https", p)
if port > 0 {
n++
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
if req.ContentLength > 0 {
n++
}
userID, _, hasUserID := req.BasicAuth()
if hasUserID {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
attrs = append(attrs, c.flavor(req.Proto))
var u string
if req.URL != nil {
// Remove any username/password info that may be in the URL.
userinfo := req.URL.User
req.URL.User = nil
u = req.URL.String()
// Restore any username/password info that was removed.
req.URL.User = userinfo
}
attrs = append(attrs, c.HTTPURLKey.String(u))
attrs = append(attrs, c.NetConv.PeerName(peer))
if port > 0 {
attrs = append(attrs, c.NetConv.PeerPort(port))
}
if useragent != "" {
attrs = append(attrs, c.HTTPUserAgentKey.String(useragent))
}
if l := req.ContentLength; l > 0 {
attrs = append(attrs, c.HTTPRequestContentLengthKey.Int64(l))
}
if hasUserID {
attrs = append(attrs, c.EnduserIDKey.String(userID))
}
return attrs
}
// ServerRequest returns attributes for an HTTP request received by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.flavor", "http.target", "net.host.name". The following attributes are
// returned if the related values are defined in req: "net.host.port",
// "net.sock.peer.addr", "net.sock.peer.port", "http.user_agent", "enduser.id",
// "http.client_ip".
func (c *httpConv) ServerRequest(server string, req *http.Request) []attribute.KeyValue {
// TODO: This currently does not add the specification required
// `http.target` attribute. It has too high of a cardinality to safely be
// added. An alternate should be added, or this comment removed, when it is
// addressed by the specification. If it is ultimately decided to continue
// not including the attribute, the HTTPTargetKey field of the httpConv
// should be removed as well.
n := 4 // Method, scheme, proto, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
peer, peerPort := splitHostPort(req.RemoteAddr)
if peer != "" {
n++
if peerPort > 0 {
n++
}
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
userID, _, hasUserID := req.BasicAuth()
if hasUserID {
n++
}
clientIP := serverClientIP(req.Header.Get("X-Forwarded-For"))
if clientIP != "" {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.flavor(req.Proto))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
if peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port", so this will not be a
// file-path that would be interpreted with a sock family.
attrs = append(attrs, c.NetConv.SockPeerAddr(peer))
if peerPort > 0 {
attrs = append(attrs, c.NetConv.SockPeerPort(peerPort))
}
}
if useragent != "" {
attrs = append(attrs, c.HTTPUserAgentKey.String(useragent))
}
if hasUserID {
attrs = append(attrs, c.EnduserIDKey.String(userID))
}
if clientIP != "" {
attrs = append(attrs, c.HTTPClientIPKey.String(clientIP))
}
return attrs
}
// ServerRequestMetrics returns metric attributes for an HTTP request received
// by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.flavor", "net.host.name". The following attributes are
// returned if the related values are defined in req: "net.host.port".
func (c *httpConv) ServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
// TODO: This currently does not add the specification required
// `http.target` attribute. It has too high of a cardinality to safely be
// added. An alternate should be added, or this comment removed, when it is
// addressed by the specification. If it is ultimately decided to continue
// not including the attribute, the HTTPTargetKey field of the httpConv
// should be removed as well.
n := 4 // Method, scheme, proto, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.methodMetric(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.flavor(req.Proto))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
return attrs
}
func (c *httpConv) method(method string) attribute.KeyValue {
if method == "" {
return c.HTTPMethodKey.String(http.MethodGet)
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) methodMetric(method string) attribute.KeyValue {
method = strings.ToUpper(method)
switch method {
case http.MethodConnect, http.MethodDelete, http.MethodGet, http.MethodHead, http.MethodOptions, http.MethodPatch, http.MethodPost, http.MethodPut, http.MethodTrace:
default:
method = "_OTHER"
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) scheme(https bool) attribute.KeyValue { // nolint:revive
if https {
return c.HTTPSchemeHTTPS
}
return c.HTTPSchemeHTTP
}
func (c *httpConv) flavor(proto string) attribute.KeyValue {
switch proto {
case "HTTP/1.0":
return c.HTTPFlavorKey.String("1.0")
case "HTTP/1.1":
return c.HTTPFlavorKey.String("1.1")
case "HTTP/2":
return c.HTTPFlavorKey.String("2.0")
case "HTTP/3":
return c.HTTPFlavorKey.String("3.0")
default:
return c.HTTPFlavorKey.String(proto)
}
}
func serverClientIP(xForwardedFor string) string {
if idx := strings.Index(xForwardedFor, ","); idx >= 0 {
xForwardedFor = xForwardedFor[:idx]
}
return xForwardedFor
}
func requiredHTTPPort(https bool, port int) int { // nolint:revive
if https {
if port > 0 && port != 443 {
return port
}
} else {
if port > 0 && port != 80 {
return port
}
}
return -1
}
// Return the request host and port from the first non-empty source.
func firstHostPort(source ...string) (host string, port int) {
for _, hostport := range source {
host, port = splitHostPort(hostport)
if host != "" || port > 0 {
break
}
}
return
}
// RequestHeader returns the contents of h as OpenTelemetry attributes.
func (c *httpConv) RequestHeader(h http.Header) []attribute.KeyValue {
return c.header("http.request.header", h)
}
// ResponseHeader returns the contents of h as OpenTelemetry attributes.
func (c *httpConv) ResponseHeader(h http.Header) []attribute.KeyValue {
return c.header("http.response.header", h)
}
func (c *httpConv) header(prefix string, h http.Header) []attribute.KeyValue {
key := func(k string) attribute.Key {
k = strings.ToLower(k)
k = strings.ReplaceAll(k, "-", "_")
k = fmt.Sprintf("%s.%s", prefix, k)
return attribute.Key(k)
}
attrs := make([]attribute.KeyValue, 0, len(h))
for k, v := range h {
attrs = append(attrs, key(k).StringSlice(v))
}
return attrs
}
// ClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func (c *httpConv) ClientStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 400 {
return codes.Error, ""
}
return codes.Unset, ""
}
// ServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func (c *httpConv) ServerStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 500 {
return codes.Error, ""
}
return codes.Unset, ""
}

View File

@ -0,0 +1,368 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/netconv.go.tmpl
// Copyright The OpenTelemetry Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"net"
"strconv"
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
)
// NetTransport returns a trace attribute describing the transport protocol of the
// passed network. See net.Dial for information about acceptable network
// values.
func NetTransport(network string) attribute.KeyValue {
return nc.Transport(network)
}
// NetClient returns trace attributes for a client network connection to address.
// See net.Dial for information about acceptable address values, address should
// be the same as the one used to create conn. If conn is nil, only network
// peer attributes will be returned that describe address. Otherwise, the
// socket level information about conn will also be included.
func NetClient(address string, conn net.Conn) []attribute.KeyValue {
return nc.Client(address, conn)
}
// NetServer returns trace attributes for a network listener listening at address.
// See net.Listen for information about acceptable address values, address
// should be the same as the one used to create ln. If ln is nil, only network
// host attributes will be returned that describe address. Otherwise, the
// socket level information about ln will also be included.
func NetServer(address string, ln net.Listener) []attribute.KeyValue {
return nc.Server(address, ln)
}
// netConv are the network semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type netConv struct {
NetHostNameKey attribute.Key
NetHostPortKey attribute.Key
NetPeerNameKey attribute.Key
NetPeerPortKey attribute.Key
NetSockFamilyKey attribute.Key
NetSockPeerAddrKey attribute.Key
NetSockPeerPortKey attribute.Key
NetSockHostAddrKey attribute.Key
NetSockHostPortKey attribute.Key
NetTransportOther attribute.KeyValue
NetTransportTCP attribute.KeyValue
NetTransportUDP attribute.KeyValue
NetTransportInProc attribute.KeyValue
}
var nc = &netConv{
NetHostNameKey: semconv.NetHostNameKey,
NetHostPortKey: semconv.NetHostPortKey,
NetPeerNameKey: semconv.NetPeerNameKey,
NetPeerPortKey: semconv.NetPeerPortKey,
NetSockFamilyKey: semconv.NetSockFamilyKey,
NetSockPeerAddrKey: semconv.NetSockPeerAddrKey,
NetSockPeerPortKey: semconv.NetSockPeerPortKey,
NetSockHostAddrKey: semconv.NetSockHostAddrKey,
NetSockHostPortKey: semconv.NetSockHostPortKey,
NetTransportOther: semconv.NetTransportOther,
NetTransportTCP: semconv.NetTransportTCP,
NetTransportUDP: semconv.NetTransportUDP,
NetTransportInProc: semconv.NetTransportInProc,
}
func (c *netConv) Transport(network string) attribute.KeyValue {
switch network {
case "tcp", "tcp4", "tcp6":
return c.NetTransportTCP
case "udp", "udp4", "udp6":
return c.NetTransportUDP
case "unix", "unixgram", "unixpacket":
return c.NetTransportInProc
default:
// "ip:*", "ip4:*", and "ip6:*" all are considered other.
return c.NetTransportOther
}
}
// Host returns attributes for a network host address.
func (c *netConv) Host(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.HostName(h))
if p > 0 {
attrs = append(attrs, c.HostPort(int(p)))
}
return attrs
}
// Server returns attributes for a network listener listening at address. See
// net.Listen for information about acceptable address values, address should
// be the same as the one used to create ln. If ln is nil, only network host
// attributes will be returned that describe address. Otherwise, the socket
// level information about ln will also be included.
func (c *netConv) Server(address string, ln net.Listener) []attribute.KeyValue {
if ln == nil {
return c.Host(address)
}
lAddr := ln.Addr()
if lAddr == nil {
return c.Host(address)
}
hostName, hostPort := splitHostPort(address)
sockHostAddr, sockHostPort := splitHostPort(lAddr.String())
network := lAddr.Network()
sockFamily := family(network, sockHostAddr)
n := nonZeroStr(hostName, network, sockHostAddr, sockFamily)
n += positiveInt(hostPort, sockHostPort)
attr := make([]attribute.KeyValue, 0, n)
if hostName != "" {
attr = append(attr, c.HostName(hostName))
if hostPort > 0 {
// Only if net.host.name is set should net.host.port be.
attr = append(attr, c.HostPort(hostPort))
}
}
if network != "" {
attr = append(attr, c.Transport(network))
}
if sockFamily != "" {
attr = append(attr, c.NetSockFamilyKey.String(sockFamily))
}
if sockHostAddr != "" {
attr = append(attr, c.NetSockHostAddrKey.String(sockHostAddr))
if sockHostPort > 0 {
// Only if net.sock.host.addr is set should net.sock.host.port be.
attr = append(attr, c.NetSockHostPortKey.Int(sockHostPort))
}
}
return attr
}
func (c *netConv) HostName(name string) attribute.KeyValue {
return c.NetHostNameKey.String(name)
}
func (c *netConv) HostPort(port int) attribute.KeyValue {
return c.NetHostPortKey.Int(port)
}
// Client returns attributes for a client network connection to address. See
// net.Dial for information about acceptable address values, address should be
// the same as the one used to create conn. If conn is nil, only network peer
// attributes will be returned that describe address. Otherwise, the socket
// level information about conn will also be included.
func (c *netConv) Client(address string, conn net.Conn) []attribute.KeyValue {
if conn == nil {
return c.Peer(address)
}
lAddr, rAddr := conn.LocalAddr(), conn.RemoteAddr()
var network string
switch {
case lAddr != nil:
network = lAddr.Network()
case rAddr != nil:
network = rAddr.Network()
default:
return c.Peer(address)
}
peerName, peerPort := splitHostPort(address)
var (
sockFamily string
sockPeerAddr string
sockPeerPort int
sockHostAddr string
sockHostPort int
)
if lAddr != nil {
sockHostAddr, sockHostPort = splitHostPort(lAddr.String())
}
if rAddr != nil {
sockPeerAddr, sockPeerPort = splitHostPort(rAddr.String())
}
switch {
case sockHostAddr != "":
sockFamily = family(network, sockHostAddr)
case sockPeerAddr != "":
sockFamily = family(network, sockPeerAddr)
}
n := nonZeroStr(peerName, network, sockPeerAddr, sockHostAddr, sockFamily)
n += positiveInt(peerPort, sockPeerPort, sockHostPort)
attr := make([]attribute.KeyValue, 0, n)
if peerName != "" {
attr = append(attr, c.PeerName(peerName))
if peerPort > 0 {
// Only if net.peer.name is set should net.peer.port be.
attr = append(attr, c.PeerPort(peerPort))
}
}
if network != "" {
attr = append(attr, c.Transport(network))
}
if sockFamily != "" {
attr = append(attr, c.NetSockFamilyKey.String(sockFamily))
}
if sockPeerAddr != "" {
attr = append(attr, c.NetSockPeerAddrKey.String(sockPeerAddr))
if sockPeerPort > 0 {
// Only if net.sock.peer.addr is set should net.sock.peer.port be.
attr = append(attr, c.NetSockPeerPortKey.Int(sockPeerPort))
}
}
if sockHostAddr != "" {
attr = append(attr, c.NetSockHostAddrKey.String(sockHostAddr))
if sockHostPort > 0 {
// Only if net.sock.host.addr is set should net.sock.host.port be.
attr = append(attr, c.NetSockHostPortKey.Int(sockHostPort))
}
}
return attr
}
func family(network, address string) string {
switch network {
case "unix", "unixgram", "unixpacket":
return "unix"
default:
if ip := net.ParseIP(address); ip != nil {
if ip.To4() == nil {
return "inet6"
}
return "inet"
}
}
return ""
}
func nonZeroStr(strs ...string) int {
var n int
for _, str := range strs {
if str != "" {
n++
}
}
return n
}
func positiveInt(ints ...int) int {
var n int
for _, i := range ints {
if i > 0 {
n++
}
}
return n
}
// Peer returns attributes for a network peer address.
func (c *netConv) Peer(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.PeerName(h))
if p > 0 {
attrs = append(attrs, c.PeerPort(int(p)))
}
return attrs
}
func (c *netConv) PeerName(name string) attribute.KeyValue {
return c.NetPeerNameKey.String(name)
}
func (c *netConv) PeerPort(port int) attribute.KeyValue {
return c.NetPeerPortKey.Int(port)
}
func (c *netConv) SockPeerAddr(addr string) attribute.KeyValue {
return c.NetSockPeerAddrKey.String(addr)
}
func (c *netConv) SockPeerPort(port int) attribute.KeyValue {
return c.NetSockPeerPortKey.Int(port)
}
// splitHostPort splits a network address hostport of the form "host",
// "host%zone", "[host]", "[host%zone], "host:port", "host%zone:port",
// "[host]:port", "[host%zone]:port", or ":port" into host or host%zone and
// port.
//
// An empty host is returned if it is not provided or unparsable. A negative
// port is returned if it is not provided or unparsable.
func splitHostPort(hostport string) (host string, port int) {
port = -1
if strings.HasPrefix(hostport, "[") {
addrEnd := strings.LastIndex(hostport, "]")
if addrEnd < 0 {
// Invalid hostport.
return
}
if i := strings.LastIndex(hostport[addrEnd:], ":"); i < 0 {
host = hostport[1:addrEnd]
return
}
} else {
if i := strings.LastIndex(hostport, ":"); i < 0 {
host = hostport
return
}
}
host, pStr, err := net.SplitHostPort(hostport)
if err != nil {
return
}
p, err := strconv.ParseUint(pStr, 10, 16)
if err != nil {
return
}
return host, int(p)
}
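// For example (illustrative of the behavior described above):
//
//	splitHostPort("example.com:8080") // "example.com", 8080
//	splitHostPort("[::1]:443")        // "::1", 443
//	splitHostPort("example.com")      // "example.com", -1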

View File

@ -0,0 +1,65 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"sync"
"go.opentelemetry.io/otel/attribute"
)
// Labeler is used to allow instrumented HTTP handlers to add custom attributes to
// the metrics recorded by the net/http instrumentation.
type Labeler struct {
mu sync.Mutex
attributes []attribute.KeyValue
}
// Add attributes to a Labeler.
func (l *Labeler) Add(ls ...attribute.KeyValue) {
l.mu.Lock()
defer l.mu.Unlock()
l.attributes = append(l.attributes, ls...)
}
// Get returns a copy of the attributes added to the Labeler.
func (l *Labeler) Get() []attribute.KeyValue {
l.mu.Lock()
defer l.mu.Unlock()
ret := make([]attribute.KeyValue, len(l.attributes))
copy(ret, l.attributes)
return ret
}
type labelerContextKeyType int
const lablelerContextKey labelerContextKeyType = 0
func injectLabeler(ctx context.Context, l *Labeler) context.Context {
return context.WithValue(ctx, lablelerContextKey, l)
}
// LabelerFromContext retrieves a Labeler instance from the provided context if
// one is available. If no Labeler was found in the provided context a new, empty
// Labeler is returned and the second return value is false. In this case it is
// safe to use the Labeler but any attributes added to it will not be used.
func LabelerFromContext(ctx context.Context) (*Labeler, bool) {
l, ok := ctx.Value(lablelerContextKey).(*Labeler)
if !ok {
l = &Labeler{}
}
return l, ok
}
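// For example, a handler wrapped by NewHandler can attach extra metric
// attributes through the request context (illustrative sketch; the attribute
// key "tenant" and the header name are placeholders):
//
//	labeler, _ := LabelerFromContext(r.Context())
//	labeler.Add(attribute.String("tenant", r.Header.Get("X-Tenant")))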

View File

@ -0,0 +1,193 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/http/httptrace"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// Transport implements the http.RoundTripper interface and wraps
// outbound HTTP(S) requests with a span.
type Transport struct {
rt http.RoundTripper
tracer trace.Tracer
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
filters []Filter
spanNameFormatter func(string, *http.Request) string
clientTrace func(context.Context) *httptrace.ClientTrace
}
var _ http.RoundTripper = &Transport{}
// NewTransport wraps the provided http.RoundTripper with one that
// starts a span and injects the span context into the outbound request headers.
//
// If the provided http.RoundTripper is nil, http.DefaultTransport will be used
// as the base http.RoundTripper.
func NewTransport(base http.RoundTripper, opts ...Option) *Transport {
if base == nil {
base = http.DefaultTransport
}
t := Transport{
rt: base,
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindClient)),
WithSpanNameFormatter(defaultTransportFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
t.applyConfig(c)
return &t
}
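// For example, an instrumented client can be built by wrapping the default
// transport with NewTransport above (illustrative sketch):
//
//	client := &http.Client{Transport: NewTransport(http.DefaultTransport)}
//	resp, err := client.Get("https://example.com/")
//	// The span ends when resp.Body is closed or a read returns io.EOF.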
func (t *Transport) applyConfig(c *config) {
t.tracer = c.Tracer
t.propagators = c.Propagators
t.spanStartOptions = c.SpanStartOptions
t.filters = c.Filters
t.spanNameFormatter = c.SpanNameFormatter
t.clientTrace = c.ClientTrace
}
func defaultTransportFormatter(_ string, r *http.Request) string {
return "HTTP " + r.Method
}
// RoundTrip creates a Span and propagates its context via the provided request's headers
// before handing the request to the configured base RoundTripper. The created span will
// end when the response body is closed or when a read from the body returns io.EOF.
func (t *Transport) RoundTrip(r *http.Request) (*http.Response, error) {
for _, f := range t.filters {
if !f(r) {
// Simply pass through to the base RoundTripper if a filter rejects the request
return t.rt.RoundTrip(r)
}
}
tracer := t.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
opts := append([]trace.SpanStartOption{}, t.spanStartOptions...) // start with the configured options
ctx, span := tracer.Start(r.Context(), t.spanNameFormatter("", r), opts...)
if t.clientTrace != nil {
ctx = httptrace.WithClientTrace(ctx, t.clientTrace(ctx))
}
r = r.Clone(ctx) // According to RoundTripper spec, we shouldn't modify the origin request.
span.SetAttributes(semconvutil.HTTPClientRequest(r)...)
t.propagators.Inject(ctx, propagation.HeaderCarrier(r.Header))
res, err := t.rt.RoundTrip(r)
if err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.End()
return res, err
}
span.SetAttributes(semconvutil.HTTPClientResponse(res)...)
span.SetStatus(semconvutil.HTTPClientStatus(res.StatusCode))
res.Body = newWrappedBody(span, res.Body)
return res, err
}
// newWrappedBody returns a new and appropriately scoped *wrappedBody as an
// io.ReadCloser. If the passed body implements io.Writer, the returned value
// will implement io.ReadWriteCloser.
func newWrappedBody(span trace.Span, body io.ReadCloser) io.ReadCloser {
// Successful protocol switch responses will have a body that implements
// io.ReadWriteCloser. Ensure this interface type continues
// to be satisfied if that is the case.
if _, ok := body.(io.ReadWriteCloser); ok {
return &wrappedBody{span: span, body: body}
}
// Remove the implementation of the io.ReadWriteCloser and only implement
// the io.ReadCloser.
return struct{ io.ReadCloser }{&wrappedBody{span: span, body: body}}
}
// wrappedBody is the response body type returned by the transport
// instrumentation to complete a span. Errors encountered when using the
// response body are recorded in the span tracking the response.
//
// The span tracking the response is ended when this body is closed.
//
// If the response body implements the io.Writer interface (i.e. for
// successful protocol switches), the wrapped body also will.
type wrappedBody struct {
span trace.Span
body io.ReadCloser
}
var _ io.ReadWriteCloser = &wrappedBody{}
func (wb *wrappedBody) Write(p []byte) (int, error) {
// This will not panic given the guard in newWrappedBody.
n, err := wb.body.(io.Writer).Write(p)
if err != nil {
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
func (wb *wrappedBody) Read(b []byte) (int, error) {
n, err := wb.body.Read(b)
switch err {
case nil:
// nothing to do here but fall through to the return
case io.EOF:
wb.span.End()
default:
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
func (wb *wrappedBody) Close() error {
wb.span.End()
if wb.body != nil {
return wb.body.Close()
}
return nil
}

View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
// Version is the current release version of the otelhttp instrumentation.
func Version() string {
return "0.45.0"
// This string is updated by the pre_release.sh script during release
}
// SemVersion is the semantic version to be supplied to tracer/meter creation.
//
// Deprecated: Use [Version] instead.
func SemVersion() string {
return Version()
}

View File

@ -0,0 +1,99 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"go.opentelemetry.io/otel/propagation"
)
var _ io.ReadCloser = &bodyWrapper{}
// bodyWrapper wraps a http.Request.Body (an io.ReadCloser) to track the number
// of bytes read and the last error.
type bodyWrapper struct {
io.ReadCloser
record func(n int64) // must not be nil
read int64
err error
}
func (w *bodyWrapper) Read(b []byte) (int, error) {
n, err := w.ReadCloser.Read(b)
n1 := int64(n)
w.read += n1
w.err = err
w.record(n1)
return n, err
}
func (w *bodyWrapper) Close() error {
return w.ReadCloser.Close()
}
var _ http.ResponseWriter = &respWriterWrapper{}
// respWriterWrapper wraps a http.ResponseWriter in order to track the number of
// bytes written, the last error, and to catch the first written statusCode.
// TODO: The wrapped http.ResponseWriter doesn't implement any of the optional
// types (http.Hijacker, http.Pusher, http.CloseNotifier, http.Flusher, etc)
// that may be useful when using it in real life situations.
type respWriterWrapper struct {
http.ResponseWriter
record func(n int64) // must not be nil
// used to inject the header
ctx context.Context
props propagation.TextMapPropagator
written int64
statusCode int
err error
wroteHeader bool
}
func (w *respWriterWrapper) Header() http.Header {
return w.ResponseWriter.Header()
}
func (w *respWriterWrapper) Write(p []byte) (int, error) {
if !w.wroteHeader {
w.WriteHeader(http.StatusOK)
}
n, err := w.ResponseWriter.Write(p)
n1 := int64(n)
w.record(n1)
w.written += n1
w.err = err
return n, err
}
// WriteHeader persists initial statusCode for span attribution.
// All calls to WriteHeader will be propagated to the underlying ResponseWriter
// and will persist the statusCode from the first call.
// Subsequent calls are still forwarded so that behavior matches the standard
// library and net/http can log its superfluous-WriteHeader warning, which helps developers notice incorrect handler implementations.
func (w *respWriterWrapper) WriteHeader(statusCode int) {
if !w.wroteHeader {
w.wroteHeader = true
w.statusCode = statusCode
}
w.ResponseWriter.WriteHeader(statusCode)
}

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,51 @@
# OpenTelemetry-Go OTLP Span Exporter
[![Go Reference](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace.svg)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace)
[OpenTelemetry Protocol Exporter](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/protocol/exporter.md) implementation.
## Installation
```
go get -u go.opentelemetry.io/otel/exporters/otlp/otlptrace
```
## Examples
- [HTTP Exporter setup and examples](./otlptracehttp/example_test.go)
- [Full example of gRPC Exporter sending telemetry to a local collector](../../../example/otel-collector)
## [`otlptrace`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace)
The `otlptrace` package provides an exporter implementing the OTel span exporter interface.
This exporter is configured using a client satisfying the `otlptrace.Client` interface.
This client handles the transformation of data into wire format and the transmission of that data to the collector.
## [`otlptracegrpc`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc)
The `otlptracegrpc` package implements a client for the span exporter that sends trace telemetry data to the collector using gRPC with protobuf-encoded payloads.
## [`otlptracehttp`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp)
The `otlptracehttp` package implements a client for the span exporter that sends trace telemetry data to the collector using HTTP with protobuf-encoded payloads.
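For instance, a minimal setup using the HTTP client might look like the following sketch (the `initTracer` helper name is illustrative):

```go
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracer wires an OTLP/HTTP span exporter into a TracerProvider.
func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
	// With WithInsecure, the exporter sends to the default endpoint
	// (localhost:4318) over plain HTTP.
	exp, err := otlptracehttp.New(ctx, otlptracehttp.WithInsecure())
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return tp, nil
}
```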
## Configuration
### Environment Variables
The following environment variables can be used (instead of options objects) to
override the default configuration. For more information about how each of
these environment variables is interpreted, see [the OpenTelemetry
specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/protocol/exporter.md).
| Environment variable | Option | Default value |
| ------------------------------------------------------------------------ |------------------------------ | -------------------------------------------------------- |
| `OTEL_EXPORTER_OTLP_ENDPOINT` `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | `WithEndpoint` `WithInsecure` | `https://localhost:4317` or `https://localhost:4318`[^1] |
| `OTEL_EXPORTER_OTLP_CERTIFICATE` `OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE` | `WithTLSClientConfig` | |
| `OTEL_EXPORTER_OTLP_HEADERS` `OTEL_EXPORTER_OTLP_TRACES_HEADERS` | `WithHeaders` | |
| `OTEL_EXPORTER_OTLP_COMPRESSION` `OTEL_EXPORTER_OTLP_TRACES_COMPRESSION` | `WithCompression` | |
| `OTEL_EXPORTER_OTLP_TIMEOUT` `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` | `WithTimeout` | `10s` |
[^1]: The gRPC client defaults to `https://localhost:4317` and the HTTP client `https://localhost:4318`.
Configuration using options takes precedence over the environment variables.

View File

@ -0,0 +1,54 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Client manages connections to the collector, handles the
// transformation of data into wire format, and the transmission of that
// data to the collector.
type Client interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Start should establish connection(s) to endpoint(s). It is
// called just once by the exporter, so the implementation
// does not need to worry about idempotence and locking.
Start(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Stop should close the connections. The function is called
// only once by the exporter, so the implementation does not
// need to worry about idempotence, but it may be called
// concurrently with UploadTraces, so proper
// locking is required. The function serves as a
// synchronization point - after the function returns, the
// process of closing connections is assumed to be finished.
Stop(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// UploadTraces should transform the passed traces to the wire
// format and send them to the collector. May be called
// concurrently.
UploadTraces(ctx context.Context, protoSpans []*tracepb.ResourceSpans) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
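// For example, a minimal no-op implementation (such as one used in tests) only
// needs the three methods above (illustrative sketch, not part of this package):
//
//	type noopClient struct{}
//
//	func (noopClient) Start(context.Context) error { return nil }
//	func (noopClient) Stop(context.Context) error  { return nil }
//	func (noopClient) UploadTraces(context.Context, []*tracepb.ResourceSpans) error {
//		return nil
//	}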

View File

@ -0,0 +1,118 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
"errors"
"fmt"
"sync"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
)
var (
errAlreadyStarted = errors.New("already started")
)
// Exporter exports trace data in the OTLP wire format.
type Exporter struct {
client Client
mu sync.RWMutex
started bool
startOnce sync.Once
stopOnce sync.Once
}
// ExportSpans exports a batch of spans.
func (e *Exporter) ExportSpans(ctx context.Context, ss []tracesdk.ReadOnlySpan) error {
protoSpans := tracetransform.Spans(ss)
if len(protoSpans) == 0 {
return nil
}
err := e.client.UploadTraces(ctx, protoSpans)
if err != nil {
return fmt.Errorf("traces export: %w", err)
}
return nil
}
// Start establishes a connection to the receiving endpoint.
func (e *Exporter) Start(ctx context.Context) error {
var err = errAlreadyStarted
e.startOnce.Do(func() {
e.mu.Lock()
e.started = true
e.mu.Unlock()
err = e.client.Start(ctx)
})
return err
}
// Shutdown flushes all exports and closes all connections to the receiving endpoint.
func (e *Exporter) Shutdown(ctx context.Context) error {
e.mu.RLock()
started := e.started
e.mu.RUnlock()
if !started {
return nil
}
var err error
e.stopOnce.Do(func() {
err = e.client.Stop(ctx)
e.mu.Lock()
e.started = false
e.mu.Unlock()
})
return err
}
var _ tracesdk.SpanExporter = (*Exporter)(nil)
// New constructs a new Exporter and starts it.
func New(ctx context.Context, client Client) (*Exporter, error) {
exp := NewUnstarted(client)
if err := exp.Start(ctx); err != nil {
return nil, err
}
return exp, nil
}
// NewUnstarted constructs a new Exporter and does not start it.
func NewUnstarted(client Client) *Exporter {
return &Exporter{
client: client,
}
}
// MarshalLog is the marshaling function used by the logging system to represent this exporter.
func (e *Exporter) MarshalLog() interface{} {
return struct {
Type string
Client Client
}{
Type: "otlptrace",
Client: e.client,
}
}
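A short usage sketch, assuming any `otlptrace.Client` value `c` plus the public `sdk/trace` and `otel` packages, showing how `New` and the resulting Exporter are typically wired into a TracerProvider:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	tracesdk "go.opentelemetry.io/otel/sdk/trace"
)

// installExporter constructs and starts an Exporter for the given client and
// registers it with a batching TracerProvider. The client c can be any
// otlptrace.Client implementation (for example the no-op sketch above, or
// the gRPC/HTTP clients).
func installExporter(ctx context.Context, c otlptrace.Client) (*tracesdk.TracerProvider, error) {
	exp, err := otlptrace.New(ctx, c) // New also starts the client
	if err != nil {
		return nil, err
	}
	tp := tracesdk.NewTracerProvider(tracesdk.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return tp, nil // callers should tp.Shutdown(ctx) on exit
}
```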

View File

@ -0,0 +1,158 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/resource"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
// KeyValues transforms a slice of attribute KeyValues into OTLP key-values.
func KeyValues(attrs []attribute.KeyValue) []*commonpb.KeyValue {
if len(attrs) == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, len(attrs))
for _, kv := range attrs {
out = append(out, KeyValue(kv))
}
return out
}
// Iterator transforms an attribute iterator into OTLP key-values.
func Iterator(iter attribute.Iterator) []*commonpb.KeyValue {
l := iter.Len()
if l == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, l)
for iter.Next() {
out = append(out, KeyValue(iter.Attribute()))
}
return out
}
// ResourceAttributes transforms a Resource into OTLP key-values.
func ResourceAttributes(res *resource.Resource) []*commonpb.KeyValue {
return Iterator(res.Iter())
}
// KeyValue transforms an attribute KeyValue into an OTLP key-value.
func KeyValue(kv attribute.KeyValue) *commonpb.KeyValue {
return &commonpb.KeyValue{Key: string(kv.Key), Value: Value(kv.Value)}
}
// Value transforms an attribute Value into an OTLP AnyValue.
func Value(v attribute.Value) *commonpb.AnyValue {
av := new(commonpb.AnyValue)
switch v.Type() {
case attribute.BOOL:
av.Value = &commonpb.AnyValue_BoolValue{
BoolValue: v.AsBool(),
}
case attribute.BOOLSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: boolSliceValues(v.AsBoolSlice()),
},
}
case attribute.INT64:
av.Value = &commonpb.AnyValue_IntValue{
IntValue: v.AsInt64(),
}
case attribute.INT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: int64SliceValues(v.AsInt64Slice()),
},
}
case attribute.FLOAT64:
av.Value = &commonpb.AnyValue_DoubleValue{
DoubleValue: v.AsFloat64(),
}
case attribute.FLOAT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: float64SliceValues(v.AsFloat64Slice()),
},
}
case attribute.STRING:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: v.AsString(),
}
case attribute.STRINGSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: stringSliceValues(v.AsStringSlice()),
},
}
default:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: "INVALID",
}
}
return av
}
func boolSliceValues(vals []bool) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_BoolValue{
BoolValue: v,
},
}
}
return converted
}
func int64SliceValues(vals []int64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_IntValue{
IntValue: v,
},
}
}
return converted
}
func float64SliceValues(vals []float64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_DoubleValue{
DoubleValue: v,
},
}
}
return converted
}
func stringSliceValues(vals []string) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_StringValue{
StringValue: v,
},
}
}
return converted
}
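As an illustrative in-package test sketch (this package is internal, so it cannot be imported from outside the module), the mapping performed by `Value` can be checked like this:

```go
package tracetransform

import (
	"testing"

	"go.opentelemetry.io/otel/attribute"
	commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)

// TestValueInt64 checks that an int64 attribute value becomes an
// AnyValue_IntValue carrying the same number.
func TestValueInt64(t *testing.T) {
	av := Value(attribute.Int64Value(42))
	iv, ok := av.Value.(*commonpb.AnyValue_IntValue)
	if !ok || iv.IntValue != 42 {
		t.Fatalf("unexpected AnyValue: %v", av)
	}
}
```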

View File

@ -0,0 +1,30 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/instrumentation"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
func InstrumentationScope(il instrumentation.Scope) *commonpb.InstrumentationScope {
if il == (instrumentation.Scope{}) {
return nil
}
return &commonpb.InstrumentationScope{
Name: il.Name,
Version: il.Version,
}
}

View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/resource"
resourcepb "go.opentelemetry.io/proto/otlp/resource/v1"
)
// Resource transforms a Resource into an OTLP Resource.
func Resource(r *resource.Resource) *resourcepb.Resource {
if r == nil {
return nil
}
return &resourcepb.Resource{Attributes: ResourceAttributes(r)}
}

View File

@ -0,0 +1,205 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/sdk/instrumentation"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/trace"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Spans transforms a slice of OpenTelemetry spans into a slice of OTLP
// ResourceSpans.
func Spans(sdl []tracesdk.ReadOnlySpan) []*tracepb.ResourceSpans {
if len(sdl) == 0 {
return nil
}
rsm := make(map[attribute.Distinct]*tracepb.ResourceSpans)
type key struct {
r attribute.Distinct
is instrumentation.Scope
}
ssm := make(map[key]*tracepb.ScopeSpans)
var resources int
for _, sd := range sdl {
if sd == nil {
continue
}
rKey := sd.Resource().Equivalent()
k := key{
r: rKey,
is: sd.InstrumentationScope(),
}
scopeSpan, iOk := ssm[k]
if !iOk {
// Either the resource or the instrumentation scope was unknown.
scopeSpan = &tracepb.ScopeSpans{
Scope: InstrumentationScope(sd.InstrumentationScope()),
Spans: []*tracepb.Span{},
SchemaUrl: sd.InstrumentationScope().SchemaURL,
}
}
scopeSpan.Spans = append(scopeSpan.Spans, span(sd))
ssm[k] = scopeSpan
rs, rOk := rsm[rKey]
if !rOk {
resources++
// The resource was unknown.
rs = &tracepb.ResourceSpans{
Resource: Resource(sd.Resource()),
ScopeSpans: []*tracepb.ScopeSpans{scopeSpan},
SchemaUrl: sd.Resource().SchemaURL(),
}
rsm[rKey] = rs
continue
}
// The resource has been seen before. Check if the instrumentation
// scope lookup was unknown because, if so, we need to add the new
// ScopeSpans to the ResourceSpans. Otherwise, the instrumentation scope
// has already been seen and the append above has already included the
// span in the existing ScopeSpans reference.
if !iOk {
rs.ScopeSpans = append(rs.ScopeSpans, scopeSpan)
}
}
// Transform the categorized map into a slice
rss := make([]*tracepb.ResourceSpans, 0, resources)
for _, rs := range rsm {
rss = append(rss, rs)
}
return rss
}
// span transforms a Span into an OTLP span.
func span(sd tracesdk.ReadOnlySpan) *tracepb.Span {
if sd == nil {
return nil
}
tid := sd.SpanContext().TraceID()
sid := sd.SpanContext().SpanID()
s := &tracepb.Span{
TraceId: tid[:],
SpanId: sid[:],
TraceState: sd.SpanContext().TraceState().String(),
Status: status(sd.Status().Code, sd.Status().Description),
StartTimeUnixNano: uint64(sd.StartTime().UnixNano()),
EndTimeUnixNano: uint64(sd.EndTime().UnixNano()),
Links: links(sd.Links()),
Kind: spanKind(sd.SpanKind()),
Name: sd.Name(),
Attributes: KeyValues(sd.Attributes()),
Events: spanEvents(sd.Events()),
DroppedAttributesCount: uint32(sd.DroppedAttributes()),
DroppedEventsCount: uint32(sd.DroppedEvents()),
DroppedLinksCount: uint32(sd.DroppedLinks()),
}
if psid := sd.Parent().SpanID(); psid.IsValid() {
s.ParentSpanId = psid[:]
}
return s
}
// status transforms a span code and message into an OTLP span status.
func status(status codes.Code, message string) *tracepb.Status {
var c tracepb.Status_StatusCode
switch status {
case codes.Ok:
c = tracepb.Status_STATUS_CODE_OK
case codes.Error:
c = tracepb.Status_STATUS_CODE_ERROR
default:
c = tracepb.Status_STATUS_CODE_UNSET
}
return &tracepb.Status{
Code: c,
Message: message,
}
}
// links transforms span Links to OTLP span links.
func links(links []tracesdk.Link) []*tracepb.Span_Link {
if len(links) == 0 {
return nil
}
sl := make([]*tracepb.Span_Link, 0, len(links))
for _, otLink := range links {
// This redefinition is necessary to prevent otLink.*ID[:] copies
// being reused -- in short we need a new otLink per iteration.
otLink := otLink
tid := otLink.SpanContext.TraceID()
sid := otLink.SpanContext.SpanID()
sl = append(sl, &tracepb.Span_Link{
TraceId: tid[:],
SpanId: sid[:],
Attributes: KeyValues(otLink.Attributes),
DroppedAttributesCount: uint32(otLink.DroppedAttributeCount),
})
}
return sl
}
// spanEvents transforms span Events to OTLP span events.
func spanEvents(es []tracesdk.Event) []*tracepb.Span_Event {
if len(es) == 0 {
return nil
}
events := make([]*tracepb.Span_Event, len(es))
// Transform message events
for i := 0; i < len(es); i++ {
events[i] = &tracepb.Span_Event{
Name: es[i].Name,
TimeUnixNano: uint64(es[i].Time.UnixNano()),
Attributes: KeyValues(es[i].Attributes),
DroppedAttributesCount: uint32(es[i].DroppedAttributeCount),
}
}
return events
}
// spanKind transforms a SpanKind to an OTLP span kind.
func spanKind(kind trace.SpanKind) tracepb.Span_SpanKind {
switch kind {
case trace.SpanKindInternal:
return tracepb.Span_SPAN_KIND_INTERNAL
case trace.SpanKindClient:
return tracepb.Span_SPAN_KIND_CLIENT
case trace.SpanKindServer:
return tracepb.Span_SPAN_KIND_SERVER
case trace.SpanKindProducer:
return tracepb.Span_SPAN_KIND_PRODUCER
case trace.SpanKindConsumer:
return tracepb.Span_SPAN_KIND_CONSUMER
default:
return tracepb.Span_SPAN_KIND_UNSPECIFIED
}
}

View File

@ -0,0 +1,20 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
// Version is the current release version of the OpenTelemetry OTLP trace exporter in use.
func Version() string {
return "1.19.0"
}

201
vendor/go.opentelemetry.io/otel/sdk/LICENSE generated vendored Normal file
View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,24 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package instrumentation provides types to represent the code libraries that
// provide OpenTelemetry instrumentation. These types are used in the
// OpenTelemetry signal pipelines to identify the source of telemetry.
//
// See
// https://github.com/open-telemetry/oteps/blob/d226b677d73a785523fe9b9701be13225ebc528d/text/0083-component.md
// and
// https://github.com/open-telemetry/oteps/blob/d226b677d73a785523fe9b9701be13225ebc528d/text/0201-scope-attributes.md
// for more information.
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"

View File

@ -0,0 +1,19 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"
// Library represents the instrumentation library.
// Deprecated: please use Scope instead.
type Library = Scope

View File

@ -0,0 +1,26 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package instrumentation // import "go.opentelemetry.io/otel/sdk/instrumentation"
// Scope represents the instrumentation scope.
type Scope struct {
// Name is the name of the instrumentation scope. This should be the
// Go package name of that scope.
Name string
// Version is the version of the instrumentation scope.
Version string
// SchemaURL of the telemetry emitted by the scope.
SchemaURL string
}

177
vendor/go.opentelemetry.io/otel/sdk/internal/env/env.go generated vendored Normal file
View File

@ -0,0 +1,177 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package env // import "go.opentelemetry.io/otel/sdk/internal/env"
import (
"os"
"strconv"
"go.opentelemetry.io/otel/internal/global"
)
// Environment variable names.
const (
// BatchSpanProcessorScheduleDelayKey is the delay interval between two
// consecutive exports (i.e. 5000).
BatchSpanProcessorScheduleDelayKey = "OTEL_BSP_SCHEDULE_DELAY"
// BatchSpanProcessorExportTimeoutKey is the maximum allowed time to
// export data (i.e. 3000).
BatchSpanProcessorExportTimeoutKey = "OTEL_BSP_EXPORT_TIMEOUT"
// BatchSpanProcessorMaxQueueSizeKey is the maximum queue size (i.e. 2048).
BatchSpanProcessorMaxQueueSizeKey = "OTEL_BSP_MAX_QUEUE_SIZE"
// BatchSpanProcessorMaxExportBatchSizeKey is the maximum batch size (i.e.
// 512). Note: it must be less than or equal to
// EnvBatchSpanProcessorMaxQueueSize.
BatchSpanProcessorMaxExportBatchSizeKey = "OTEL_BSP_MAX_EXPORT_BATCH_SIZE"
// AttributeValueLengthKey is the maximum allowed attribute value size.
AttributeValueLengthKey = "OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT"
// AttributeCountKey is the maximum allowed span attribute count.
AttributeCountKey = "OTEL_ATTRIBUTE_COUNT_LIMIT"
// SpanAttributeValueLengthKey is the maximum allowed attribute value size
// for a span.
SpanAttributeValueLengthKey = "OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT"
// SpanAttributeCountKey is the maximum allowed span attribute count for a
// span.
SpanAttributeCountKey = "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT"
// SpanEventCountKey is the maximum allowed span event count.
SpanEventCountKey = "OTEL_SPAN_EVENT_COUNT_LIMIT"
// SpanEventAttributeCountKey is the maximum allowed attribute per span
// event count.
SpanEventAttributeCountKey = "OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT"
// SpanLinkCountKey is the maximum allowed span link count.
SpanLinkCountKey = "OTEL_SPAN_LINK_COUNT_LIMIT"
// SpanLinkAttributeCountKey is the maximum allowed attribute per span
// link count.
SpanLinkAttributeCountKey = "OTEL_LINK_ATTRIBUTE_COUNT_LIMIT"
)
// firstInt returns the value of the first matching environment variable from
// keys. If the value is not an integer or no match is found, defaultValue is
// returned.
func firstInt(defaultValue int, keys ...string) int {
for _, key := range keys {
value := os.Getenv(key)
if value == "" {
continue
}
intValue, err := strconv.Atoi(value)
if err != nil {
global.Info("Got invalid value, number value expected.", key, value)
return defaultValue
}
return intValue
}
return defaultValue
}
// IntEnvOr returns the int value of the environment variable with name key if
// it exists, is not empty, and parses as an int. Otherwise, defaultValue is returned.
func IntEnvOr(key string, defaultValue int) int {
value := os.Getenv(key)
if value == "" {
return defaultValue
}
intValue, err := strconv.Atoi(value)
if err != nil {
global.Info("Got invalid value, number value expected.", key, value)
return defaultValue
}
return intValue
}
// BatchSpanProcessorScheduleDelay returns the environment variable value for
// the OTEL_BSP_SCHEDULE_DELAY key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorScheduleDelay(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorScheduleDelayKey, defaultValue)
}
// BatchSpanProcessorExportTimeout returns the environment variable value for
// the OTEL_BSP_EXPORT_TIMEOUT key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorExportTimeout(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorExportTimeoutKey, defaultValue)
}
// BatchSpanProcessorMaxQueueSize returns the environment variable value for
// the OTEL_BSP_MAX_QUEUE_SIZE key if it exists, otherwise defaultValue is
// returned.
func BatchSpanProcessorMaxQueueSize(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorMaxQueueSizeKey, defaultValue)
}
// BatchSpanProcessorMaxExportBatchSize returns the environment variable value for
// the OTEL_BSP_MAX_EXPORT_BATCH_SIZE key if it exists, otherwise defaultValue
// is returned.
func BatchSpanProcessorMaxExportBatchSize(defaultValue int) int {
return IntEnvOr(BatchSpanProcessorMaxExportBatchSizeKey, defaultValue)
}
// SpanAttributeValueLength returns the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT is
// returned or defaultValue if that is not set.
func SpanAttributeValueLength(defaultValue int) int {
return firstInt(defaultValue, SpanAttributeValueLengthKey, AttributeValueLengthKey)
}
// SpanAttributeCount returns the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_COUNT_LIMIT is returned or
// defaultValue if that is not set.
func SpanAttributeCount(defaultValue int) int {
return firstInt(defaultValue, SpanAttributeCountKey, AttributeCountKey)
}
// SpanEventCount returns the environment variable value for the
// OTEL_SPAN_EVENT_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanEventCount(defaultValue int) int {
return IntEnvOr(SpanEventCountKey, defaultValue)
}
// SpanEventAttributeCount returns the environment variable value for the
// OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue
// is returned.
func SpanEventAttributeCount(defaultValue int) int {
return IntEnvOr(SpanEventAttributeCountKey, defaultValue)
}
// SpanLinkCount returns the environment variable value for the
// OTEL_SPAN_LINK_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanLinkCount(defaultValue int) int {
return IntEnvOr(SpanLinkCountKey, defaultValue)
}
// SpanLinkAttributeCount returns the environment variable value for the
// OTEL_LINK_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue is
// returned.
func SpanLinkAttributeCount(defaultValue int) int {
return IntEnvOr(SpanLinkAttributeCountKey, defaultValue)
}
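Illustrative in-package test sketch (assuming the variable is not already set in the test environment) showing how a set environment variable overrides the supplied default:

```go
package env

import "testing"

// TestBatchSpanProcessorMaxQueueSize shows the fallback-then-override
// behavior of IntEnvOr via one of the wrapper functions.
func TestBatchSpanProcessorMaxQueueSize(t *testing.T) {
	// Variable unset: the default is returned.
	if got := BatchSpanProcessorMaxQueueSize(2048); got != 2048 {
		t.Fatalf("unset: got %d, want default 2048", got)
	}
	// Variable set to a valid integer: it wins over the default.
	t.Setenv(BatchSpanProcessorMaxQueueSizeKey, "1024")
	if got := BatchSpanProcessorMaxQueueSize(2048); got != 1024 {
		t.Fatalf("set: got %d, want 1024", got)
	}
}
```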

29
vendor/go.opentelemetry.io/otel/sdk/internal/gen.go generated vendored Normal file
View File

@ -0,0 +1,29 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package internal // import "go.opentelemetry.io/otel/sdk/internal"
//go:generate gotmpl --body=../../internal/shared/matchers/expectation.go.tmpl "--data={}" --out=matchers/expectation.go
//go:generate gotmpl --body=../../internal/shared/matchers/expecter.go.tmpl "--data={}" --out=matchers/expecter.go
//go:generate gotmpl --body=../../internal/shared/matchers/temporal_matcher.go.tmpl "--data={}" --out=matchers/temporal_matcher.go
//go:generate gotmpl --body=../../internal/shared/internaltest/alignment.go.tmpl "--data={}" --out=internaltest/alignment.go
//go:generate gotmpl --body=../../internal/shared/internaltest/env.go.tmpl "--data={}" --out=internaltest/env.go
//go:generate gotmpl --body=../../internal/shared/internaltest/env_test.go.tmpl "--data={}" --out=internaltest/env_test.go
//go:generate gotmpl --body=../../internal/shared/internaltest/errors.go.tmpl "--data={}" --out=internaltest/errors.go
//go:generate gotmpl --body=../../internal/shared/internaltest/harness.go.tmpl "--data={\"matchersImportPath\": \"go.opentelemetry.io/otel/sdk/internal/matchers\"}" --out=internaltest/harness.go
//go:generate gotmpl --body=../../internal/shared/internaltest/text_map_carrier.go.tmpl "--data={}" --out=internaltest/text_map_carrier.go
//go:generate gotmpl --body=../../internal/shared/internaltest/text_map_carrier_test.go.tmpl "--data={}" --out=internaltest/text_map_carrier_test.go
//go:generate gotmpl --body=../../internal/shared/internaltest/text_map_propagator.go.tmpl "--data={}" --out=internaltest/text_map_propagator.go
//go:generate gotmpl --body=../../internal/shared/internaltest/text_map_propagator_test.go.tmpl "--data={}" --out=internaltest/text_map_propagator_test.go

View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package internal // import "go.opentelemetry.io/otel/sdk/internal"
import "time"
// MonotonicEndTime returns the end time at present
// but offset from start, monotonically.
//
// The monotonic clock is used in subtractions, hence
// the duration since start, added back to start, gives
// end as a monotonic time.
// See https://golang.org/pkg/time/#hdr-Monotonic_Clocks
func MonotonicEndTime(start time.Time) time.Time {
return start.Add(time.Since(start))
}
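A small in-package sketch of the intended call pattern; `timedWork` is a hypothetical helper, not part of the vendored code:

```go
package internal

import "time"

// timedWork captures a start time, runs work, and derives an end time that
// still carries the monotonic clock reading, so end.Sub(start) is immune to
// wall-clock adjustments.
func timedWork(work func()) (start, end time.Time) {
	start = time.Now()
	work()
	end = MonotonicEndTime(start)
	return start, end
}
```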

110
vendor/go.opentelemetry.io/otel/sdk/resource/auto.go generated vendored Normal file
View File

@ -0,0 +1,110 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"fmt"
"strings"
)
var (
// ErrPartialResource is returned by a detector when complete source
// information for a Resource is unavailable or the source information
// contains invalid values that are omitted from the returned Resource.
ErrPartialResource = errors.New("partial resource")
)
// Detector detects OpenTelemetry resource information.
type Detector interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Detect returns an initialized Resource based on gathered information.
// If the source information to construct a Resource contains invalid
// values, a Resource is returned with the valid parts of the source
// information used for initialization along with an appropriately
// wrapped ErrPartialResource error.
Detect(ctx context.Context) (*Resource, error)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// Detect calls all input detectors sequentially and merges each result with the previous one.
// It returns the merged error too.
func Detect(ctx context.Context, detectors ...Detector) (*Resource, error) {
r := new(Resource)
return r, detect(ctx, r, detectors)
}
// detect runs all detectors using ctx and merges the result into res. This
// assumes res is allocated and not nil; it will panic otherwise.
func detect(ctx context.Context, res *Resource, detectors []Detector) error {
var (
r *Resource
errs detectErrs
err error
)
for _, detector := range detectors {
if detector == nil {
continue
}
r, err = detector.Detect(ctx)
if err != nil {
errs = append(errs, err)
if !errors.Is(err, ErrPartialResource) {
continue
}
}
r, err = Merge(res, r)
if err != nil {
errs = append(errs, err)
}
*res = *r
}
if len(errs) == 0 {
return nil
}
return errs
}
type detectErrs []error
func (e detectErrs) Error() string {
errStr := make([]string, len(e))
for i, err := range e {
errStr[i] = fmt.Sprintf("* %s", err)
}
format := "%d errors occurred detecting resource:\n\t%s"
return fmt.Sprintf(format, len(e), strings.Join(errStr, "\n\t"))
}
func (e detectErrs) Unwrap() error {
switch len(e) {
case 0:
return nil
case 1:
return e[0]
}
return e[1:]
}
func (e detectErrs) Is(target error) bool {
return len(e) != 0 && errors.Is(e[0], target)
}
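A hedged sketch of using `Detect` with a custom Detector; the `deploy.color` attribute key and the `colorDetector` type are made up for illustration:

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
)

// colorDetector is a hypothetical Detector that reports a fixed attribute.
type colorDetector struct{}

func (colorDetector) Detect(ctx context.Context) (*resource.Resource, error) {
	return resource.NewSchemaless(attribute.String("deploy.color", "blue")), nil
}

func main() {
	// Detect runs each detector in order and merges the results.
	res, err := resource.Detect(context.Background(), colorDetector{})
	if err != nil {
		fmt.Println("resource detection:", err) // may wrap ErrPartialResource
	}
	fmt.Println(res)
}
```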

108
vendor/go.opentelemetry.io/otel/sdk/resource/builtin.go generated vendored Normal file
View File

@ -0,0 +1,108 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"os"
"path/filepath"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
type (
// telemetrySDK is a Detector that provides information about
// the OpenTelemetry SDK used. This Detector is included as a
// builtin. If these resource attributes are not wanted, use
// the WithTelemetrySDK(nil) or WithoutBuiltin() options to
// explicitly disable them.
telemetrySDK struct{}
// host is a Detector that provides information about the host
// being run on. This Detector is included as a builtin. If
// these resource attributes are not wanted, use the
// WithHost(nil) or WithoutBuiltin() options to explicitly
// disable them.
host struct{}
stringDetector struct {
schemaURL string
K attribute.Key
F func() (string, error)
}
defaultServiceNameDetector struct{}
)
var (
_ Detector = telemetrySDK{}
_ Detector = host{}
_ Detector = stringDetector{}
_ Detector = defaultServiceNameDetector{}
)
// Detect returns a *Resource that describes the OpenTelemetry SDK used.
func (telemetrySDK) Detect(context.Context) (*Resource, error) {
return NewWithAttributes(
semconv.SchemaURL,
semconv.TelemetrySDKName("opentelemetry"),
semconv.TelemetrySDKLanguageGo,
semconv.TelemetrySDKVersion(sdk.Version()),
), nil
}
// Detect returns a *Resource that describes the host being run on.
func (host) Detect(ctx context.Context) (*Resource, error) {
return StringDetector(semconv.SchemaURL, semconv.HostNameKey, os.Hostname).Detect(ctx)
}
// StringDetector returns a Detector that will produce a *Resource
// containing the string as a value corresponding to k. The resulting Resource
// will have the specified schemaURL.
func StringDetector(schemaURL string, k attribute.Key, f func() (string, error)) Detector {
return stringDetector{schemaURL: schemaURL, K: k, F: f}
}
// Detect returns a *Resource that describes the string as a value
// corresponding to attribute.Key as well as the specific schemaURL.
func (sd stringDetector) Detect(ctx context.Context) (*Resource, error) {
value, err := sd.F()
if err != nil {
return nil, fmt.Errorf("%s: %w", string(sd.K), err)
}
a := sd.K.String(value)
if !a.Valid() {
return nil, fmt.Errorf("invalid attribute: %q -> %q", a.Key, a.Value.Emit())
}
return NewWithAttributes(sd.schemaURL, sd.K.String(value)), nil
}
// Detect implements Detector.
func (defaultServiceNameDetector) Detect(ctx context.Context) (*Resource, error) {
return StringDetector(
semconv.SchemaURL,
semconv.ServiceNameKey,
func() (string, error) {
executable, err := os.Executable()
if err != nil {
return "unknown_service:go", nil
}
return "unknown_service:" + filepath.Base(executable), nil
},
).Detect(ctx)
}

206
vendor/go.opentelemetry.io/otel/sdk/resource/config.go generated vendored Normal file
View File

@ -0,0 +1,206 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"go.opentelemetry.io/otel/attribute"
)
// config contains configuration for Resource creation.
type config struct {
// detectors that will be evaluated.
detectors []Detector
// SchemaURL to associate with the Resource.
schemaURL string
}
// Option is the interface that applies a configuration option.
type Option interface {
// apply sets the Option value of a config.
apply(config) config
}
// WithAttributes adds attributes to the configured Resource.
func WithAttributes(attributes ...attribute.KeyValue) Option {
return WithDetectors(detectAttributes{attributes})
}
type detectAttributes struct {
attributes []attribute.KeyValue
}
func (d detectAttributes) Detect(context.Context) (*Resource, error) {
return NewSchemaless(d.attributes...), nil
}
// WithDetectors adds detectors to be evaluated for the configured resource.
func WithDetectors(detectors ...Detector) Option {
return detectorsOption{detectors: detectors}
}
type detectorsOption struct {
detectors []Detector
}
func (o detectorsOption) apply(cfg config) config {
cfg.detectors = append(cfg.detectors, o.detectors...)
return cfg
}
// WithFromEnv adds attributes from environment variables to the configured resource.
func WithFromEnv() Option {
return WithDetectors(fromEnv{})
}
// WithHost adds attributes from the host to the configured resource.
func WithHost() Option {
return WithDetectors(host{})
}
// WithHostID adds host ID information to the configured resource.
func WithHostID() Option {
return WithDetectors(hostIDDetector{})
}
// WithTelemetrySDK adds TelemetrySDK version info to the configured resource.
func WithTelemetrySDK() Option {
return WithDetectors(telemetrySDK{})
}
// WithSchemaURL sets the schema URL for the configured resource.
func WithSchemaURL(schemaURL string) Option {
return schemaURLOption(schemaURL)
}
type schemaURLOption string
func (o schemaURLOption) apply(cfg config) config {
cfg.schemaURL = string(o)
return cfg
}
// WithOS adds all the OS attributes to the configured Resource.
// See individual WithOS* functions to configure specific attributes.
func WithOS() Option {
return WithDetectors(
osTypeDetector{},
osDescriptionDetector{},
)
}
// WithOSType adds an attribute with the operating system type to the configured Resource.
func WithOSType() Option {
return WithDetectors(osTypeDetector{})
}
// WithOSDescription adds an attribute with the operating system description to the
// configured Resource. The formatted string is equivalent to the output of the
// `uname -snrvm` command.
func WithOSDescription() Option {
return WithDetectors(osDescriptionDetector{})
}
// WithProcess adds all the Process attributes to the configured Resource.
//
// Warning! This option will include process command line arguments. If these
// contain sensitive information, it will be included in the exported resource.
//
// This option is equivalent to calling WithProcessPID,
// WithProcessExecutableName, WithProcessExecutablePath,
// WithProcessCommandArgs, WithProcessOwner, WithProcessRuntimeName,
// WithProcessRuntimeVersion, and WithProcessRuntimeDescription. See each
// option function for information about what resource attributes each
// includes.
func WithProcess() Option {
return WithDetectors(
processPIDDetector{},
processExecutableNameDetector{},
processExecutablePathDetector{},
processCommandArgsDetector{},
processOwnerDetector{},
processRuntimeNameDetector{},
processRuntimeVersionDetector{},
processRuntimeDescriptionDetector{},
)
}
// WithProcessPID adds an attribute with the process identifier (PID) to the
// configured Resource.
func WithProcessPID() Option {
return WithDetectors(processPIDDetector{})
}
// WithProcessExecutableName adds an attribute with the name of the process
// executable to the configured Resource.
func WithProcessExecutableName() Option {
return WithDetectors(processExecutableNameDetector{})
}
// WithProcessExecutablePath adds an attribute with the full path to the process
// executable to the configured Resource.
func WithProcessExecutablePath() Option {
return WithDetectors(processExecutablePathDetector{})
}
// WithProcessCommandArgs adds an attribute with all the command arguments (including
// the command/executable itself) as received by the process to the configured
// Resource.
//
// Warning! This option will include process command line arguments. If these
// contain sensitive information, it will be included in the exported resource.
func WithProcessCommandArgs() Option {
return WithDetectors(processCommandArgsDetector{})
}
// WithProcessOwner adds an attribute with the username of the user that owns the process
// to the configured Resource.
func WithProcessOwner() Option {
return WithDetectors(processOwnerDetector{})
}
// WithProcessRuntimeName adds an attribute with the name of the runtime of this
// process to the configured Resource.
func WithProcessRuntimeName() Option {
return WithDetectors(processRuntimeNameDetector{})
}
// WithProcessRuntimeVersion adds an attribute with the version of the runtime of
// this process to the configured Resource.
func WithProcessRuntimeVersion() Option {
return WithDetectors(processRuntimeVersionDetector{})
}
// WithProcessRuntimeDescription adds an attribute with an additional description
// about the runtime of the process to the configured Resource.
func WithProcessRuntimeDescription() Option {
return WithDetectors(processRuntimeDescriptionDetector{})
}
// WithContainer adds all the Container attributes to the configured Resource.
// See individual WithContainer* functions to configure specific attributes.
func WithContainer() Option {
return WithDetectors(
cgroupContainerIDDetector{},
)
}
// WithContainerID adds an attribute with the id of the container to the configured Resource.
// Note: WithContainerID will not extract the correct container ID in an ECS environment.
// Please use the ECS resource detector instead (https://pkg.go.dev/go.opentelemetry.io/contrib/detectors/aws/ecs).
func WithContainerID() Option {
return WithDetectors(cgroupContainerIDDetector{})
}
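A sketch of combining several of these options; it assumes `resource.New` (defined elsewhere in this package) and the `semconv` v1.21.0 helpers:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/sdk/resource"
	semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)

func main() {
	res, err := resource.New(context.Background(),
		resource.WithFromEnv(),      // OTEL_RESOURCE_ATTRIBUTES / OTEL_SERVICE_NAME
		resource.WithTelemetrySDK(), // telemetry.sdk.* attributes
		resource.WithHost(),         // host.name
		resource.WithAttributes(semconv.ServiceName("checkout")), // fixed attribute
	)
	if err != nil {
		log.Println("resource detection:", err) // may wrap ErrPartialResource
	}
	_ = res
}
```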

View File

@ -0,0 +1,100 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"bufio"
"context"
"errors"
"io"
"os"
"regexp"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
type containerIDProvider func() (string, error)
var (
containerID containerIDProvider = getContainerIDFromCGroup
cgroupContainerIDRe = regexp.MustCompile(`^.*/(?:.*-)?([0-9a-f]+)(?:\.|\s*$)`)
)
type cgroupContainerIDDetector struct{}
const cgroupPath = "/proc/self/cgroup"
// Detect returns a *Resource that describes the id of the container.
// If no container id is found, an empty resource will be returned.
func (cgroupContainerIDDetector) Detect(ctx context.Context) (*Resource, error) {
containerID, err := containerID()
if err != nil {
return nil, err
}
if containerID == "" {
return Empty(), nil
}
return NewWithAttributes(semconv.SchemaURL, semconv.ContainerID(containerID)), nil
}
var (
defaultOSStat = os.Stat
osStat = defaultOSStat
defaultOSOpen = func(name string) (io.ReadCloser, error) {
return os.Open(name)
}
osOpen = defaultOSOpen
)
// getContainerIDFromCGroup returns the id of the container from the cgroup file.
// If no container id is found, an empty string will be returned.
func getContainerIDFromCGroup() (string, error) {
if _, err := osStat(cgroupPath); errors.Is(err, os.ErrNotExist) {
// File does not exist, skip
return "", nil
}
file, err := osOpen(cgroupPath)
if err != nil {
return "", err
}
defer file.Close()
return getContainerIDFromReader(file), nil
}
// getContainerIDFromReader returns the id of the container from reader.
func getContainerIDFromReader(reader io.Reader) string {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
line := scanner.Text()
if id := getContainerIDFromLine(line); id != "" {
return id
}
}
return ""
}
// getContainerIDFromLine returns the id of the container from one string line.
func getContainerIDFromLine(line string) string {
matches := cgroupContainerIDRe.FindStringSubmatch(line)
if len(matches) <= 1 {
return ""
}
return matches[1]
}
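An illustrative in-package test with a hypothetical cgroup line, showing what the regexp extracts:

```go
package resource

import "testing"

// TestGetContainerIDFromLine feeds a made-up /proc/self/cgroup line through
// the extraction regexp and checks the captured hex id.
func TestGetContainerIDFromLine(t *testing.T) {
	line := "13:cpuset:/docker/ac0f3aeb2d1a8c7d" // hypothetical example line
	if got := getContainerIDFromLine(line); got != "ac0f3aeb2d1a8c7d" {
		t.Fatalf("got %q, want %q", got, "ac0f3aeb2d1a8c7d")
	}
}
```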

31
vendor/go.opentelemetry.io/otel/sdk/resource/doc.go generated vendored Normal file
View File

@ -0,0 +1,31 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package resource provides detecting and representing resources.
//
// The fundamental struct is a Resource which holds identifying information
// about the entities for which telemetry is exported.
//
// To automatically construct Resources from an environment a Detector
// interface is defined. Implementations of this interface can be passed to
// the Detect function to generate a Resource from the merged information.
//
// To load a user-defined Resource from the environment variable
// OTEL_RESOURCE_ATTRIBUTES, the FromEnv Detector can be used. It will interpret
// the value as a list of comma-delimited key/value pairs
// (e.g. `<key1>=<value1>,<key2>=<value2>,...`).
//
// While this package provides a stable API,
// the attributes added by resource detectors may change.
package resource // import "go.opentelemetry.io/otel/sdk/resource"

108
vendor/go.opentelemetry.io/otel/sdk/resource/env.go generated vendored Normal file
View File

@ -0,0 +1,108 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"net/url"
"os"
"strings"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
const (
// resourceAttrKey is the environment variable name OpenTelemetry Resource information will be read from.
resourceAttrKey = "OTEL_RESOURCE_ATTRIBUTES"
// svcNameKey is the environment variable name that Service Name information will be read from.
svcNameKey = "OTEL_SERVICE_NAME"
)
var (
// errMissingValue is returned when a resource value is missing.
errMissingValue = fmt.Errorf("%w: missing value", ErrPartialResource)
)
// fromEnv is a Detector that collects resources from the environment. This
// Detector is included as a builtin.
type fromEnv struct{}
// compile-time assertion that fromEnv implements the Detector interface.
var _ Detector = fromEnv{}
// Detect collects resources from environment.
func (fromEnv) Detect(context.Context) (*Resource, error) {
attrs := strings.TrimSpace(os.Getenv(resourceAttrKey))
svcName := strings.TrimSpace(os.Getenv(svcNameKey))
if attrs == "" && svcName == "" {
return Empty(), nil
}
var res *Resource
if svcName != "" {
res = NewSchemaless(semconv.ServiceName(svcName))
}
r2, err := constructOTResources(attrs)
// Ensure that the resource with the service name from OTEL_SERVICE_NAME
// takes precedence, if it was defined.
res, err2 := Merge(r2, res)
if err == nil {
err = err2
} else if err2 != nil {
err = fmt.Errorf("detecting resources: %s", []string{err.Error(), err2.Error()})
}
return res, err
}
func constructOTResources(s string) (*Resource, error) {
if s == "" {
return Empty(), nil
}
pairs := strings.Split(s, ",")
var attrs []attribute.KeyValue
var invalid []string
for _, p := range pairs {
k, v, found := strings.Cut(p, "=")
if !found {
invalid = append(invalid, p)
continue
}
key := strings.TrimSpace(k)
val, err := url.QueryUnescape(strings.TrimSpace(v))
if err != nil {
// Retain original value if decoding fails, otherwise it will be
// an empty string.
val = v
otel.Handle(err)
}
attrs = append(attrs, attribute.String(key, val))
}
var err error
if len(invalid) > 0 {
err = fmt.Errorf("%w: %v", errMissingValue, invalid)
}
return NewSchemaless(attrs...), err
}
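For illustration, a hedged sketch of how the two environment variables interact, using the Environment helper defined later in resource.go; the variable values are made up:

package main

import (
	"fmt"
	"os"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// Values are comma-delimited key=value pairs; values may be URL-encoded.
	os.Setenv("OTEL_RESOURCE_ATTRIBUTES", "deployment.environment=prod,host.name=web%2D1")
	// OTEL_SERVICE_NAME takes precedence over any service.name pair above.
	os.Setenv("OTEL_SERVICE_NAME", "checkout")

	res := resource.Environment()
	// Prints roughly: deployment.environment=prod,host.name=web-1,service.name=checkout
	fmt.Println(res)
}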

120
vendor/go.opentelemetry.io/otel/sdk/resource/host_id.go generated vendored Normal file
View File

@ -0,0 +1,120 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"strings"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
type hostIDProvider func() (string, error)
var defaultHostIDProvider hostIDProvider = platformHostIDReader.read
var hostID = defaultHostIDProvider
type hostIDReader interface {
read() (string, error)
}
type fileReader func(string) (string, error)
type commandExecutor func(string, ...string) (string, error)
// hostIDReaderBSD implements hostIDReader.
type hostIDReaderBSD struct {
execCommand commandExecutor
readFile fileReader
}
// read attempts to read the host id from /etc/hostid. If not found, it
// executes `kenv -q smbios.system.uuid`. If neither location yields an id,
// an error is returned.
func (r *hostIDReaderBSD) read() (string, error) {
if result, err := r.readFile("/etc/hostid"); err == nil {
return strings.TrimSpace(result), nil
}
if result, err := r.execCommand("kenv", "-q", "smbios.system.uuid"); err == nil {
return strings.TrimSpace(result), nil
}
return "", errors.New("host id not found in: /etc/hostid or kenv")
}
// hostIDReaderDarwin implements hostIDReader.
type hostIDReaderDarwin struct {
execCommand commandExecutor
}
// read executes `ioreg -rd1 -c "IOPlatformExpertDevice"` and parses the host
// id from the IOPlatformUUID line. If the command fails or the UUID cannot be
// parsed, an error is returned.
func (r *hostIDReaderDarwin) read() (string, error) {
result, err := r.execCommand("ioreg", "-rd1", "-c", "IOPlatformExpertDevice")
if err != nil {
return "", err
}
lines := strings.Split(result, "\n")
for _, line := range lines {
if strings.Contains(line, "IOPlatformUUID") {
parts := strings.Split(line, " = ")
if len(parts) == 2 {
return strings.Trim(parts[1], "\""), nil
}
break
}
}
return "", errors.New("could not parse IOPlatformUUID")
}
type hostIDReaderLinux struct {
readFile fileReader
}
// read attempts to read the machine-id from /etc/machine-id followed by
// /var/lib/dbus/machine-id. If neither location yields an ID an error will
// be returned.
func (r *hostIDReaderLinux) read() (string, error) {
if result, err := r.readFile("/etc/machine-id"); err == nil {
return strings.TrimSpace(result), nil
}
if result, err := r.readFile("/var/lib/dbus/machine-id"); err == nil {
return strings.TrimSpace(result), nil
}
return "", errors.New("host id not found in: /etc/machine-id or /var/lib/dbus/machine-id")
}
type hostIDDetector struct{}
// Detect returns a *Resource containing the platform specific host id.
func (hostIDDetector) Detect(ctx context.Context) (*Resource, error) {
hostID, err := hostID()
if err != nil {
return nil, err
}
return NewWithAttributes(
semconv.SchemaURL,
semconv.HostID(hostID),
), nil
}
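The readers take their file and command accessors as fields so they can be stubbed in tests. A standalone sketch of that injection pattern, mirroring hostIDReaderLinux with a hypothetical stub:

package main

import (
	"errors"
	"fmt"
	"strings"
)

type fileReader func(string) (string, error)

type linuxIDReader struct {
	readFile fileReader
}

// read mirrors hostIDReaderLinux.read: try /etc/machine-id, then the dbus copy.
func (r linuxIDReader) read() (string, error) {
	if s, err := r.readFile("/etc/machine-id"); err == nil {
		return strings.TrimSpace(s), nil
	}
	if s, err := r.readFile("/var/lib/dbus/machine-id"); err == nil {
		return strings.TrimSpace(s), nil
	}
	return "", errors.New("host id not found")
}

func main() {
	// Stub the filesystem: only the dbus path "exists".
	stub := func(path string) (string, error) {
		if path == "/var/lib/dbus/machine-id" {
			return "stub-machine-id\n", nil
		}
		return "", errors.New("not found")
	}
	id, err := linuxIDReader{readFile: stub}.read()
	fmt.Println(id, err)
}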

View File

@ -0,0 +1,23 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build dragonfly || freebsd || netbsd || openbsd || solaris
// +build dragonfly freebsd netbsd openbsd solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderBSD{
execCommand: execCommand,
readFile: readFile,
}

View File

@ -0,0 +1,19 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderDarwin{
execCommand: execCommand,
}

View File

@ -0,0 +1,29 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build darwin || dragonfly || freebsd || netbsd || openbsd || solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import "os/exec"
func execCommand(name string, arg ...string) (string, error) {
cmd := exec.Command(name, arg...)
b, err := cmd.Output()
if err != nil {
return "", err
}
return string(b), nil
}

View File

@ -0,0 +1,22 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build linux
// +build linux
package resource // import "go.opentelemetry.io/otel/sdk/resource"
var platformHostIDReader hostIDReader = &hostIDReaderLinux{
readFile: readFile,
}

View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build linux || dragonfly || freebsd || netbsd || openbsd || solaris
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import "os"
func readFile(filename string) (string, error) {
b, err := os.ReadFile(filename)
if err != nil {
return "", err
}
return string(b), nil
}

View File

@ -0,0 +1,36 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !darwin
// +build !dragonfly
// +build !freebsd
// +build !linux
// +build !netbsd
// +build !openbsd
// +build !solaris
// +build !windows
package resource // import "go.opentelemetry.io/otel/sdk/resource"
// hostIDReaderUnsupported is a placeholder implementation for operating
// systems for which this project currently doesn't support host.id attribute
// detection. See the build tag declaration at the top of this file for the
// list of unsupported OSes.
type hostIDReaderUnsupported struct{}
func (*hostIDReaderUnsupported) read() (string, error) {
return "<unknown>", nil
}
var platformHostIDReader hostIDReader = &hostIDReaderUnsupported{}

View File

@ -0,0 +1,48 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build windows
// +build windows
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"golang.org/x/sys/windows/registry"
)
// implements hostIDReader
type hostIDReaderWindows struct{}
// read reads MachineGuid from the windows registry key:
// SOFTWARE\Microsoft\Cryptography
func (*hostIDReaderWindows) read() (string, error) {
k, err := registry.OpenKey(
registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Cryptography`,
registry.QUERY_VALUE|registry.WOW64_64KEY,
)
if err != nil {
return "", err
}
defer k.Close()
guid, _, err := k.GetStringValue("MachineGuid")
if err != nil {
return "", err
}
return guid, nil
}
var platformHostIDReader hostIDReader = &hostIDReaderWindows{}

99
vendor/go.opentelemetry.io/otel/sdk/resource/os.go generated vendored Normal file
View File

@ -0,0 +1,99 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
type osDescriptionProvider func() (string, error)
var defaultOSDescriptionProvider osDescriptionProvider = platformOSDescription
var osDescription = defaultOSDescriptionProvider
func setDefaultOSDescriptionProvider() {
setOSDescriptionProvider(defaultOSDescriptionProvider)
}
func setOSDescriptionProvider(osDescriptionProvider osDescriptionProvider) {
osDescription = osDescriptionProvider
}
type osTypeDetector struct{}
type osDescriptionDetector struct{}
// Detect returns a *Resource that describes the operating system type the
// service is running on.
func (osTypeDetector) Detect(ctx context.Context) (*Resource, error) {
osType := runtimeOS()
osTypeAttribute := mapRuntimeOSToSemconvOSType(osType)
return NewWithAttributes(
semconv.SchemaURL,
osTypeAttribute,
), nil
}
// Detect returns a *Resource that describes the operating system the
// service is running on.
func (osDescriptionDetector) Detect(ctx context.Context) (*Resource, error) {
description, err := osDescription()
if err != nil {
return nil, err
}
return NewWithAttributes(
semconv.SchemaURL,
semconv.OSDescription(description),
), nil
}
// mapRuntimeOSToSemconvOSType translates the OS name as provided by the Go runtime
// into an OS type attribute with the corresponding value defined by the semantic
// conventions. In case the provided OS name isn't mapped, it's transformed to lowercase
// and used as the value for the returned OS type attribute.
func mapRuntimeOSToSemconvOSType(osType string) attribute.KeyValue {
// the elements in this map are the intersection between
// available GOOS values and defined semconv OS types
osTypeAttributeMap := map[string]attribute.KeyValue{
"aix": semconv.OSTypeAIX,
"darwin": semconv.OSTypeDarwin,
"dragonfly": semconv.OSTypeDragonflyBSD,
"freebsd": semconv.OSTypeFreeBSD,
"linux": semconv.OSTypeLinux,
"netbsd": semconv.OSTypeNetBSD,
"openbsd": semconv.OSTypeOpenBSD,
"solaris": semconv.OSTypeSolaris,
"windows": semconv.OSTypeWindows,
"zos": semconv.OSTypeZOS,
}
var osTypeAttribute attribute.KeyValue
if attr, ok := osTypeAttributeMap[osType]; ok {
osTypeAttribute = attr
} else {
osTypeAttribute = semconv.OSTypeKey.String(strings.ToLower(osType))
}
return osTypeAttribute
}
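A short usage sketch for these detectors; it assumes the WithOSType and WithOSDescription options defined in this package's config.go (not shown in this hunk):

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// WithOSType / WithOSDescription wire up the detectors above.
	res, err := resource.New(context.Background(),
		resource.WithOSType(),
		resource.WithOSDescription(),
	)
	if err != nil {
		fmt.Println("detect error:", err)
	}
	fmt.Println(res) // e.g. os.description=...,os.type=linux
}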

View File

@ -0,0 +1,102 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"encoding/xml"
"fmt"
"io"
"os"
)
type plist struct {
XMLName xml.Name `xml:"plist"`
Dict dict `xml:"dict"`
}
type dict struct {
Key []string `xml:"key"`
String []string `xml:"string"`
}
// osRelease builds a string describing the operating system release based on the
// contents of the property list (.plist) system files. If no .plist files are found,
// or if the required properties to build the release description string are missing,
// an empty string is returned instead. The generated string resembles the output of
// the `sw_vers` commandline program, but in a single-line string. For more information
// about the `sw_vers` program, see: https://www.unix.com/man-page/osx/1/SW_VERS.
func osRelease() string {
file, err := getPlistFile()
if err != nil {
return ""
}
defer file.Close()
values, err := parsePlistFile(file)
if err != nil {
return ""
}
return buildOSRelease(values)
}
// getPlistFile returns a *os.File pointing to one of the well-known .plist files
// available on macOS. If no file can be opened, it returns an error.
func getPlistFile() (*os.File, error) {
return getFirstAvailableFile([]string{
"/System/Library/CoreServices/SystemVersion.plist",
"/System/Library/CoreServices/ServerVersion.plist",
})
}
// parsePlistFile processes the file pointed to by `file` as a .plist file and
// returns a map with the key-values for each pair of correlated <key> and
// <string> elements contained in it.
func parsePlistFile(file io.Reader) (map[string]string, error) {
var v plist
err := xml.NewDecoder(file).Decode(&v)
if err != nil {
return nil, err
}
if len(v.Dict.Key) != len(v.Dict.String) {
return nil, fmt.Errorf("the number of <key> and <string> elements doesn't match")
}
properties := make(map[string]string, len(v.Dict.Key))
for i, key := range v.Dict.Key {
properties[key] = v.Dict.String[i]
}
return properties, nil
}
// buildOSRelease builds a string describing the OS release based on the properties
// available on the provided map. It tries to find the `ProductName`, `ProductVersion`
// and `ProductBuildVersion` properties. If some of these properties are not found,
// it returns an empty string.
func buildOSRelease(properties map[string]string) string {
productName := properties["ProductName"]
productVersion := properties["ProductVersion"]
productBuildVersion := properties["ProductBuildVersion"]
if productName == "" || productVersion == "" || productBuildVersion == "" {
return ""
}
return fmt.Sprintf("%s %s (%s)", productName, productVersion, productBuildVersion)
}
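A self-contained sketch of the plist parsing above, run against a made-up SystemVersion.plist snippet (the version and build values are illustrative only):

package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

type plist struct {
	XMLName xml.Name `xml:"plist"`
	Dict    dict     `xml:"dict"`
}

type dict struct {
	Key    []string `xml:"key"`
	String []string `xml:"string"`
}

const sample = `<plist version="1.0"><dict>
  <key>ProductName</key><string>macOS</string>
  <key>ProductVersion</key><string>13.5</string>
  <key>ProductBuildVersion</key><string>22G74</string>
</dict></plist>`

func main() {
	var v plist
	if err := xml.NewDecoder(strings.NewReader(sample)).Decode(&v); err != nil {
		panic(err)
	}
	// Pair each <key> with the <string> at the same index, as parsePlistFile does.
	props := make(map[string]string, len(v.Dict.Key))
	for i, k := range v.Dict.Key {
		props[k] = v.Dict.String[i]
	}
	// Prints: macOS 13.5 (22G74)
	fmt.Printf("%s %s (%s)\n", props["ProductName"], props["ProductVersion"], props["ProductBuildVersion"])
}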

View File

@ -0,0 +1,154 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build aix || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos
// +build aix dragonfly freebsd linux netbsd openbsd solaris zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"bufio"
"fmt"
"io"
"os"
"strings"
)
// osRelease builds a string describing the operating system release based on the
// properties of the os-release file. If no os-release file is found, or if the
// required properties to build the release description string are missing, an empty
// string is returned instead. For more information about os-release files, see:
// https://www.freedesktop.org/software/systemd/man/os-release.html
func osRelease() string {
file, err := getOSReleaseFile()
if err != nil {
return ""
}
defer file.Close()
values := parseOSReleaseFile(file)
return buildOSRelease(values)
}
// getOSReleaseFile returns a *os.File pointing to one of the well-known os-release
// files, according to their order of preference. If no file can be opened, it
// returns an error.
func getOSReleaseFile() (*os.File, error) {
return getFirstAvailableFile([]string{"/etc/os-release", "/usr/lib/os-release"})
}
// parseOSReleaseFile processes the file pointed to by `file` as an os-release
// file and returns a map with the key-values contained in it. Empty lines and
// lines starting with a '#' character are ignored, as are lines missing the
// key=value separator. Values are unquoted and unescaped.
func parseOSReleaseFile(file io.Reader) map[string]string {
values := make(map[string]string)
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
if skip(line) {
continue
}
key, value, ok := parse(line)
if ok {
values[key] = value
}
}
return values
}
// skip returns true if the line is blank or starts with a '#' character, and
// therefore should be skipped from processing.
func skip(line string) bool {
line = strings.TrimSpace(line)
return len(line) == 0 || strings.HasPrefix(line, "#")
}
// parse attempts to split the provided line on the first '=' character, and then
// sanitize each side of the split before returning them as a key-value pair.
func parse(line string) (string, string, bool) {
k, v, found := strings.Cut(line, "=")
if !found || len(k) == 0 {
return "", "", false
}
key := strings.TrimSpace(k)
value := unescape(unquote(strings.TrimSpace(v)))
return key, value, true
}
// unquote checks whether the string `s` is quoted with double or single quotes
// and, if so, returns a version of the string without them. Otherwise it returns
// the provided string unchanged.
func unquote(s string) string {
if len(s) < 2 {
return s
}
if (s[0] == '"' || s[0] == '\'') && s[0] == s[len(s)-1] {
return s[1 : len(s)-1]
}
return s
}
// unescape removes the `\` prefix from some characters that are expected
// to have it added in front of them for escaping purposes.
func unescape(s string) string {
return strings.NewReplacer(
`\$`, `$`,
`\"`, `"`,
`\'`, `'`,
`\\`, `\`,
"\\`", "`",
).Replace(s)
}
// buildOSRelease builds a string describing the OS release based on the properties
// available on the provided map. It favors a combination of the `NAME` and `VERSION`
// properties as first option (falling back to `VERSION_ID` if `VERSION` isn't
// found), and using `PRETTY_NAME` alone if some of the previous are not present. If
// none of these properties are found, it returns an empty string.
//
// The rationale behind not using `PRETTY_NAME` as first choice was that, for some
// Linux distributions, it doesn't include the same detail that can be found on the
// individual `NAME` and `VERSION` properties, and combining `PRETTY_NAME` with
// other properties can produce "pretty" redundant strings in some cases.
func buildOSRelease(values map[string]string) string {
var osRelease string
name := values["NAME"]
version := values["VERSION"]
if version == "" {
version = values["VERSION_ID"]
}
if name != "" && version != "" {
osRelease = fmt.Sprintf("%s %s", name, version)
} else {
osRelease = values["PRETTY_NAME"]
}
return osRelease
}
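A condensed, self-contained sketch of the os-release parsing and release-string building above, run against a made-up /etc/os-release sample (quoting is handled in a simplified way compared to unquote/unescape):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const sample = `# /etc/os-release (sample)
NAME="Ubuntu"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
PRETTY_NAME="Ubuntu 22.04.3 LTS"
`

func main() {
	values := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Skip blank lines and comments, as skip() does.
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok || k == "" {
			continue
		}
		values[k] = strings.Trim(strings.TrimSpace(v), `"'`)
	}
	// Prefer NAME + VERSION (falling back to VERSION_ID), then PRETTY_NAME.
	name, version := values["NAME"], values["VERSION"]
	if version == "" {
		version = values["VERSION_ID"]
	}
	if name != "" && version != "" {
		fmt.Printf("%s %s\n", name, version) // Ubuntu 22.04.3 LTS (Jammy Jellyfish)
	} else {
		fmt.Println(values["PRETTY_NAME"])
	}
}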

View File

@ -0,0 +1,90 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build aix || darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos
// +build aix darwin dragonfly freebsd linux netbsd openbsd solaris zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
type unameProvider func(buf *unix.Utsname) (err error)
var defaultUnameProvider unameProvider = unix.Uname
var currentUnameProvider = defaultUnameProvider
func setDefaultUnameProvider() {
setUnameProvider(defaultUnameProvider)
}
func setUnameProvider(unameProvider unameProvider) {
currentUnameProvider = unameProvider
}
// platformOSDescription returns a human readable OS version information string.
// The final string combines OS release information (where available) and the
// result of the `uname` system call.
func platformOSDescription() (string, error) {
uname, err := uname()
if err != nil {
return "", err
}
osRelease := osRelease()
if osRelease != "" {
return fmt.Sprintf("%s (%s)", osRelease, uname), nil
}
return uname, nil
}
// uname issues a uname(2) system call (or the equivalent on systems that don't
// have one) and formats the output in a single string, similar to the output
// of the `uname` command-line program. The final string resembles the one
// obtained with a call to `uname -snrvm`.
func uname() (string, error) {
var utsName unix.Utsname
err := currentUnameProvider(&utsName)
if err != nil {
return "", err
}
return fmt.Sprintf("%s %s %s %s %s",
unix.ByteSliceToString(utsName.Sysname[:]),
unix.ByteSliceToString(utsName.Nodename[:]),
unix.ByteSliceToString(utsName.Release[:]),
unix.ByteSliceToString(utsName.Version[:]),
unix.ByteSliceToString(utsName.Machine[:]),
), nil
}
// getFirstAvailableFile returns an *os.File of the first available
// file from a list of candidate file paths.
func getFirstAvailableFile(candidates []string) (*os.File, error) {
for _, c := range candidates {
file, err := os.Open(c)
if err == nil {
return file, nil
}
}
return nil, fmt.Errorf("no candidate file available: %v", candidates)
}

View File

@ -0,0 +1,34 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !aix
// +build !darwin
// +build !dragonfly
// +build !freebsd
// +build !linux
// +build !netbsd
// +build !openbsd
// +build !solaris
// +build !windows
// +build !zos
package resource // import "go.opentelemetry.io/otel/sdk/resource"
// platformOSDescription is a placeholder implementation for OSes for which
// this project currently doesn't support os.description attribute detection.
// See the build tag declaration at the top of this file for the list of
// unsupported OSes.
func platformOSDescription() (string, error) {
return "<unknown>", nil
}

View File

@ -0,0 +1,101 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"fmt"
"strconv"
"golang.org/x/sys/windows/registry"
)
// platformOSDescription returns a human readable OS version information string.
// It does so by querying registry values under the
// `SOFTWARE\Microsoft\Windows NT\CurrentVersion` key. The final string
// resembles the one displayed by the Version Reporter Applet (winver.exe).
func platformOSDescription() (string, error) {
k, err := registry.OpenKey(
registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
if err != nil {
return "", err
}
defer k.Close()
var (
productName = readProductName(k)
displayVersion = readDisplayVersion(k)
releaseID = readReleaseID(k)
currentMajorVersionNumber = readCurrentMajorVersionNumber(k)
currentMinorVersionNumber = readCurrentMinorVersionNumber(k)
currentBuildNumber = readCurrentBuildNumber(k)
ubr = readUBR(k)
)
if displayVersion != "" {
displayVersion += " "
}
return fmt.Sprintf("%s %s(%s) [Version %s.%s.%s.%s]",
productName,
displayVersion,
releaseID,
currentMajorVersionNumber,
currentMinorVersionNumber,
currentBuildNumber,
ubr,
), nil
}
func getStringValue(name string, k registry.Key) string {
value, _, _ := k.GetStringValue(name)
return value
}
func getIntegerValue(name string, k registry.Key) uint64 {
value, _, _ := k.GetIntegerValue(name)
return value
}
func readProductName(k registry.Key) string {
return getStringValue("ProductName", k)
}
func readDisplayVersion(k registry.Key) string {
return getStringValue("DisplayVersion", k)
}
func readReleaseID(k registry.Key) string {
return getStringValue("ReleaseID", k)
}
func readCurrentMajorVersionNumber(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("CurrentMajorVersionNumber", k), 10)
}
func readCurrentMinorVersionNumber(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("CurrentMinorVersionNumber", k), 10)
}
func readCurrentBuildNumber(k registry.Key) string {
return getStringValue("CurrentBuildNumber", k)
}
func readUBR(k registry.Key) string {
return strconv.FormatUint(getIntegerValue("UBR", k), 10)
}

180
vendor/go.opentelemetry.io/otel/sdk/resource/process.go generated vendored Normal file
View File

@ -0,0 +1,180 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"fmt"
"os"
"os/user"
"path/filepath"
"runtime"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
type pidProvider func() int
type executablePathProvider func() (string, error)
type commandArgsProvider func() []string
type ownerProvider func() (*user.User, error)
type runtimeNameProvider func() string
type runtimeVersionProvider func() string
type runtimeOSProvider func() string
type runtimeArchProvider func() string
var (
defaultPidProvider pidProvider = os.Getpid
defaultExecutablePathProvider executablePathProvider = os.Executable
defaultCommandArgsProvider commandArgsProvider = func() []string { return os.Args }
defaultOwnerProvider ownerProvider = user.Current
defaultRuntimeNameProvider runtimeNameProvider = func() string {
if runtime.Compiler == "gc" {
return "go"
}
return runtime.Compiler
}
defaultRuntimeVersionProvider runtimeVersionProvider = runtime.Version
defaultRuntimeOSProvider runtimeOSProvider = func() string { return runtime.GOOS }
defaultRuntimeArchProvider runtimeArchProvider = func() string { return runtime.GOARCH }
)
var (
pid = defaultPidProvider
executablePath = defaultExecutablePathProvider
commandArgs = defaultCommandArgsProvider
owner = defaultOwnerProvider
runtimeName = defaultRuntimeNameProvider
runtimeVersion = defaultRuntimeVersionProvider
runtimeOS = defaultRuntimeOSProvider
runtimeArch = defaultRuntimeArchProvider
)
func setDefaultOSProviders() {
setOSProviders(
defaultPidProvider,
defaultExecutablePathProvider,
defaultCommandArgsProvider,
)
}
func setOSProviders(
pidProvider pidProvider,
executablePathProvider executablePathProvider,
commandArgsProvider commandArgsProvider,
) {
pid = pidProvider
executablePath = executablePathProvider
commandArgs = commandArgsProvider
}
func setDefaultRuntimeProviders() {
setRuntimeProviders(
defaultRuntimeNameProvider,
defaultRuntimeVersionProvider,
defaultRuntimeOSProvider,
defaultRuntimeArchProvider,
)
}
func setRuntimeProviders(
runtimeNameProvider runtimeNameProvider,
runtimeVersionProvider runtimeVersionProvider,
runtimeOSProvider runtimeOSProvider,
runtimeArchProvider runtimeArchProvider,
) {
runtimeName = runtimeNameProvider
runtimeVersion = runtimeVersionProvider
runtimeOS = runtimeOSProvider
runtimeArch = runtimeArchProvider
}
func setDefaultUserProviders() {
setUserProviders(defaultOwnerProvider)
}
func setUserProviders(ownerProvider ownerProvider) {
owner = ownerProvider
}
type processPIDDetector struct{}
type processExecutableNameDetector struct{}
type processExecutablePathDetector struct{}
type processCommandArgsDetector struct{}
type processOwnerDetector struct{}
type processRuntimeNameDetector struct{}
type processRuntimeVersionDetector struct{}
type processRuntimeDescriptionDetector struct{}
// Detect returns a *Resource that describes the process identifier (PID) of the
// executing process.
func (processPIDDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessPID(pid())), nil
}
// Detect returns a *Resource that describes the name of the process executable.
func (processExecutableNameDetector) Detect(ctx context.Context) (*Resource, error) {
executableName := filepath.Base(commandArgs()[0])
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessExecutableName(executableName)), nil
}
// Detect returns a *Resource that describes the full path of the process executable.
func (processExecutablePathDetector) Detect(ctx context.Context) (*Resource, error) {
executablePath, err := executablePath()
if err != nil {
return nil, err
}
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessExecutablePath(executablePath)), nil
}
// Detect returns a *Resource that describes all the command arguments as received
// by the process.
func (processCommandArgsDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessCommandArgs(commandArgs()...)), nil
}
// Detect returns a *Resource that describes the username of the user that owns the
// process.
func (processOwnerDetector) Detect(ctx context.Context) (*Resource, error) {
owner, err := owner()
if err != nil {
return nil, err
}
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessOwner(owner.Username)), nil
}
// Detect returns a *Resource that describes the name of the compiler used to compile
// this process image.
func (processRuntimeNameDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessRuntimeName(runtimeName())), nil
}
// Detect returns a *Resource that describes the version of the runtime of this process.
func (processRuntimeVersionDetector) Detect(ctx context.Context) (*Resource, error) {
return NewWithAttributes(semconv.SchemaURL, semconv.ProcessRuntimeVersion(runtimeVersion())), nil
}
// Detect returns a *Resource that describes the runtime of this process.
func (processRuntimeDescriptionDetector) Detect(ctx context.Context) (*Resource, error) {
runtimeDescription := fmt.Sprintf(
"go version %s %s/%s", runtimeVersion(), runtimeOS(), runtimeArch())
return NewWithAttributes(
semconv.SchemaURL,
semconv.ProcessRuntimeDescription(runtimeDescription),
), nil
}
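A usage sketch for the process detectors; it assumes the WithProcess option defined in this package's config.go (not part of this hunk), which enables all of them at once:

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	res, err := resource.New(context.Background(), resource.WithProcess())
	if err != nil {
		fmt.Println("detect error:", err)
	}
	// Print the detected process.* attributes, e.g. process.pid, process.owner.
	for _, kv := range res.Attributes() {
		fmt.Println(kv.Key, "=", kv.Value.Emit())
	}
}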

View File

@ -0,0 +1,271 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resource // import "go.opentelemetry.io/otel/sdk/resource"
import (
"context"
"errors"
"sync"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
)
// Resource describes an entity about which identifying information
// and metadata is exposed. Resource is an immutable object,
// equivalent to a map from key to unique value.
//
// Resources should be passed and stored as pointers
// (`*resource.Resource`). The `nil` value is equivalent to an empty
// Resource.
type Resource struct {
attrs attribute.Set
schemaURL string
}
var (
defaultResource *Resource
defaultResourceOnce sync.Once
)
var errMergeConflictSchemaURL = errors.New("cannot merge resource due to conflicting Schema URL")
// New returns a Resource combined from the user-provided detectors.
func New(ctx context.Context, opts ...Option) (*Resource, error) {
cfg := config{}
for _, opt := range opts {
cfg = opt.apply(cfg)
}
r := &Resource{schemaURL: cfg.schemaURL}
return r, detect(ctx, r, cfg.detectors)
}
// NewWithAttributes creates a resource from attrs and associates the resource with a
// schema URL. If attrs contains duplicate keys, the last value will be used. If attrs
// contains any invalid items those items will be dropped. The attrs are assumed to be
// in a schema identified by schemaURL.
func NewWithAttributes(schemaURL string, attrs ...attribute.KeyValue) *Resource {
resource := NewSchemaless(attrs...)
resource.schemaURL = schemaURL
return resource
}
// NewSchemaless creates a resource from attrs. If attrs contains duplicate keys,
// the last value will be used. If attrs contains any invalid items those items will
// be dropped. The resource will not be associated with a schema URL. If the schema
// of the attrs is known use NewWithAttributes instead.
func NewSchemaless(attrs ...attribute.KeyValue) *Resource {
if len(attrs) == 0 {
return &Resource{}
}
// Ensure attributes comply with the specification:
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/common/README.md#attribute
s, _ := attribute.NewSetWithFiltered(attrs, func(kv attribute.KeyValue) bool {
return kv.Valid()
})
// If attrs only contains invalid entries do not allocate a new resource.
if s.Len() == 0 {
return &Resource{}
}
return &Resource{attrs: s} //nolint
}
// String implements the Stringer interface and provides a
// human-readable form of the resource.
//
// Avoid using this representation as the key in a map of resources,
// use Equivalent() as the key instead.
func (r *Resource) String() string {
if r == nil {
return ""
}
return r.attrs.Encoded(attribute.DefaultEncoder())
}
// MarshalLog is the marshaling function used by the logging system to represent this Resource.
func (r *Resource) MarshalLog() interface{} {
return struct {
Attributes attribute.Set
SchemaURL string
}{
Attributes: r.attrs,
SchemaURL: r.schemaURL,
}
}
// Attributes returns a copy of attributes from the resource in a sorted order.
// To avoid allocating a new slice, use an iterator.
func (r *Resource) Attributes() []attribute.KeyValue {
if r == nil {
r = Empty()
}
return r.attrs.ToSlice()
}
// SchemaURL returns the schema URL associated with Resource r.
func (r *Resource) SchemaURL() string {
if r == nil {
return ""
}
return r.schemaURL
}
// Iter returns an iterator of the Resource attributes.
// This is ideal to use if you do not want a copy of the attributes.
func (r *Resource) Iter() attribute.Iterator {
if r == nil {
r = Empty()
}
return r.attrs.Iter()
}
// Equal returns true when a Resource is equivalent to this Resource.
func (r *Resource) Equal(eq *Resource) bool {
if r == nil {
r = Empty()
}
if eq == nil {
eq = Empty()
}
return r.Equivalent() == eq.Equivalent()
}
// Merge creates a new resource by combining resource a and b.
//
// If there are common keys between resource a and b, then the value
// from resource b will overwrite the value from resource a, even
// if resource b's value is empty.
//
// The SchemaURL of the resources will be merged according to the spec rules:
// https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/resource/sdk.md#merge
// If the resources have different non-empty schemaURL an empty resource and an error
// will be returned.
func Merge(a, b *Resource) (*Resource, error) {
if a == nil && b == nil {
return Empty(), nil
}
if a == nil {
return b, nil
}
if b == nil {
return a, nil
}
// Merge the schema URL.
var schemaURL string
switch true {
case a.schemaURL == "":
schemaURL = b.schemaURL
case b.schemaURL == "":
schemaURL = a.schemaURL
case a.schemaURL == b.schemaURL:
schemaURL = a.schemaURL
default:
return Empty(), errMergeConflictSchemaURL
}
// Note: 'b' attributes will overwrite 'a' with last-value-wins in attribute.Key()
// Meaning this is equivalent to: append(a.Attributes(), b.Attributes()...)
mi := attribute.NewMergeIterator(b.Set(), a.Set())
combine := make([]attribute.KeyValue, 0, a.Len()+b.Len())
for mi.Next() {
combine = append(combine, mi.Attribute())
}
merged := NewWithAttributes(schemaURL, combine...)
return merged, nil
}
// Empty returns an instance of Resource with no attributes. It is
// equivalent to a `nil` Resource.
func Empty() *Resource {
return &Resource{}
}
// Default returns an instance of Resource with a default
// "service.name" and OpenTelemetrySDK attributes.
func Default() *Resource {
defaultResourceOnce.Do(func() {
var err error
defaultResource, err = Detect(
context.Background(),
defaultServiceNameDetector{},
fromEnv{},
telemetrySDK{},
)
if err != nil {
otel.Handle(err)
}
// If Detect did not return a valid resource, fall back to emptyResource.
if defaultResource == nil {
defaultResource = &Resource{}
}
})
return defaultResource
}
// Environment returns an instance of Resource with attributes
// extracted from the OTEL_RESOURCE_ATTRIBUTES environment variable.
func Environment() *Resource {
detector := &fromEnv{}
resource, err := detector.Detect(context.Background())
if err != nil {
otel.Handle(err)
}
return resource
}
// Equivalent returns an object that can be compared for equality
// between two resources. This value is suitable for use as a key in
// a map.
func (r *Resource) Equivalent() attribute.Distinct {
return r.Set().Equivalent()
}
// Set returns the equivalent *attribute.Set of this resource's attributes.
func (r *Resource) Set() *attribute.Set {
if r == nil {
r = Empty()
}
return &r.attrs
}
// MarshalJSON encodes the resource attributes as a JSON list of { "Key":
// "...", "Value": ... } pairs in order sorted by key.
func (r *Resource) MarshalJSON() ([]byte, error) {
if r == nil {
r = Empty()
}
return r.attrs.MarshalJSON()
}
// Len returns the number of unique key-values in this Resource.
func (r *Resource) Len() int {
if r == nil {
return 0
}
return r.attrs.Len()
}
// Encoded returns an encoded representation of the resource.
func (r *Resource) Encoded(enc attribute.Encoder) string {
if r == nil {
return ""
}
return r.attrs.Encoded(enc)
}
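A small sketch of the Merge semantics described above: duplicate keys take b's value, and both inputs here are schemaless so no schema URL conflict can occur:

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	a := resource.NewSchemaless(
		attribute.String("service.name", "checkout"),
		attribute.String("deployment.environment", "staging"),
	)
	b := resource.NewSchemaless(
		attribute.String("deployment.environment", "prod"), // overwrites a's value
	)

	merged, err := resource.Merge(a, b)
	if err != nil {
		panic(err)
	}
	fmt.Println(merged) // deployment.environment=prod,service.name=checkout
}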

View File

@ -0,0 +1,420 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"sync"
"sync/atomic"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/internal/env"
"go.opentelemetry.io/otel/trace"
)
// Defaults for BatchSpanProcessorOptions.
const (
DefaultMaxQueueSize = 2048
DefaultScheduleDelay = 5000
DefaultExportTimeout = 30000
DefaultMaxExportBatchSize = 512
)
// BatchSpanProcessorOption configures a BatchSpanProcessor.
type BatchSpanProcessorOption func(o *BatchSpanProcessorOptions)
// BatchSpanProcessorOptions is configuration settings for a
// BatchSpanProcessor.
type BatchSpanProcessorOptions struct {
// MaxQueueSize is the maximum queue size to buffer spans for delayed processing. If the
// queue gets full it drops the spans. Use BlockOnQueueFull to change this behavior.
// The default value of MaxQueueSize is 2048.
MaxQueueSize int
// BatchTimeout is the maximum duration for constructing a batch. Processor
// forcefully sends available spans when timeout is reached.
// The default value of BatchTimeout is 5000 msec.
BatchTimeout time.Duration
// ExportTimeout specifies the maximum duration for exporting spans. If the timeout
// is reached, the export will be cancelled.
// The default value of ExportTimeout is 30000 msec.
ExportTimeout time.Duration
// MaxExportBatchSize is the maximum number of spans to process in a single batch.
// If there are more than one batch worth of spans then it processes multiple batches
// of spans one batch after the other without any delay.
// The default value of MaxExportBatchSize is 512.
MaxExportBatchSize int
// BlockOnQueueFull blocks the OnEnd() and OnStart() methods if the queue is
// full AND if BlockOnQueueFull is set to true.
// The blocking option should be used carefully, as it can severely affect the
// performance of an application.
BlockOnQueueFull bool
}
// batchSpanProcessor is a SpanProcessor that batches asynchronously-received
// spans and sends them to a trace.Exporter when complete.
type batchSpanProcessor struct {
e SpanExporter
o BatchSpanProcessorOptions
queue chan ReadOnlySpan
dropped uint32
batch []ReadOnlySpan
batchMutex sync.Mutex
timer *time.Timer
stopWait sync.WaitGroup
stopOnce sync.Once
stopCh chan struct{}
stopped atomic.Bool
}
var _ SpanProcessor = (*batchSpanProcessor)(nil)
// NewBatchSpanProcessor creates a new SpanProcessor that will send completed
// span batches to the exporter with the supplied options.
//
// If the exporter is nil, the span processor will perform no action.
func NewBatchSpanProcessor(exporter SpanExporter, options ...BatchSpanProcessorOption) SpanProcessor {
maxQueueSize := env.BatchSpanProcessorMaxQueueSize(DefaultMaxQueueSize)
maxExportBatchSize := env.BatchSpanProcessorMaxExportBatchSize(DefaultMaxExportBatchSize)
if maxExportBatchSize > maxQueueSize {
if DefaultMaxExportBatchSize > maxQueueSize {
maxExportBatchSize = maxQueueSize
} else {
maxExportBatchSize = DefaultMaxExportBatchSize
}
}
o := BatchSpanProcessorOptions{
BatchTimeout: time.Duration(env.BatchSpanProcessorScheduleDelay(DefaultScheduleDelay)) * time.Millisecond,
ExportTimeout: time.Duration(env.BatchSpanProcessorExportTimeout(DefaultExportTimeout)) * time.Millisecond,
MaxQueueSize: maxQueueSize,
MaxExportBatchSize: maxExportBatchSize,
}
for _, opt := range options {
opt(&o)
}
bsp := &batchSpanProcessor{
e: exporter,
o: o,
batch: make([]ReadOnlySpan, 0, o.MaxExportBatchSize),
timer: time.NewTimer(o.BatchTimeout),
queue: make(chan ReadOnlySpan, o.MaxQueueSize),
stopCh: make(chan struct{}),
}
bsp.stopWait.Add(1)
go func() {
defer bsp.stopWait.Done()
bsp.processQueue()
bsp.drainQueue()
}()
return bsp
}
// OnStart method does nothing.
func (bsp *batchSpanProcessor) OnStart(parent context.Context, s ReadWriteSpan) {}
// OnEnd method enqueues a ReadOnlySpan for later processing.
func (bsp *batchSpanProcessor) OnEnd(s ReadOnlySpan) {
// Do not enqueue spans after Shutdown.
if bsp.stopped.Load() {
return
}
// Do not enqueue spans if we are just going to drop them.
if bsp.e == nil {
return
}
bsp.enqueue(s)
}
// Shutdown flushes the queue and waits until all spans are processed.
// It only executes once. Subsequent calls do nothing.
func (bsp *batchSpanProcessor) Shutdown(ctx context.Context) error {
var err error
bsp.stopOnce.Do(func() {
bsp.stopped.Store(true)
wait := make(chan struct{})
go func() {
close(bsp.stopCh)
bsp.stopWait.Wait()
if bsp.e != nil {
if err := bsp.e.Shutdown(ctx); err != nil {
otel.Handle(err)
}
}
close(wait)
}()
// Wait until the wait group is done or the context is cancelled
select {
case <-wait:
case <-ctx.Done():
err = ctx.Err()
}
})
return err
}
type forceFlushSpan struct {
ReadOnlySpan
flushed chan struct{}
}
func (f forceFlushSpan) SpanContext() trace.SpanContext {
return trace.NewSpanContext(trace.SpanContextConfig{TraceFlags: trace.FlagsSampled})
}
// ForceFlush exports all ended spans that have not yet been exported.
func (bsp *batchSpanProcessor) ForceFlush(ctx context.Context) error {
// Interrupt if context is already canceled.
if err := ctx.Err(); err != nil {
return err
}
// Do nothing after Shutdown.
if bsp.stopped.Load() {
return nil
}
var err error
if bsp.e != nil {
flushCh := make(chan struct{})
if bsp.enqueueBlockOnQueueFull(ctx, forceFlushSpan{flushed: flushCh}) {
select {
case <-bsp.stopCh:
// The batchSpanProcessor is Shutdown.
return nil
case <-flushCh:
// Processed any items in queue prior to ForceFlush being called
case <-ctx.Done():
return ctx.Err()
}
}
wait := make(chan error)
go func() {
wait <- bsp.exportSpans(ctx)
close(wait)
}()
// Wait until the export is finished or the context is cancelled/timed out
select {
case err = <-wait:
case <-ctx.Done():
err = ctx.Err()
}
}
return err
}
// WithMaxQueueSize returns a BatchSpanProcessorOption that configures the
// maximum queue size allowed for a BatchSpanProcessor.
func WithMaxQueueSize(size int) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.MaxQueueSize = size
}
}
// WithMaxExportBatchSize returns a BatchSpanProcessorOption that configures
// the maximum export batch size allowed for a BatchSpanProcessor.
func WithMaxExportBatchSize(size int) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.MaxExportBatchSize = size
}
}
// WithBatchTimeout returns a BatchSpanProcessorOption that configures the
// maximum delay allowed for a BatchSpanProcessor before it will export any
// held span (whether the queue is full or not).
func WithBatchTimeout(delay time.Duration) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.BatchTimeout = delay
}
}
// WithExportTimeout returns a BatchSpanProcessorOption that configures the
// amount of time a BatchSpanProcessor waits for an exporter to export before
// abandoning the export.
func WithExportTimeout(timeout time.Duration) BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.ExportTimeout = timeout
}
}
// WithBlocking returns a BatchSpanProcessorOption that configures a
// BatchSpanProcessor to wait for enqueue operations to succeed instead of
// dropping data when the queue is full.
func WithBlocking() BatchSpanProcessorOption {
return func(o *BatchSpanProcessorOptions) {
o.BlockOnQueueFull = true
}
}
// exportSpans is a subroutine of processing and draining the queue.
func (bsp *batchSpanProcessor) exportSpans(ctx context.Context) error {
bsp.timer.Reset(bsp.o.BatchTimeout)
bsp.batchMutex.Lock()
defer bsp.batchMutex.Unlock()
if bsp.o.ExportTimeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, bsp.o.ExportTimeout)
defer cancel()
}
if l := len(bsp.batch); l > 0 {
global.Debug("exporting spans", "count", len(bsp.batch), "total_dropped", atomic.LoadUint32(&bsp.dropped))
err := bsp.e.ExportSpans(ctx, bsp.batch)
// A new batch is always created after exporting, even if the batch failed to be exported.
//
// It is up to the exporter to implement any type of retry logic if a batch is failing
// to be exported, since it is specific to the protocol and backend being sent to.
bsp.batch = bsp.batch[:0]
if err != nil {
return err
}
}
return nil
}
// processQueue removes spans from the `queue` channel until processor
// is shut down. It calls the exporter in batches of up to MaxExportBatchSize
// waiting up to BatchTimeout to form a batch.
func (bsp *batchSpanProcessor) processQueue() {
defer bsp.timer.Stop()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
for {
select {
case <-bsp.stopCh:
return
case <-bsp.timer.C:
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
case sd := <-bsp.queue:
if ffs, ok := sd.(forceFlushSpan); ok {
close(ffs.flushed)
continue
}
bsp.batchMutex.Lock()
bsp.batch = append(bsp.batch, sd)
shouldExport := len(bsp.batch) >= bsp.o.MaxExportBatchSize
bsp.batchMutex.Unlock()
if shouldExport {
if !bsp.timer.Stop() {
<-bsp.timer.C
}
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
}
}
}
}
// drainQueue waits for any caller that had added to bsp.stopWait
// to finish enqueueing, then exports the final batch.
func (bsp *batchSpanProcessor) drainQueue() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
for {
select {
case sd := <-bsp.queue:
if _, ok := sd.(forceFlushSpan); ok {
// Ignore flush requests as they are not valid spans.
continue
}
bsp.batchMutex.Lock()
bsp.batch = append(bsp.batch, sd)
shouldExport := len(bsp.batch) == bsp.o.MaxExportBatchSize
bsp.batchMutex.Unlock()
if shouldExport {
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
}
default:
// There are no more enqueued spans. Make final export.
if err := bsp.exportSpans(ctx); err != nil {
otel.Handle(err)
}
return
}
}
}
func (bsp *batchSpanProcessor) enqueue(sd ReadOnlySpan) {
ctx := context.TODO()
if bsp.o.BlockOnQueueFull {
bsp.enqueueBlockOnQueueFull(ctx, sd)
} else {
bsp.enqueueDrop(ctx, sd)
}
}
func (bsp *batchSpanProcessor) enqueueBlockOnQueueFull(ctx context.Context, sd ReadOnlySpan) bool {
if !sd.SpanContext().IsSampled() {
return false
}
select {
case bsp.queue <- sd:
return true
case <-ctx.Done():
return false
}
}
func (bsp *batchSpanProcessor) enqueueDrop(ctx context.Context, sd ReadOnlySpan) bool {
if !sd.SpanContext().IsSampled() {
return false
}
select {
case bsp.queue <- sd:
return true
default:
atomic.AddUint32(&bsp.dropped, 1)
}
return false
}
// MarshalLog is the marshaling function used by the logging system to represent this Span Processor.
func (bsp *batchSpanProcessor) MarshalLog() interface{} {
return struct {
Type string
SpanExporter SpanExporter
Config BatchSpanProcessorOptions
}{
Type: "BatchSpanProcessor",
SpanExporter: bsp.e,
Config: bsp.o,
}
}
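A usage sketch for the batch span processor; NewTracerProvider, WithSpanProcessor and the SpanExporter interface come from elsewhere in this package, and noopExporter is a hypothetical stand-in for a real exporter:

package main

import (
	"context"
	"time"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// noopExporter satisfies SpanExporter; a stand-in for a real OTLP or stdout exporter.
type noopExporter struct{}

func (noopExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { return nil }
func (noopExporter) Shutdown(ctx context.Context) error                                   { return nil }

func main() {
	// Tune the queue and flush cadence via the options defined above.
	bsp := sdktrace.NewBatchSpanProcessor(noopExporter{},
		sdktrace.WithMaxQueueSize(4096),
		sdktrace.WithBatchTimeout(2*time.Second),
	)
	tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(bsp))
	defer func() { _ = tp.Shutdown(context.Background()) }()

	_, span := tp.Tracer("example").Start(context.Background(), "work")
	span.End()
}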

21
vendor/go.opentelemetry.io/otel/sdk/trace/doc.go generated vendored Normal file
View File

@ -0,0 +1,21 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package trace contains support for OpenTelemetry distributed tracing.
The following assumes a basic familiarity with OpenTelemetry concepts.
See https://opentelemetry.io.
*/
package trace // import "go.opentelemetry.io/otel/sdk/trace"

37
vendor/go.opentelemetry.io/otel/sdk/trace/event.go generated vendored Normal file
View File

@ -0,0 +1,37 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"time"
"go.opentelemetry.io/otel/attribute"
)
// Event is a thing that happened during a Span's lifetime.
type Event struct {
// Name is the name of this event
Name string
// Attributes describe the aspects of the event.
Attributes []attribute.KeyValue
// DroppedAttributeCount is the number of attributes that were not
// recorded due to configured limits being reached.
DroppedAttributeCount int
// Time at which this event was recorded.
Time time.Time
}
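
For context, events of this shape are produced through the trace API. A small sketch, in which the tracer name and attribute are illustrative assumptions:

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

func main() {
	_, span := otel.Tracer("example").Start(context.Background(), "handle")
	defer span.End()

	// Recorded by the SDK as an Event with Name, Attributes, Time and, if the
	// attribute limit is exceeded, a non-zero DroppedAttributeCount.
	span.AddEvent("cache miss",
		trace.WithAttributes(attribute.String("key", "user:42")))
}
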


@ -0,0 +1,44 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
// evictedQueue is a FIFO queue with a configurable capacity.
type evictedQueue struct {
queue []interface{}
capacity int
droppedCount int
}
func newEvictedQueue(capacity int) evictedQueue {
// Do not pre-allocate queue, do this lazily.
return evictedQueue{capacity: capacity}
}
// add adds value to the evictedQueue eq. If eq is at capacity, the oldest
// queued value will be discarded and the drop count incremented.
func (eq *evictedQueue) add(value interface{}) {
if eq.capacity == 0 {
eq.droppedCount++
return
}
if eq.capacity > 0 && len(eq.queue) == eq.capacity {
// Drop first-in while avoiding allocating more capacity to eq.queue.
copy(eq.queue[:eq.capacity-1], eq.queue[1:])
eq.queue = eq.queue[:eq.capacity-1]
eq.droppedCount++
}
eq.queue = append(eq.queue, value)
}
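
A standalone sketch of the eviction behaviour above; the boundedFIFO name is invented for illustration and the logic mirrors add:

package main

import "fmt"

// boundedFIFO mirrors the semantics of evictedQueue: once capacity is reached,
// the oldest element is dropped and droppedCount is incremented.
type boundedFIFO struct {
	queue        []interface{}
	capacity     int
	droppedCount int
}

func (q *boundedFIFO) add(v interface{}) {
	if q.capacity == 0 {
		q.droppedCount++
		return
	}
	if q.capacity > 0 && len(q.queue) == q.capacity {
		// Drop first-in without growing the backing array.
		copy(q.queue[:q.capacity-1], q.queue[1:])
		q.queue = q.queue[:q.capacity-1]
		q.droppedCount++
	}
	q.queue = append(q.queue, v)
}

func main() {
	q := &boundedFIFO{capacity: 2}
	for _, v := range []string{"a", "b", "c"} {
		q.add(v)
	}
	fmt.Println(q.queue, q.droppedCount) // [b c] 1
}
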


@ -0,0 +1,77 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
crand "crypto/rand"
"encoding/binary"
"math/rand"
"sync"
"go.opentelemetry.io/otel/trace"
)
// IDGenerator allows custom generators for TraceID and SpanID.
type IDGenerator interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// NewIDs returns a new trace and span ID.
NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// NewSpanID returns an ID for a new span in the trace with traceID.
NewSpanID(ctx context.Context, traceID trace.TraceID) trace.SpanID
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
type randomIDGenerator struct {
sync.Mutex
randSource *rand.Rand
}
var _ IDGenerator = &randomIDGenerator{}
// NewSpanID returns a non-zero span ID from a randomly-chosen sequence.
func (gen *randomIDGenerator) NewSpanID(ctx context.Context, traceID trace.TraceID) trace.SpanID {
gen.Lock()
defer gen.Unlock()
sid := trace.SpanID{}
_, _ = gen.randSource.Read(sid[:])
return sid
}
// NewIDs returns a non-zero trace ID and a non-zero span ID from a
// randomly-chosen sequence.
func (gen *randomIDGenerator) NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID) {
gen.Lock()
defer gen.Unlock()
tid := trace.TraceID{}
_, _ = gen.randSource.Read(tid[:])
sid := trace.SpanID{}
_, _ = gen.randSource.Read(sid[:])
return tid, sid
}
func defaultIDGenerator() IDGenerator {
gen := &randomIDGenerator{}
var rngSeed int64
_ = binary.Read(crand.Reader, binary.LittleEndian, &rngSeed)
gen.randSource = rand.New(rand.NewSource(rngSeed))
return gen
}
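
A sketch of plugging in a custom generator via WithIDGenerator (defined in provider.go below). Drawing IDs directly from crypto/rand is only an illustrative choice, not a recommendation from this file:

package main

import (
	"context"
	crand "crypto/rand"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

// cryptoIDGenerator draws every ID straight from crypto/rand instead of the
// seeded math/rand source used by the default generator above.
type cryptoIDGenerator struct{}

var _ sdktrace.IDGenerator = cryptoIDGenerator{}

func (cryptoIDGenerator) NewIDs(ctx context.Context) (trace.TraceID, trace.SpanID) {
	var tid trace.TraceID
	var sid trace.SpanID
	_, _ = crand.Read(tid[:])
	_, _ = crand.Read(sid[:])
	return tid, sid
}

func (cryptoIDGenerator) NewSpanID(ctx context.Context, traceID trace.TraceID) trace.SpanID {
	var sid trace.SpanID
	_, _ = crand.Read(sid[:])
	return sid
}

func main() {
	_ = sdktrace.NewTracerProvider(sdktrace.WithIDGenerator(cryptoIDGenerator{}))
}
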

34
vendor/go.opentelemetry.io/otel/sdk/trace/link.go generated vendored Normal file

@ -0,0 +1,34 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Link is the relationship between two Spans. The relationship can be within
// the same Trace or across different Traces.
type Link struct {
// SpanContext of the linked Span.
SpanContext trace.SpanContext
// Attributes describe the aspects of the link.
Attributes []attribute.KeyValue
// DroppedAttributeCount is the number of attributes that were not
// recorded due to configured limits being reached.
DroppedAttributeCount int
}
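
Links of this shape originate from the trace API's WithLinks start option. A brief sketch; the tracer name and attribute are illustrative assumptions:

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

func main() {
	ctx := context.Background()
	tracer := otel.Tracer("example")

	// A span in one trace...
	ctx1, producer := tracer.Start(ctx, "producer")
	producer.End()

	// ...linked from a span in another trace. The SDK records this as the Link
	// struct above, applying the configured attribute limits.
	_, consumer := tracer.Start(ctx, "consumer", trace.WithLinks(trace.Link{
		SpanContext: trace.SpanContextFromContext(ctx1),
		Attributes:  []attribute.KeyValue{attribute.String("messaging.source", "queue-1")},
	}))
	consumer.End()
}
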

500
vendor/go.opentelemetry.io/otel/sdk/trace/provider.go generated vendored Normal file

@ -0,0 +1,500 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"fmt"
"sync"
"sync/atomic"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
"go.opentelemetry.io/otel/sdk/instrumentation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/trace"
)
const (
defaultTracerName = "go.opentelemetry.io/otel/sdk/tracer"
)
// tracerProviderConfig.
type tracerProviderConfig struct {
// processors contains the collection of SpanProcessors that form the
// processing pipeline for spans in the trace signal.
// SpanProcessors registered with a TracerProvider are called at the start
// and end of a Span's lifecycle, in the order they are registered.
processors []SpanProcessor
// sampler is the default sampler used when creating new spans.
sampler Sampler
// idGenerator is used to generate all Span and Trace IDs when needed.
idGenerator IDGenerator
// spanLimits defines the attribute, event, and link limits for spans.
spanLimits SpanLimits
// resource contains attributes representing an entity that produces telemetry.
resource *resource.Resource
}
// MarshalLog is the marshaling function used by the logging system to represent this TracerProvider configuration.
func (cfg tracerProviderConfig) MarshalLog() interface{} {
return struct {
SpanProcessors []SpanProcessor
SamplerType string
IDGeneratorType string
SpanLimits SpanLimits
Resource *resource.Resource
}{
SpanProcessors: cfg.processors,
SamplerType: fmt.Sprintf("%T", cfg.sampler),
IDGeneratorType: fmt.Sprintf("%T", cfg.idGenerator),
SpanLimits: cfg.spanLimits,
Resource: cfg.resource,
}
}
// TracerProvider is an OpenTelemetry TracerProvider. It provides Tracers to
// instrumentation so it can trace operational flow through a system.
type TracerProvider struct {
mu sync.Mutex
namedTracer map[instrumentation.Scope]*tracer
spanProcessors atomic.Pointer[spanProcessorStates]
isShutdown atomic.Bool
// These fields are not protected by the lock mu. They are assumed to be
// immutable after creation of the TracerProvider.
sampler Sampler
idGenerator IDGenerator
spanLimits SpanLimits
resource *resource.Resource
}
var _ trace.TracerProvider = &TracerProvider{}
// NewTracerProvider returns a new and configured TracerProvider.
//
// By default the returned TracerProvider is configured with:
// - a ParentBased(AlwaysSample) Sampler
// - a random number IDGenerator
// - the resource.Default() Resource
// - the default SpanLimits.
//
// The passed opts are used to override these default values and configure the
// returned TracerProvider appropriately.
func NewTracerProvider(opts ...TracerProviderOption) *TracerProvider {
o := tracerProviderConfig{
spanLimits: NewSpanLimits(),
}
o = applyTracerProviderEnvConfigs(o)
for _, opt := range opts {
o = opt.apply(o)
}
o = ensureValidTracerProviderConfig(o)
tp := &TracerProvider{
namedTracer: make(map[instrumentation.Scope]*tracer),
sampler: o.sampler,
idGenerator: o.idGenerator,
spanLimits: o.spanLimits,
resource: o.resource,
}
global.Info("TracerProvider created", "config", o)
spss := make(spanProcessorStates, 0, len(o.processors))
for _, sp := range o.processors {
spss = append(spss, newSpanProcessorState(sp))
}
tp.spanProcessors.Store(&spss)
return tp
}
// Tracer returns a Tracer with the given name and options. If a Tracer for
// the given name and options does not exist it is created, otherwise the
// existing Tracer is returned.
//
// If name is empty, defaultTracerName is used instead.
//
// This method is safe to be called concurrently.
func (p *TracerProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
// This check happens before the mutex is acquired to avoid deadlocking if Tracer() is called from within Shutdown().
if p.isShutdown.Load() {
return trace.NewNoopTracerProvider().Tracer(name, opts...)
}
c := trace.NewTracerConfig(opts...)
if name == "" {
name = defaultTracerName
}
is := instrumentation.Scope{
Name: name,
Version: c.InstrumentationVersion(),
SchemaURL: c.SchemaURL(),
}
t, ok := func() (trace.Tracer, bool) {
p.mu.Lock()
defer p.mu.Unlock()
// Must check the flag after acquiring the mutex to avoid returning a valid tracer if Shutdown() ran
// after the first check above but before we acquired the mutex.
if p.isShutdown.Load() {
return trace.NewNoopTracerProvider().Tracer(name, opts...), true
}
t, ok := p.namedTracer[is]
if !ok {
t = &tracer{
provider: p,
instrumentationScope: is,
}
p.namedTracer[is] = t
}
return t, ok
}()
if !ok {
// This code is outside the mutex to not hold the lock while calling third party logging code:
// - That code may do slow things like I/O, which would prolong the duration the lock is held,
// slowing down all tracing consumers.
// - Logging code may be instrumented with tracing and deadlock because it could try
// acquiring the same non-reentrant mutex.
global.Info("Tracer created", "name", name, "version", is.Version, "schemaURL", is.SchemaURL)
}
return t
}
// RegisterSpanProcessor adds the given SpanProcessor to the list of SpanProcessors.
func (p *TracerProvider) RegisterSpanProcessor(sp SpanProcessor) {
// This check prevents calls during a shutdown.
if p.isShutdown.Load() {
return
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown.
if p.isShutdown.Load() {
return
}
current := p.getSpanProcessors()
newSPS := make(spanProcessorStates, 0, len(current)+1)
newSPS = append(newSPS, current...)
newSPS = append(newSPS, newSpanProcessorState(sp))
p.spanProcessors.Store(&newSPS)
}
// UnregisterSpanProcessor removes the given SpanProcessor from the list of SpanProcessors.
func (p *TracerProvider) UnregisterSpanProcessor(sp SpanProcessor) {
// This check prevents calls during a shutdown.
if p.isShutdown.Load() {
return
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown.
if p.isShutdown.Load() {
return
}
old := p.getSpanProcessors()
if len(old) == 0 {
return
}
spss := make(spanProcessorStates, len(old))
copy(spss, old)
// stop the span processor if it is started and remove it from the list
var stopOnce *spanProcessorState
var idx int
for i, sps := range spss {
if sps.sp == sp {
stopOnce = sps
idx = i
}
}
if stopOnce != nil {
stopOnce.state.Do(func() {
if err := sp.Shutdown(context.Background()); err != nil {
otel.Handle(err)
}
})
}
if len(spss) > 1 {
copy(spss[idx:], spss[idx+1:])
}
spss[len(spss)-1] = nil
spss = spss[:len(spss)-1]
p.spanProcessors.Store(&spss)
}
// ForceFlush immediately exports all spans that have not yet been exported for
// all the registered span processors.
func (p *TracerProvider) ForceFlush(ctx context.Context) error {
spss := p.getSpanProcessors()
if len(spss) == 0 {
return nil
}
for _, sps := range spss {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
if err := sps.sp.ForceFlush(ctx); err != nil {
return err
}
}
return nil
}
// Shutdown shuts down TracerProvider. All registered span processors are shut down
// in the order they were registered and any held computational resources are released.
// After Shutdown is called, all methods are no-ops.
func (p *TracerProvider) Shutdown(ctx context.Context) error {
// This check prevents deadlocks in case of recursive shutdown.
if p.isShutdown.Load() {
return nil
}
p.mu.Lock()
defer p.mu.Unlock()
// This check prevents calls after a shutdown has already been done concurrently.
if !p.isShutdown.CompareAndSwap(false, true) { // did toggle?
return nil
}
var retErr error
for _, sps := range p.getSpanProcessors() {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
var err error
sps.state.Do(func() {
err = sps.sp.Shutdown(ctx)
})
if err != nil {
if retErr == nil {
retErr = err
} else {
// Poor man's list of errors
retErr = fmt.Errorf("%v; %v", retErr, err)
}
}
}
p.spanProcessors.Store(&spanProcessorStates{})
return retErr
}
func (p *TracerProvider) getSpanProcessors() spanProcessorStates {
return *(p.spanProcessors.Load())
}
// TracerProviderOption configures a TracerProvider.
type TracerProviderOption interface {
apply(tracerProviderConfig) tracerProviderConfig
}
type traceProviderOptionFunc func(tracerProviderConfig) tracerProviderConfig
func (fn traceProviderOptionFunc) apply(cfg tracerProviderConfig) tracerProviderConfig {
return fn(cfg)
}
// WithSyncer registers the exporter with the TracerProvider using a
// SimpleSpanProcessor.
//
// This is not recommended for production use. The synchronous nature of the
// SimpleSpanProcessor that will wrap the exporter makes it good for testing,
// debugging, or showing examples of other features, but it will be slow and
// have a high computational overhead. The WithBatcher option is
// recommended for production use instead.
func WithSyncer(e SpanExporter) TracerProviderOption {
return WithSpanProcessor(NewSimpleSpanProcessor(e))
}
// WithBatcher registers the exporter with the TracerProvider using a
// BatchSpanProcessor configured with the passed opts.
func WithBatcher(e SpanExporter, opts ...BatchSpanProcessorOption) TracerProviderOption {
return WithSpanProcessor(NewBatchSpanProcessor(e, opts...))
}
// WithSpanProcessor registers the SpanProcessor with a TracerProvider.
func WithSpanProcessor(sp SpanProcessor) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.processors = append(cfg.processors, sp)
return cfg
})
}
// WithResource returns a TracerProviderOption that will configure the
// Resource r as a TracerProvider's Resource. The configured Resource is
// referenced by all the Tracers the TracerProvider creates. It represents the
// entity producing telemetry.
//
// If this option is not used, the TracerProvider will use the
// resource.Default() Resource by default.
func WithResource(r *resource.Resource) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
var err error
cfg.resource, err = resource.Merge(resource.Environment(), r)
if err != nil {
otel.Handle(err)
}
return cfg
})
}
// WithIDGenerator returns a TracerProviderOption that will configure the
// IDGenerator g as a TracerProvider's IDGenerator. The configured IDGenerator
// is used by the Tracers the TracerProvider creates to generate new Span and
// Trace IDs.
//
// If this option is not used, the TracerProvider will use a random number
// IDGenerator by default.
func WithIDGenerator(g IDGenerator) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
if g != nil {
cfg.idGenerator = g
}
return cfg
})
}
// WithSampler returns a TracerProviderOption that will configure the Sampler
// s as a TracerProvider's Sampler. The configured Sampler is used by the
// Tracers the TracerProvider creates to make their sampling decisions for the
// Spans they create.
//
// This option overrides the Sampler configured through the OTEL_TRACES_SAMPLER
// and OTEL_TRACES_SAMPLER_ARG environment variables. If this option is not used
// and the sampler is not configured through environment variables or the environment
// contains invalid/unsupported configuration, the TracerProvider will use a
// ParentBased(AlwaysSample) Sampler by default.
func WithSampler(s Sampler) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
if s != nil {
cfg.sampler = s
}
return cfg
})
}
// WithSpanLimits returns a TracerProviderOption that configures a
// TracerProvider to use the SpanLimits sl. These SpanLimits bound any Span
// created by a Tracer from the TracerProvider.
//
// If any field of sl is zero or negative it will be replaced with the default
// value for that field.
//
// If this or WithRawSpanLimits are not provided, the TracerProvider will use
// the limits defined by environment variables, or the defaults if unset.
// Refer to the NewSpanLimits documentation for information about this
// relationship.
//
// Deprecated: Use WithRawSpanLimits instead which allows setting unlimited
// and zero limits. This option will be kept until the next major version
// incremented release.
func WithSpanLimits(sl SpanLimits) TracerProviderOption {
if sl.AttributeValueLengthLimit <= 0 {
sl.AttributeValueLengthLimit = DefaultAttributeValueLengthLimit
}
if sl.AttributeCountLimit <= 0 {
sl.AttributeCountLimit = DefaultAttributeCountLimit
}
if sl.EventCountLimit <= 0 {
sl.EventCountLimit = DefaultEventCountLimit
}
if sl.AttributePerEventCountLimit <= 0 {
sl.AttributePerEventCountLimit = DefaultAttributePerEventCountLimit
}
if sl.LinkCountLimit <= 0 {
sl.LinkCountLimit = DefaultLinkCountLimit
}
if sl.AttributePerLinkCountLimit <= 0 {
sl.AttributePerLinkCountLimit = DefaultAttributePerLinkCountLimit
}
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.spanLimits = sl
return cfg
})
}
// WithRawSpanLimits returns a TracerProviderOption that configures a
// TracerProvider to use these limits. These limits bound any Span created by
// a Tracer from the TracerProvider.
//
// The limits will be used as-is. Zero or negative values will not be changed
// to the default value like WithSpanLimits does. Setting a limit to zero will
// effectively disable the related resource it limits and setting to a
// negative value will mean that resource is unlimited. Consequently, this
// means that the zero-value SpanLimits will disable all span resources.
// Because of this, limits should be constructed using NewSpanLimits and
// updated accordingly.
//
// If this or WithSpanLimits are not provided, the TracerProvider will use the
// limits defined by environment variables, or the defaults if unset. Refer to
// the NewSpanLimits documentation for information about this relationship.
func WithRawSpanLimits(limits SpanLimits) TracerProviderOption {
return traceProviderOptionFunc(func(cfg tracerProviderConfig) tracerProviderConfig {
cfg.spanLimits = limits
return cfg
})
}
func applyTracerProviderEnvConfigs(cfg tracerProviderConfig) tracerProviderConfig {
for _, opt := range tracerProviderOptionsFromEnv() {
cfg = opt.apply(cfg)
}
return cfg
}
func tracerProviderOptionsFromEnv() []TracerProviderOption {
var opts []TracerProviderOption
sampler, err := samplerFromEnv()
if err != nil {
otel.Handle(err)
}
if sampler != nil {
opts = append(opts, WithSampler(sampler))
}
return opts
}
// ensureValidTracerProviderConfig ensures that given TracerProviderConfig is valid.
func ensureValidTracerProviderConfig(cfg tracerProviderConfig) tracerProviderConfig {
if cfg.sampler == nil {
cfg.sampler = ParentBased(AlwaysSample())
}
if cfg.idGenerator == nil {
cfg.idGenerator = defaultIDGenerator()
}
if cfg.resource == nil {
cfg.resource = resource.Default()
}
return cfg
}
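
Tying the options above together, a minimal setup sketch. The stdouttrace exporter, the service name, and the 25% ratio are illustrative assumptions; note that WithResource merges the given resource with resource.Environment(), as shown in the option above:

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	exp, err := stdouttrace.New()
	if err != nil {
		panic(err)
	}
	res, err := resource.Merge(resource.Default(),
		resource.NewSchemaless(attribute.String("service.name", "demo")))
	if err != nil {
		panic(err)
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(res),
		sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.25))),
	)
	// Register globally so otel.Tracer(...) hands out tracers from this provider.
	otel.SetTracerProvider(tp)
	defer func() { _ = tp.Shutdown(context.Background()) }()
}
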


@ -0,0 +1,108 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"errors"
"fmt"
"os"
"strconv"
"strings"
)
const (
tracesSamplerKey = "OTEL_TRACES_SAMPLER"
tracesSamplerArgKey = "OTEL_TRACES_SAMPLER_ARG"
samplerAlwaysOn = "always_on"
samplerAlwaysOff = "always_off"
samplerTraceIDRatio = "traceidratio"
samplerParentBasedAlwaysOn = "parentbased_always_on"
samplerParsedBasedAlwaysOff = "parentbased_always_off"
samplerParentBasedTraceIDRatio = "parentbased_traceidratio"
)
type errUnsupportedSampler string
func (e errUnsupportedSampler) Error() string {
return fmt.Sprintf("unsupported sampler: %s", string(e))
}
var (
errNegativeTraceIDRatio = errors.New("invalid trace ID ratio: less than 0.0")
errGreaterThanOneTraceIDRatio = errors.New("invalid trace ID ratio: greater than 1.0")
)
type samplerArgParseError struct {
parseErr error
}
func (e samplerArgParseError) Error() string {
return fmt.Sprintf("parsing sampler argument: %s", e.parseErr.Error())
}
func (e samplerArgParseError) Unwrap() error {
return e.parseErr
}
func samplerFromEnv() (Sampler, error) {
sampler, ok := os.LookupEnv(tracesSamplerKey)
if !ok {
return nil, nil
}
sampler = strings.ToLower(strings.TrimSpace(sampler))
samplerArg, hasSamplerArg := os.LookupEnv(tracesSamplerArgKey)
samplerArg = strings.TrimSpace(samplerArg)
switch sampler {
case samplerAlwaysOn:
return AlwaysSample(), nil
case samplerAlwaysOff:
return NeverSample(), nil
case samplerTraceIDRatio:
if !hasSamplerArg {
return TraceIDRatioBased(1.0), nil
}
return parseTraceIDRatio(samplerArg)
case samplerParentBasedAlwaysOn:
return ParentBased(AlwaysSample()), nil
case samplerParsedBasedAlwaysOff:
return ParentBased(NeverSample()), nil
case samplerParentBasedTraceIDRatio:
if !hasSamplerArg {
return ParentBased(TraceIDRatioBased(1.0)), nil
}
ratio, err := parseTraceIDRatio(samplerArg)
return ParentBased(ratio), err
default:
return nil, errUnsupportedSampler(sampler)
}
}
func parseTraceIDRatio(arg string) (Sampler, error) {
v, err := strconv.ParseFloat(arg, 64)
if err != nil {
return TraceIDRatioBased(1.0), samplerArgParseError{err}
}
if v < 0.0 {
return TraceIDRatioBased(1.0), errNegativeTraceIDRatio
}
if v > 1.0 {
return TraceIDRatioBased(1.0), errGreaterThanOneTraceIDRatio
}
return TraceIDRatioBased(v), nil
}
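
The environment lookup above only applies when no WithSampler option is given. A sketch; setting the variables in-process is purely for illustration, in practice they come from the deployment environment:

package main

import (
	"os"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Normally set in the deployment environment; set here only to keep the
	// sketch self-contained.
	os.Setenv("OTEL_TRACES_SAMPLER", "parentbased_traceidratio")
	os.Setenv("OTEL_TRACES_SAMPLER_ARG", "0.1")

	// With no WithSampler option, NewTracerProvider applies the sampler
	// resolved by samplerFromEnv above.
	tp := sdktrace.NewTracerProvider()
	_ = tp
}
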

293
vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go generated vendored Normal file

@ -0,0 +1,293 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"encoding/binary"
"fmt"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Sampler decides whether a trace should be sampled and exported.
type Sampler interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// ShouldSample returns a SamplingResult based on a decision made from the
// passed parameters.
ShouldSample(parameters SamplingParameters) SamplingResult
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Description returns information describing the Sampler.
Description() string
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// SamplingParameters contains the values passed to a Sampler.
type SamplingParameters struct {
ParentContext context.Context
TraceID trace.TraceID
Name string
Kind trace.SpanKind
Attributes []attribute.KeyValue
Links []trace.Link
}
// SamplingDecision indicates whether a span is dropped, recorded and/or sampled.
type SamplingDecision uint8
// Valid sampling decisions.
const (
// Drop will not record the span and all attributes/events will be dropped.
Drop SamplingDecision = iota
// RecordOnly indicates the span's `IsRecording() == true`, but the `Sampled` flag
// *must not* be set.
RecordOnly
// RecordAndSample indicates the span's `IsRecording() == true` and the `Sampled` flag
// *must* be set.
RecordAndSample
)
// SamplingResult conveys a SamplingDecision, set of Attributes and a Tracestate.
type SamplingResult struct {
Decision SamplingDecision
Attributes []attribute.KeyValue
Tracestate trace.TraceState
}
type traceIDRatioSampler struct {
traceIDUpperBound uint64
description string
}
func (ts traceIDRatioSampler) ShouldSample(p SamplingParameters) SamplingResult {
psc := trace.SpanContextFromContext(p.ParentContext)
x := binary.BigEndian.Uint64(p.TraceID[8:16]) >> 1
if x < ts.traceIDUpperBound {
return SamplingResult{
Decision: RecordAndSample,
Tracestate: psc.TraceState(),
}
}
return SamplingResult{
Decision: Drop,
Tracestate: psc.TraceState(),
}
}
func (ts traceIDRatioSampler) Description() string {
return ts.description
}
// TraceIDRatioBased samples a given fraction of traces. Fractions >= 1 will
// always sample. Fractions < 0 are treated as zero. To respect the
// parent trace's `SampledFlag`, the `TraceIDRatioBased` sampler should be used
// as a delegate of a `Parent` sampler.
//
//nolint:revive // revive complains about stutter of `trace.TraceIDRatioBased`
func TraceIDRatioBased(fraction float64) Sampler {
if fraction >= 1 {
return AlwaysSample()
}
if fraction <= 0 {
fraction = 0
}
return &traceIDRatioSampler{
traceIDUpperBound: uint64(fraction * (1 << 63)),
description: fmt.Sprintf("TraceIDRatioBased{%g}", fraction),
}
}
type alwaysOnSampler struct{}
func (as alwaysOnSampler) ShouldSample(p SamplingParameters) SamplingResult {
return SamplingResult{
Decision: RecordAndSample,
Tracestate: trace.SpanContextFromContext(p.ParentContext).TraceState(),
}
}
func (as alwaysOnSampler) Description() string {
return "AlwaysOnSampler"
}
// AlwaysSample returns a Sampler that samples every trace.
// Be careful about using this sampler in a production application with
// significant traffic: a new trace will be started and exported for every
// request.
func AlwaysSample() Sampler {
return alwaysOnSampler{}
}
type alwaysOffSampler struct{}
func (as alwaysOffSampler) ShouldSample(p SamplingParameters) SamplingResult {
return SamplingResult{
Decision: Drop,
Tracestate: trace.SpanContextFromContext(p.ParentContext).TraceState(),
}
}
func (as alwaysOffSampler) Description() string {
return "AlwaysOffSampler"
}
// NeverSample returns a Sampler that samples no traces.
func NeverSample() Sampler {
return alwaysOffSampler{}
}
// ParentBased returns a composite sampler which behaves differently
// based on the parent of the span. If the span has no parent,
// the root Sampler is used to make the sampling decision. If the span has
// a parent, depending on whether the parent is remote and whether it
// is sampled, one of the following samplers will apply:
// - remoteParentSampled(Sampler) (default: AlwaysOn)
// - remoteParentNotSampled(Sampler) (default: AlwaysOff)
// - localParentSampled(Sampler) (default: AlwaysOn)
// - localParentNotSampled(Sampler) (default: AlwaysOff)
func ParentBased(root Sampler, samplers ...ParentBasedSamplerOption) Sampler {
return parentBased{
root: root,
config: configureSamplersForParentBased(samplers),
}
}
type parentBased struct {
root Sampler
config samplerConfig
}
func configureSamplersForParentBased(samplers []ParentBasedSamplerOption) samplerConfig {
c := samplerConfig{
remoteParentSampled: AlwaysSample(),
remoteParentNotSampled: NeverSample(),
localParentSampled: AlwaysSample(),
localParentNotSampled: NeverSample(),
}
for _, so := range samplers {
c = so.apply(c)
}
return c
}
// samplerConfig is a group of options for parentBased sampler.
type samplerConfig struct {
remoteParentSampled, remoteParentNotSampled Sampler
localParentSampled, localParentNotSampled Sampler
}
// ParentBasedSamplerOption configures the sampler for a particular sampling case.
type ParentBasedSamplerOption interface {
apply(samplerConfig) samplerConfig
}
// WithRemoteParentSampled sets the sampler for the case of sampled remote parent.
func WithRemoteParentSampled(s Sampler) ParentBasedSamplerOption {
return remoteParentSampledOption{s}
}
type remoteParentSampledOption struct {
s Sampler
}
func (o remoteParentSampledOption) apply(config samplerConfig) samplerConfig {
config.remoteParentSampled = o.s
return config
}
// WithRemoteParentNotSampled sets the sampler for the case of remote parent
// which is not sampled.
func WithRemoteParentNotSampled(s Sampler) ParentBasedSamplerOption {
return remoteParentNotSampledOption{s}
}
type remoteParentNotSampledOption struct {
s Sampler
}
func (o remoteParentNotSampledOption) apply(config samplerConfig) samplerConfig {
config.remoteParentNotSampled = o.s
return config
}
// WithLocalParentSampled sets the sampler for the case of sampled local parent.
func WithLocalParentSampled(s Sampler) ParentBasedSamplerOption {
return localParentSampledOption{s}
}
type localParentSampledOption struct {
s Sampler
}
func (o localParentSampledOption) apply(config samplerConfig) samplerConfig {
config.localParentSampled = o.s
return config
}
// WithLocalParentNotSampled sets the sampler for the case of local parent
// which is not sampled.
func WithLocalParentNotSampled(s Sampler) ParentBasedSamplerOption {
return localParentNotSampledOption{s}
}
type localParentNotSampledOption struct {
s Sampler
}
func (o localParentNotSampledOption) apply(config samplerConfig) samplerConfig {
config.localParentNotSampled = o.s
return config
}
func (pb parentBased) ShouldSample(p SamplingParameters) SamplingResult {
psc := trace.SpanContextFromContext(p.ParentContext)
if psc.IsValid() {
if psc.IsRemote() {
if psc.IsSampled() {
return pb.config.remoteParentSampled.ShouldSample(p)
}
return pb.config.remoteParentNotSampled.ShouldSample(p)
}
if psc.IsSampled() {
return pb.config.localParentSampled.ShouldSample(p)
}
return pb.config.localParentNotSampled.ShouldSample(p)
}
return pb.root.ShouldSample(p)
}
func (pb parentBased) Description() string {
return fmt.Sprintf("ParentBased{root:%s,remoteParentSampled:%s,"+
"remoteParentNotSampled:%s,localParentSampled:%s,localParentNotSampled:%s}",
pb.root.Description(),
pb.config.remoteParentSampled.Description(),
pb.config.remoteParentNotSampled.Description(),
pb.config.localParentSampled.Description(),
pb.config.localParentNotSampled.Description(),
)
}
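
A composition sketch of the samplers above; the 10% ratio is chosen only for illustration:

package main

import (
	"fmt"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Root spans are sampled for 10% of trace IDs; child spans follow the
	// parent's decision via the ParentBased defaults, except that an unsampled
	// remote parent is re-evaluated at the same ratio.
	sampler := sdktrace.ParentBased(
		sdktrace.TraceIDRatioBased(0.10),
		sdktrace.WithRemoteParentNotSampled(sdktrace.TraceIDRatioBased(0.10)),
	)
	fmt.Println(sampler.Description())

	_ = sdktrace.NewTracerProvider(sdktrace.WithSampler(sampler))
}
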


@ -0,0 +1,131 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace // import "go.opentelemetry.io/otel/sdk/trace"
import (
"context"
"sync"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/internal/global"
)
// simpleSpanProcessor is a SpanProcessor that synchronously sends all
// completed Spans to a SpanExporter immediately.
type simpleSpanProcessor struct {
exporterMu sync.Mutex
exporter SpanExporter
stopOnce sync.Once
}
var _ SpanProcessor = (*simpleSpanProcessor)(nil)
// NewSimpleSpanProcessor returns a new SpanProcessor that will synchronously
// send completed spans to the exporter immediately.
//
// This SpanProcessor is not recommended for production use. The synchronous
// nature of this SpanProcessor makes it good for testing, debugging, or
// showing examples of other features, but it will be slow and have a high
// computational overhead. The BatchSpanProcessor is recommended
// for production use instead.
func NewSimpleSpanProcessor(exporter SpanExporter) SpanProcessor {
ssp := &simpleSpanProcessor{
exporter: exporter,
}
global.Warn("SimpleSpanProcessor is not recommended for production use, consider using BatchSpanProcessor instead.")
return ssp
}
// OnStart does nothing.
func (ssp *simpleSpanProcessor) OnStart(context.Context, ReadWriteSpan) {}
// OnEnd immediately exports a ReadOnlySpan.
func (ssp *simpleSpanProcessor) OnEnd(s ReadOnlySpan) {
ssp.exporterMu.Lock()
defer ssp.exporterMu.Unlock()
if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() {
if err := ssp.exporter.ExportSpans(context.Background(), []ReadOnlySpan{s}); err != nil {
otel.Handle(err)
}
}
}
// Shutdown shuts down the exporter this SimpleSpanProcessor exports to.
func (ssp *simpleSpanProcessor) Shutdown(ctx context.Context) error {
var err error
ssp.stopOnce.Do(func() {
stopFunc := func(exp SpanExporter) (<-chan error, func()) {
done := make(chan error)
return done, func() { done <- exp.Shutdown(ctx) }
}
// The exporter field of the simpleSpanProcessor needs to be zeroed to
// signal it is shut down, meaning all subsequent calls to OnEnd will
// be gracefully ignored. This needs to be done synchronously to avoid
// any race condition.
//
// A closure is used to keep reference to the exporter and then the
// field is zeroed. This ensures the simpleSpanProcessor is shut down
// before the exporter. This order is important as it avoids a
// potential deadlock. If the exporter shut down operation generates a
// span, that span would need to be exported. Meaning, OnEnd would be
// called and try acquiring the lock that is held here.
ssp.exporterMu.Lock()
done, shutdown := stopFunc(ssp.exporter)
ssp.exporter = nil
ssp.exporterMu.Unlock()
go shutdown()
// Wait for the exporter to shut down or the deadline to expire.
select {
case err = <-done:
case <-ctx.Done():
// It is possible for the exporter to have immediately shut down
// and the context to be done simultaneously. In that case this
// outer select statement will randomly choose a case. This will
// result in a different returned error for similar scenarios.
// Instead, double check if the exporter shut down at the same
// time and return that error if so. This will ensure consistency
// as well as ensure the caller knows the exporter shut down
// successfully (they can already determine if the deadline is
// expired given they passed the context).
select {
case err = <-done:
default:
err = ctx.Err()
}
}
})
return err
}
// ForceFlush does nothing as there is no data to flush.
func (ssp *simpleSpanProcessor) ForceFlush(context.Context) error {
return nil
}
// MarshalLog is the marshaling function used by the logging system to represent this Span Processor.
func (ssp *simpleSpanProcessor) MarshalLog() interface{} {
return struct {
Type string
Exporter SpanExporter
}{
Type: "SimpleSpanProcessor",
Exporter: ssp.exporter,
}
}
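
A sketch of the synchronous path via WithSyncer, assuming the stdouttrace exporter; as the warning above notes, this setup is suited to tests and examples rather than production:

package main

import (
	"context"

	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		panic(err)
	}
	// Each finished span is exported synchronously on the calling goroutine.
	tp := sdktrace.NewTracerProvider(sdktrace.WithSyncer(exp))
	defer func() { _ = tp.Shutdown(context.Background()) }()

	_, span := tp.Tracer("example").Start(context.Background(), "work")
	span.End() // exported immediately via the simpleSpanProcessor above
}
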

Some files were not shown because too many files have changed in this diff