Compare commits

...

21 Commits

Author SHA1 Message Date
larry-safran 56d1c63802 Bump version to 1.54.1 2023-04-14 15:42:30 -07:00
larry-safran 4a5605cea3 Update README etc to reference 1.54.1 2023-04-14 14:57:17 -07:00
Eric Anderson 4b01c907bc core: Fix NPE race during hedging
The problem was that one hedge was committed before another had drained
start(). This was not testable because HedgingRunnable checks whether
scheduledHedgingRef is cancelled, which is racy, but there's no way to
deterministically trigger either race.

The same problem couldn't be triggered with retries because only one
attempt will be draining at a time. Retries with cancellation also
couldn't trigger it, for the surprising reason that the noop stream used
in cancel() wasn't considered drained.

This commit marks the noop stream as drained with cancel(), which allows
memory to be garbage collected sooner and exposes the race for tests.
That then showed the stream as hanging, because inFlightSubStreams
wasn't being decremented.

Fixes #9185
2023-04-13 10:10:36 -07:00
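
The fix itself is in the RetriableStream diff below; as a self-contained illustration of the accounting problem the message describes (not grpc-java internals), here is a minimal Java sketch showing why a sub-stream must be started with a listener before it is cancelled, so the in-flight count can reach zero.

```java
// Minimal sketch: if a sub-stream is cancelled without ever being started,
// the listener that decrements the in-flight counter never fires and the call
// appears to hang. Starting the stream with a no-op listener before cancelling
// lets the close callback run and the counter reach zero.
import java.util.concurrent.atomic.AtomicInteger;

public class InFlightAccountingSketch {
  interface Listener { void closed(); }

  static class SubStream {
    private Listener listener;
    void start(Listener l) { this.listener = l; }
    void cancel() {
      // Only a started stream has a listener to notify.
      if (listener != null) {
        listener.closed();
      }
    }
  }

  public static void main(String[] args) {
    AtomicInteger inFlight = new AtomicInteger(1);

    SubStream hedge = new SubStream();
    // The fix from the commit above, in miniature: start before cancel so
    // closed() runs and the in-flight count is decremented.
    hedge.start(() -> inFlight.decrementAndGet());
    hedge.cancel();

    System.out.println("in-flight after cancel: " + inFlight.get()); // prints 0
  }
}
```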
DNVindhya 92b4faed40
Pass interop parameters to each lang's run.sh as-is; run.sh should just pass them through to the interop client/server binaries (#10042)
Co-authored-by: Stanley Cheung <stanleycheung@google.com>
2023-04-12 11:56:24 -07:00
DNVindhya 1bf518af12 gcp-o11y: Remove monitored resource detection for logging (#10020)
* Removed populating the monitored resource as k8s_container by default for logging; resource detection is delegated to the Cloud Logging library instead (enabled by default)

* Remove Kubernetes resource detection logic from observability
2023-04-07 07:24:33 -07:00
Vindhya Ningegowda 5369df13a5 Removes the ExperimentalApi annotation from GcpObservability. 2023-04-07 07:24:11 -07:00
DNVindhya 6d21d71a25 Use a glob for example file names used when updating release versions (#9998) 2023-04-04 07:46:06 -07:00
Eric Anderson e955afe50a examples: Fix grpc version in gcp-observability
gcp-observability was missing from RELEASING.md, so it wasn't updated
when we changed the patch release.
2023-04-04 07:45:47 -07:00
yifeizhuang 5c09616aae
xds: fix flaky wrr test (#10005) 2023-04-03 11:27:32 -07:00
Terry Wilson 7d5d25d34e Bump version to 1.54.1-SNAPSHOT 2023-03-23 11:11:14 -07:00
Terry Wilson e988f84d14 Bump version to 1.54.0 2023-03-23 10:53:48 -07:00
Terry Wilson abdb6980ec Update README etc to reference 1.54.0 2023-03-23 10:16:45 -07:00
Stanley Cheung 61ec299352 Remove sleep from Observability Interop Test binary now that it's done in close() (#9977)
After #9972, the `sleep()` is done inside Observability `close()`, so we can remove this `sleep()` from the Observability Interop test binary.
2023-03-23 09:15:10 -07:00
DNVindhya 9f26b7dd08 gcp-o11y: add default custom tag for metrics exporter
This PR adds a default custom tag for metrics, irrespective of custom
tags being present in the observability configuration. 

OpenCensus by default adds a custom tag
[opencensus_task](https://docs.google.com/document/d/1sWC-XD277cM0PXxAhzJKY2X0Uj2W7bVoSv-jvnA0N8Q/edit?resourcekey=0-l-wqh1fctxZXHCUrvZv2BQ#heading=h.xy85j580eik0)
for metrics, which gets overridden if custom tags are set.

The unique custom tag is required to ensure the uniqueness of the
TimeSeries. The format of the default custom tag is
`java-{PID}@{HOSTNAME}`; if `{PID}` is not available, a random number
is used.
2023-03-23 08:15:27 -07:00
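
A minimal sketch of the tag format described above; this is not the gcp-o11y implementation, just an illustration of the `java-{PID}@{HOSTNAME}` convention with the random-number fallback, using only standard JDK APIs.

```java
// Builds a default tag value of the form "java-{PID}@{HOSTNAME}", falling
// back to a random number when the PID cannot be determined.
import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ThreadLocalRandom;

public class DefaultTagSketch {
  static String defaultOpencensusTaskValue() {
    // RuntimeMXBean#getName() conventionally returns "pid@hostname".
    String jvmName = ManagementFactory.getRuntimeMXBean().getName();
    int at = jvmName.indexOf('@');
    if (at > 0) {
      return "java-" + jvmName.substring(0, at) + "@" + jvmName.substring(at + 1);
    }
    // PID unavailable: use a random number instead, as the commit describes.
    String host;
    try {
      host = InetAddress.getLocalHost().getHostName();
    } catch (UnknownHostException e) {
      host = "localhost";
    }
    return "java-" + ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE) + "@" + host;
  }

  public static void main(String[] args) {
    System.out.println(defaultOpencensusTaskValue()); // e.g. java-12345@my-host
  }
}
```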
DNVindhya fefa2d9b16
examples: add gcp-observability examples (v1.54.x backport) (#9987)
* examples: add gcp-observability examples (#9967)
2023-03-22 23:18:49 -07:00
DNVindhya 882a27bcb6 gcp-o11y: add sleep in Observability close()
This commit adds a sleep in `close()` so that metrics and/or traces are
flushed before observability is shut down.

Currently the sleep is set to 2 * [metrics export interval (30 secs)].
2023-03-22 15:08:02 -07:00
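
A minimal sketch of the pattern described above, assuming a 30-second export interval; this is not the GcpObservability implementation, only the wait-before-shutdown idea.

```java
// Wait roughly two export intervals before shutting down so pending metrics
// and traces have a chance to be exported.
import java.util.concurrent.TimeUnit;

public class FlushBeforeCloseSketch implements AutoCloseable {
  private static final long EXPORT_INTERVAL_SECONDS = 30; // assumed export interval

  @Override
  public void close() {
    try {
      // Sleep 2 * export interval so in-flight metrics and traces are flushed.
      TimeUnit.SECONDS.sleep(2 * EXPORT_INTERVAL_SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    // ... then shut down exporters / interceptors here.
  }
}
```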
Vindhya Ningegowda 2e41c9a5cb Disable recording real-time metrics in gcp-o11y 2023-03-22 15:05:58 -07:00
Stanley Cheung 132bf3e573 interop-testing: Do not System.exit(0) from interop client
After #9937 was merged, the Java observability tests started to fail.

The System.exit(0) call in the existing Interop client main() method
prevented execution from continuing in the new combined Observability
Interop test binary. (The new binary calls the old binary's
main() method.)
2023-03-21 16:16:41 -07:00
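
A hypothetical sketch of the failure mode described above (the class names are illustrative, not the actual interop binaries): when the delegated main() calls System.exit(0), nothing after the delegation runs.

```java
// The combined binary calls the legacy binary's main(); an exit(0) inside it
// terminates the JVM before the code after the call can run.
public class CombinedObservabilityMainSketch {
  static class LegacyInteropClient {
    static void main(String[] args) {
      // ... run test cases ...
      // System.exit(0);  // with this line present, nothing below ever executes
    }
  }

  public static void main(String[] args) {
    LegacyInteropClient.main(args);
    // Reached only if the legacy main() returns instead of calling System.exit(0).
    System.out.println("flushing observability data and closing...");
  }
}
```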
DNVindhya 85ce900dfc gcp-observability, census: add trace information to logs (#9963)
This commit adds trace information fields (TraceId, SpanId and TraceSampled)
to LogEntry when both logging and tracing are enabled in
gcp-observability.

For server-side logs, span information is readily available from
Span.getContext(), propagated via `io.grpc.Context`. A similar approach is
not feasible for the client-side architecture.

The client SpanContext, which carries all the information that needs to be
added to logs, is propagated to the logging interceptor via `io.grpc.CallOptions`.
2023-03-21 15:01:21 -07:00
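
A minimal sketch of the propagation mechanism described above. The key mirrors the CLIENT_TRACE_SPAN_CONTEXT_KEY added in this change (visible in the ObservabilityCensusConstants diff below); the logging interceptor here is illustrative, not the actual gcp-observability one.

```java
// Reads the client SpanContext back out of CallOptions so trace information
// can be attached to each LogEntry.
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.MethodDescriptor;
import io.opencensus.trace.SpanContext;

public class SpanContextLoggingInterceptorSketch implements ClientInterceptor {
  static final CallOptions.Key<SpanContext> CLIENT_TRACE_SPAN_CONTEXT_KEY =
      CallOptions.Key.createWithDefault("Client span context for tracing", SpanContext.INVALID);

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    SpanContext clientSpanContext = callOptions.getOption(CLIENT_TRACE_SPAN_CONTEXT_KEY);
    if (clientSpanContext.isValid()) {
      // A real logging interceptor would set TraceId/SpanId/TraceSampled on each LogEntry.
      System.out.println("traceId=" + clientSpanContext.getTraceId()
          + " spanId=" + clientSpanContext.getSpanId()
          + " sampled=" + clientSpanContext.getTraceOptions().isSampled());
    }
    return next.newCall(method, callOptions);
  }
}
```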
DNVindhya bb39ca3ec9 gcp-observability: Update logging fields for GA and use custom BatchingSettings (#9959)
This commit updates the following in the gcp-observability logging schema:
* `payload.status_code` will be of type `google.rpc.Code` instead of `uint32`.
* names in the `Address.TYPE` enum

Use custom batching settings for [LoggingOptions](https://javadoc.io/doc/com.google.cloud/google-cloud-logging/latest/com/google/cloud/logging/LoggingOptions.html)

Note: Upgraded `com.google.cloud:google-cloud-logging` from `3.6.1` to `3.14.5`.
2023-03-21 08:40:02 -07:00
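
A hedged sketch of the batching idea described above. BatchingSettings comes from GAX; the exact builder method for attaching it to LoggingOptions is an assumption here, not a confirmed google-cloud-logging API, and is left commented out.

```java
// Builds custom batching thresholds; the values are illustrative only.
import com.google.api.gax.batching.BatchingSettings;
import org.threeten.bp.Duration;

public class LoggingBatchingSketch {
  public static void main(String[] args) {
    BatchingSettings batchingSettings =
        BatchingSettings.newBuilder()
            .setElementCountThreshold(1000L)          // flush after this many entries
            .setRequestByteThreshold(1024L * 1024L)   // or after ~1 MiB of pending data
            .setDelayThreshold(Duration.ofSeconds(5)) // or after 5 seconds, whichever comes first
            .build();

    // Attaching these settings to LoggingOptions is the part this change adds;
    // the setter below is assumed for illustration, so it is not compiled here:
    // LoggingOptions options = LoggingOptions.newBuilder()
    //     .setBatchingSettings(batchingSettings)
    //     .build();
    System.out.println(batchingSettings);
  }
}
```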
DNVindhya 051e3971de census: Trace annotation for reporting inbound message sizes (#9944)
This commit uses [OpenCensus Annotation][] to report message size
[bytes] for inbound/received messages in traces.

The `addMessageEvent` API currently used expects both the uncompressed
and the (optional) compressed message sizes to be reported at the same
time. Since decompression happens at a later point, reporting the
compressed size as-is with the uncompressed size as `-1` renders the
size as _0 bytes received_ in the Cloud Trace front end.

As a workaround, we add _two annotations for each received message_:
* For compressed message size
* For uncompressed message size (when it is available)

This commit also removes `addMessageEvents`, a flag introduced in
PR #9485 to temporarily suppress message events for gcp-observability.

[OpenCensus Annotation]: https://www.javadoc.io/static/io.opencensus/opencensus-api/0.31.0/io/opencensus/trace/Annotation.html
2023-03-13 15:45:17 -07:00
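
A minimal sketch of the workaround described above: two annotations per received message, one for the compressed size and one for the uncompressed size once it is known. The spans and helper are placeholders, not the CensusTracingModule code shown in the diff below.

```java
// Attaches an OpenCensus annotation with "id" and "type" attributes for a
// received message's size.
import io.opencensus.trace.AttributeValue;
import io.opencensus.trace.BlankSpan;
import io.opencensus.trace.Span;
import java.util.HashMap;
import java.util.Map;

public class ReceivedMessageAnnotationSketch {
  static void annotateReceived(Span span, int seqNo, boolean isCompressed, long sizeBytes) {
    Map<String, AttributeValue> attributes = new HashMap<>();
    attributes.put("id", AttributeValue.longAttributeValue(seqNo));
    attributes.put("type",
        AttributeValue.stringAttributeValue(isCompressed ? "compressed" : "uncompressed"));
    span.addAnnotation(sizeBytes + " bytes received", attributes);
  }

  public static void main(String[] args) {
    Span span = BlankSpan.INSTANCE; // stand-in; a real tracer would supply spans
    annotateReceived(span, 0, true, 255);  // compressed size, known immediately
    annotateReceived(span, 0, false, 90);  // uncompressed size, known after decompression
  }
}
```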
66 changed files with 1463 additions and 813 deletions

View File

@ -44,8 +44,8 @@ For a guided tour, take a look at the [quick start
guide](https://grpc.io/docs/languages/java/quickstart) or the more explanatory [gRPC
basics](https://grpc.io/docs/languages/java/basics).
The [examples](https://github.com/grpc/grpc-java/tree/v1.52.1/examples) and the
[Android example](https://github.com/grpc/grpc-java/tree/v1.52.1/examples/android)
The [examples](https://github.com/grpc/grpc-java/tree/v1.54.1/examples) and the
[Android example](https://github.com/grpc/grpc-java/tree/v1.54.1/examples/android)
are standalone projects that showcase the usage of gRPC.
Download
@ -56,18 +56,18 @@ Download [the JARs][]. Or for Maven with non-Android, add to your `pom.xml`:
<dependency>
<groupId>io.grpc</groupId>
<artifactId>grpc-netty-shaded</artifactId>
<version>1.52.1</version>
<version>1.54.1</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.grpc</groupId>
<artifactId>grpc-protobuf</artifactId>
<version>1.52.1</version>
<version>1.54.1</version>
</dependency>
<dependency>
<groupId>io.grpc</groupId>
<artifactId>grpc-stub</artifactId>
<version>1.52.1</version>
<version>1.54.1</version>
</dependency>
<dependency> <!-- necessary for Java 9+ -->
<groupId>org.apache.tomcat</groupId>
@ -79,23 +79,23 @@ Download [the JARs][]. Or for Maven with non-Android, add to your `pom.xml`:
Or for Gradle with non-Android, add to your dependencies:
```gradle
runtimeOnly 'io.grpc:grpc-netty-shaded:1.52.1'
implementation 'io.grpc:grpc-protobuf:1.52.1'
implementation 'io.grpc:grpc-stub:1.52.1'
runtimeOnly 'io.grpc:grpc-netty-shaded:1.54.1'
implementation 'io.grpc:grpc-protobuf:1.54.1'
implementation 'io.grpc:grpc-stub:1.54.1'
compileOnly 'org.apache.tomcat:annotations-api:6.0.53' // necessary for Java 9+
```
For Android client, use `grpc-okhttp` instead of `grpc-netty-shaded` and
`grpc-protobuf-lite` instead of `grpc-protobuf`:
```gradle
implementation 'io.grpc:grpc-okhttp:1.52.1'
implementation 'io.grpc:grpc-protobuf-lite:1.52.1'
implementation 'io.grpc:grpc-stub:1.52.1'
implementation 'io.grpc:grpc-okhttp:1.54.1'
implementation 'io.grpc:grpc-protobuf-lite:1.54.1'
implementation 'io.grpc:grpc-stub:1.54.1'
compileOnly 'org.apache.tomcat:annotations-api:6.0.53' // necessary for Java 9+
```
[the JARs]:
https://search.maven.org/search?q=g:io.grpc%20AND%20v:1.52.1
https://search.maven.org/search?q=g:io.grpc%20AND%20v:1.54.1
Development snapshots are available in [Sonatypes's snapshot
repository](https://oss.sonatype.org/content/repositories/snapshots/).
@ -127,7 +127,7 @@ For protobuf-based codegen integrated with the Maven build system, you can use
<configuration>
<protocArtifact>com.google.protobuf:protoc:3.21.7:exe:${os.detected.classifier}</protocArtifact>
<pluginId>grpc-java</pluginId>
<pluginArtifact>io.grpc:protoc-gen-grpc-java:1.52.1:exe:${os.detected.classifier}</pluginArtifact>
<pluginArtifact>io.grpc:protoc-gen-grpc-java:1.54.1:exe:${os.detected.classifier}</pluginArtifact>
</configuration>
<executions>
<execution>
@ -157,7 +157,7 @@ protobuf {
}
plugins {
grpc {
artifact = 'io.grpc:protoc-gen-grpc-java:1.52.1'
artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1'
}
}
generateProtoTasks {
@ -190,7 +190,7 @@ protobuf {
}
plugins {
grpc {
artifact = 'io.grpc:protoc-gen-grpc-java:1.52.1'
artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1'
}
}
generateProtoTasks {

View File

@ -26,18 +26,8 @@ $ VERSION_FILES=(
examples/android/helloworld/app/build.gradle
examples/android/routeguide/app/build.gradle
examples/android/strictmode/app/build.gradle
examples/example-alts/build.gradle
examples/example-gauth/build.gradle
examples/example-gauth/pom.xml
examples/example-jwt-auth/build.gradle
examples/example-jwt-auth/pom.xml
examples/example-hostname/build.gradle
examples/example-hostname/pom.xml
examples/example-servlet/build.gradle
examples/example-tls/build.gradle
examples/example-tls/pom.xml
examples/example-xds/build.gradle
examples/example-orca/build.gradle
examples/example-*/build.gradle
examples/example-*/pom.xml
)
```

View File

@ -20,7 +20,7 @@ subprojects {
apply plugin: "net.ltgt.errorprone"
group = "io.grpc"
version = "1.54.0-SNAPSHOT" // CURRENT_GRPC_VERSION
version = "1.54.1" // CURRENT_GRPC_VERSION
repositories {
maven { // The google mirror is less flaky than mavenCentral()

View File

@ -16,29 +16,14 @@
set -ex
cd "$(dirname "$0")"/../..
# TODO(stanleycheung): replace positional parameters with explicit parameters
#
# $1: server | client
#
# For server: $2: server_port
#
# For client: $2: server_host
# $3: server_port
# $4: test_case
# $5: num_times
if [ "$1" = "server" ] ; then
/grpc-java/bin/gcp-observability-interop \
server --use_tls=false \
--port=$2
/grpc-java/bin/gcp-observability-interop server --use_tls=false "${@:2}"
elif [ "$1" = "client" ] ; then
/grpc-java/bin/gcp-observability-interop \
client --use_tls=false \
--server_host=$2 --server_port=$3 \
--test_case=$4 --num_times=$5
/grpc-java/bin/gcp-observability-interop client --use_tls=false "${@:2}"
else
echo "Invalid action: $1"
echo "Invalid action: $1. Usage:"
echo " $ .../run.sh [server|client] --server_host=<hostname> --server_port=<port> ..."
exit 1
fi

View File

@ -17,6 +17,7 @@
package io.grpc.census;
import static com.google.common.base.Preconditions.checkNotNull;
import static io.grpc.census.internal.ObservabilityCensusConstants.CLIENT_TRACE_SPAN_CONTEXT_KEY;
import com.google.common.annotations.VisibleForTesting;
import io.grpc.Attributes;
@ -41,6 +42,9 @@ import io.opencensus.trace.SpanContext;
import io.opencensus.trace.Status;
import io.opencensus.trace.Tracer;
import io.opencensus.trace.propagation.BinaryFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.logging.Level;
import java.util.logging.Logger;
@ -92,12 +96,9 @@ final class CensusTracingModule {
final Metadata.Key<SpanContext> tracingHeader;
private final TracingClientInterceptor clientInterceptor = new TracingClientInterceptor();
private final ServerTracerFactory serverTracerFactory = new ServerTracerFactory();
private final boolean addMessageEvents;
CensusTracingModule(
Tracer censusTracer,
final BinaryFormat censusPropagationBinaryFormat,
boolean addMessageEvents) {
Tracer censusTracer, final BinaryFormat censusPropagationBinaryFormat) {
this.censusTracer = checkNotNull(censusTracer, "censusTracer");
checkNotNull(censusPropagationBinaryFormat, "censusPropagationBinaryFormat");
this.tracingHeader =
@ -117,7 +118,6 @@ final class CensusTracingModule {
}
}
});
this.addMessageEvents = addMessageEvents;
}
/**
@ -125,8 +125,8 @@ final class CensusTracingModule {
*/
@VisibleForTesting
CallAttemptsTracerFactory newClientCallTracer(
@Nullable Span parentSpan, MethodDescriptor<?, ?> method) {
return new CallAttemptsTracerFactory(parentSpan, method);
@Nullable Span clientSpan, MethodDescriptor<?, ?> method) {
return new CallAttemptsTracerFactory(clientSpan, method);
}
/**
@ -218,9 +218,6 @@ final class CensusTracingModule {
private void recordMessageEvent(
Span span, MessageEvent.Type type,
int seqNo, long optionalWireSize, long optionalUncompressedSize) {
if (!addMessageEvents) {
return;
}
MessageEvent.Builder eventBuilder = MessageEvent.builder(type, seqNo);
if (optionalUncompressedSize != -1) {
eventBuilder.setUncompressedMessageSize(optionalUncompressedSize);
@ -231,6 +228,19 @@ final class CensusTracingModule {
span.addMessageEvent(eventBuilder.build());
}
private void recordAnnotation(
Span span, MessageEvent.Type type, int seqNo, boolean isCompressed, long size) {
String messageType = isCompressed ? "compressed" : "uncompressed";
Map<String, AttributeValue> attributes = new HashMap<>();
attributes.put("id", AttributeValue.longAttributeValue(seqNo));
attributes.put("type", AttributeValue.stringAttributeValue(messageType));
String messageDirection = type == MessageEvent.Type.SENT ? "↗ " : "↘ ";
String inlineDescription =
messageDirection + size + " bytes " + type.name().toLowerCase(Locale.US);
span.addAnnotation(inlineDescription, attributes);
}
@VisibleForTesting
final class CallAttemptsTracerFactory extends ClientStreamTracer.Factory {
volatile int callEnded;
@ -239,17 +249,11 @@ final class CensusTracingModule {
private final Span span;
private final String fullMethodName;
CallAttemptsTracerFactory(@Nullable Span parentSpan, MethodDescriptor<?, ?> method) {
CallAttemptsTracerFactory(@Nullable Span clientSpan, MethodDescriptor<?, ?> method) {
checkNotNull(method, "method");
this.isSampledToLocalTracing = method.isSampledToLocalTracing();
this.fullMethodName = method.getFullMethodName();
this.span =
censusTracer
.spanBuilderWithExplicitParent(
generateTraceSpanName(false, fullMethodName),
parentSpan)
.setRecordEvents(true)
.startSpan();
this.span = clientSpan;
}
@Override
@ -265,7 +269,7 @@ final class CensusTracingModule {
"previous-rpc-attempts", AttributeValue.longAttributeValue(info.getPreviousAttempts()));
attemptSpan.putAttribute(
"transparent-retry", AttributeValue.booleanAttributeValue(info.isTransparentRetry()));
return new ClientTracer(attemptSpan, tracingHeader, isSampledToLocalTracing);
return new ClientTracer(attemptSpan, span, tracingHeader, isSampledToLocalTracing);
}
/**
@ -291,12 +295,16 @@ final class CensusTracingModule {
private final class ClientTracer extends ClientStreamTracer {
private final Span span;
private final Span parentSpan;
final Metadata.Key<SpanContext> tracingHeader;
final boolean isSampledToLocalTracing;
volatile int seqNo;
ClientTracer(
Span span, Metadata.Key<SpanContext> tracingHeader, boolean isSampledToLocalTracing) {
Span span, Span parentSpan, Metadata.Key<SpanContext> tracingHeader,
boolean isSampledToLocalTracing) {
this.span = checkNotNull(span, "span");
this.parentSpan = checkNotNull(parentSpan, "parent span");
this.tracingHeader = tracingHeader;
this.isSampledToLocalTracing = isSampledToLocalTracing;
}
@ -319,8 +327,19 @@ final class CensusTracingModule {
@Override
public void inboundMessageRead(
int seqNo, long optionalWireSize, long optionalUncompressedSize) {
recordMessageEvent(
span, MessageEvent.Type.RECEIVED, seqNo, optionalWireSize, optionalUncompressedSize);
recordAnnotation(
span, MessageEvent.Type.RECEIVED, seqNo, true, optionalWireSize);
}
@Override
public void inboundMessage(int seqNo) {
this.seqNo = seqNo;
}
@Override
public void inboundUncompressedSize(long bytes) {
recordAnnotation(
parentSpan, MessageEvent.Type.RECEIVED, seqNo, false, bytes);
}
@Override
@ -334,6 +353,7 @@ final class CensusTracingModule {
private final Span span;
volatile boolean isSampledToLocalTracing;
volatile int streamClosed;
private int seqNo;
ServerTracer(String fullMethodName, @Nullable SpanContext remoteSpan) {
checkNotNull(fullMethodName, "fullMethodName");
@ -396,8 +416,19 @@ final class CensusTracingModule {
@Override
public void inboundMessageRead(
int seqNo, long optionalWireSize, long optionalUncompressedSize) {
recordMessageEvent(
span, MessageEvent.Type.RECEIVED, seqNo, optionalWireSize, optionalUncompressedSize);
recordAnnotation(
span, MessageEvent.Type.RECEIVED, seqNo, true, optionalWireSize);
}
@Override
public void inboundMessage(int seqNo) {
this.seqNo = seqNo;
}
@Override
public void inboundUncompressedSize(long bytes) {
recordAnnotation(
span, MessageEvent.Type.RECEIVED, seqNo, false, bytes);
}
}
@ -425,13 +456,20 @@ final class CensusTracingModule {
// Safe usage of the unsafe trace API because CONTEXT_SPAN_KEY.get() returns the same value
// as Tracer.getCurrentSpan() except when no value available when the return value is null
// for the direct access and BlankSpan when Tracer API is used.
final CallAttemptsTracerFactory tracerFactory =
newClientCallTracer(
io.opencensus.trace.unsafe.ContextUtils.getValue(Context.current()), method);
Span parentSpan = io.opencensus.trace.unsafe.ContextUtils.getValue(Context.current());
Span clientSpan = censusTracer
.spanBuilderWithExplicitParent(
generateTraceSpanName(false, method.getFullMethodName()),
parentSpan)
.setRecordEvents(true)
.startSpan();
final CallAttemptsTracerFactory tracerFactory = newClientCallTracer(clientSpan, method);
ClientCall<ReqT, RespT> call =
next.newCall(
method,
callOptions.withStreamTracerFactory(tracerFactory));
callOptions.withStreamTracerFactory(tracerFactory)
.withOption(CLIENT_TRACE_SPAN_CONTEXT_KEY, clientSpan.getContext()));
return new SimpleForwardingClientCall<ReqT, RespT>(call) {
@Override
public void start(Listener<RespT> responseListener, Metadata headers) {

View File

@ -36,42 +36,21 @@ public final class InternalCensusTracingAccessor {
* Returns a {@link ClientInterceptor} with default tracing implementation.
*/
public static ClientInterceptor getClientInterceptor() {
return getClientInterceptor(true);
}
/**
* Returns the client interceptor that facilitates Census-based stats reporting.
*
* @param addMessageEvents add message events to Spans
* @return a {@link ClientInterceptor} with default tracing implementation.
*/
public static ClientInterceptor getClientInterceptor(
boolean addMessageEvents) {
CensusTracingModule censusTracing =
new CensusTracingModule(
Tracing.getTracer(),
Tracing.getPropagationComponent().getBinaryFormat(),
addMessageEvents);
Tracing.getPropagationComponent().getBinaryFormat());
return censusTracing.getClientInterceptor();
}
/**
* Returns a {@link ServerStreamTracer.Factory} with default stats implementation.
* Returns a {@link ServerStreamTracer.Factory} with default tracing implementation.
*/
public static ServerStreamTracer.Factory getServerStreamTracerFactory() {
return getServerStreamTracerFactory(true);
}
/**
* Returns a {@link ServerStreamTracer.Factory} with default stats implementation.
*/
public static ServerStreamTracer.Factory getServerStreamTracerFactory(
boolean addMessageEvents) {
CensusTracingModule censusTracing =
new CensusTracingModule(
Tracing.getTracer(),
Tracing.getPropagationComponent().getBinaryFormat(),
addMessageEvents);
Tracing.getPropagationComponent().getBinaryFormat());
return censusTracing.getServerTracerFactory();
}
}

View File

@ -26,11 +26,13 @@ import static io.opencensus.contrib.grpc.metrics.RpcMeasureConstants.GRPC_SERVER
import static io.opencensus.contrib.grpc.metrics.RpcMeasureConstants.GRPC_SERVER_STATUS;
import com.google.common.annotations.VisibleForTesting;
import io.grpc.CallOptions;
import io.opencensus.contrib.grpc.metrics.RpcViewConstants;
import io.opencensus.stats.Aggregation;
import io.opencensus.stats.Measure;
import io.opencensus.stats.Measure.MeasureDouble;
import io.opencensus.stats.View;
import io.opencensus.trace.SpanContext;
import java.util.Arrays;
// TODO(dnvindhya): Remove metric and view definitions from this class once it is moved to
@ -42,6 +44,9 @@ import java.util.Arrays;
@VisibleForTesting
public final class ObservabilityCensusConstants {
public static CallOptions.Key<SpanContext> CLIENT_TRACE_SPAN_CONTEXT_KEY
= CallOptions.Key.createWithDefault("Client span context for tracing", SpanContext.INVALID);
static final Aggregation AGGREGATION_WITH_BYTES_HISTOGRAM =
RpcViewConstants.GRPC_CLIENT_SENT_BYTES_PER_RPC_VIEW.getAggregation();

View File

@ -22,6 +22,7 @@ import static io.grpc.census.CensusStatsModule.CallAttemptsTracerFactory.RETRIES
import static io.grpc.census.CensusStatsModule.CallAttemptsTracerFactory.RETRY_DELAY_PER_CALL;
import static io.grpc.census.CensusStatsModule.CallAttemptsTracerFactory.TRANSPARENT_RETRIES_PER_CALL;
import static io.grpc.census.internal.ObservabilityCensusConstants.API_LATENCY_PER_CALL;
import static io.grpc.census.internal.ObservabilityCensusConstants.CLIENT_TRACE_SPAN_CONTEXT_KEY;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
@ -100,6 +101,7 @@ import io.opencensus.trace.propagation.SpanContextParseException;
import java.io.InputStream;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
@ -204,6 +206,8 @@ public class CensusModulesTest {
private ArgumentCaptor<Status> statusCaptor;
@Captor
private ArgumentCaptor<MessageEvent> messageEventCaptor;
@Captor
private ArgumentCaptor<Map<String, AttributeValue>> annotationAttributesCaptor;
private CensusStatsModule censusStats;
private CensusTracingModule censusTracing;
@ -229,7 +233,7 @@ public class CensusModulesTest {
new CensusStatsModule(
tagger, tagCtxSerializer, statsRecorder, fakeClock.getStopwatchSupplier(),
true, true, true, false /* real-time */, true);
censusTracing = new CensusTracingModule(tracer, mockTracingPropagationHandler, true);
censusTracing = new CensusTracingModule(tracer, mockTracingPropagationHandler);
}
@After
@ -314,6 +318,10 @@ public class CensusModulesTest {
capturedCallOptions.get().getStreamTracerFactories().get(1)
instanceof CensusStatsModule.CallAttemptsTracerFactory);
// The interceptor adds client SpanContext to CallOptions
assertTrue(capturedCallOptions.get().getOption(CLIENT_TRACE_SPAN_CONTEXT_KEY).isValid());
assertTrue(capturedCallOptions.get().getOption(CLIENT_TRACE_SPAN_CONTEXT_KEY) != null);
// Make the call
Metadata headers = new Metadata();
call.start(mockClientCallListener, headers);
@ -735,12 +743,10 @@ public class CensusModulesTest {
@Test
public void clientBasicTracingDefaultSpan() {
CallAttemptsTracerFactory callTracer =
censusTracing.newClientCallTracer(null, method);
censusTracing.newClientCallTracer(spyClientSpan, method);
Metadata headers = new Metadata();
ClientStreamTracer clientStreamTracer = callTracer.newClientStreamTracer(STREAM_INFO, headers);
clientStreamTracer.streamCreated(Attributes.EMPTY, headers);
verify(tracer).spanBuilderWithExplicitParent(
eq("Sent.package1.service2.method3"), ArgumentMatchers.<Span>isNull());
verify(tracer).spanBuilderWithExplicitParent(
eq("Attempt.package1.service2.method3"), eq(spyClientSpan));
verify(spyClientSpan, never()).end(any(EndSpanOptions.class));
@ -761,7 +767,7 @@ public class CensusModulesTest {
.putAttribute("previous-rpc-attempts", AttributeValue.longAttributeValue(0));
inOrder.verify(spyAttemptSpan)
.putAttribute("transparent-retry", AttributeValue.booleanAttributeValue(false));
inOrder.verify(spyAttemptSpan, times(3)).addMessageEvent(messageEventCaptor.capture());
inOrder.verify(spyAttemptSpan, times(2)).addMessageEvent(messageEventCaptor.capture());
List<MessageEvent> events = messageEventCaptor.getAllValues();
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 0).setCompressedMessageSize(882).build(),
@ -769,12 +775,14 @@ public class CensusModulesTest {
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 1).setUncompressedMessageSize(27).build(),
events.get(1));
assertEquals(
MessageEvent.builder(MessageEvent.Type.RECEIVED, 0)
.setCompressedMessageSize(255)
.setUncompressedMessageSize(90)
.build(),
events.get(2));
ArgumentCaptor<String> stringCaptor = ArgumentCaptor.forClass(String.class);
inOrder.verify(spyAttemptSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 255 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("compressed"));
inOrder.verify(spyAttemptSpan).end(
EndSpanOptions.builder()
.setStatus(io.opencensus.trace.Status.OK)
@ -792,7 +800,7 @@ public class CensusModulesTest {
@Test
public void clientTracingSampledToLocalSpanStore() {
CallAttemptsTracerFactory callTracer =
censusTracing.newClientCallTracer(null, sampledMethod);
censusTracing.newClientCallTracer(spyClientSpan, sampledMethod);
callTracer.callEnded(Status.OK);
verify(spyClientSpan).end(
@ -862,10 +870,7 @@ public class CensusModulesTest {
@Test
public void clientStreamNeverCreatedStillRecordTracing() {
CallAttemptsTracerFactory callTracer =
censusTracing.newClientCallTracer(fakeClientParentSpan, method);
verify(tracer).spanBuilderWithExplicitParent(
eq("Sent.package1.service2.method3"), same(fakeClientParentSpan));
verify(spyClientSpanBuilder).setRecordEvents(eq(true));
censusTracing.newClientCallTracer(spyClientSpan, method);
callTracer.callEnded(Status.DEADLINE_EXCEEDED.withDescription("3 seconds"));
verify(spyClientSpan).end(
@ -1041,18 +1046,15 @@ public class CensusModulesTest {
@Test
public void traceHeadersPropagateSpanContext() throws Exception {
CallAttemptsTracerFactory callTracer =
censusTracing.newClientCallTracer(fakeClientParentSpan, method);
censusTracing.newClientCallTracer(spyClientSpan, method);
Metadata headers = new Metadata();
ClientStreamTracer streamTracer = callTracer.newClientStreamTracer(STREAM_INFO, headers);
streamTracer.streamCreated(Attributes.EMPTY, headers);
verify(mockTracingPropagationHandler).toByteArray(same(fakeAttemptSpanContext));
verifyNoMoreInteractions(mockTracingPropagationHandler);
verify(tracer).spanBuilderWithExplicitParent(
eq("Sent.package1.service2.method3"), same(fakeClientParentSpan));
verify(tracer).spanBuilderWithExplicitParent(
eq("Attempt.package1.service2.method3"), same(spyClientSpan));
verify(spyClientSpanBuilder).setRecordEvents(eq(true));
verifyNoMoreInteractions(tracer);
assertTrue(headers.containsKey(censusTracing.tracingHeader));
@ -1310,7 +1312,7 @@ public class CensusModulesTest {
serverStreamTracer.streamClosed(Status.CANCELLED);
InOrder inOrder = inOrder(spyServerSpan);
inOrder.verify(spyServerSpan, times(3)).addMessageEvent(messageEventCaptor.capture());
inOrder.verify(spyServerSpan, times(2)).addMessageEvent(messageEventCaptor.capture());
List<MessageEvent> events = messageEventCaptor.getAllValues();
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 0).setCompressedMessageSize(882).build(),
@ -1318,12 +1320,16 @@ public class CensusModulesTest {
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 1).setUncompressedMessageSize(27).build(),
events.get(1));
assertEquals(
MessageEvent.builder(MessageEvent.Type.RECEIVED, 0)
.setCompressedMessageSize(255)
.setUncompressedMessageSize(90)
.build(),
events.get(2));
ArgumentCaptor<String> stringCaptor = ArgumentCaptor.forClass(String.class);
inOrder.verify(spyServerSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 255 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("compressed"));
inOrder.verify(spyServerSpan).end(
EndSpanOptions.builder()
.setStatus(io.opencensus.trace.Status.CANCELLED)

View File

@ -16,6 +16,8 @@
package io.grpc.census;
import static com.google.common.truth.Truth.assertThat;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
@ -35,6 +37,7 @@ import io.grpc.census.CensusTracingModule.CallAttemptsTracerFactory;
import io.grpc.internal.testing.StatsTestUtils.FakeStatsRecorder;
import io.grpc.internal.testing.StatsTestUtils.MockableSpan;
import io.grpc.testing.GrpcServerRule;
import io.opencensus.trace.AttributeValue;
import io.opencensus.trace.MessageEvent;
import io.opencensus.trace.Span;
import io.opencensus.trace.SpanBuilder;
@ -42,6 +45,8 @@ import io.opencensus.trace.SpanContext;
import io.opencensus.trace.Tracer;
import io.opencensus.trace.propagation.BinaryFormat;
import java.io.InputStream;
import java.util.List;
import java.util.Map;
import java.util.Random;
import org.junit.After;
import org.junit.Before;
@ -61,7 +66,7 @@ import org.mockito.junit.MockitoRule;
* Test for {@link CensusTracingModule}.
*/
@RunWith(JUnit4.class)
public class CensusTracingNoMessageEventTest {
public class CensusTracingAnnotationEventTest {
private static final ClientStreamTracer.StreamInfo STREAM_INFO =
ClientStreamTracer.StreamInfo.newBuilder().build();
@ -125,6 +130,9 @@ public class CensusTracingNoMessageEventTest {
@Captor
private ArgumentCaptor<MessageEvent> messageEventCaptor;
@Captor
private ArgumentCaptor<Map<String, AttributeValue>> annotationAttributesCaptor;
private ArgumentCaptor<String> stringCaptor;
private CensusTracingModule censusTracing;
@ -145,8 +153,9 @@ public class CensusTracingNoMessageEventTest {
.thenReturn(binarySpanContext);
when(mockTracingPropagationHandler.fromByteArray(any(byte[].class)))
.thenReturn(fakeAttemptSpanContext);
stringCaptor = ArgumentCaptor.forClass(String.class);
censusTracing = new CensusTracingModule(tracer, mockTracingPropagationHandler, false);
censusTracing = new CensusTracingModule(tracer, mockTracingPropagationHandler);
}
@After
@ -155,9 +164,9 @@ public class CensusTracingNoMessageEventTest {
}
@Test
public void clientBasicTracingNoMessageEvents() {
public void clientBasicTracingUncompressedSizeAnnotation() {
CallAttemptsTracerFactory callTracer =
censusTracing.newClientCallTracer(null, method);
censusTracing.newClientCallTracer(spyClientSpan, method);
Metadata headers = new Metadata();
ClientStreamTracer clientStreamTracer = callTracer.newClientStreamTracer(STREAM_INFO, headers);
clientStreamTracer.streamCreated(Attributes.EMPTY, headers);
@ -168,16 +177,63 @@ public class CensusTracingNoMessageEventTest {
clientStreamTracer.outboundMessage(1);
clientStreamTracer.outboundMessageSent(1, -1, 27);
clientStreamTracer.inboundMessageRead(0, 255, 90);
clientStreamTracer.inboundUncompressedSize(90);
clientStreamTracer.inboundMessage(1);
clientStreamTracer.inboundMessageRead(1, 128, 60);
clientStreamTracer.inboundUncompressedSize(60);
clientStreamTracer.streamClosed(Status.OK);
callTracer.callEnded(Status.OK);
InOrder inOrder = inOrder(spyClientSpan, spyAttemptSpan);
inOrder.verify(spyAttemptSpan, times(0)).addMessageEvent(messageEventCaptor.capture());
inOrder.verify(spyAttemptSpan, times(2)).addMessageEvent(messageEventCaptor.capture());
List<MessageEvent> events = messageEventCaptor.getAllValues();
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 0).setCompressedMessageSize(882).build(),
events.get(0));
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 1).setUncompressedMessageSize(27).build(),
events.get(1));
inOrder
.verify(spyAttemptSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 255 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("compressed"));
inOrder
.verify(spyClientSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 90 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("uncompressed"));
inOrder
.verify(spyAttemptSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 128 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(1));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("compressed"));
inOrder
.verify(spyClientSpan, times(1))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
assertEquals("↘ 60 bytes received", stringCaptor.getValue());
assertThat(annotationAttributesCaptor.getValue().get("id"))
.isEqualTo(AttributeValue.longAttributeValue(1));
assertThat(annotationAttributesCaptor.getValue().get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("uncompressed"));
}
@Test
public void serverBasicTracingNoMessageEvents() {
public void serverBasicTracingUncompressedSizeAnnotation() {
ServerStreamTracer.Factory tracerFactory = censusTracing.getServerTracerFactory();
ServerStreamTracer serverStreamTracer =
tracerFactory.newServerStreamTracer(method.getFullMethodName(), new Metadata());
@ -191,10 +247,37 @@ public class CensusTracingNoMessageEventTest {
serverStreamTracer.outboundMessage(1);
serverStreamTracer.outboundMessageSent(1, -1, 27);
serverStreamTracer.inboundMessageRead(0, 255, 90);
serverStreamTracer.inboundUncompressedSize(90);
serverStreamTracer.streamClosed(Status.CANCELLED);
InOrder inOrder = inOrder(spyServerSpan);
inOrder.verify(spyServerSpan, times(0)).addMessageEvent(messageEventCaptor.capture());
inOrder.verify(spyServerSpan, times(2)).addMessageEvent(messageEventCaptor.capture());
List<MessageEvent> events = messageEventCaptor.getAllValues();
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 0).setCompressedMessageSize(882).build(),
events.get(0));
assertEquals(
MessageEvent.builder(MessageEvent.Type.SENT, 1).setUncompressedMessageSize(27).build(),
events.get(1));
inOrder
.verify(spyServerSpan, times(2))
.addAnnotation(stringCaptor.capture(), annotationAttributesCaptor.capture());
List<String> annotationDescriptions = stringCaptor.getAllValues();
List<Map<String, AttributeValue>> annotationAttributes =
annotationAttributesCaptor.getAllValues();
assertEquals("↘ 255 bytes received", annotationDescriptions.get(0));
assertThat(annotationAttributes.get(0).get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributes.get(0).get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("compressed"));
assertEquals("↘ 90 bytes received", annotationDescriptions.get(1));
assertThat(annotationAttributes.get(1).get("id"))
.isEqualTo(AttributeValue.longAttributeValue(0));
assertThat(annotationAttributes.get(1).get("type"))
.isEqualTo(AttributeValue.stringAttributeValue("uncompressed"));
}
}

View File

@ -8,7 +8,7 @@ import static io.grpc.MethodDescriptor.generateFullMethodName;
* </pre>
*/
@javax.annotation.Generated(
value = "by gRPC proto compiler (version 1.54.0-SNAPSHOT)",
value = "by gRPC proto compiler (version 1.54.1)",
comments = "Source: grpc/testing/compiler/test.proto")
@io.grpc.stub.annotations.GrpcGenerated
@java.lang.Deprecated

View File

@ -8,7 +8,7 @@ import static io.grpc.MethodDescriptor.generateFullMethodName;
* </pre>
*/
@javax.annotation.Generated(
value = "by gRPC proto compiler (version 1.54.0-SNAPSHOT)",
value = "by gRPC proto compiler (version 1.54.1)",
comments = "Source: grpc/testing/compiler/test.proto")
@io.grpc.stub.annotations.GrpcGenerated
public final class TestServiceGrpc {

View File

@ -8,7 +8,7 @@ import static io.grpc.MethodDescriptor.generateFullMethodName;
* </pre>
*/
@javax.annotation.Generated(
value = "by gRPC proto compiler (version 1.54.0-SNAPSHOT)",
value = "by gRPC proto compiler (version 1.54.1)",
comments = "Source: grpc/testing/compiler/test.proto")
@io.grpc.stub.annotations.GrpcGenerated
@java.lang.Deprecated

View File

@ -8,7 +8,7 @@ import static io.grpc.MethodDescriptor.generateFullMethodName;
* </pre>
*/
@javax.annotation.Generated(
value = "by gRPC proto compiler (version 1.54.0-SNAPSHOT)",
value = "by gRPC proto compiler (version 1.54.1)",
comments = "Source: grpc/testing/compiler/test.proto")
@io.grpc.stub.annotations.GrpcGenerated
public final class TestServiceGrpc {

View File

@ -217,7 +217,7 @@ public final class GrpcUtil {
public static final Splitter ACCEPT_ENCODING_SPLITTER = Splitter.on(',').trimResults();
private static final String IMPLEMENTATION_VERSION = "1.54.0-SNAPSHOT"; // CURRENT_GRPC_VERSION
private static final String IMPLEMENTATION_VERSION = "1.54.1"; // CURRENT_GRPC_VERSION
/**
* The default timeout in nanos for a keepalive ping request.

View File

@ -282,14 +282,12 @@ abstract class RetriableStream<ReqT> implements ClientStream {
synchronized (lock) {
savedState = state;
if (streamStarted) {
if (savedState.winningSubstream != null && savedState.winningSubstream != substream) {
// committed but not me, to be cancelled
break;
}
if (savedState.cancelled) {
break;
}
if (savedState.winningSubstream != null && savedState.winningSubstream != substream) {
// committed but not me, to be cancelled
break;
}
if (savedState.cancelled) {
break;
}
if (index == savedState.buffer.size()) { // I'm drained
state = savedState.substreamDrained(substream);
@ -326,15 +324,13 @@ abstract class RetriableStream<ReqT> implements ClientStream {
if (bufferEntry instanceof RetriableStream.StartEntry) {
streamStarted = true;
}
if (streamStarted) {
savedState = state;
if (savedState.winningSubstream != null && savedState.winningSubstream != substream) {
// committed but not me, to be cancelled
break;
}
if (savedState.cancelled) {
break;
}
savedState = state;
if (savedState.winningSubstream != null && savedState.winningSubstream != substream) {
// committed but not me, to be cancelled
break;
}
if (savedState.cancelled) {
break;
}
}
}
@ -344,6 +340,10 @@ abstract class RetriableStream<ReqT> implements ClientStream {
return;
}
if (!streamStarted) {
// Start stream so inFlightSubStreams is decremented in Sublistener.closed()
substream.stream.start(new Sublistener(substream));
}
substream.stream.cancel(
state.winningSubstream == substream ? cancellationStatus : CANCELLED_BECAUSE_COMMITTED);
}
@ -484,6 +484,8 @@ abstract class RetriableStream<ReqT> implements ClientStream {
}
if (cancelled) {
// Start stream so inFlightSubStreams is decremented in Sublistener.closed()
newSubstream.stream.start(new Sublistener(newSubstream));
newSubstream.stream.cancel(Status.CANCELLED.withDescription("Unneeded hedging"));
return;
}
@ -507,6 +509,9 @@ abstract class RetriableStream<ReqT> implements ClientStream {
Runnable runnable = commit(noopSubstream);
if (runnable != null) {
synchronized (lock) {
state = state.substreamDrained(noopSubstream);
}
runnable.run();
safeCloseMasterListener(reason, RpcProgress.PROCESSED, new Metadata());
return;

View File

@ -188,7 +188,7 @@ public class RetriableStreamTest {
}
}
private final RetriableStream<String> retriableStream =
private RetriableStream<String> retriableStream =
newThrottledRetriableStream(null /* throttle */);
private final RetriableStream<String> hedgingStream =
newThrottledHedgingStream(null /* throttle */);
@ -196,10 +196,13 @@ public class RetriableStreamTest {
private ClientStreamTracer bufferSizeTracer;
private RetriableStream<String> newThrottledRetriableStream(Throttle throttle) {
return newThrottledRetriableStream(throttle, MoreExecutors.directExecutor());
}
private RetriableStream<String> newThrottledRetriableStream(Throttle throttle, Executor drainer) {
return new RecordedRetriableStream(
method, new Metadata(), channelBufferUsed, PER_RPC_BUFFER_LIMIT, CHANNEL_BUFFER_LIMIT,
MoreExecutors.directExecutor(), fakeClock.getScheduledExecutorService(), RETRY_POLICY,
null, throttle);
drainer, fakeClock.getScheduledExecutorService(), RETRY_POLICY, null, throttle);
}
private RetriableStream<String> newThrottledHedgingStream(Throttle throttle) {
@ -598,6 +601,44 @@ public class RetriableStreamTest {
inOrder.verify(retriableStreamRecorder, never()).postCommit();
}
@Test
public void transparentRetry_cancel_race() {
FakeClock drainer = new FakeClock();
retriableStream = newThrottledRetriableStream(null, drainer.getScheduledExecutorService());
ClientStream mockStream1 = mock(ClientStream.class);
doReturn(mockStream1).when(retriableStreamRecorder).newSubstream(0);
InOrder inOrder = inOrder(retriableStreamRecorder);
retriableStream.start(masterListener);
ArgumentCaptor<ClientStreamListener> sublistenerCaptor1 =
ArgumentCaptor.forClass(ClientStreamListener.class);
verify(mockStream1).start(sublistenerCaptor1.capture());
// retry, but don't drain
ClientStream mockStream2 = mock(ClientStream.class);
doReturn(mockStream2).when(retriableStreamRecorder).newSubstream(0);
sublistenerCaptor1.getValue().closed(
Status.fromCode(NON_RETRIABLE_STATUS_CODE), MISCARRIED, new Metadata());
assertEquals(1, drainer.numPendingTasks());
// cancel
retriableStream.cancel(Status.CANCELLED);
// drain transparent retry
drainer.runDueTasks();
inOrder.verify(retriableStreamRecorder).postCommit();
ArgumentCaptor<ClientStreamListener> sublistenerCaptor2 =
ArgumentCaptor.forClass(ClientStreamListener.class);
verify(mockStream2).start(sublistenerCaptor2.capture());
ArgumentCaptor<Status> statusCaptor = ArgumentCaptor.forClass(Status.class);
verify(mockStream2).cancel(statusCaptor.capture());
assertEquals(Status.CANCELLED.getCode(), statusCaptor.getValue().getCode());
assertEquals(CANCELLED_BECAUSE_COMMITTED, statusCaptor.getValue().getDescription());
sublistenerCaptor2.getValue().closed(statusCaptor.getValue(), PROCESSED, new Metadata());
verify(masterListener).closed(same(Status.CANCELLED), same(PROCESSED), any(Metadata.class));
}
@Test
public void unretriableClosed_cancel() {
ClientStream mockStream1 = mock(ClientStream.class);

View File

@ -26,7 +26,7 @@ In your app module's `build.gradle` file, include a dependency on both `grpc-cro
Google Play Services Client Library for Cronet
```
implementation 'io.grpc:grpc-cronet:1.52.1'
implementation 'io.grpc:grpc-cronet:1.54.1'
implementation 'com.google.android.gms:play-services-cronet:16.0.0'
```

View File

@ -36,8 +36,8 @@ In your `build.gradle` file, include a dependency on both `grpc-android` and
`grpc-okhttp`:
```
implementation 'io.grpc:grpc-android:1.52.1'
implementation 'io.grpc:grpc-okhttp:1.52.1'
implementation 'io.grpc:grpc-android:1.54.1'
implementation 'io.grpc:grpc-okhttp:1.54.1'
```
You also need permission to access the device's network state in your

View File

@ -34,7 +34,7 @@ android {
protobuf {
protoc { artifact = 'com.google.protobuf:protoc:3.21.7' }
plugins {
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1' // CURRENT_GRPC_VERSION
}
}
generateProtoTasks {
@ -54,12 +54,12 @@ dependencies {
implementation 'com.android.support:appcompat-v7:27.0.2'
// You need to build grpc-java to obtain these libraries below.
implementation 'io.grpc:grpc-okhttp:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-okhttp:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.1' // CURRENT_GRPC_VERSION
implementation 'org.apache.tomcat:annotations-api:6.0.53'
testImplementation 'junit:junit:4.13.2'
testImplementation 'com.google.truth:truth:1.0.1'
testImplementation 'io.grpc:grpc-testing:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
testImplementation 'io.grpc:grpc-testing:1.54.1' // CURRENT_GRPC_VERSION
}

View File

@ -32,7 +32,7 @@ android {
protobuf {
protoc { artifact = 'com.google.protobuf:protoc:3.21.7' }
plugins {
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1' // CURRENT_GRPC_VERSION
}
}
generateProtoTasks {
@ -52,8 +52,8 @@ dependencies {
implementation 'com.android.support:appcompat-v7:27.0.2'
// You need to build grpc-java to obtain these libraries below.
implementation 'io.grpc:grpc-okhttp:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-okhttp:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.1' // CURRENT_GRPC_VERSION
implementation 'org.apache.tomcat:annotations-api:6.0.53'
}

View File

@ -32,7 +32,7 @@ android {
protobuf {
protoc { artifact = 'com.google.protobuf:protoc:3.21.7' }
plugins {
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1' // CURRENT_GRPC_VERSION
}
}
generateProtoTasks {
@ -52,8 +52,8 @@ dependencies {
implementation 'com.android.support:appcompat-v7:27.0.2'
// You need to build grpc-java to obtain these libraries below.
implementation 'io.grpc:grpc-okhttp:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-okhttp:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.1' // CURRENT_GRPC_VERSION
implementation 'org.apache.tomcat:annotations-api:6.0.53'
}

View File

@ -33,7 +33,7 @@ android {
protobuf {
protoc { artifact = 'com.google.protobuf:protoc:3.21.7' }
plugins {
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.54.1' // CURRENT_GRPC_VERSION
}
}
generateProtoTasks {
@ -53,8 +53,8 @@ dependencies {
implementation 'com.android.support:appcompat-v7:28.0.0'
// You need to build grpc-java to obtain these libraries below.
implementation 'io.grpc:grpc-okhttp:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-okhttp:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.54.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.54.1' // CURRENT_GRPC_VERSION
implementation 'org.apache.tomcat:annotations-api:6.0.53'
}

View File

@ -21,7 +21,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protobufVersion = '3.21.7'
def protocVersion = protobufVersion

View File

@ -23,7 +23,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protocVersion = '3.21.7'
dependencies {

View File

@ -23,7 +23,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protobufVersion = '3.21.7'
def protocVersion = protobufVersion

View File

@ -6,13 +6,13 @@
<packaging>jar</packaging>
<!-- Feel free to delete the comment at the end of these lines. It is just
for safely updating the version in our release process. -->
<version>1.54.0-SNAPSHOT</version><!-- CURRENT_GRPC_VERSION -->
<version>1.54.1</version><!-- CURRENT_GRPC_VERSION -->
<name>example-gauth</name>
<url>https://github.com/grpc/grpc-java</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<grpc.version>1.54.0-SNAPSHOT</grpc.version><!-- CURRENT_GRPC_VERSION -->
<grpc.version>1.54.1</grpc.version><!-- CURRENT_GRPC_VERSION -->
<protobuf.version>3.21.7</protobuf.version>
<!-- required for jdk9 -->
<maven.compiler.source>1.8</maven.compiler.source>

View File

@ -0,0 +1,39 @@
gRPC GCP Observability Example
================
The GCP Observability example consists of a Hello World client and a Hello World server instrumented for logs, metrics and tracing.
__Please refer to Microservices Observability user guide for setup.__
### Build the example
Build the Observability client & server. From the `grpc-java/examples/example-gcp-observability`
directory:
```
$ ../gradlew installDist
```
This creates the scripts `build/install/example-gcp-observability/bin/gcp-observability-client` and
`build/install/example-gcp-observability/bin/gcp-observability-server`.
### Run the example with configuration
To use Observability, you should first setup and configure authorization as mentioned in the user guide.
You need to set the `GRPC_GCP_OBSERVABILITY_CONFIG_FILE` environment variable to point to the gRPC GCP Observability configuration file (preferred) or if that
is not set then `GRPC_GCP_OBSERVABILITY_CONFIG` environment variable to gRPC GCP Observability configuration value. This is needed by both
`build/install/example-gcp-observability/bin/gcp-observability-client` and
`build/install/example-gcp-observability/bin/gcp-observability-server`.
1. To start the observability-enabled example server on its default port of 50051, run:
```
$ export GRPC_GCP_OBSERVABILITY_CONFIG_FILE=src/main/resources/io/grpc/examples/gcpobservability/gcp_observability_server_config.json
$ ./build/install/example-gcp-observability/bin/gcp-observability-server
```
2. In a different terminal window, run the observability-enabled example client:
```
$ export GRPC_GCP_OBSERVABILITY_CONFIG_FILE=src/main/resources/io/grpc/examples/gcpobservability/gcp_observability_client_config.json
$ ./build/install/example-gcp-observability/bin/gcp-observability-client
```

View File

@ -0,0 +1,68 @@
plugins {
// Provide convenience executables for trying out the examples.
id 'application'
// ASSUMES GRADLE 5.6 OR HIGHER. Use plugin version 0.8.10 with earlier gradle versions
id 'com.google.protobuf' version '0.8.17'
// Generate IntelliJ IDEA's .idea & .iml project files
id 'idea'
id 'java'
}
repositories {
maven { // The google mirror is less flaky than mavenCentral()
url "https://maven-central.storage-download.googleapis.com/maven2/"
}
mavenCentral()
mavenLocal()
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
// IMPORTANT: You probably want the non-SNAPSHOT version of gRPC. Make sure you
// are looking at a tagged version of the example and not "master"!
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protocVersion = '3.21.7'
dependencies {
implementation "io.grpc:grpc-protobuf:${grpcVersion}"
implementation "io.grpc:grpc-stub:${grpcVersion}"
implementation "io.grpc:grpc-gcp-observability:${grpcVersion}"
compileOnly "org.apache.tomcat:annotations-api:6.0.53"
runtimeOnly "io.grpc:grpc-netty-shaded:${grpcVersion}"
}
protobuf {
protoc { artifact = "com.google.protobuf:protoc:${protocVersion}" }
plugins {
grpc { artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}" }
}
generateProtoTasks {
all()*.plugins { grpc {} }
}
}
startScripts.enabled = false
task ObservabilityHelloWorldServer(type: CreateStartScripts) {
mainClass = 'io.grpc.examples.gcpobservability.GcpObservabilityServer'
applicationName = 'gcp-observability-server'
outputDir = new File(project.buildDir, 'tmp/scripts/' + name)
classpath = startScripts.classpath
}
task ObservabilityHelloWorldClient(type: CreateStartScripts) {
mainClass = 'io.grpc.examples.gcpobservability.GcpObservabilityClient'
applicationName = 'gcp-observability-client'
outputDir = new File(project.buildDir, 'tmp/scripts/' + name)
classpath = startScripts.classpath
}
applicationDistribution.into('bin') {
from(ObservabilityHelloWorldServer)
from(ObservabilityHelloWorldClient)
fileMode = 0755
}

View File

@ -0,0 +1 @@
rootProject.name = 'example-gcp-observability'

View File

@ -0,0 +1,93 @@
/*
* Copyright 2023 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.examples.gcpobservability;
import io.grpc.Channel;
import io.grpc.Grpc;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.StatusRuntimeException;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import io.grpc.gcp.observability.GcpObservability;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* A simple observability client that requests a greeting from the {@link HelloWorldServer} and
* generates logs, metrics and traces based on the configuration.
*/
public class GcpObservabilityClient {
private static final Logger logger = Logger.getLogger(GcpObservabilityClient.class.getName());
private final GreeterGrpc.GreeterBlockingStub blockingStub;
/** Construct client for accessing HelloWorld server using the existing channel. */
public GcpObservabilityClient(Channel channel) {
blockingStub = GreeterGrpc.newBlockingStub(channel);
}
/** Say hello to server. */
public void greet(String name) {
logger.info("Will try to greet " + name + " ...");
HelloRequest request = HelloRequest.newBuilder().setName(name).build();
HelloReply response;
try {
response = blockingStub.sayHello(request);
} catch (StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Greeting: " + response.getMessage());
}
/**
* Greet server. If provided, the first element of {@code args} is the name to use in the
* greeting. The second argument is the target server.
*/
public static void main(String[] args) throws Exception {
String user = "world";
String target = "localhost:50051";
if (args.length > 0) {
if ("--help".equals(args[0])) {
System.err.println("Usage: [name [target]]");
System.err.println("");
System.err.println(" name The name you wish to be greeted by. Defaults to " + user);
System.err.println(" target The server to connect to. Defaults to " + target);
System.exit(1);
}
user = args[0];
}
if (args.length > 1) {
target = args[1];
}
// Initialize observability
try (GcpObservability observability = GcpObservability.grpcInit()) {
ManagedChannel channel = Grpc.newChannelBuilder(target, InsecureChannelCredentials.create())
.build();
try {
GcpObservabilityClient client = new GcpObservabilityClient(channel);
client.greet(user);
} finally {
channel.shutdownNow().awaitTermination(5, TimeUnit.SECONDS);
}
} // observability.close() called implicitly
}
}

View File

@ -0,0 +1,97 @@
/*
* Copyright 2023 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.examples.gcpobservability;
import io.grpc.Grpc;
import io.grpc.InsecureServerCredentials;
import io.grpc.Server;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import io.grpc.gcp.observability.GcpObservability;
import io.grpc.stub.StreamObserver;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;
/**
* Observability server that manages startup/shutdown of a {@code Greeter} server and generates
* logs, metrics and traces based on the configuration.
*/
public class GcpObservabilityServer {
private static final Logger logger = Logger.getLogger(GcpObservabilityServer.class.getName());
private Server server;
private void start() throws IOException {
int port = 50051;
server = Grpc.newServerBuilderForPort(port, InsecureServerCredentials.create())
.addService(new GreeterImpl())
.build()
.start();
logger.info("Server started, listening on " + port);
}
private void stop() throws InterruptedException {
if (server != null) {
server.shutdown().awaitTermination(30, TimeUnit.SECONDS);
}
}
private void blockUntilShutdown() throws InterruptedException {
if (server != null) {
server.awaitTermination();
}
}
/**
* Main launches the server from the command line.
*/
public static void main(String[] args) throws IOException, InterruptedException {
// Initialize observability
GcpObservability observability = GcpObservability.grpcInit();
final GcpObservabilityServer server = new GcpObservabilityServer();
server.start();
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.err.println("*** shutting down gRPC server since JVM is shutting down");
try {
server.stop();
} catch (InterruptedException e) {
e.printStackTrace(System.err);
}
// Shut down observability
observability.close();
System.err.println("*** server shut down");
}
});
server.blockUntilShutdown();
}
static class GreeterImpl extends GreeterGrpc.GreeterImplBase {
@Override
public void sayHello(HelloRequest req, StreamObserver<HelloReply> responseObserver) {
HelloReply reply = HelloReply.newBuilder().setMessage("Hello " + req.getName()).build();
responseObserver.onNext(reply);
responseObserver.onCompleted();
}
}
}
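(Usage note, based only on the code above and the GcpObservability.close() change later in this diff: the shutdown hook stops the gRPC server before calling observability.close(), so the final RPC events are still recorded; when cloud monitoring or tracing is enabled, that close() call blocks for roughly a minute — twice the 30-second export interval — while metrics and traces are flushed.)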

View File

@ -0,0 +1,39 @@
/*
* Copyright 2023 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
syntax = "proto3";
option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";
option objc_class_prefix = "HLW";
package helloworld;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}

View File

@ -0,0 +1,17 @@
{
"cloud_monitoring": {},
"cloud_trace": {
"sampling_rate": 1.0
},
"cloud_logging": {
"client_rpc_events": [{
"methods": ["helloworld.Greeter/*"]
}],
"server_rpc_events": [{
"methods": ["helloworld.Greeter/*"]
}]
},
"labels": {
"environment" : "example-client"
}
}

View File

@ -0,0 +1,17 @@
{
"cloud_monitoring": {},
"cloud_trace": {
"sampling_rate": 1.0
},
"cloud_logging": {
"client_rpc_events": [{
"methods": ["helloworld.Greeter/*"]
}],
"server_rpc_events": [{
"methods": ["helloworld.Greeter/*"]
}]
},
"labels": {
"environment" : "example-server"
}
}
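A minimal bootstrap sketch tying these JSON files to GcpObservability.grpcInit(); it assumes the config path is supplied through the GRPC_GCP_OBSERVABILITY_CONFIG_FILE environment variable (that variable name and the class name below are illustrative assumptions, not taken from this diff):

import io.grpc.gcp.observability.GcpObservability;

public final class ObservabilityBootstrapCheck {
  public static void main(String[] args) throws Exception {
    // Assumption: the library resolves its JSON config from this environment variable.
    String configPath = System.getenv("GRPC_GCP_OBSERVABILITY_CONFIG_FILE");
    if (configPath == null) {
      System.err.println("Point GRPC_GCP_OBSERVABILITY_CONFIG_FILE at one of the JSON configs above");
      return;
    }
    // grpcInit() loads the config itself; channels and servers created while the
    // returned instance is open are instrumented for logs, metrics and traces.
    try (GcpObservability observability = GcpObservability.grpcInit()) {
      System.out.println("Observability initialized using config at " + configPath);
    }
  }
}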

View File

@ -21,7 +21,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protobufVersion = '3.21.7'
dependencies {

View File

@ -6,13 +6,13 @@
<packaging>jar</packaging>
<!-- Feel free to delete the comment at the end of these lines. It is just
for safely updating the version in our release process. -->
<version>1.54.0-SNAPSHOT</version><!-- CURRENT_GRPC_VERSION -->
<version>1.54.1</version><!-- CURRENT_GRPC_VERSION -->
<name>example-hostname</name>
<url>https://github.com/grpc/grpc-java</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<grpc.version>1.54.0-SNAPSHOT</grpc.version><!-- CURRENT_GRPC_VERSION -->
<grpc.version>1.54.1</grpc.version><!-- CURRENT_GRPC_VERSION -->
<protoc.version>3.21.7</protoc.version>
<!-- required for jdk9 -->
<maven.compiler.source>1.8</maven.compiler.source>

View File

@ -22,7 +22,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protobufVersion = '3.21.7'
def protocVersion = protobufVersion

View File

@ -7,13 +7,13 @@
<packaging>jar</packaging>
<!-- Feel free to delete the comment at the end of these lines. It is just
for safely updating the version in our release process. -->
<version>1.54.0-SNAPSHOT</version><!-- CURRENT_GRPC_VERSION -->
<version>1.54.1</version><!-- CURRENT_GRPC_VERSION -->
<name>example-jwt-auth</name>
<url>https://github.com/grpc/grpc-java</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<grpc.version>1.54.0-SNAPSHOT</grpc.version><!-- CURRENT_GRPC_VERSION -->
<grpc.version>1.54.1</grpc.version><!-- CURRENT_GRPC_VERSION -->
<protobuf.version>3.21.7</protobuf.version>
<protoc.version>3.21.7</protoc.version>
<!-- required for jdk9 -->

View File

@ -17,7 +17,7 @@ repositories {
sourceCompatibility = 1.8
targetCompatibility = 1.8
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protocVersion = '3.21.7'
dependencies {

View File

@ -15,7 +15,7 @@ repositories {
sourceCompatibility = 1.8
targetCompatibility = 1.8
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protocVersion = '3.21.7'
dependencies {

View File

@ -23,7 +23,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def protocVersion = '3.21.7'
dependencies {

View File

@ -6,13 +6,13 @@
<packaging>jar</packaging>
<!-- Feel free to delete the comment at the end of these lines. It is just
for safely updating the version in our release process. -->
<version>1.54.0-SNAPSHOT</version><!-- CURRENT_GRPC_VERSION -->
<version>1.54.1</version><!-- CURRENT_GRPC_VERSION -->
<name>example-tls</name>
<url>https://github.com/grpc/grpc-java</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<grpc.version>1.54.0-SNAPSHOT</grpc.version><!-- CURRENT_GRPC_VERSION -->
<grpc.version>1.54.1</grpc.version><!-- CURRENT_GRPC_VERSION -->
<protoc.version>3.21.7</protoc.version>
<netty.tcnative.version>2.0.56.Final</netty.tcnative.version>
<!-- required for jdk9 -->

View File

@ -22,7 +22,7 @@ targetCompatibility = 1.8
// Feel free to delete the comment at the next line. It is just for safely
// updating the version in our release process.
def grpcVersion = '1.54.0-SNAPSHOT' // CURRENT_GRPC_VERSION
def grpcVersion = '1.54.1' // CURRENT_GRPC_VERSION
def nettyTcNativeVersion = '2.0.56.Final'
def protocVersion = '3.21.7'

View File

@ -6,13 +6,13 @@
<packaging>jar</packaging>
<!-- Feel free to delete the comment at the end of these lines. It is just
for safely updating the version in our release process. -->
<version>1.54.0-SNAPSHOT</version><!-- CURRENT_GRPC_VERSION -->
<version>1.54.1</version><!-- CURRENT_GRPC_VERSION -->
<name>examples</name>
<url>https://github.com/grpc/grpc-java</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<grpc.version>1.54.0-SNAPSHOT</grpc.version><!-- CURRENT_GRPC_VERSION -->
<grpc.version>1.54.1</grpc.version><!-- CURRENT_GRPC_VERSION -->
<protobuf.version>3.21.7</protobuf.version>
<protoc.version>3.21.7</protoc.version>
<!-- required for JDK 8 -->

View File

@ -20,11 +20,13 @@ tasks.named("compileJava").configure {
}
dependencies {
def cloudLoggingVersion = '3.6.1'
def cloudLoggingVersion = '3.14.5'
annotationProcessor libraries.auto.value
api project(':grpc-api')
// TODO(dnvindhya): Prefer using our own libraries, update the dependencies
// in gradle/libs.versions instead
implementation project(':grpc-protobuf'),
project(':grpc-stub'),
project(':grpc-alts'),
@ -35,12 +37,10 @@ dependencies {
libraries.opencensus.exporter.trace.stackdriver,
project(':grpc-xds'), // Align grpc versions
project(':grpc-services'), // Align grpc versions
libraries.animalsniffer.annotations, // Prefer our version
libraries.google.auth.credentials, // Prefer our version
libraries.protobuf.java.util, // Prefer our version
libraries.gson, // Prefer our version
libraries.perfmark.api, // Prefer our version
libraries.re2j, // Prefer our version
('com.google.protobuf:protobuf-java:3.21.12'),
('com.google.api.grpc:proto-google-common-protos:2.14.2'),
('com.google.auth:google-auth-library-oauth2-http:1.16.0'),
('io.opencensus:opencensus-api:0.31.1'),
('com.google.guava:guava:31.1-jre')
runtimeOnly libraries.opencensus.impl

View File

@ -20,7 +20,6 @@ import io.grpc.gcp.observability.GcpObservability;
import io.grpc.testing.integration.TestServiceClient;
import io.grpc.testing.integration.TestServiceServer;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;
/**
* Combined interop client and server for observability testing.
@ -47,11 +46,6 @@ public final class TestServiceInterop {
} else {
TestServiceServer.main(args);
}
// TODO(stanleycheung): remove this once the observability exporter plugin is able to
// gracefully flush observability data to cloud at shutdown
final int o11yCloseSleepSeconds = 65;
System.out.println("Sleeping " + o11yCloseSleepSeconds + " seconds before exiting");
Thread.sleep(TimeUnit.MILLISECONDS.convert(o11yCloseSleepSeconds, TimeUnit.SECONDS));
}
}

View File

@ -21,7 +21,6 @@ import static com.google.common.base.Preconditions.checkNotNull;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableSet;
import io.grpc.ClientInterceptor;
import io.grpc.ExperimentalApi;
import io.grpc.InternalGlobalInterceptors;
import io.grpc.ManagedChannelProvider.ProviderNotFoundException;
import io.grpc.ServerInterceptor;
@ -36,6 +35,7 @@ import io.grpc.gcp.observability.interceptors.InternalLoggingServerInterceptor;
import io.grpc.gcp.observability.interceptors.LogHelper;
import io.grpc.gcp.observability.logging.GcpLogSink;
import io.grpc.gcp.observability.logging.Sink;
import io.grpc.gcp.observability.logging.TraceLoggingHelper;
import io.opencensus.common.Duration;
import io.opencensus.contrib.grpc.metrics.RpcViewConstants;
import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
@ -50,17 +50,30 @@ import io.opencensus.trace.AttributeValue;
import io.opencensus.trace.Tracing;
import io.opencensus.trace.config.TraceConfig;
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.stream.Collectors;
/** The main class for gRPC Google Cloud Platform Observability features. */
@ExperimentalApi("https://github.com/grpc/grpc-java/issues/8869")
public final class GcpObservability implements AutoCloseable {
private static final Logger logger = Logger.getLogger(GcpObservability.class.getName());
private static final int METRICS_EXPORT_INTERVAL = 30;
private static final ImmutableSet<String> SERVICES_TO_EXCLUDE = ImmutableSet.of(
static final String DEFAULT_METRIC_CUSTOM_TAG_KEY = "opencensus_task";
@VisibleForTesting
static final ImmutableSet<String> SERVICES_TO_EXCLUDE = ImmutableSet.of(
"google.logging.v2.LoggingServiceV2", "google.monitoring.v3.MetricService",
"google.devtools.cloudtrace.v2.TraceService");
private static GcpObservability instance = null;
private final Sink sink;
private final ObservabilityConfig config;
@ -75,11 +88,11 @@ public final class GcpObservability implements AutoCloseable {
*/
public static synchronized GcpObservability grpcInit() throws IOException {
if (instance == null) {
GlobalLocationTags globalLocationTags = new GlobalLocationTags();
ObservabilityConfigImpl observabilityConfig = ObservabilityConfigImpl.getInstance();
Sink sink = new GcpLogSink(observabilityConfig.getProjectId(),
globalLocationTags.getLocationTags(), observabilityConfig.getCustomTags(),
SERVICES_TO_EXCLUDE);
TraceLoggingHelper traceLoggingHelper = new TraceLoggingHelper(
observabilityConfig.getProjectId());
Sink sink = new GcpLogSink(observabilityConfig.getProjectId(), observabilityConfig,
SERVICES_TO_EXCLUDE, traceLoggingHelper);
LogHelper helper = new LogHelper(sink);
ConfigFilterHelper configFilterHelper = ConfigFilterHelper.getInstance(observabilityConfig);
instance = grpcInit(sink, observabilityConfig,
@ -113,6 +126,16 @@ public final class GcpObservability implements AutoCloseable {
throw new IllegalStateException("GcpObservability already closed!");
}
sink.close();
if (config.isEnableCloudMonitoring() || config.isEnableCloudTracing()) {
try {
// Sleeping before shutdown to ensure all metrics and traces are flushed
Thread.sleep(
TimeUnit.MILLISECONDS.convert(2 * METRICS_EXPORT_INTERVAL, TimeUnit.SECONDS));
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
logger.log(Level.SEVERE, "Caught exception during sleep", e);
}
}
instance = null;
}
}
@ -128,14 +151,14 @@ public final class GcpObservability implements AutoCloseable {
}
if (config.isEnableCloudMonitoring()) {
clientInterceptors.add(getConditionalInterceptor(
InternalCensusStatsAccessor.getClientInterceptor(true, true, true, true)));
InternalCensusStatsAccessor.getClientInterceptor(true, true, false, true)));
tracerFactories.add(
InternalCensusStatsAccessor.getServerStreamTracerFactory(true, true, true));
InternalCensusStatsAccessor.getServerStreamTracerFactory(true, true, false));
}
if (config.isEnableCloudTracing()) {
clientInterceptors.add(
getConditionalInterceptor(InternalCensusTracingAccessor.getClientInterceptor(false)));
tracerFactories.add(InternalCensusTracingAccessor.getServerStreamTracerFactory(false));
getConditionalInterceptor(InternalCensusTracingAccessor.getClientInterceptor()));
tracerFactories.add(InternalCensusTracingAccessor.getServerStreamTracerFactory());
}
InternalGlobalInterceptors.setInterceptorsTracers(
@ -180,12 +203,17 @@ public final class GcpObservability implements AutoCloseable {
if (projectId != null) {
statsConfigurationBuilder.setProjectId(projectId);
}
Map<LabelKey, LabelValue> constantLabels = new HashMap<>();
constantLabels.put(
LabelKey.create(DEFAULT_METRIC_CUSTOM_TAG_KEY, DEFAULT_METRIC_CUSTOM_TAG_KEY),
LabelValue.create(generateDefaultMetricTagValue()));
if (customTags != null) {
Map<LabelKey, LabelValue> constantLabels = customTags.entrySet().stream()
.collect(Collectors.toMap(e -> LabelKey.create(e.getKey(), e.getKey()),
e -> LabelValue.create(e.getValue())));
statsConfigurationBuilder.setConstantLabels(constantLabels);
for (Map.Entry<String, String> mapEntry : customTags.entrySet()) {
constantLabels.putIfAbsent(LabelKey.create(mapEntry.getKey(), mapEntry.getKey()),
LabelValue.create(mapEntry.getValue()));
}
}
statsConfigurationBuilder.setConstantLabels(constantLabels);
statsConfigurationBuilder.setExportInterval(Duration.create(METRICS_EXPORT_INTERVAL, 0));
StackdriverStatsExporter.createAndRegister(statsConfigurationBuilder.build());
}
@ -209,6 +237,20 @@ public final class GcpObservability implements AutoCloseable {
}
}
private static String generateDefaultMetricTagValue() {
final String jvmName = ManagementFactory.getRuntimeMXBean().getName();
if (jvmName.indexOf('@') < 1) {
String hostname = "localhost";
try {
hostname = InetAddress.getLocalHost().getHostName();
} catch (UnknownHostException e) {
logger.log(Level.INFO, "Unable to get the hostname.", e);
}
return "java-" + new SecureRandom().nextInt() + "@" + hostname;
}
return "java-" + jvmName;
}
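// Illustration only (hypothetical PID and hostname, not taken from this diff):
// ManagementFactory.getRuntimeMXBean().getName() typically returns "<pid>@<hostname>",
// so the default opencensus_task label becomes e.g. "java-12345@my-host"; when the JVM
// name has no '@', a random int plus the local hostname is used, e.g. "java-987654321@my-host".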
private GcpObservability(
Sink sink,
ObservabilityConfig config) {

View File

@ -1,148 +0,0 @@
/*
* Copyright 2022 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.util.Strings;
import com.google.auth.http.HttpTransportFactory;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Charsets;
import com.google.common.collect.ImmutableMap;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import java.util.Scanner;
import java.util.function.Function;
import java.util.logging.Level;
import java.util.logging.Logger;
/** A container of all global location tags used for observability. */
final class GlobalLocationTags {
private static final Logger logger = Logger.getLogger(GlobalLocationTags.class.getName());
private final Map<String, String> locationTags;
GlobalLocationTags() {
ImmutableMap.Builder<String, String> locationTagsBuilder = ImmutableMap.builder();
populate(locationTagsBuilder);
locationTags = locationTagsBuilder.buildOrThrow();
}
private static String applyTrim(String value) {
if (!Strings.isNullOrEmpty(value)) {
value = value.trim();
}
return value;
}
Map<String, String> getLocationTags() {
return locationTags;
}
@VisibleForTesting
static void populateFromMetadataServer(ImmutableMap.Builder<String, String> locationTags) {
MetadataConfig metadataConfig = new MetadataConfig(new DefaultHttpTransportFactory());
metadataConfig.init();
locationTags.putAll(metadataConfig.getAllValues());
}
@VisibleForTesting
static void populateFromKubernetesValues(ImmutableMap.Builder<String, String> locationTags,
String namespaceFile,
String hostnameFile, String cgroupFile) {
// namespace name: contents of file /var/run/secrets/kubernetes.io/serviceaccount/namespace
populateFromFileContents(locationTags, "namespace_name",
namespaceFile, GlobalLocationTags::applyTrim);
// pod_name: hostname i.e. contents of /etc/hostname
populateFromFileContents(locationTags, "pod_name", hostnameFile,
GlobalLocationTags::applyTrim);
// container_id: parsed from /proc/self/cgroup . Note: only works for Linux-based containers
populateFromFileContents(locationTags, "container_id", cgroupFile,
(value) -> getContainerIdFromFileContents(value));
}
@VisibleForTesting
static void populateFromFileContents(ImmutableMap.Builder<String, String> locationTags,
String key, String filePath, Function<String, String> parser) {
String value = parser.apply(readFileContents(filePath));
if (value != null) {
locationTags.put(key, value);
}
}
/**
* Parse from a line such as this.
* 1:name=systemd:/kubepods/burstable/podf5143dd2/de67c4419b20924eaa141813
*
* @param value file contents
* @return container-id parsed ("de67c4419b20924eaa141813" from the above snippet)
*/
@VisibleForTesting static String getContainerIdFromFileContents(String value) {
if (value != null) {
try (Scanner scanner = new Scanner(value)) {
while (scanner.hasNextLine()) {
String line = scanner.nextLine();
String[] tokens = line.split(":");
if (tokens.length == 3 && tokens[2].startsWith("/kubepods/burstable/")) {
tokens = tokens[2].split("/");
if (tokens.length == 5) {
return tokens[4];
}
}
}
}
}
return null;
}
private static String readFileContents(String file) {
Path fileName = Paths.get(file);
if (Files.isReadable(fileName)) {
try {
byte[] bytes = Files.readAllBytes(fileName);
return new String(bytes, Charsets.US_ASCII);
} catch (IOException e) {
logger.log(Level.FINE, "Reading file:" + file, e);
}
} else {
logger.log(Level.FINE, "File:" + file + " is not readable (or missing?)");
}
return null;
}
static void populate(ImmutableMap.Builder<String, String> locationTags) {
populateFromMetadataServer(locationTags);
populateFromKubernetesValues(locationTags,
"/var/run/secrets/kubernetes.io/serviceaccount/namespace",
"/etc/hostname", "/proc/self/cgroup");
}
private static class DefaultHttpTransportFactory implements HttpTransportFactory {
private static final HttpTransport netHttpTransport = new NetHttpTransport();
@Override
public HttpTransport create() {
return netHttpTransport;
}
}
}

View File

@ -1,107 +0,0 @@
/*
* Copyright 2022 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability;
import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpHeaders;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestFactory;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpStatusCodes;
import com.google.api.client.http.HttpTransport;
import com.google.auth.http.HttpTransportFactory;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;
/** Class to read Google Metadata Server values. */
final class MetadataConfig {
private static final Logger logger = Logger.getLogger(MetadataConfig.class.getName());
private static final int TIMEOUT_MS = 5000;
private static final String METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/";
private HttpRequestFactory requestFactory;
private HttpTransportFactory transportFactory;
@VisibleForTesting public MetadataConfig(HttpTransportFactory transportFactory) {
this.transportFactory = transportFactory;
}
void init() {
HttpTransport httpTransport = transportFactory.create();
requestFactory = httpTransport.createRequestFactory();
}
/** gets all the values from the MDS we need to set in our logging tags. */
ImmutableMap<String, String> getAllValues() {
ImmutableMap.Builder<String, String> builder = ImmutableMap.builder();
//addValueFor(builder, "instance/hostname", "GCE_INSTANCE_HOSTNAME");
addValueFor(builder, "instance/id", "gke_node_id");
//addValueFor(builder, "instance/zone", "GCE_INSTANCE_ZONE");
addValueFor(builder, "project/project-id", "project_id");
addValueFor(builder, "project/numeric-project-id", "project_numeric_id");
addValueFor(builder, "instance/attributes/cluster-name", "cluster_name");
addValueFor(builder, "instance/attributes/cluster-uid", "cluster_uid");
addValueFor(builder, "instance/attributes/cluster-location", "location");
try {
requestFactory.getTransport().shutdown();
} catch (IOException e) {
logger.log(Level.FINE, "Calling HttpTransport.shutdown()", e);
}
return builder.buildOrThrow();
}
void addValueFor(ImmutableMap.Builder<String, String> builder, String attribute, String key) {
try {
String value = getAttribute(attribute);
if (value != null) {
builder.put(key, value);
}
} catch (IOException e) {
logger.log(Level.FINE, "Calling getAttribute('" + attribute + "')", e);
}
}
String getAttribute(String attributeName) throws IOException {
GenericUrl url = new GenericUrl(METADATA_URL + attributeName);
HttpRequest request = requestFactory.buildGetRequest(url);
request = request.setReadTimeout(TIMEOUT_MS);
request = request.setConnectTimeout(TIMEOUT_MS);
request = request.setHeaders(new HttpHeaders().set("Metadata-Flavor", "Google"));
HttpResponse response = null;
try {
response = request.execute();
if (response.getStatusCode() == HttpStatusCodes.STATUS_CODE_OK) {
InputStream stream = response.getContent();
if (stream != null) {
byte[] bytes = new byte[stream.available()];
stream.read(bytes);
return new String(bytes, response.getContentCharset());
}
}
} finally {
if (response != null) {
response.disconnect();
}
}
return null;
}
}

View File

@ -16,6 +16,8 @@
package io.grpc.gcp.observability.interceptors;
import static io.grpc.census.internal.ObservabilityCensusConstants.CLIENT_TRACE_SPAN_CONTEXT_KEY;
import com.google.protobuf.Duration;
import com.google.protobuf.util.Durations;
import io.grpc.CallOptions;
@ -33,6 +35,7 @@ import io.grpc.Status;
import io.grpc.gcp.observability.interceptors.ConfigFilterHelper.FilterParams;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.opencensus.trace.SpanContext;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
@ -92,6 +95,7 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
// Get the stricter deadline to calculate the timeout once the call starts
final Deadline deadline = LogHelper.min(callOptions.getDeadline(),
Context.current().getDeadline());
final SpanContext clientSpanContext = callOptions.getOption(CLIENT_TRACE_SPAN_CONTEXT_KEY);
FilterParams filterParams = filterHelper.logRpcMethod(method.getFullMethodName(), true);
if (!filterParams.log()) {
@ -122,7 +126,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
maxHeaderBytes,
EventLogger.CLIENT,
callId,
null);
null,
clientSpanContext);
} catch (Exception e) {
// Catching generic exceptions instead of specific ones for all the events.
// This way we can catch both expected and unexpected exceptions instead of re-throwing
@ -148,7 +153,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
message,
maxMessageBytes,
EventLogger.CLIENT,
callId);
callId,
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log response message", e);
}
@ -168,7 +174,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
maxHeaderBytes,
EventLogger.CLIENT,
callId,
LogHelper.getPeerAddress(getAttributes()));
LogHelper.getPeerAddress(getAttributes()),
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log response header", e);
}
@ -189,7 +196,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
maxHeaderBytes,
EventLogger.CLIENT,
callId,
LogHelper.getPeerAddress(getAttributes()));
LogHelper.getPeerAddress(getAttributes()),
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log trailer", e);
}
@ -212,7 +220,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
message,
maxMessageBytes,
EventLogger.CLIENT,
callId);
callId,
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log request message", e);
}
@ -229,7 +238,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
methodName,
authority,
EventLogger.CLIENT,
callId);
callId,
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log half close", e);
}
@ -246,7 +256,8 @@ public final class InternalLoggingChannelInterceptor implements ClientIntercepto
methodName,
authority,
EventLogger.CLIENT,
callId);
callId,
clientSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log cancel", e);
}

View File

@ -31,6 +31,9 @@ import io.grpc.Status;
import io.grpc.gcp.observability.interceptors.ConfigFilterHelper.FilterParams;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.opencensus.trace.Span;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.unsafe.ContextHandleUtils;
import java.net.SocketAddress;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
@ -91,6 +94,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
Deadline deadline = Context.current().getDeadline();
final Duration timeout = deadline == null ? null
: Durations.fromNanos(deadline.timeRemaining(TimeUnit.NANOSECONDS));
Span span = ContextHandleUtils.getValue(ContextHandleUtils.currentContext());
final SpanContext serverSpanContext = span == null ? SpanContext.INVALID : span.getContext();
FilterParams filterParams =
filterHelper.logRpcMethod(call.getMethodDescriptor().getFullMethodName(), false);
@ -113,7 +118,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
maxHeaderBytes,
EventLogger.SERVER,
callId,
peerAddress);
peerAddress,
serverSpanContext);
} catch (Exception e) {
// Catching generic exceptions instead of specific ones for all the events.
// This way we can catch both expected and unexpected exceptions instead of re-throwing
@ -139,7 +145,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
maxHeaderBytes,
EventLogger.SERVER,
callId,
null);
null,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log response header", e);
}
@ -160,7 +167,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
message,
maxMessageBytes,
EventLogger.SERVER,
callId);
callId,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log response message", e);
}
@ -181,7 +189,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
maxHeaderBytes,
EventLogger.SERVER,
callId,
null);
null,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log trailer", e);
}
@ -206,7 +215,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
message,
maxMessageBytes,
EventLogger.SERVER,
callId);
callId,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log request message", e);
}
@ -223,7 +233,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
methodName,
authority,
EventLogger.SERVER,
callId);
callId,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log half close", e);
}
@ -240,7 +251,8 @@ public final class InternalLoggingServerInterceptor implements ServerInterceptor
methodName,
authority,
EventLogger.SERVER,
callId);
callId,
serverSpanContext);
} catch (Exception e) {
logger.log(Level.SEVERE, "Unable to log cancel", e);
}

View File

@ -23,6 +23,7 @@ import static io.grpc.InternalMetadata.BASE64_ENCODING_OMIT_PADDING;
import com.google.common.base.Joiner;
import com.google.protobuf.ByteString;
import com.google.protobuf.Duration;
import com.google.rpc.Code;
import io.grpc.Attributes;
import io.grpc.Deadline;
import io.grpc.Grpc;
@ -35,6 +36,7 @@ import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.grpc.observabilitylog.v1.Payload;
import io.opencensus.trace.SpanContext;
import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;
@ -87,7 +89,8 @@ public class LogHelper {
GrpcLogRecord.EventLogger eventLogger,
String callId,
// null on client side
@Nullable SocketAddress peerAddress) {
@Nullable SocketAddress peerAddress,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -113,7 +116,7 @@ public class LogHelper {
if (peerAddress != null) {
logEntryBuilder.setPeer(socketAddressToProto(peerAddress));
}
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
/**
@ -128,7 +131,8 @@ public class LogHelper {
int maxHeaderBytes,
GrpcLogRecord.EventLogger eventLogger,
String callId,
@Nullable SocketAddress peerAddress) {
@Nullable SocketAddress peerAddress,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -154,7 +158,7 @@ public class LogHelper {
if (peerAddress != null) {
logEntryBuilder.setPeer(socketAddressToProto(peerAddress));
}
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
/**
@ -170,7 +174,8 @@ public class LogHelper {
int maxHeaderBytes,
GrpcLogRecord.EventLogger eventLogger,
String callId,
@Nullable SocketAddress peerAddress) {
@Nullable SocketAddress peerAddress,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -182,7 +187,7 @@ public class LogHelper {
PayloadBuilderHelper<Payload.Builder> pair =
createMetadataProto(metadata, maxHeaderBytes);
pair.payloadBuilder.setStatusCode(status.getCode().value());
pair.payloadBuilder.setStatusCode(Code.forNumber(status.getCode().value()));
String statusDescription = status.getDescription();
if (statusDescription != null) {
pair.payloadBuilder.setStatusMessage(statusDescription);
@ -204,7 +209,7 @@ public class LogHelper {
if (peerAddress != null) {
logEntryBuilder.setPeer(socketAddressToProto(peerAddress));
}
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
/**
@ -219,7 +224,8 @@ public class LogHelper {
T message,
int maxMessageBytes,
EventLogger eventLogger,
String callId) {
String callId,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -259,7 +265,7 @@ public class LogHelper {
logEntryBuilder.setPayload(pair.payloadBuilder)
.setPayloadTruncated(pair.truncated);
}
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
/**
@ -271,7 +277,8 @@ public class LogHelper {
String methodName,
String authority,
GrpcLogRecord.EventLogger eventLogger,
String callId) {
String callId,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -285,7 +292,7 @@ public class LogHelper {
.setType(EventType.CLIENT_HALF_CLOSE)
.setLogger(eventLogger)
.setCallId(callId);
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
/**
@ -297,7 +304,8 @@ public class LogHelper {
String methodName,
String authority,
GrpcLogRecord.EventLogger eventLogger,
String callId) {
String callId,
SpanContext spanContext) {
checkNotNull(serviceName, "serviceName");
checkNotNull(methodName, "methodName");
checkNotNull(authority, "authority");
@ -311,7 +319,7 @@ public class LogHelper {
.setType(EventType.CANCEL)
.setLogger(eventLogger)
.setCallId(callId);
sink.write(logEntryBuilder.build());
sink.write(logEntryBuilder.build(), spanContext);
}
// TODO(DNVindhya): Evaluate if we need following clause for metadata logging in GcpObservability
@ -404,10 +412,10 @@ public class LogHelper {
if (address instanceof InetSocketAddress) {
InetAddress inetAddress = ((InetSocketAddress) address).getAddress();
if (inetAddress instanceof Inet4Address) {
builder.setType(Address.Type.TYPE_IPV4)
builder.setType(Address.Type.IPV4)
.setAddress(InetAddressUtil.toAddrString(inetAddress));
} else if (inetAddress instanceof Inet6Address) {
builder.setType(Address.Type.TYPE_IPV6)
builder.setType(Address.Type.IPV6)
.setAddress(InetAddressUtil.toAddrString(inetAddress));
} else {
logger.log(Level.SEVERE, "unknown type of InetSocketAddress: {}", address);
@ -417,7 +425,7 @@ public class LogHelper {
} else if (address.getClass().getName().equals("io.netty.channel.unix.DomainSocketAddress")) {
// To avoid a compiled time dependency on grpc-netty, we check against the
// runtime class name.
builder.setType(Address.Type.TYPE_UNIX)
builder.setType(Address.Type.UNIX)
.setAddress(address.toString());
} else {
builder.setType(Address.Type.TYPE_UNKNOWN).setAddress(address.toString());

View File

@ -18,29 +18,31 @@ package io.grpc.gcp.observability.logging;
import static com.google.common.base.Preconditions.checkNotNull;
import com.google.cloud.MonitoredResource;
import com.google.api.gax.batching.BatchingSettings;
import com.google.api.gax.batching.FlowController;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.JsonPayload;
import com.google.cloud.logging.Severity;
import com.google.cloud.logging.v2.stub.LoggingServiceV2StubSettings;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.protobuf.util.JsonFormat;
import io.grpc.Internal;
import io.grpc.gcp.observability.ObservabilityConfig;
import io.grpc.internal.JsonParser;
import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.opencensus.trace.SpanContext;
import java.io.IOException;
import java.time.Instant;
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.threeten.bp.Duration;
/**
* Sink for Google Cloud Logging.
@ -52,22 +54,22 @@ public class GcpLogSink implements Sink {
private static final String DEFAULT_LOG_NAME =
"microservices.googleapis.com%2Fobservability%2Fgrpc";
private static final Severity DEFAULT_LOG_LEVEL = Severity.DEBUG;
private static final String K8S_MONITORED_RESOURCE_TYPE = "k8s_container";
private static final Set<String> kubernetesResourceLabelSet
= ImmutableSet.of("project_id", "location", "cluster_name", "namespace_name",
"pod_name", "container_name");
private final String projectId;
private final Map<String, String> customTags;
private final MonitoredResource kubernetesResource;
/** Lazily initialize the cloud logging client to avoid circular initialization, because the
* cloud logging APIs also use gRPC. */
private volatile Logging gcpLoggingClient;
private final Collection<String> servicesToExclude;
private final boolean isTraceEnabled;
private final TraceLoggingHelper traceLoggingHelper;
@VisibleForTesting
GcpLogSink(Logging loggingClient, String projectId, Map<String, String> locationTags,
Map<String, String> customTags, Collection<String> servicesToExclude) {
this(projectId, locationTags, customTags, servicesToExclude);
GcpLogSink(Logging loggingClient, String projectId,
ObservabilityConfig config, Collection<String> servicesToExclude,
TraceLoggingHelper traceLoggingHelper) {
this(projectId, config, servicesToExclude, traceLoggingHelper);
this.gcpLoggingClient = loggingClient;
}
@ -77,12 +79,14 @@ public class GcpLogSink implements Sink {
* @param projectId GCP project id to write logs
* @param servicesToExclude service names for which log entries should not be generated
*/
public GcpLogSink(String projectId, Map<String, String> locationTags,
Map<String, String> customTags, Collection<String> servicesToExclude) {
public GcpLogSink(String projectId,
ObservabilityConfig config, Collection<String> servicesToExclude,
TraceLoggingHelper traceLoggingHelper) {
this.projectId = projectId;
this.customTags = getCustomTags(customTags, locationTags, projectId);
this.kubernetesResource = getResource(locationTags);
this.customTags = getCustomTags(config.getCustomTags());
this.servicesToExclude = checkNotNull(servicesToExclude, "servicesToExclude");
this.isTraceEnabled = config.isEnableCloudTracing();
this.traceLoggingHelper = traceLoggingHelper;
}
/**
@ -91,7 +95,7 @@ public class GcpLogSink implements Sink {
* @param logProto gRPC logging proto containing the message to be logged
*/
@Override
public void write(GrpcLogRecord logProto) {
public void write(GrpcLogRecord logProto, SpanContext spanContext) {
if (gcpLoggingClient == null) {
synchronized (this) {
if (gcpLoggingClient == null) {
@ -102,6 +106,7 @@ public class GcpLogSink implements Sink {
if (servicesToExclude.contains(logProto.getServiceName())) {
return;
}
LogEntry grpcLogEntry = null;
try {
GrpcLogRecord.EventType eventType = logProto.getType();
// TODO(DNVindhya): make sure all (int, long) values are not displayed as double
@ -111,59 +116,65 @@ public class GcpLogSink implements Sink {
LogEntry.newBuilder(JsonPayload.of(logProtoMap))
.setSeverity(DEFAULT_LOG_LEVEL)
.setLogName(DEFAULT_LOG_NAME)
.setResource(kubernetesResource)
.setTimestamp(Instant.now());
if (!customTags.isEmpty()) {
grpcLogEntryBuilder.setLabels(customTags);
}
LogEntry grpcLogEntry = grpcLogEntryBuilder.build();
addTraceData(grpcLogEntryBuilder, spanContext);
grpcLogEntry = grpcLogEntryBuilder.build();
synchronized (this) {
logger.log(Level.FINEST, "Writing gRPC event : {0} to Cloud Logging", eventType);
gcpLoggingClient.write(Collections.singleton(grpcLogEntry));
}
} catch (FlowController.FlowControlRuntimeException e) {
String grpcLogEntryString = null;
if (grpcLogEntry != null) {
grpcLogEntryString = grpcLogEntry.toStructuredJsonString();
}
logger.log(Level.SEVERE, "Limit exceeded while writing log entry to cloud logging");
logger.log(Level.SEVERE, "Log entry = ", grpcLogEntryString);
} catch (Exception e) {
logger.log(Level.SEVERE, "Caught exception while writing to Cloud Logging", e);
}
}
void addTraceData(LogEntry.Builder builder, SpanContext spanContext) {
if (!isTraceEnabled) {
return;
}
traceLoggingHelper.enhanceLogEntry(builder, spanContext);
}
Logging createLoggingClient() {
LoggingOptions.Builder builder = LoggingOptions.newBuilder();
if (!Strings.isNullOrEmpty(projectId)) {
builder.setProjectId(projectId);
}
BatchingSettings loggingDefaultBatchingSettings = LoggingServiceV2StubSettings.newBuilder()
.writeLogEntriesSettings().getBatchingSettings();
// Custom batching settings
BatchingSettings grpcLoggingVBatchingSettings = loggingDefaultBatchingSettings.toBuilder()
.setDelayThreshold(Duration.ofSeconds(1L)).setFlowControlSettings(
loggingDefaultBatchingSettings.getFlowControlSettings().toBuilder()
.setMaxOutstandingRequestBytes(52428800L) //50 MiB
.setLimitExceededBehavior(FlowController.LimitExceededBehavior.ThrowException)
.build()).build();
builder.setBatchingSettings(grpcLoggingVBatchingSettings);
return builder.build().getService();
}
@VisibleForTesting
static Map<String, String> getCustomTags(Map<String, String> customTags,
Map<String, String> locationTags, String projectId) {
static Map<String, String> getCustomTags(Map<String, String> customTags) {
ImmutableMap.Builder<String, String> tagsBuilder = ImmutableMap.builder();
String sourceProjectId = locationTags.get("project_id");
if (!Strings.isNullOrEmpty(projectId)
&& !Strings.isNullOrEmpty(sourceProjectId)
&& !Objects.equals(sourceProjectId, projectId)) {
tagsBuilder.put("source_project_id", sourceProjectId);
}
if (customTags != null) {
tagsBuilder.putAll(customTags);
}
return tagsBuilder.buildOrThrow();
}
@VisibleForTesting
static MonitoredResource getResource(Map<String, String> resourceTags) {
MonitoredResource.Builder builder = MonitoredResource.newBuilder(K8S_MONITORED_RESOURCE_TYPE);
if ((resourceTags != null) && !resourceTags.isEmpty()) {
for (Map.Entry<String, String> entry : resourceTags.entrySet()) {
String resourceKey = entry.getKey();
if (kubernetesResourceLabelSet.contains(resourceKey)) {
builder.addLabel(resourceKey, entry.getValue());
}
}
}
return builder.build();
}
@SuppressWarnings("unchecked")
private Map<String, Object> protoToMapConverter(GrpcLogRecord logProto)

View File

@ -18,6 +18,7 @@ package io.grpc.gcp.observability.logging;
import io.grpc.Internal;
import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.opencensus.trace.SpanContext;
/**
* Sink for GCP observability.
@ -27,7 +28,7 @@ public interface Sink {
/**
* Writes the {@code message} to the destination.
*/
void write(GrpcLogRecord message);
void write(GrpcLogRecord message, SpanContext spanContext);
/**
* Closes the sink.

View File

@ -0,0 +1,49 @@
/*
* Copyright 2023 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability.logging;
import com.google.cloud.logging.LogEntry;
import com.google.common.annotations.VisibleForTesting;
import io.grpc.Internal;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.TraceId;
@Internal
public class TraceLoggingHelper {
private final String tracePrefix;
public TraceLoggingHelper(String projectId) {
this.tracePrefix = "projects/" + projectId + "/traces/";
}
@VisibleForTesting
void enhanceLogEntry(LogEntry.Builder builder, SpanContext spanContext) {
addTracingData(tracePrefix, spanContext, builder);
}
private static void addTracingData(
String tracePrefix, SpanContext spanContext, LogEntry.Builder builder) {
builder.setTrace(formatTraceId(tracePrefix, spanContext.getTraceId()));
builder.setSpanId(spanContext.getSpanId().toLowerBase16());
builder.setTraceSampled(spanContext.getTraceOptions().isSampled());
}
private static String formatTraceId(String tracePrefix, TraceId traceId) {
return tracePrefix + traceId.toLowerBase16();
}
}
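For reference, with a hypothetical project id "my-project" and the sample span context used in the interceptor test further below, enhanceLogEntry would set roughly:

trace        = "projects/my-project/traces/4c6af40c499951eb7de2777ba1e4fefa"
spanId       = "de52e84d13dd232d"
traceSampled = true

This "projects/{projectId}/traces/{traceId}" form is the trace field format Cloud Logging expects, which lets each gRPC log entry be correlated with its Cloud Trace span.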

View File

@ -20,6 +20,7 @@ package grpc.observabilitylog.v1;
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
import "google/rpc/code.proto";
option java_multiple_files = true;
option java_package = "io.grpc.observabilitylog.v1";
@ -97,7 +98,7 @@ message Payload {
// the RPC timeout value
google.protobuf.Duration timeout = 2;
// The gRPC status code
uint32 status_code = 3;
google.rpc.Code status_code = 3;
// The gRPC status message
string status_message = 4;
// The value of the grpc-status-details-bin metadata key, if any.
@ -115,9 +116,9 @@ message Payload {
message Address {
enum Type {
TYPE_UNKNOWN = 0;
TYPE_IPV4 = 1; // in 1.2.3.4 form
TYPE_IPV6 = 2; // IPv6 canonical form (RFC5952 section 4)
TYPE_UNIX = 3; // UDS string
IPV4 = 1; // in 1.2.3.4 form
IPV6 = 2; // IPv6 canonical form (RFC5952 section 4)
UNIX = 3; // UDS string
}
Type type = 1;
string address = 2;
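(Compatibility note, an inference from protobuf semantics rather than something stated in this diff: enum fields are encoded as varints of their numeric values, so switching status_code from uint32 to google.rpc.Code and renaming TYPE_IPV4/TYPE_IPV6/TYPE_UNIX to IPV4/IPV6/UNIX leaves the wire format unchanged; only the generated identifiers differ, which is why LogHelper above now uses Code.forNumber(...) and Address.Type.IPV4/IPV6/UNIX.)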

View File

@ -1,110 +0,0 @@
/*
* Copyright 2022 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability;
import static com.google.common.truth.Truth.assertThat;
import com.google.common.collect.ImmutableMap;
import com.google.common.io.Files;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
@RunWith(JUnit4.class)
public class GlobalLocationTagsTest {
private static String FILE_CONTENTS =
"12:perf_event:/kubepods/burstable/podc43b6442-0725-4fb8-bb1c-d17f5122155c/"
+ "fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7\n"
+ "11:freezer:/kubepods/burstable/podc43b6442-0725-4fb8-bb1c-d17f5122155c/"
+ "fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7\n"
+ "2:rdma:/\n"
+ "1:name=systemd:/kubepods/burstable/podc43b6442-0725-4fb8-bb1c-d17f5122155c/"
+ "fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7\n"
+ "0::/system.slice/containerd.service\n";
private static String FILE_CONTENTS_LAST_LINE =
"0::/system.slice/containerd.service\n"
+ "6442-0725-4fb8-bb1c-d17f5122155cslslsl/fe61ca6482b58f4a9831d08d6ea15db25f\n"
+ "\n"
+ "12:perf_event:/kubepods/burstable/podc43b6442-0725-4fb8-bb1c-d17f5122155c/e19a54df\n";
@Rule public TemporaryFolder namespaceFolder = new TemporaryFolder();
@Rule public TemporaryFolder hostnameFolder = new TemporaryFolder();
@Rule public TemporaryFolder cgroupFolder = new TemporaryFolder();
@Test
public void testContainerIdParsing_lastLine() {
String containerId = GlobalLocationTags.getContainerIdFromFileContents(FILE_CONTENTS_LAST_LINE);
assertThat(containerId).isEqualTo("e19a54df");
}
@Test
public void testContainerIdParsing_fewerFields_notFound() {
String containerId = GlobalLocationTags.getContainerIdFromFileContents(
"12:/kubepods/burstable/podc43b6442-0725-4fb8-bb1c-d17f5122155c/"
+ "fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7\n");
assertThat(containerId).isNull();
}
@Test
public void testContainerIdParsing_fewerPaths_notFound() {
String containerId = GlobalLocationTags.getContainerIdFromFileContents(
"12:xdf:/kubepods/podc43b6442-0725-4fb8-bb1c-d17f5122155c/"
+ "fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7\n");
assertThat(containerId).isNull();
}
@Test
public void testPopulateKubernetesValues() throws IOException {
File namespaceFile = namespaceFolder.newFile();
File hostnameFile = hostnameFolder.newFile();
File cgroupFile = cgroupFolder.newFile();
Files.write("test-namespace1".getBytes(StandardCharsets.UTF_8), namespaceFile);
Files.write("test-hostname2\n".getBytes(StandardCharsets.UTF_8), hostnameFile);
Files.write(FILE_CONTENTS.getBytes(StandardCharsets.UTF_8), cgroupFile);
ImmutableMap.Builder<String, String> locationTags = ImmutableMap.builder();
GlobalLocationTags.populateFromKubernetesValues(locationTags, namespaceFile.getAbsolutePath(),
hostnameFile.getAbsolutePath(), cgroupFile.getAbsolutePath());
assertThat(locationTags.buildOrThrow()).containsExactly("container_id",
"fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7", "namespace_name",
"test-namespace1", "pod_name", "test-hostname2");
}
@Test
public void testNonKubernetesInstanceValues() throws IOException {
String namespaceFilePath = "/var/run/secrets/kubernetes.io/serviceaccount/namespace";
File hostnameFile = hostnameFolder.newFile();
File cgroupFile = cgroupFolder.newFile();
Files.write("test-hostname2\n".getBytes(StandardCharsets.UTF_8), hostnameFile);
Files.write(FILE_CONTENTS.getBytes(StandardCharsets.UTF_8), cgroupFile);
ImmutableMap.Builder<String, String> locationTags = ImmutableMap.builder();
GlobalLocationTags.populateFromKubernetesValues(locationTags,
namespaceFilePath, hostnameFile.getAbsolutePath(), cgroupFile.getAbsolutePath());
assertThat(locationTags.buildOrThrow()).containsExactly("container_id",
"fe61ca6482b58f4a9831d08d6ea15db25f9fd19b4be19a54df8c6c0eab8742b7",
"pod_name", "test-hostname2");
}
}

View File

@ -38,9 +38,11 @@ import io.grpc.gcp.observability.interceptors.InternalLoggingServerInterceptor;
import io.grpc.gcp.observability.interceptors.LogHelper;
import io.grpc.gcp.observability.logging.GcpLogSink;
import io.grpc.gcp.observability.logging.Sink;
import io.grpc.gcp.observability.logging.TraceLoggingHelper;
import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.grpc.testing.GrpcCleanupRule;
import io.grpc.testing.protobuf.SimpleServiceGrpc;
import io.opencensus.trace.SpanContext;
import java.io.IOException;
import java.util.Collections;
import java.util.regex.Pattern;
@ -49,7 +51,9 @@ import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
import org.mockito.AdditionalMatchers;
import org.mockito.ArgumentCaptor;
import org.mockito.ArgumentMatchers;
import org.mockito.Mockito;
@RunWith(JUnit4.class)
@ -59,12 +63,6 @@ public class LoggingTest {
public static final GrpcCleanupRule cleanupRule = new GrpcCleanupRule();
private static final String PROJECT_ID = "PROJECT";
private static final ImmutableMap<String, String> LOCATION_TAGS = ImmutableMap.of(
"project_id", "PROJECT",
"location", "us-central1-c",
"cluster_name", "grpc-observability-cluster",
"namespace_name", "default" ,
"pod_name", "app1-6c7c58f897-n92c5");
private static final ImmutableMap<String, String> CUSTOM_TAGS = ImmutableMap.of(
"KEY1", "Value1",
"KEY2", "VALUE2");
@ -111,10 +109,12 @@ public class LoggingTest {
@Override
public void run() {
ObservabilityConfig config = mock(ObservabilityConfig.class);
when(config.getCustomTags()).thenReturn(CUSTOM_TAGS);
Sink sink =
new GcpLogSink(
PROJECT_ID, LOCATION_TAGS, CUSTOM_TAGS, Collections.emptySet());
ObservabilityConfig config = mock(ObservabilityConfig.class);
PROJECT_ID, config, Collections.emptySet(),
mock(TraceLoggingHelper.class));
LogHelper spyLogHelper = spy(new LogHelper(sink));
ConfigFilterHelper mockFilterHelper = mock(ConfigFilterHelper.class);
InternalLoggingChannelInterceptor.Factory channelInterceptorFactory =
@ -237,7 +237,9 @@ public class LoggingTest {
// = 8
assertThat(Mockito.mockingDetails(mockSink).getInvocations().size()).isEqualTo(12);
ArgumentCaptor<GrpcLogRecord> captor = ArgumentCaptor.forClass(GrpcLogRecord.class);
verify(mockSink, times(12)).write(captor.capture());
verify(mockSink, times(12)).write(captor.capture(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any(SpanContext.class)));
for (GrpcLogRecord record : captor.getAllValues()) {
assertThat(record.getType()).isInstanceOf(GrpcLogRecord.EventType.class);
assertThat(record.getLogger()).isInstanceOf(GrpcLogRecord.EventLogger.class);

View File

@ -1,56 +0,0 @@
/*
* Copyright 2022 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability;
import static com.google.common.truth.Truth.assertThat;
import static org.mockito.Mockito.when;
import com.google.api.client.testing.http.MockHttpTransport;
import com.google.api.client.testing.http.MockLowLevelHttpResponse;
import com.google.auth.http.HttpTransportFactory;
import java.io.IOException;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
@RunWith(JUnit4.class)
public class MetadataConfigTest {
@Mock HttpTransportFactory httpTransportFactory;
@Before
public void setUp() {
MockitoAnnotations.initMocks(this);
}
@Test
public void testGetAttribute() throws IOException {
MockHttpTransport.Builder builder = new MockHttpTransport.Builder();
MockLowLevelHttpResponse response = new MockLowLevelHttpResponse();
response.setContent("foo");
builder.setLowLevelHttpResponse(response);
MockHttpTransport httpTransport = builder.build();
when(httpTransportFactory.create()).thenReturn(httpTransport);
MetadataConfig metadataConfig = new MetadataConfig(httpTransportFactory);
metadataConfig.init();
String val = metadataConfig.getAttribute("instance/attributes/cluster-name");
assertThat(val).isEqualTo("foo");
}
}

View File

@ -17,6 +17,7 @@
package io.grpc.gcp.observability.interceptors;
import static com.google.common.truth.Truth.assertThat;
import static io.grpc.census.internal.ObservabilityCensusConstants.CLIENT_TRACE_SPAN_CONTEXT_KEY;
import static io.grpc.gcp.observability.interceptors.LogHelperTest.BYTEARRAY_MARSHALLER;
import static org.junit.Assert.assertSame;
import static org.mockito.ArgumentMatchers.any;
@ -52,6 +53,11 @@ import io.grpc.internal.NoopClientCall;
import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.SpanId;
import io.opencensus.trace.TraceId;
import io.opencensus.trace.TraceOptions;
import io.opencensus.trace.Tracestate;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
@ -83,6 +89,14 @@ public class InternalLoggingChannelInterceptorTest {
public final MockitoRule mockito = MockitoJUnit.rule();
private static final Charset US_ASCII = StandardCharsets.US_ASCII;
private static final SpanContext DEFAULT_CLIENT_SPAN_CONTEXT = SpanContext.INVALID;
private static final SpanContext SPAN_CONTEXT = SpanContext.create(
TraceId.fromLowerBase16("4c6af40c499951eb7de2777ba1e4fefa"),
SpanId.fromLowerBase16("de52e84d13dd232d"),
TraceOptions.builder().setIsSampled(true).build(),
Tracestate.builder().build());
private static final CallOptions CALL_OPTIONS_WITH_SPAN_CONTEXT =
CallOptions.DEFAULT.withOption(CLIENT_TRACE_SPAN_CONTEXT_KEY, SPAN_CONTEXT);
private InternalLoggingChannelInterceptor.Factory factory;
private AtomicReference<ClientCall.Listener<byte[]>> interceptedListener;
@ -192,7 +206,8 @@ public class InternalLoggingChannelInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.CLIENT),
anyString(),
ArgumentMatchers.isNull());
ArgumentMatchers.isNull(),
eq(DEFAULT_CLIENT_SPAN_CONTEXT));
verifyNoMoreInteractions(mockLogHelper);
assertSame(clientInitial, actualClientInitial.get());
}
@ -213,7 +228,8 @@ public class InternalLoggingChannelInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.CLIENT),
anyString(),
same(peer));
same(peer),
any(SpanContext.class));
verifyNoMoreInteractions(mockLogHelper);
verify(mockListener).onHeaders(same(serverInitial));
}
@ -234,7 +250,8 @@ public class InternalLoggingChannelInterceptorTest {
same(request),
eq(filterParams.messageBytes()),
eq(EventLogger.CLIENT),
anyString());
anyString(),
any(SpanContext.class));
verifyNoMoreInteractions(mockLogHelper);
assertSame(request, actualRequest.get());
}
@ -251,7 +268,8 @@ public class InternalLoggingChannelInterceptorTest {
eq("method"),
eq("the-authority"),
eq(EventLogger.CLIENT),
anyString());
anyString(),
any(SpanContext.class));
halfCloseCalled.get(1, TimeUnit.MILLISECONDS);
verifyNoMoreInteractions(mockLogHelper);
}
@ -272,7 +290,8 @@ public class InternalLoggingChannelInterceptorTest {
same(response),
eq(filterParams.messageBytes()),
eq(EventLogger.CLIENT),
anyString());
anyString(),
any(SpanContext.class));
verifyNoMoreInteractions(mockLogHelper);
verify(mockListener).onMessage(same(response));
}
@ -295,7 +314,8 @@ public class InternalLoggingChannelInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.CLIENT),
anyString(),
same(peer));
same(peer),
any(SpanContext.class));
verifyNoMoreInteractions(mockLogHelper);
verify(mockListener).onClose(same(status), same(trailers));
}
@ -312,7 +332,8 @@ public class InternalLoggingChannelInterceptorTest {
eq("method"),
eq("the-authority"),
eq(EventLogger.CLIENT),
anyString());
anyString(),
any(SpanContext.class));
cancelCalled.get(1, TimeUnit.MILLISECONDS);
}
}
@ -363,7 +384,8 @@ public class InternalLoggingChannelInterceptorTest {
any(GrpcLogRecord.EventLogger.class),
anyString(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any()));
ArgumentMatchers.any()),
any(SpanContext.class));
Duration timeout = callOptTimeoutCaptor.getValue();
assertThat(TimeUnit.SECONDS.toNanos(1) - Durations.toNanos(timeout))
.isAtMost(TimeUnit.MILLISECONDS.toNanos(250));
@ -422,7 +444,8 @@ public class InternalLoggingChannelInterceptorTest {
any(GrpcLogRecord.EventLogger.class),
anyString(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any()));
ArgumentMatchers.any()),
any(SpanContext.class));
Duration timeout = contextTimeoutCaptor.getValue();
assertThat(TimeUnit.SECONDS.toNanos(1) - Durations.toNanos(timeout))
.isAtMost(TimeUnit.MILLISECONDS.toNanos(250));
@ -484,7 +507,8 @@ public class InternalLoggingChannelInterceptorTest {
any(GrpcLogRecord.EventLogger.class),
anyString(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any()));
ArgumentMatchers.any()),
any(SpanContext.class));
Duration timeout = timeoutCaptor.getValue();
assertThat(LogHelper.min(contextDeadline, callOptionsDeadline))
.isSameInstanceAs(contextDeadline);
@ -633,4 +657,172 @@ public class InternalLoggingChannelInterceptorTest {
assertThat(Mockito.mockingDetails(mockLogHelper).getInvocations().size()).isEqualTo(7);
}
}
@Test
public void clientSpanContextLogged_contextSetViaCallOption() {
Channel channel = new Channel() {
@Override
public <RequestT, ResponseT> ClientCall<RequestT, ResponseT> newCall(
MethodDescriptor<RequestT, ResponseT> methodDescriptor, CallOptions callOptions) {
return new NoopClientCall<RequestT, ResponseT>() {
@Override
@SuppressWarnings("unchecked")
public void start(Listener<ResponseT> responseListener, Metadata headers) {
interceptedListener.set((Listener<byte[]>) responseListener);
actualClientInitial.set(headers);
}
@Override
public void sendMessage(RequestT message) {
actualRequest.set(message);
}
@Override
public void cancel(String message, Throwable cause) {
cancelCalled.set(null);
}
@Override
public void halfClose() {
halfCloseCalled.set(null);
}
@Override
public Attributes getAttributes() {
return Attributes.newBuilder().set(Grpc.TRANSPORT_ATTR_REMOTE_ADDR, peer).build();
}
};
}
@Override
public String authority() {
return "the-authority";
}
};
@SuppressWarnings("unchecked")
ClientCall.Listener<byte[]> mockListener = mock(ClientCall.Listener.class);
MethodDescriptor<byte[], byte[]> method =
MethodDescriptor.<byte[], byte[]>newBuilder()
.setType(MethodType.UNKNOWN)
.setFullMethodName("service/method")
.setRequestMarshaller(BYTEARRAY_MARSHALLER)
.setResponseMarshaller(BYTEARRAY_MARSHALLER)
.build();
when(mockFilterHelper.logRpcMethod(method.getFullMethodName(), true))
.thenReturn(FilterParams.create(true, 10, 10));
ClientCall<byte[], byte[]> interceptedLoggingCall =
factory.create()
.interceptCall(method,
CALL_OPTIONS_WITH_SPAN_CONTEXT,
channel);
{
interceptedLoggingCall.start(mockListener, new Metadata());
ArgumentCaptor<SpanContext> callOptSpanContextCaptor = ArgumentCaptor.forClass(
SpanContext.class);
verify(mockLogHelper, times(1))
.logClientHeader(
anyLong(),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
ArgumentMatchers.isNull(),
any(Metadata.class),
anyInt(),
any(GrpcLogRecord.EventLogger.class),
anyString(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any()),
callOptSpanContextCaptor.capture());
SpanContext spanContext = callOptSpanContextCaptor.getValue();
assertThat(spanContext).isEqualTo(SPAN_CONTEXT);
}
}
@Test
public void clientSpanContextLogged_contextNotSetViaCallOption() {
Channel channel = new Channel() {
@Override
public <RequestT, ResponseT> ClientCall<RequestT, ResponseT> newCall(
MethodDescriptor<RequestT, ResponseT> methodDescriptor, CallOptions callOptions) {
return new NoopClientCall<RequestT, ResponseT>() {
@Override
@SuppressWarnings("unchecked")
public void start(Listener<ResponseT> responseListener, Metadata headers) {
interceptedListener.set((Listener<byte[]>) responseListener);
actualClientInitial.set(headers);
}
@Override
public void sendMessage(RequestT message) {
actualRequest.set(message);
}
@Override
public void cancel(String message, Throwable cause) {
cancelCalled.set(null);
}
@Override
public void halfClose() {
halfCloseCalled.set(null);
}
@Override
public Attributes getAttributes() {
return Attributes.newBuilder().set(Grpc.TRANSPORT_ATTR_REMOTE_ADDR, peer).build();
}
};
}
@Override
public String authority() {
return "the-authority";
}
};
@SuppressWarnings("unchecked")
ClientCall.Listener<byte[]> mockListener = mock(ClientCall.Listener.class);
MethodDescriptor<byte[], byte[]> method =
MethodDescriptor.<byte[], byte[]>newBuilder()
.setType(MethodType.UNKNOWN)
.setFullMethodName("service/method")
.setRequestMarshaller(BYTEARRAY_MARSHALLER)
.setResponseMarshaller(BYTEARRAY_MARSHALLER)
.build();
when(mockFilterHelper.logRpcMethod(method.getFullMethodName(), true))
.thenReturn(FilterParams.create(true, 10, 10));
ClientCall<byte[], byte[]> interceptedLoggingCall =
factory.create()
.interceptCall(method,
CallOptions.DEFAULT,
channel);
{
interceptedLoggingCall.start(mockListener, new Metadata());
ArgumentCaptor<SpanContext> callOptSpanContextCaptor = ArgumentCaptor.forClass(
SpanContext.class);
verify(mockLogHelper, times(1))
.logClientHeader(
anyLong(),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
AdditionalMatchers.or(ArgumentMatchers.isNull(), anyString()),
ArgumentMatchers.isNull(),
any(Metadata.class),
anyInt(),
any(GrpcLogRecord.EventLogger.class),
anyString(),
AdditionalMatchers.or(ArgumentMatchers.isNull(),
ArgumentMatchers.any()),
callOptSpanContextCaptor.capture());
SpanContext spanContext = callOptSpanContextCaptor.getValue();
assertThat(spanContext).isEqualTo(DEFAULT_CLIENT_SPAN_CONTEXT);
}
}
}
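
The two new tests above differ only in whether CALL_OPTIONS_WITH_SPAN_CONTEXT or CallOptions.DEFAULT is passed to interceptCall. For illustration only, a sketch of how a caller could attach the currently active OpenCensus span the same way; CLIENT_TRACE_SPAN_CONTEXT_KEY lives in an internal package, so this is a hypothetical usage sketch, not a supported API.

import static io.grpc.census.internal.ObservabilityCensusConstants.CLIENT_TRACE_SPAN_CONTEXT_KEY;

import io.grpc.CallOptions;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.Tracing;

final class SpanContextCallOptionSketch {
  // Returns call options carrying the currently active OpenCensus span context,
  // mirroring CALL_OPTIONS_WITH_SPAN_CONTEXT in the test above; SpanContext.INVALID
  // is attached when no span is active.
  static CallOptions withCurrentSpanContext(CallOptions base) {
    SpanContext current = Tracing.getTracer().getCurrentSpan().getContext();
    return base.withOption(CLIENT_TRACE_SPAN_CONTEXT_KEY, current);
  }

  private SpanContextCallOptionSketch() {}
}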

View File

@ -45,6 +45,7 @@ import io.grpc.gcp.observability.interceptors.ConfigFilterHelper.FilterParams;
import io.grpc.internal.NoopServerCall;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.opencensus.trace.SpanContext;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
@ -171,7 +172,8 @@ public class InternalLoggingServerInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.SERVER),
anyString(),
same(peer));
same(peer),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
}
@ -191,7 +193,8 @@ public class InternalLoggingServerInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.SERVER),
anyString(),
ArgumentMatchers.isNull());
ArgumentMatchers.isNull(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
assertSame(serverInitial, actualServerInitial.get());
}
@ -212,7 +215,8 @@ public class InternalLoggingServerInterceptorTest {
same(request),
eq(filterParams.messageBytes()),
eq(EventLogger.SERVER),
anyString());
anyString(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
verify(mockListener).onMessage(same(request));
}
@ -229,7 +233,8 @@ public class InternalLoggingServerInterceptorTest {
eq("method"),
eq("the-authority"),
eq(EventLogger.SERVER),
anyString());
anyString(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
verify(mockListener).onHalfClose();
}
@ -250,7 +255,8 @@ public class InternalLoggingServerInterceptorTest {
same(response),
eq(filterParams.messageBytes()),
eq(EventLogger.SERVER),
anyString());
anyString(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
assertSame(response, actualResponse.get());
}
@ -273,7 +279,8 @@ public class InternalLoggingServerInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.SERVER),
anyString(),
ArgumentMatchers.isNull());
ArgumentMatchers.isNull(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
assertSame(status, actualStatus.get());
assertSame(trailers, actualTrailers.get());
@ -291,7 +298,8 @@ public class InternalLoggingServerInterceptorTest {
eq("method"),
eq("the-authority"),
eq(EventLogger.SERVER),
anyString());
anyString(),
eq(SpanContext.INVALID));
verify(mockListener).onCancel();
}
}
@ -342,7 +350,8 @@ public class InternalLoggingServerInterceptorTest {
eq(filterParams.headerBytes()),
eq(EventLogger.SERVER),
anyString(),
ArgumentMatchers.isNull());
ArgumentMatchers.isNull(),
eq(SpanContext.INVALID));
verifyNoMoreInteractions(mockLogHelper);
Duration timeout = timeoutCaptor.getValue();
assertThat(TimeUnit.SECONDS.toNanos(1) - Durations.toNanos(timeout))

View File

@ -29,6 +29,7 @@ import static org.mockito.Mockito.verify;
import com.google.protobuf.ByteString;
import com.google.protobuf.Duration;
import com.google.protobuf.util.Durations;
import com.google.rpc.Code;
import io.grpc.Attributes;
import io.grpc.Grpc;
import io.grpc.Metadata;
@ -42,6 +43,11 @@ import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.grpc.observabilitylog.v1.Payload;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.SpanId;
import io.opencensus.trace.TraceId;
import io.opencensus.trace.TraceOptions;
import io.opencensus.trace.Tracestate;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
@ -74,11 +80,22 @@ public class LogHelperTest {
Metadata.Key.of("c", Metadata.ASCII_STRING_MARSHALLER);
private static final int HEADER_LIMIT = 10;
private static final int MESSAGE_LIMIT = Integer.MAX_VALUE;
private static final SpanContext CLIENT_SPAN_CONTEXT = SpanContext.create(
TraceId.fromLowerBase16("4c6af40c499951eb7de2777ba1e4fefa"),
SpanId.fromLowerBase16("de52e84d13dd232d"),
TraceOptions.builder().setIsSampled(true).build(),
Tracestate.builder().build());
private static final SpanContext SERVER_SPAN_CONTEXT = SpanContext.create(
TraceId.fromLowerBase16("549a8a64db2d0c757fdf6bb1bfe84e2c"),
SpanId.fromLowerBase16("a5b7704614fe903d"),
TraceOptions.builder().setIsSampled(true).build(),
Tracestate.builder().build());
private final Metadata nonEmptyMetadata = new Metadata();
private final Sink sink = mock(GcpLogSink.class);
private final LogHelper logHelper = new LogHelper(sink);
@Before
public void setUp() {
nonEmptyMetadata.put(KEY_A, DATA_A);
@ -94,7 +111,7 @@ public class LogHelperTest {
assertThat(LogHelper.socketAddressToProto(socketAddress))
.isEqualTo(Address
.newBuilder()
.setType(Address.Type.TYPE_IPV4)
.setType(Address.Type.IPV4)
.setAddress("127.0.0.1")
.setIpPort(12345)
.build());
@ -109,7 +126,7 @@ public class LogHelperTest {
assertThat(LogHelper.socketAddressToProto(socketAddress))
.isEqualTo(Address
.newBuilder()
.setType(Address.Type.TYPE_IPV6)
.setType(Address.Type.IPV6)
.setAddress("2001:db8::2:1") // RFC 5952 section 4: ipv6 canonical form required
.setIpPort(12345)
.build());
@ -287,8 +304,9 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
null);
verify(sink).write(base);
null,
CLIENT_SPAN_CONTEXT);
verify(sink).write(base, CLIENT_SPAN_CONTEXT);
}
// logged on server
@ -303,12 +321,14 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.SERVER,
callId,
peerAddress);
peerAddress,
SERVER_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setPeer(LogHelper.socketAddressToProto(peerAddress))
.setLogger(EventLogger.SERVER)
.build());
.build(),
SERVER_SPAN_CONTEXT);
}
// timeout is null
@ -323,11 +343,13 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
null);
null,
CLIENT_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setPayload(base.getPayload().toBuilder().clearTimeout().build())
.build());
.build(),
CLIENT_SPAN_CONTEXT);
}
// peerAddress is not null (error on client)
@ -342,7 +364,8 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
peerAddress);
peerAddress,
CLIENT_SPAN_CONTEXT);
fail();
} catch (IllegalArgumentException expected) {
assertThat(expected).hasMessageThat().contains("peerAddress can only be specified by server");
@ -385,8 +408,9 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
peerAddress);
verify(sink).write(base);
peerAddress,
CLIENT_SPAN_CONTEXT);
verify(sink).write(base, CLIENT_SPAN_CONTEXT);
}
// logged on server
@ -400,12 +424,14 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.SERVER,
callId,
null);
null,
SERVER_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setLogger(EventLogger.SERVER)
.clearPeer()
.build());
.build(),
SERVER_SPAN_CONTEXT);
}
// peerAddress is not null (error on server)
@ -419,7 +445,8 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.SERVER,
callId,
peerAddress);
peerAddress,
SERVER_SPAN_CONTEXT);
fail();
} catch (IllegalArgumentException expected) {
@ -454,7 +481,7 @@ public class LogHelperTest {
builder.setPeer(LogHelper.socketAddressToProto(peer));
builder.setPayload(
builder.getPayload().toBuilder()
.setStatusCode(Status.INTERNAL.getCode().value())
.setStatusCode(Code.forNumber(Status.INTERNAL.getCode().value()))
.setStatusMessage("test description")
.build());
GrpcLogRecord base = builder.build();
@ -471,8 +498,9 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
peer);
verify(sink).write(base);
peer,
CLIENT_SPAN_CONTEXT);
verify(sink).write(base, CLIENT_SPAN_CONTEXT);
}
// logged on server
@ -487,12 +515,14 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.SERVER,
callId,
null);
null,
SERVER_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.clearPeer()
.setLogger(EventLogger.SERVER)
.build());
.build(),
SERVER_SPAN_CONTEXT);
}
// peer address is null
@ -507,11 +537,13 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
null);
null,
CLIENT_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.clearPeer()
.build());
.build(),
CLIENT_SPAN_CONTEXT);
}
// status description is null
@ -526,11 +558,13 @@ public class LogHelperTest {
HEADER_LIMIT,
EventLogger.CLIENT,
callId,
peer);
peer,
CLIENT_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setPayload(base.getPayload().toBuilder().clearStatusMessage().build())
.build());
.build(),
CLIENT_SPAN_CONTEXT);
}
}
@ -590,8 +624,9 @@ public class LogHelperTest {
message,
MESSAGE_LIMIT,
EventLogger.CLIENT,
callId);
verify(sink).write(base);
callId,
CLIENT_SPAN_CONTEXT);
verify(sink).write(base, CLIENT_SPAN_CONTEXT);
}
// response message, logged on client
{
@ -604,11 +639,13 @@ public class LogHelperTest {
message,
MESSAGE_LIMIT,
EventLogger.CLIENT,
callId);
callId,
CLIENT_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setType(EventType.SERVER_MESSAGE)
.build());
.build(),
CLIENT_SPAN_CONTEXT);
}
// request message, logged on server
{
@ -621,11 +658,13 @@ public class LogHelperTest {
message,
MESSAGE_LIMIT,
EventLogger.SERVER,
callId);
callId,
SERVER_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.setLogger(EventLogger.SERVER)
.build());
.build(),
SERVER_SPAN_CONTEXT);
}
// response message, logged on server
{
@ -638,12 +677,14 @@ public class LogHelperTest {
message,
MESSAGE_LIMIT,
EventLogger.SERVER,
callId);
callId,
SpanContext.INVALID);
verify(sink).write(
base.toBuilder()
.setType(EventType.SERVER_MESSAGE)
.setLogger(EventLogger.SERVER)
.build());
.build(),
SpanContext.INVALID);
}
// message is not of type : com.google.protobuf.Message or byte[]
{
@ -656,12 +697,14 @@ public class LogHelperTest {
"message",
MESSAGE_LIMIT,
EventLogger.CLIENT,
callId);
callId,
CLIENT_SPAN_CONTEXT);
verify(sink).write(
base.toBuilder()
.clearPayload()
.clearPayloadTruncated()
.build());
.build(),
CLIENT_SPAN_CONTEXT);
}
}
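
The setStatusCode change above replaces a raw integer with com.google.rpc.Code. A small sketch of the conversion the test relies on, assuming (as the matching numeric values suggest) that io.grpc.Status codes line up with the com.google.rpc.Code enum:

import com.google.rpc.Code;
import io.grpc.Status;

final class StatusCodeMappingSketch {
  // e.g. Status.INTERNAL.getCode().value() == 13, and Code.forNumber(13) == Code.INTERNAL.
  static Code toRpcCode(Status status) {
    return Code.forNumber(status.getCode().value());
  }

  private StatusCodeMappingSketch() {}
}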

View File

@ -22,8 +22,8 @@ import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoInteractions;
import static org.mockito.Mockito.verifyNoMoreInteractions;
import static org.mockito.Mockito.when;
import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.common.collect.ImmutableMap;
@ -31,9 +31,15 @@ import com.google.protobuf.Duration;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
import com.google.protobuf.util.Durations;
import io.grpc.gcp.observability.ObservabilityConfig;
import io.grpc.observabilitylog.v1.GrpcLogRecord;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventLogger;
import io.grpc.observabilitylog.v1.GrpcLogRecord.EventType;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.SpanId;
import io.opencensus.trace.TraceId;
import io.opencensus.trace.TraceOptions;
import io.opencensus.trace.Tracestate;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
@ -57,12 +63,6 @@ public class GcpLogSinkTest {
@Rule
public final MockitoRule mockito = MockitoJUnit.rule();
private static final ImmutableMap<String, String> LOCATION_TAGS =
ImmutableMap.of("project_id", "PROJECT",
"location", "us-central1-c",
"cluster_name", "grpc-observability-cluster",
"namespace_name", "default" ,
"pod_name", "app1-6c7c58f897-n92c5");
private static final ImmutableMap<String, String> CUSTOM_TAGS =
ImmutableMap.of("KEY1", "Value1",
"KEY2", "VALUE2");
@ -105,13 +105,16 @@ public class GcpLogSinkTest {
.build();
@Mock
private Logging mockLogging;
@Mock
private ObservabilityConfig mockConfig;
@Test
@SuppressWarnings("unchecked")
public void verifyWrite() throws Exception {
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME, LOCATION_TAGS,
CUSTOM_TAGS, Collections.emptySet());
sink.write(LOG_PROTO);
when(mockConfig.getCustomTags()).thenReturn(CUSTOM_TAGS);
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), new TraceLoggingHelper(DEST_PROJECT_NAME));
sink.write(LOG_PROTO, null);
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
@ -127,10 +130,10 @@ public class GcpLogSinkTest {
@Test
@SuppressWarnings("unchecked")
public void verifyWriteWithTags() {
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME, LOCATION_TAGS,
CUSTOM_TAGS, Collections.emptySet());
MonitoredResource expectedMonitoredResource = GcpLogSink.getResource(LOCATION_TAGS);
sink.write(LOG_PROTO);
when(mockConfig.getCustomTags()).thenReturn(CUSTOM_TAGS);
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), new TraceLoggingHelper(DEST_PROJECT_NAME));
sink.write(LOG_PROTO, null);
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
@ -138,7 +141,6 @@ public class GcpLogSinkTest {
System.out.println(logEntrySetCaptor.getValue());
for (Iterator<LogEntry> it = logEntrySetCaptor.getValue().iterator(); it.hasNext(); ) {
LogEntry entry = it.next();
assertThat(entry.getResource()).isEqualTo(expectedMonitoredResource);
assertThat(entry.getLabels()).isEqualTo(CUSTOM_TAGS);
assertThat(entry.getPayload().getData()).isEqualTo(EXPECTED_STRUCT_LOG_PROTO);
assertThat(entry.getLogName()).isEqualTo(EXPECTED_LOG_NAME);
@ -150,10 +152,11 @@ public class GcpLogSinkTest {
@SuppressWarnings("unchecked")
public void emptyCustomTags_labelsNotSet() {
Map<String, String> emptyCustomTags = null;
when(mockConfig.getCustomTags()).thenReturn(emptyCustomTags);
Map<String, String> expectedEmptyLabels = new HashMap<>();
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME, LOCATION_TAGS,
emptyCustomTags, Collections.emptySet());
sink.write(LOG_PROTO);
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), new TraceLoggingHelper(DEST_PROJECT_NAME));
sink.write(LOG_PROTO, null);
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
@ -169,12 +172,13 @@ public class GcpLogSinkTest {
@SuppressWarnings("unchecked")
public void emptyCustomTags_setSourceProject() {
Map<String, String> emptyCustomTags = null;
when(mockConfig.getCustomTags()).thenReturn(emptyCustomTags);
String projectId = "PROJECT";
Map<String, String> expectedLabels = GcpLogSink.getCustomTags(emptyCustomTags, LOCATION_TAGS,
projectId);
GcpLogSink sink = new GcpLogSink(mockLogging, projectId, LOCATION_TAGS,
emptyCustomTags, Collections.emptySet());
sink.write(LOG_PROTO);
Map<String, String> expectedLabels = GcpLogSink.getCustomTags(emptyCustomTags);
GcpLogSink sink = new GcpLogSink(mockLogging, projectId,
mockConfig, Collections.emptySet(), new TraceLoggingHelper(DEST_PROJECT_NAME));
sink.write(LOG_PROTO, null);
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
@ -188,9 +192,9 @@ public class GcpLogSinkTest {
@Test
public void verifyClose() throws Exception {
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME, LOCATION_TAGS,
CUSTOM_TAGS, Collections.emptySet());
sink.write(LOG_PROTO);
GcpLogSink sink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), new TraceLoggingHelper(DEST_PROJECT_NAME));
sink.write(LOG_PROTO, null);
verify(mockLogging, times(1)).write(anyIterable());
sink.close();
verify(mockLogging).close();
@ -199,9 +203,107 @@ public class GcpLogSinkTest {
@Test
public void verifyExclude() throws Exception {
Sink mockSink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME, LOCATION_TAGS,
CUSTOM_TAGS, Collections.singleton("service"));
mockSink.write(LOG_PROTO);
Sink mockSink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.singleton("service"), new TraceLoggingHelper(DEST_PROJECT_NAME));
mockSink.write(LOG_PROTO, null);
verifyNoInteractions(mockLogging);
}
@Test
@SuppressWarnings("unchecked")
public void verifyNoTraceDataInLogs_withTraceDisabled() throws Exception {
SpanContext validSpanContext = SpanContext.create(
TraceId.fromLowerBase16("4c6af40c499951eb7de2777ba1e4fefa"),
SpanId.fromLowerBase16("de52e84d13dd232d"),
TraceOptions.builder().setIsSampled(true).build(),
Tracestate.builder().build());
TraceLoggingHelper traceLoggingHelper = new TraceLoggingHelper(DEST_PROJECT_NAME);
when(mockConfig.isEnableCloudTracing()).thenReturn(false);
Sink mockSink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), traceLoggingHelper);
mockSink.write(LOG_PROTO, validSpanContext);
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
verify(mockLogging, times(1)).write(logEntrySetCaptor.capture());
for (Iterator<LogEntry> it = logEntrySetCaptor.getValue().iterator(); it.hasNext(); ) {
LogEntry entry = it.next();
assertThat(entry.getTrace()).isNull(); // Field not present
assertThat(entry.getSpanId()).isNull(); // Field not present
assertThat(entry.getTraceSampled()).isFalse(); // Default value
assertThat(entry.getPayload().getData()).isEqualTo(EXPECTED_STRUCT_LOG_PROTO);
}
}
@Test
@SuppressWarnings("unchecked")
public void verifyTraceDataInLogs_withValidSpanContext() throws Exception {
CharSequence traceIdSeq = "4c6af40c499951eb7de2777ba1e4fefa";
CharSequence spanIdSeq = "de52e84d13dd232d";
TraceId traceId = TraceId.fromLowerBase16(traceIdSeq);
SpanId spanId = SpanId.fromLowerBase16(spanIdSeq);
boolean traceSampled = true;
SpanContext validSpanContext = SpanContext.create(traceId, spanId,
TraceOptions.builder().setIsSampled(traceSampled).build(),
Tracestate.builder().build());
TraceLoggingHelper traceLoggingHelper = new TraceLoggingHelper(DEST_PROJECT_NAME);
when(mockConfig.isEnableCloudTracing()).thenReturn(true);
Sink mockSink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), traceLoggingHelper);
mockSink.write(LOG_PROTO, validSpanContext);
String expectedTrace = "projects/" + DEST_PROJECT_NAME + "/traces/" + traceIdSeq;
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
verify(mockLogging, times(1)).write(logEntrySetCaptor.capture());
for (Iterator<LogEntry> it = logEntrySetCaptor.getValue().iterator(); it.hasNext(); ) {
LogEntry entry = it.next();
assertThat(entry.getTrace()).isEqualTo(expectedTrace);
assertThat(entry.getSpanId()).isEqualTo("" + spanIdSeq);
assertThat(entry.getTraceSampled()).isEqualTo(traceSampled);
assertThat(entry.getPayload().getData()).isEqualTo(EXPECTED_STRUCT_LOG_PROTO);
}
}
@Test
@SuppressWarnings("unchecked")
public void verifyTraceDataLogs_withNullSpanContext() throws Exception {
TraceLoggingHelper traceLoggingHelper = new TraceLoggingHelper(DEST_PROJECT_NAME);
when(mockConfig.isEnableCloudTracing()).thenReturn(true);
Sink mockSink = new GcpLogSink(mockLogging, DEST_PROJECT_NAME,
mockConfig, Collections.emptySet(), traceLoggingHelper);
String expectedTrace =
"projects/" + DEST_PROJECT_NAME + "/traces/00000000000000000000000000000000";
String expectedSpanId = "0000000000000000";
ArgumentCaptor<Collection<LogEntry>> logEntrySetCaptor = ArgumentCaptor.forClass(
(Class) Collection.class);
// Client log with default span context
mockSink.write(LOG_PROTO , SpanContext.INVALID);
verify(mockLogging, times(1)).write(logEntrySetCaptor.capture());
for (Iterator<LogEntry> it = logEntrySetCaptor.getValue().iterator(); it.hasNext(); ) {
LogEntry entry = it.next();
assertThat(entry.getTrace()).isEqualTo(expectedTrace);
assertThat(entry.getSpanId()).isEqualTo(expectedSpanId);
assertThat(entry.getTraceSampled()).isFalse();
}
// Server log
GrpcLogRecord serverLogProto = LOG_PROTO.toBuilder().setLogger(EventLogger.SERVER).build();
mockSink.write(serverLogProto , SpanContext.INVALID);
verify(mockLogging, times(2)).write(logEntrySetCaptor.capture());
for (Iterator<LogEntry> it = logEntrySetCaptor.getValue().iterator(); it.hasNext(); ) {
LogEntry entry = it.next();
assertThat(entry.getTrace()).isEqualTo(expectedTrace);
assertThat(entry.getSpanId()).isEqualTo(expectedSpanId);
assertThat(entry.getTraceSampled()).isFalse();
}
}
}
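
Taken together, the three trace tests above imply a gate on isEnableCloudTracing(): span data reaches the LogEntry only when cloud tracing is enabled, and SpanContext.INVALID still yields the all-zero trace and span ids. A sketch of that assumed flow, not the real GcpLogSink internals (names and method visibility here are also assumptions):

import com.google.cloud.logging.LogEntry;
import io.grpc.gcp.observability.ObservabilityConfig;
import io.grpc.gcp.observability.logging.TraceLoggingHelper;
import io.opencensus.trace.SpanContext;

final class SinkTraceGateSketch {
  // Assumed gate: only decorate the entry when cloud tracing is enabled and a
  // span context accompanied the record; otherwise trace/spanId stay unset and
  // traceSampled keeps its false default, as verifyNoTraceDataInLogs_withTraceDisabled asserts.
  static void maybeAttachTrace(LogEntry.Builder entry, SpanContext spanContext,
      ObservabilityConfig config, TraceLoggingHelper traceLoggingHelper) {
    if (config.isEnableCloudTracing() && spanContext != null) {
      traceLoggingHelper.enhanceLogEntry(entry, spanContext);
    }
  }

  private SinkTraceGateSketch() {}
}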

View File

@ -0,0 +1,91 @@
/*
* Copyright 2023 The gRPC Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.grpc.gcp.observability.logging;
import static com.google.common.truth.Truth.assertThat;
import com.google.cloud.logging.LogEntry;
import io.opencensus.trace.SpanContext;
import io.opencensus.trace.SpanId;
import io.opencensus.trace.TraceId;
import io.opencensus.trace.TraceOptions;
import io.opencensus.trace.Tracestate;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
/**
* Tests for {@link TraceLoggingHelper}.
*/
@RunWith(JUnit4.class)
public class TraceLoggingHelperTest {
private static final String PROJECT = "PROJECT";
private static final Tracestate EMPTY_TRACESTATE = Tracestate.builder().build();
private static TraceLoggingHelper traceLoggingHelper;
@Before
public void setUp() {
traceLoggingHelper = new TraceLoggingHelper(PROJECT);
}
@Test
public void enhanceLogEntry_AddSampledSpanContextToLogEntry() {
SpanContext spanContext = SpanContext.create(
TraceId.fromLowerBase16("5ce724c382c136b2a67bb447e6a6bd27"),
SpanId.fromLowerBase16("de52e84d13dd232d"),
TraceOptions.builder().setIsSampled(true).build(),
EMPTY_TRACESTATE);
LogEntry logEntry = getEnhancedLogEntry(traceLoggingHelper, spanContext);
assertThat(logEntry.getTraceSampled()).isTrue();
assertThat(logEntry.getTrace())
.isEqualTo("projects/PROJECT/traces/5ce724c382c136b2a67bb447e6a6bd27");
assertThat(logEntry.getSpanId()).isEqualTo("de52e84d13dd232d");
}
@Test
public void enhanceLogEntry_AddNonSampledSpanContextToLogEntry() {
SpanContext spanContext = SpanContext.create(
TraceId.fromLowerBase16("649a8a64db2d0c757fd06bb1bfe84e2c"),
SpanId.fromLowerBase16("731e102335b7a5a0"),
TraceOptions.builder().setIsSampled(false).build(),
EMPTY_TRACESTATE);
LogEntry logEntry = getEnhancedLogEntry(traceLoggingHelper, spanContext);
assertThat(logEntry.getTraceSampled()).isFalse();
assertThat(logEntry.getTrace())
.isEqualTo("projects/PROJECT/traces/649a8a64db2d0c757fd06bb1bfe84e2c");
assertThat(logEntry.getSpanId()).isEqualTo("731e102335b7a5a0");
}
@Test
public void enhanceLogEntry_AddBlankSpanContextToLogEntry() {
SpanContext spanContext = SpanContext.INVALID;
LogEntry logEntry = getEnhancedLogEntry(traceLoggingHelper, spanContext);
assertThat(logEntry.getTraceSampled()).isFalse();
assertThat(logEntry.getTrace())
.isEqualTo("projects/PROJECT/traces/00000000000000000000000000000000");
assertThat(logEntry.getSpanId()).isEqualTo("0000000000000000");
}
private static LogEntry getEnhancedLogEntry(TraceLoggingHelper traceLoggingHelper,
SpanContext spanContext) {
LogEntry.Builder logEntryBuilder = LogEntry.newBuilder(null);
traceLoggingHelper.enhanceLogEntry(logEntryBuilder, spanContext);
return logEntryBuilder.build();
}
}
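
The assertions above pin down the SpanContext-to-LogEntry mapping. The sketch below reproduces that expected mapping using only public Cloud Logging and OpenCensus APIs; it illustrates the output the tests check for, not the actual TraceLoggingHelper implementation.

import com.google.cloud.logging.LogEntry;
import io.opencensus.trace.SpanContext;

final class TraceFieldMappingSketch {
  static void applyExpectedMapping(LogEntry.Builder builder, String projectId,
      SpanContext spanContext) {
    // getTrace(): "projects/<projectId>/traces/<32-char lower-hex trace id>"
    builder.setTrace("projects/" + projectId + "/traces/"
        + spanContext.getTraceId().toLowerBase16());
    // getSpanId(): 16-char lower-hex span id ("0000000000000000" for SpanContext.INVALID)
    builder.setSpanId(spanContext.getSpanId().toLowerBase16());
    // getTraceSampled(): the sampling bit from the trace options
    builder.setTraceSampled(spanContext.getTraceOptions().isSampled());
  }

  private TraceFieldMappingSketch() {}
}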

View File

@ -71,7 +71,6 @@ public class TestServiceClient {
} finally {
client.tearDown();
}
System.exit(0);
}
private String serverHost = "localhost";

View File

@ -638,7 +638,8 @@ public class WeightedRoundRobinLoadBalancerTest {
pickCount.put(result, pickCount.getOrDefault(result, 0) + 1);
}
for (int i = 0; i < capacity; i++) {
assertThat(Math.abs(pickCount.get(i) / 1000.0 - weights[i] / totalWeight) ).isAtMost(0.01);
assertThat(Math.abs(pickCount.getOrDefault(i, 0) / 1000.0 - weights[i] / totalWeight) )
.isAtMost(0.01);
}
}
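
The switch to getOrDefault guards the averaging loop: if subchannel i never shows up among the sampled picks, pickCount has no entry for it, and unboxing the null returned by get(i) would throw a NullPointerException instead of treating the count as zero. A tiny standalone illustration:

import java.util.HashMap;
import java.util.Map;

final class GetOrDefaultSketch {
  public static void main(String[] args) {
    Map<Integer, Integer> pickCount = new HashMap<>();
    pickCount.put(0, 1000);  // only subchannel 0 was ever picked

    int picksFor1 = pickCount.getOrDefault(1, 0);  // 0, no map entry required
    System.out.println(picksFor1);

    // Integer picks = pickCount.get(1);  // null
    // int boom = picks;                  // would throw NullPointerException
  }
}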