Add contents of new site repo

Signed-off-by: lucperkins <lucperkins@gmail.com>
lucperkins 2019-04-22 18:07:22 -07:00
parent 1787e78fc5
commit 067307dd29
467 changed files with 61608 additions and 1 deletions

202
LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

16
Makefile Normal file

@@ -0,0 +1,16 @@
serve:
hugo server \
--buildDrafts \
--buildFuture \
--disableFastRender
production-build:
hugo \
--minify
preview-build:
hugo \
--baseURL $(DEPLOY_PRIME_URL) \
--buildDrafts \
--buildFuture \
--minify
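The `preview-build` target relies on `DEPLOY_PRIME_URL`, an environment variable that Netlify injects into deploy-preview builds. As a hedged sketch (no `netlify.toml` is included in this commit, so the file below is an assumption about how these targets might be wired up), the Makefile targets could be mapped to Netlify build contexts like this:

```toml
# netlify.toml — illustrative only; not part of this commit
[build]
  command = "make production-build"
  publish = "public"   # Hugo's default output directory

[context.deploy-preview]
  # Netlify sets DEPLOY_PRIME_URL for deploy previews,
  # which the Makefile passes to hugo --baseURL
  command = "make preview-build"
```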


@@ -1 +1,4 @@
# grpc.io
# gRPC
This is the new [gRPC](https://grpc.io/) website, built with [Hugo](https://gohugo.io/). Here are a few key things to know about the site:
* Key variables are stored in `/config.toml`, for example `grpc_release_tag`, which is used in various places throughout the site.
* If you're not familiar with Hugo, all content is stored in the `/content` directory. Go there to edit anything on the site. Images are stored in `/static/img`. Check out the [Hugo Quick Start](https://gohugo.io/getting-started/quick-start/) for a quick intro to Hugo.
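As a hedged illustration (the site's actual templates are not shown in this commit), a site-level param such as `grpc_release_tag` could be read from a Hugo layout like this:

```go-html-template
{{/* Illustrative layout snippet: read a [params] value from config.toml */}}
<a href="https://github.com/grpc/grpc/releases/tag/{{ .Site.Params.grpc_release_tag }}">
  {{ .Site.Params.grpc_release_tag }}
</a>
```

Centralizing the release tag in `config.toml` means a version bump is a one-line change rather than a sweep across every page that mentions it.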

6
archetypes/default.md Normal file

@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---

87
config.toml Normal file

@@ -0,0 +1,87 @@
baseURL = "https://cjyabraham.github.io/"
languageCode = "en-us"
title = "gRPC"
pygmentsCodeFences = true
[params]
grpc_release_tag = "v1.20.0"
grpc_release_tag_no_v = "1.20.0"
grpc_java_release_tag = "v1.20.0"
milestones_link = "https://github.com/grpc/grpc/milestones"
[mediaTypes."text/netlify"]
delimiter = ""
[outputFormats.REDIRECTS]
mediaType = "text/netlify"
baseName = "_redirects"
[outputs]
home = ["HTML", "REDIRECTS"]
# Site menus
[menu]
[[menu.dropdown]]
name = "Overview"
url = "/docs"
weight = 1
[[menu.dropdown]]
name = "Quick Start"
url = "/docs/quickstart/"
weight = 2
[[menu.dropdown]]
name = "Guides"
url = "/docs/guides/"
weight = 3
[[menu.dropdown]]
name = "Tutorials"
url = "/docs/tutorials/"
weight = 4
[[menu.dropdown]]
name = "Reference"
url = "/docs/reference/"
weight = 5
[[menu.dropdown]]
name = "Samples"
url = "/docs/samples/"
weight = 6
[[menu.dropdown]]
name = "Presentations"
url = "/docs/talks"
weight = 7
[[menu.guides]]
name = "What is gRPC?"
url = "/docs/guides/"
weight = 1
[[menu.guides]]
name = "gRPC Concepts"
url = "/docs/guides/concepts/"
weight = 2
[[menu.guides]]
name = "Authentication"
url = "/docs/guides/auth/"
weight = 3
[[menu.guides]]
name = "Error handling and debugging"
url = "/docs/guides/error/"
weight = 4
[[menu.guides]]
name = "Benchmarking"
url = "/docs/guides/benchmarking/"
weight = 5
[[menu.guides]]
name = "gRPC Wire Format"
url = "https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md"
weight = 6
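A hedged sketch of how a layout might render the `guides` menu defined above (the theme's actual partials are not part of this file, so the markup here is illustrative, but `.Site.Menus`, `.URL`, and `.Name` are standard Hugo menu properties):

```go-html-template
<ul>
  {{/* Entries are ordered by the weight values set in config.toml */}}
  {{ range .Site.Menus.guides }}
    <li><a href="{{ .URL }}">{{ .Name }}</a></li>
  {{ end }}
</ul>
```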

88
content/_index.html Normal file

@@ -0,0 +1,88 @@
---
---
<div class="section2">
<a name="arrow"><h1>Why gRPC?</h1></a>
gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications and browsers to backend services.
</div>
<div class="cols" >
<div class="col1">
<div class="colpart1">
<img class="col1image" src="img/grpc-newicon-1.svg" style="padding:2%;display:block">
<div class="coltext1">
<h6>Simple service definition</h6>
Define your service using Protocol Buffers, a powerful binary serialization toolset and language
</div>
</div>
<div class="colpart2">
<img class="col1image" src="img/grpc-newicon-2.svg" style="padding:2%;display:block">
<div class="coltext1">
<h6>Start quickly and scale</h6>Install runtime and dev environments with a single line, and scale to millions of RPCs per second with the framework
</div>
</div>
</div>
<div class="col2">
<div class="colpart1">
<img class="col1image" src="img/grpc-newicon-3.svg" style="padding:2%;display:block">
<div class="coltext1">
<h6>Works across languages and platforms</h6>Automatically generate idiomatic client and server stubs for your service in a variety of languages and platforms</div>
</div>
<div class="colpart2">
<img class="col1image" src="img/grpc-newicon-4.svg" style="padding:2%;display:block">
<div class="coltext1">
<h6>Bi-directional streaming and integrated auth</h6>Bi-directional streaming and fully integrated pluggable authentication with HTTP/2-based transport
</div>
</div>
</div></div>
<div class="section3">
<h1>Used By</h1>
<div class="companybox">
<img src="img/square.png">
</div>
<div class="companybox">
<img src="img/netflix.png">
</div>
<div class="companybox">
<img src="img/coreos.png">
</div>
<div class="companybox">
<img src="img/cockroach-labs.png">
</div>
<div class="companybox">
<img src="img/wisconsin.png">
</div>
<div class="companybox">
<img src="img/carbon.png">
</div>
<div class="companybox">
<img src="img/cisco.png">
</div>
<div class="companybox" style="margin-right:0% !important">
<img src="img/juniper.png">
</div>
</div>
<div class="section4">
<h1 style="color:white">Want to Learn More?</h1>
Get started by learning concepts and doing our hello world quickstart in the language of your choice. <br>
<div class="button" style="margin-top:3%">
<a href="docs/guides/concepts/"><button>GET STARTED</button></a>
</div>
</div>

190
content/about.md Normal file

@@ -0,0 +1,190 @@
---
title: "About gRPC"
date: 2018-09-11T14:11:42+07:00
draft: false
---
gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications and browsers to backend services.
<b>The main usage scenarios:</b>
* Efficiently connecting polyglot services in microservices style architecture
* Connecting mobile devices and browser clients to backend services
* Generating efficient client libraries
<b>Core Features that make it awesome:</b>
* Idiomatic client libraries in 10 languages
* Highly efficient on the wire, with a simple service definition framework
* Bi-directional streaming with HTTP/2-based transport
* Pluggable auth, tracing, load balancing and health checking
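The "simple service definition framework" above is Protocol Buffers. A minimal illustrative proto3 definition (not taken from this site's content; service and message names are invented for the example) showing a unary RPC and a server-streaming RPC:

```proto
syntax = "proto3";

package greeter;

// One unary RPC and one server-streaming RPC; client code in
// each supported language is generated from this single file.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
  rpc StreamGreetings (HelloRequest) returns (stream HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```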
<hr>
## Cases: Who's using it and why?
Many companies are already using gRPC for connecting multiple services in their environments. The use case varies from connecting a handful of services to hundreds of services across various languages in on-prem or cloud environments. Below are details and quotes from some of our early adopters.
Check out what people are saying below:
<div class="testimonialrow">
<div class="testimonialsection">
<div>
<div class="testimonialimage"><a href="https://www.youtube.com/watch?v=-2sWDr3Z0Wo"><img src="../img/square-icon.png" style="width:45%"/></a></div>
<div>
<div class="testimonialquote"></div>
At Square, we have been collaborating with Google so that we can replace all uses of our custom RPC solution to use gRPC. We decided to move to gRPC because of its open support for multiple platforms, the demonstrated performance of the protocol, and the ability to customize and adapt it to our network. Developers at Square are looking forward to being able to take advantage of writing streaming APIs and in the future, push gRPC to the edges of the network for integration with mobile clients and third party APIs.
</div>
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><a href="https://github.com/Netflix/ribbon">
<img src="../img/netflix-logo.png" /> </a></div>
<div>
<div class="testimonialquote"></div>
In our initial use of gRPC we've been able to extend it easily to live within our opinionated ecosystem. Further, we've had great success making improvements directly to gRPC through pull requests and interactions with Google's team that manages the project. We expect to see many improvements to developer productivity, and the ability to allow development in non-JVM languages as a result of adopting gRPC.
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><a href="https://blog.gopheracademy.com/advent-2015/etcd-distributed-key-value-store-with-grpc-http2/">
<img src="../img/coreos-1.png" style="width:30%"/></a></div>
<div>
<div class="testimonialquote"></div>
At CoreOS we are excited by the gRPC v1.0 release and the opportunities it opens up for people consuming and building what we like to call Google Infrastructure for Everyone Else. Today gRPC is in use in a number of our critical open source projects such as the etcd consensus database and the rkt container engine.
</div>
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"> <a href="https://github.com/cockroachdb/cockroach">
<img src="../img/cockroach-1.png" /> </a></div>
<div>
<div class="testimonialquote"></div>
Our switch from a home-grown RPC system to gRPC was seamless. We quickly took advantage of the per-stream flow control to provide better scheduling of large RPCs over the same connection as small ones.
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><a href="https://github.com/CiscoDevNet/grpc-getting-started">
<img src="../img/cisco.svg" /></a></div>
<div>
<div class="testimonialquote"></div>
With support for high performance bi-directional streaming, TLS based security, and a wide variety of programming languages, gRPC is an ideal unified transport protocol for model driven configuration and telemetry.
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><a href="http://www.carbon3d.com">
<img src="../img/carbon3d.svg" /></a></div>
<div>
<div class="testimonialquote"></div>
Carbon3D uses gRPC to implement distributed processes both within and outside our 3D printers. We actually switched from using Thrift early on for a number of reasons, including but not limited to robust support for multiple languages like C++, Node.js and Python. Features like bi-directional streaming are a huge win in keeping our systems implementations simpler and correct. Lastly, the gRPC team/community is very active and responsive, which is also a key factor for us in selecting an open source technology for mission critical projects.
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><img src="../img/wisc-mad.jpg" /></div>
<div>
<div class="testimonialquote"></div>
We've been using gRPC for both classes and research at University of Wisconsin. Students in our distributed systems class (CS 739) utilized many of its powerful features when building their own distributed systems. In addition, gRPC is a key component of our OpenLambda research project (https://www.open-lambda.org/) which aims to provide an open-source, low-latency, serverless computational framework.
</div>
</div>
<div class="testimonialsection">
<div class="testimonialimage"><a href="https://github.com/Juniper/open-nti">
<img src="../img/juniperlogo.png" /> </a></div>
<div>
<div class="testimonialquote"></div>
The fact that gRPC is built on HTTP/2 transport brings us native bi-directional streaming capabilities and flexible custom metadata in request headers. The first point is important for large payload exchange and network telemetry scenarios, while the latter enables us to expand and include capabilities including but not limited to various network element authentication mechanisms.
In addition, the wide language binding support that gRPC/proto3 brings enables us to provide a flexible and rapid development environment for both internal and external consumers.
Last but not least, while there are a number of network communication protocols for configuration, operational state retrieval and network telemetry, gRPC provides us with a unified flexible protocol and transport to ease client/server interaction.
</div>
</div>
<div class="aboutsection2">
<h2>Officially Supported Platforms</h2>
<table style="width:80%;margin-top:5%;margin-bottom:5%">
<tr style="width:100%">
<th style="width:20%">Language </th><th> Platform </th><th>Compiler</th>
</tr>
<tr>
<td>C/C++</td><td>Linux</td><td>GCC 4.4 <br/> GCC 4.6 <br> GCC 5.3 <br> Clang 3.5 <br> Clang 3.6 <br> Clang 3.7</td>
</tr>
<tr>
<td>C/C++</td><td>Windows 7+</td><td>Visual Studio 2013+</td>
</tr>
<tr>
<td>C#</td><td>Windows 7+ <br> Linux <br> Mac</td><td>.NET Core, .NET 4.5+ <br> .NET Core, Mono 4+ <br> .NET Core, Mono 4+</td>
</tr>
<tr>
<td>Dart</td><td>Windows/Linux/Mac</td><td> Dart 2.0+</td>
</tr>
<tr>
<td>Go</td><td>Windows/Linux/Mac</td><td> Go 1.6+</td>
</tr>
<tr>
<td>Java</td><td>Windows/Linux/Mac</td><td> JDK 8 recommended. Gingerbread+ for Android</td>
</tr>
<tr>
<td>Node.js</td><td>Windows/Linux/Mac</td><td> Node v4+</td>
</tr>
<tr>
<td>PHP * </td><td>Linux/Mac</td><td> PHP 5.5+ and PHP 7.0+</td>
</tr>
<tr>
<td>Python </td><td>Windows/Linux/Mac</td><td> Python 2.7 and Python 3.4+</td>
</tr>
<tr>
<td>Ruby </td><td>Windows/Linux/Mac</td><td></td>
</tr>
</table>
<p><em>* still in beta</em></p>
<!--
| Language | Platform | Compiler |
| -------- | -------- | -------- |
| C/C++ | Linux | GCC 4.4 <br/> GCC 4.6 <br> GCC 5.3 <br> Clang 3.5 <br> Clang 3.6 <br> Clang 3.7|
| C/C++ | Windows 7+ | Visual Studio 2013+ |
| C# | Windows 7+ <br> Linux <br> Mac | .NET Core, .NET 4.5+ <br> .NET Core, Mono 4+ <br> .NET Core, Mono 4+ |
| Dart | Windows/Linux/Mac | Dart 2.0+ |
| Go | Windows/Linux/Mac | Go 1.6+ |
| Java | Windows/Linux/Mac | JDK 8 recommended. Gingerbread+ for Android |
| Node.js | Windows/Linux/Mac | Node v4+ |
| PHP * | Linux/Mac | PHP 5.5+ and PHP 7.0+ |
| Python | Windows/Linux/Mac | Python 2.7 and Python 3.4+ |
| Ruby | Windows/Linux/Mac | |
_* still in beta_
-->
<h2>The story behind gRPC</h2>
Google has been using a single general-purpose RPC infrastructure called Stubby to connect the large number of microservices running within and across our data centers for over a decade. Our internal systems have long embraced the microservice architecture gaining popularity today. Stubby has powered all of Google's microservices interconnect for over a decade and is the RPC backbone behind every Google service that you use today. In March 2015, we decided to build the next version of Stubby in the open so that we could share our learnings with the industry and collaborate with them, both for microservices inside and outside Google and for the last mile of computing (mobile, web and IoT).
<br>
For more background on why we created gRPC, read the <a href="/blog/principles">gRPC Motivation and Design Principles blog</a>.
<br><br>
</div>


@@ -0,0 +1,29 @@
---
attribution: Mugur Marculescu, gRPC
date: "2015-10-26T00:00:00Z"
published: true
title: gRPC releases Beta, opening door for use in production environments.
url: blog/beta_release
---
<p>
The gRPC team is excited to announce the immediate availability of gRPC Beta. This release marks an important point in API stability, and going forward most API changes are expected to be additive in nature. This milestone opens the door for gRPC use in production environments.
</p>
<!--more-->
<p>
We're also taking a big step forward in improving the installation process. Over the past few weeks we've rolled out gRPC packages to <a href="https://packages.debian.org/jessie-backports/libgrpc0">Debian Stable/Backports</a>. Installation in most cases is now a two line install using the Debian package and available language specific package managers (<a href="https://search.maven.org/#artifactdetails%7Cio.grpc%7Cgrpc-core%7C0.9.0%7Cjar">maven</a>, <a href="https://pypi.python.org/pypi/grpcio">pip</a>, <a href="https://rubygems.org/gems/grpc">gem</a>, <a href="https://packagist.org/packages/grpc/grpc">composer</a>, <a href="https://pecl.php.net/package/gRPC">pecl</a>, <a href="https://www.npmjs.com/package/grpc">npm</a>, <a href="https://www.nuget.org/packages/Grpc/">nuget</a>, <a href="https://cocoapods.org/pods/gRPC">pod</a>). In addition <a href="https://hub.docker.com/r/grpc/">gRPC docker images</a> are now available on Docker Hub.
</p>
<p>
We've updated the <a href="/docs/">documentation</a> on grpc.io to reflect the latest changes and released additional language-specific <a href="/docs/reference/">reference docs</a>. See what's changed with the Beta release in the release notes on GitHub for <a href="https://github.com/grpc/grpc-java/releases/tag/v0.9.0">Java</a>, <a href="https://godoc.org/google.golang.org/grpc">Go</a> and <a href="https://github.com/grpc/grpc/releases/tag/release-0_11_0">all other</a> languages.
</p>
<p>
In keeping with our <a href="/blog/principles">principles</a> and our goal of enabling highly performant and scalable APIs and microservices on top of HTTP/2, in the coming months the focus of the gRPC project will be to keep improving performance and stability and adding carefully chosen features for production use cases. Documentation will also be clarified and will continue to improve with new examples and guides.
</p>
<p>
We've been very excited to see the community response to gRPC and the various projects starting to use it (<a href="https://coreos.com/blog/etcd-2.2/">etcd v3 experimental api</a>, <a href="https://github.com/gengo/grpc-gateway">grpc-gateway</a> for RESTful APIs and others).
</p>
<p>
We really want to thank everyone who contributed code, gave presentations, adopted the technology and engaged in the community. With your help and support, we look forward to 1.0!
</p>


@@ -0,0 +1,50 @@
---
attribution: Originally written by Louis Ryan with help from others at Google.
date: "2015-09-08T00:00:00Z"
published: true
title: gRPC Motivation and Design Principles.
url: blog/principles
---
<h2>Motivation</h2>
<p>Google has been using a single general-purpose RPC infrastructure called Stubby to connect the large number of microservices running within and across our data centers for over a decade. Our internal systems have long embraced the microservice architecture gaining popularity today. Having a uniform, cross-platform RPC infrastructure has allowed for the rollout of fleet-wide improvements in efficiency, security, reliability and behavioral analysis critical to supporting the incredible growth seen in that period.
</p>
<!--more-->
<p>Stubby has many great features - however, it's not based on any standard and is too tightly coupled to our internal infrastructure to be considered suitable for public release. With the advent of SPDY, HTTP/2, and QUIC, many of these same features have appeared in public standards, together with other features that Stubby does not provide. It became clear that it was time to rework Stubby to take advantage of this standardization, and to extend its applicability to mobile, IoT, and Cloud use-cases.</p>
<h2>Principles &amp; Requirements</h2>
<p><strong>Services not Objects, Messages not References</strong> - Promote the microservices design philosophy of coarse-grained message exchange between systems while avoiding the <a href="https://martinfowler.com/articles/distributed-objects-microservices.html">pitfalls of distributed objects</a> and the <a href="https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing">fallacies of ignoring the network</a>.</p>
<p><strong>Coverage &amp; Simplicity</strong> - The stack should be available on every popular development platform and easy for someone to build for their platform of choice. It should be viable on CPU &amp; memory limited devices. </p>
<p><strong>Free &amp; Open</strong> - Make the fundamental features free for all to use. Release all artifacts as open-source efforts with licensing that should facilitate and not impede adoption.</p>
<p><strong>Interoperability &amp; Reach</strong> - The wire-protocol must be capable of surviving traversal over common internet infrastructure.</p>
<p><strong>General Purpose &amp; Performant</strong> - The stack should be applicable to a broad class of use-cases while sacrificing little in performance when compared to a use-case specific stack.</p>
<p><strong>Layered</strong> - Key facets of the stack must be able to evolve independently. A revision to the wire-format should not disrupt application layer bindings.</p>
<p><strong>Payload Agnostic</strong> - Different services need to use different message types and encodings such as protocol buffers, JSON, XML, and Thrift; the protocol and implementations must allow for this. Similarly the need for payload compression varies by use-case and payload type: the protocol should allow for pluggable compression mechanisms.</p>
<p><strong>Streaming</strong> - Storage systems rely on streaming and flow-control to express large data-sets. Other services, like voice-to-text or stock-tickers, rely on streaming to represent temporally related message sequences.</p>
<p><strong>Blocking &amp; Non-Blocking</strong> - Support both asynchronous and synchronous processing of the sequence of messages exchanged by a client and server. This is critical for scaling and handling streams on certain platforms.</p>
<p><strong>Cancellation &amp; Timeout</strong> - Operations can be expensive and long-lived - cancellation allows servers to reclaim resources when clients are well-behaved. When a causal-chain of work is tracked, cancellation can cascade. A client may indicate a timeout for a call, which allows services to tune their behavior to the needs of the client.</p>
<p><strong>Lameducking</strong> - Servers must be allowed to gracefully shut-down by rejecting new requests while continuing to process in-flight ones.</p>
<p><strong>Flow-Control</strong> - Computing power and network capacity are often unbalanced between client &amp; server. Flow control allows for better buffer management as well as providing protection from DOS by an overly active peer.</p>
<p><strong>Pluggable</strong> - A wire protocol is only part of a functioning API infrastructure. Large distributed systems need security, health-checking, load-balancing and failover, monitoring, tracing, logging, and so on. Implementations should provide extensions points to allow for plugging in these features and, where useful, default implementations.</p>
<p><strong>Extensions as APIs</strong> - Extensions that require collaboration among services should favor using APIs rather than protocol extensions where possible. Extensions of this type could include health-checking, service introspection, load monitoring, and load-balancing assignment.</p>
<p><strong>Metadata Exchange</strong> - Common cross-cutting concerns like authentication or tracing rely on the exchange of data that is not part of the declared interface of a service. Deployments rely on their ability to evolve these features at a different rate to the individual APIs exposed by services.</p>
<p><strong>Standardized Status Codes</strong> - Clients typically respond to errors returned by API calls in a limited number of ways. The status code namespace should be constrained to make these error handling decisions clearer. If richer domain-specific status is needed the metadata exchange mechanism can be used to provide that.</p>

---
attribution: Originally written by Dale Hopkins with additional content by Lisa Carey
and others at Google.
author: Dale Hopkins
company: Vendasta
company-link: https://vendasta.com
date: "2016-07-25T00:00:00Z"
published: false
thumbnail: ../img/vend-icon.png?raw=true
title: Why we have decided to move our APIs to gRPC
url: blog/vendasta
---
Our guest post today comes from Dale Hopkins, CTO of [Vendasta](https://vendasta.com/). Vendasta started out 8 years ago as a point solution provider of products for small business. From the beginning we partnered with media companies and agencies who have armies of salespeople and existing relationships with those businesses to sell our software. It is estimated that over 30 million small businesses exist in the United States alone, so scalability of our SaaS solution was considered one of our top concerns from the beginning and it was the reason we started with Google App Engine (Python GAE) and Datastore. This solution worked really well for us as our system scaled from hundreds to hundreds of thousands of end users. We also scaled our offering from a point solution to an entire platform with multiple products and the tools for partners to manage their sales of those products during this time.
<!--more-->
All throughout this journey Python GAE served our needs well. We exposed a number of APIs via HTTP + JSON for our partners to automate tasks and integrate their other systems with our products and platform. However, in 2016 we introduced the Vendasta Marketplace. This marked a major change to our offering, which depended heavily on having 3rd-party vendors use our APIs to deliver their own products in our platform. This was a major change because our public APIs now set the upper bound on what 3rd-party applications can do, which made us realize that we really needed to make APIs that were amazing, not just good.
# Three Optimizations to our architecture
* The first optimization that we started with was to use the Go programming language to build endpoints that handled higher throughput with lower latency than we could get with Python. On some APIs this made an incredible difference: we saw 50th percentile response times drop from 1200 ms to 4 ms, and, even more spectacularly, 99th percentile response times drop from 30,000 ms to 12 ms! On other APIs we saw a much smaller, but still significant, difference.
* The second optimization we used was to replicate large portions of our Datastore data into ElasticSearch. ElasticSearch is a fundamentally different storage technology from Datastore, and is not a managed service, so it was a big leap for us. But this change allowed us to migrate almost all of our overnight batch-processing APIs to real-time APIs. So with these first two solutions in place, we had made meaningful changes to the performance of our APIs.
* The last optimization we made was to move our APIs to gRPC. This change was much more extensive than the others as it affected our clients. Like ElasticSearch, it represents a fundamentally different model with differing performance characteristics, but unlike ElasticSearch we found it to be a true superset: all of our usage scenarios were impacted positively by it.
## Four Benefits from gRPC
* The first benefit we saw from gRPC was the ability to move from publishing APIs and asking developers to integrate with them, to releasing SDKs and asking developers to copy-paste example code written in their language. This represents a really big benefit for people looking to integrate with our products, while not requiring us to hand-roll entire SDKs in the 5+ languages our partners and vendors use. It is important to note that we still write light wrappers over the generated gRPC SDKs to make them package-manager friendly, and to provide wrappers over the generated protobuf structures.
* The second benefit we saw from gRPC was the ability to break free from the call-and-response architecture necessitated by HTTP + JSON. gRPC is built on top of HTTP/2, which allows for client-side and/or server-side streaming. In our use cases, this means we can lower the time to first display by streaming results as they become ready on the server (server-side streaming), and by providing very flexible create endpoints that easily support bulk ingestion (bi-directional streaming). We feel that we are just starting to see the benefits from this feature as it opens up a totally new model for client-server interactions that just wasn't possible with HTTP.
* The third benefit was the switch from JSON to protocol buffers, which work very well with gRPC. This improves serialization and deserialization times, which is very significant for some of our APIs and appreciated on all of them. The more important benefit comes from the explicit format specification of proto, meaning that clients receive typed objects rather than free-form JSON. Because of this, our clients can reap the benefits of auto-completion in their IDEs, type-safety if their language supports it, and enforced compatibility between clients and servers with differing versions.
* The final benefit of gRPC was our ability to quickly spec endpoints. The proto format for both data and service definition greatly simplifies defining new endpoints and finally allows the succinct definition of endpoint contracts. Combined with code generation, it allows us to truly develop clients and servers in parallel.
Our experience with gRPC has been positive, even though it does not eliminate the difficulty of providing endpoints to partners and vendors, nor does it address all of our performance issues. It has, however, improved our endpoint performance, the ease of integrating with those endpoints, and even the delivery of our SDKs.

---
attribution: Originally written by Varun Talwar with additional content by Kailash
Sethuraman and others at Google.
author: Varun Talwar
company: Google
company-link: https://cloud.google.com
date: "2016-08-23T00:00:00Z"
published: true
thumbnail: ../img/gcp-icon.png?raw=true
title: gRPC Project is now 1.0 and ready for production deployments
url: blog/gablogpost
---
Today, the gRPC project has reached a significant milestone with its [1.0 release](https://github.com/grpc/grpc/releases).
Languages moving to 1.0 include C++, Java, Go, Node, Ruby, Python and C# across Linux, Windows, and Mac. Objective-C and Android Java support on iOS and Android is also moving to 1.0. The 1.0 release means that the core protocol and API surface are now stable, with measured performance and stress testing; developers can rely on these APIs, deploy them in production, and expect semantic versioning from here on.
We are very excited about the progress we have made so far and would like to thank all our users and contributors. First announced in March 2015 with [Square](https://corner.squareup.com/2015/02/grpc.html), gRPC is already being used in many open source projects like [etcd](https://github.com/coreos/etcd) from CoreOS, [containerd](https://github.com/docker/containerd) from Docker, [cockroachdb](https://github.com/cockroachdb/cockroach) from Cockroach Labs, and by many other companies like [Vendasta](https://vendasta.com), [Netflix](https://github.com/Netflix/ribbon), [YikYak](http://yikyakapp.com) and [Carbon 3d](http://carbon3d.com). Outside of microservices, telecom giants like [Cisco](https://github.com/CiscoDevNet/grpc-getting-started), [Juniper](https://github.com/Juniper/open-nti), [Arista](https://github.com/aristanetworks/goarista), and Ciena are building support for streaming telemetry and network configuration from their network devices using gRPC, as part of the [OpenConfig](http://www.openconfig.net/) effort.
From the beta release, we have made significant strides in the areas of usability, interoperability, and performance measurement on the [road to 1.0](https://www.youtube.com/watch?v=_vfbVJ_u5mE). In most of the languages, the [installation of the gRPC runtime](/blog/installation) as well as setup of a development environment is a single command. Beyond installation, we have set up automated tests for gRPC across languages and RPC types in order to stress test our APIs and ensure interoperability. There is now a [performance dashboard](https://goo.gl/tHPEfD) available in the open to see latency and throughput for unary and streaming ping pong in various languages. Other measurements have shown significant gains from using gRPC/Protobuf instead of HTTP/JSON, such as in a [CoreOS blog post](https://blog.gopheracademy.com/advent-2015/etcd-distributed-key-value-store-with-grpc-http2/) and in [Google Cloud PubSub testing](https://cloud.google.com/blog/big-data/2016/03/announcing-grpc-alpha-for-google-cloud-pubsub). In the coming months, we will invest a lot more in performance tuning.
Even within Google, we have seen Google Cloud APIs like [BigTable](https://cloudplatform.googleblog.com/2015/07/A-Go-client-for-Google-Cloud-Bigtable.html), PubSub, and [Speech](https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/speech/grpc) launch gRPC-based API surfaces, bringing ease-of-use and performance benefits. Products like [Tensorflow](https://research.googleblog.com/2016/02/running-your-models-in-production-with.html) have effectively used gRPC for inter-process communication as well.
Beyond usage, we are keen to see the contributor community grow around gRPC. We are already starting to see meaningful contributions in the [grpc-ecosystem](https://github.com/grpc-ecosystem) organization. We are very happy to see projects like [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway), which enables users to serve REST clients with gRPC-based services; [Polyglot](https://github.com/grpc-ecosystem/polyglot), a CLI for gRPC; [Prometheus monitoring](https://github.com/grpc-ecosystem/go-grpc-prometheus) of gRPC services; and integration with [OpenTracing](https://github.com/grpc-ecosystem/grpc-opentracing). You can suggest and contribute projects to this organization [here](https://docs.google.com/a/google.com/forms/d/119zb79XRovQYafE9XKjz9sstwynCWcMpoJwHgZJvK74/edit). We look forward to working with the community to take the gRPC project to new heights.

---
attribution: Originally written by Dale Hopkins with additional content by Lisa Carey
and others at Google.
author: Dale Hopkins
company: Vendasta
company-link: https://vendasta.com
date: "2016-08-29T00:00:00Z"
published: true
thumbnail: ../img/vend-icon.png?raw=true
title: Why we have decided to move our APIs to gRPC
url: blog/vendastagrpc
---
Our guest post today comes from Dale Hopkins, CTO of [Vendasta](https://vendasta.com/).
Vendasta started out 8 years ago as a point solution provider of products for small business. From the beginning we partnered with media companies and agencies who have armies of salespeople and existing relationships with those businesses to sell our software. It is estimated that over 30 million small businesses exist in the United States alone, so scalability of our SaaS solution was considered one of our top concerns from the beginning and it was the reason we started with [Google App Engine](https://cloud.google.com/appengine/) and Datastore. This solution worked really well for us as our system scaled from hundreds to hundreds of thousands of end users. We also scaled our offering from a point solution to an entire platform with multiple products and the tools for partners to manage their sales of those products during this time.
All throughout this journey Python GAE served our needs well. We exposed a number of APIs via HTTP + JSON for our partners to automate tasks and integrate their other systems with our products and platform. However, in 2016 we introduced the Vendasta Marketplace. This marked a major change to our offering, which depended heavily on having 3rd-party vendors use our APIs to deliver their own products in our platform. This was a major change because our public APIs now set the upper bound on what 3rd-party applications can do, which made us realize that we really needed to make APIs that were amazing, not just good.
The first optimization that we started with was to use the Go programming language to build endpoints that handled higher throughput with lower latency than we could get with Python. On some APIs this made an incredible difference: we saw 50th percentile response times drop from 1200 ms to 4 ms, and, even more spectacularly, 99th percentile response times drop from 30,000 ms to 12 ms! On other APIs we saw a much smaller, but still significant, difference.
The second optimization we used was to replicate large portions of our Datastore data into ElasticSearch. ElasticSearch is a fundamentally different storage technology from Datastore, and is not a managed service, so it was a big leap for us. But this change allowed us to migrate almost all of our overnight batch-processing APIs to real-time APIs. We had tried BigQuery, but its query processing times meant that we couldn't display things in real time. We had tried Cloud SQL, but there was too much data for it to easily scale. We had tried the App Engine Search API, but it has limitations with result sets over 10,000. We instead scaled up our ElasticSearch cluster using [Google Container Engine](https://cloud.google.com/container-engine/), and with its powerful aggregations and facet processing our needs were easily met. So with these first two solutions in place, we had made meaningful changes to the performance of our APIs.
The last optimization we made was to move our APIs to [gRPC](/). This change was much more extensive than the others as it affected our clients. Like ElasticSearch, it represents a fundamentally different model with differing performance characteristics, but unlike ElasticSearch we found it to be a true superset: all of our usage scenarios were impacted positively by it.
The first benefit we saw from gRPC was the ability to move from publishing APIs and asking developers to integrate with them, to releasing SDKs and asking developers to copy-paste example code written in their language. This represents a really big benefit for people looking to integrate with our products, while not requiring us to hand-roll entire SDKs in the 5+ languages our partners and vendors use. It is important to note that we still write light wrappers over the generated gRPC SDKs to make them package-manager friendly, and to provide wrappers over the generated protobuf structures.
The second benefit we saw from gRPC was the ability to break free from the call-and-response architecture necessitated by HTTP + JSON. gRPC is built on top of HTTP/2, which allows for client-side and/or server-side streaming. In our use cases, this means we can lower the time to first display by streaming results as they become ready on the server (server-side streaming). We have also been investigating the potential to offer very flexible create endpoints that easily support bulk ingestion with bi-directional streaming: the client would asynchronously stream records while the server streams back statuses, allowing easy checkpoint operations without slowing uploads to wait for confirmations. We feel that we are just starting to see the benefits from this feature, as it opens up a totally new model for client-server interactions that just wasn't possible with HTTP.
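The checkpointing interaction described here can be sketched with plain Go channels standing in for the two halves of a bi-directional gRPC stream; the names and the ack-every-N policy are illustrative, not Vendasta's actual implementation:

```go
package main

import "fmt"

// ack reports the index of the last record the server has durably
// processed, so the client can checkpoint without pausing its upload.
type ack struct{ lastProcessed int }

// bulkIngest consumes a stream of records and streams back periodic
// checkpoint acks; the records and acks channels stand in for the
// client-to-server and server-to-client halves of a bidi gRPC stream.
func bulkIngest(records <-chan string, acks chan<- ack, every int) {
	n := 0
	for range records {
		n++
		if n%every == 0 {
			acks <- ack{lastProcessed: n}
		}
	}
	acks <- ack{lastProcessed: n} // final checkpoint at end of stream
	close(acks)
}

func main() {
	records := make(chan string)
	acks := make(chan ack)
	go bulkIngest(records, acks, 2)

	go func() {
		for _, r := range []string{"a", "b", "c", "d", "e"} {
			records <- r // the client streams without waiting per-record
		}
		close(records)
	}()

	for a := range acks {
		fmt.Println("checkpoint at", a.lastProcessed)
	}
}
```

The client keeps sending while acks flow back independently, which is exactly what a single HTTP request/response cycle cannot express.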
The third benefit was the switch from JSON to protocol buffers, which work very well with gRPC. This improves serialization and deserialization times, which is very significant for some of our APIs and appreciated on all of them. The more important benefit comes from the explicit format specification of proto, meaning that clients receive typed objects rather than free-form JSON. Because of this, our clients can reap the benefits of auto-completion in their IDEs, type-safety if their language supports it, and enforced compatibility between clients and servers with differing versions.
The final benefit of gRPC was our ability to quickly spec endpoints. The proto format for both data and service definition greatly simplifies defining new endpoints and finally allows the succinct definition of endpoint contracts, which lets us communicate endpoint specifications between our development teams far more effectively. Combined with code generation, it means that for the first time at our company we can truly develop the client and server sides of our APIs in parallel, and our latency to produce new APIs with accompanying SDKs has dropped dramatically.
Our experience with gRPC has been positive, even though it does not eliminate the difficulty of providing endpoints to partners and vendors, nor does it address all of our performance issues. It has, however, improved our endpoint performance, the ease of integrating with those endpoints, and even the delivery of our SDKs.

---
attribution: Thanks to the VSCO engineers that worked on this migration. Steven Tang,
Sam Bobra, Daniel Song, Lucas Kacher, and many others.
author: Robert Sayre and Melinda Lu
company: VSCO
company-link: https://vsco.co
date: "2016-09-06T00:00:00Z"
published: true
thumbnail: ../img/vsco-logo.png?raw=true
title: gRPC at VSCO
url: blog/vscogrpc
---
Our guest post today comes from Robert Sayre and Melinda Lu of VSCO.
Founded in 2011, [VSCO](https://vsco.co) is a community for expression—empowering people to create, discover and connect through images and words. VSCO is in the process of migrating their stack to gRPC.
<!--more-->
In 2015, user growth forced VSCO down a familiar path. A monolithic PHP application in existence since the early days of the company was exhibiting performance problems and becoming difficult to maintain. We experimented with some smaller services in node.js, Go, and Java. At the same time, a larger messaging service for email, push messages, and in-app notifications was built in Go. Taking a first step away from JSON, we chose [Protocol Buffers](https://developers.google.com/protocol-buffers/) as the serialization format for this system.
Today, VSCO has largely settled on Go for new services. There are exceptions, particularly where a mature JVM solution is available for a given problem. Additionally, VSCO uses node.js for web applications, often with server-side [React](https://facebook.github.io/react/). Given that mix of languages, services, and some future data pipeline work detailed below, VSCO settled on gRPC and Protocol Buffers as the most practical solution for interprocess communication. A gradual migration from JSON over HTTP/1.1 APIs to gRPC over HTTP/2 is underway and going well. That said, there have been issues with the maturity of the PHP implementation relative to other languages.
Protocol buffers have been particularly valuable in building out our data ecosystem, where we rely on them to standardize and allow safe evolution of our data schemas in a language-agnostic way. As one example, we've built a Go service that feeds off our MySQL and MongoDB database replication logs and transforms backend database changes into a stream of immutable events in Kafka, with each row- or document-change event encoded as a protocol buffer. This database event stream allows us to add real-time data consumers as desired, without impacting production traffic and without having to coordinate with other systems. By processing all database events into protocol buffers en route to Kafka, we can ensure that data is encoded in a uniform way that makes it easy to consume and use from multiple languages. Our implementations of [MySQL-binary-log](https://github.com/vsco/autobahn-binlog) and [Mongo-oplog](https://github.com/vsco/autobahn-oplog) tailers are available on GitHub.
Elsewhere in our data pipeline, we've begun using gRPC and protocol buffers to deliver behavioral events from our iOS and Android clients to a Go ingestion service, which then publishes these events to Kafka. To support this high-volume use case, we needed (1) a performant, fault-tolerant, language-agnostic RPC framework, (2) a way to ensure data compatibility as our product evolves, and (3) horizontally-scalable infrastructure. We've found gRPC, protocol buffers, and Go services running in Kubernetes a good fit for all three. As this was our first client-facing Go gRPC service, we did experience some new points of friction — in particular, load-balancer support and amenities like curl-like debugging have been lagging due to the youth of the HTTP/2 ecosystem. However, the ease of defining services with the gRPC IDL, using built-in architecture like interceptors, and scaling with Go have made the tradeoffs worthwhile.
As a first step in bringing gRPC to our mobile clients, we've shipped telemetry code in our iOS and Android apps. As of gRPC 1.0, this process is relatively straightforward. The apps only post events to our servers so far, and don't do much with gRPC responses. The previous implementation was based on JSON, and our move to a single protocol buffer definition of our events uncovered a bunch of subtle bugs and differences between the clients.
One slight roadblock we ran into was the need for our clients to maintain compatibility with our JSON implementation as we ramp up, and for integration with vendor SDKs. This required a little bit of key-value coding on iOS, but it got more difficult on Android. We ended up having to write a protobuf compiler plugin to get the reflection features we needed while maintaining adequate performance. Drawing from that experience, we've made a concise [example protoc plugin](https://github.com/vsco/protoc-demo) built with [Bazel](https://bazel.io/) available on GitHub.
As more and more of our data becomes available in protocol buffer form, we plan to build upon this unified schema to expand our machine-learning and analytics systems. For example, we write our Kafka database replication streams to Amazon S3 as [Apache Parquet](https://parquet.apache.org/), an efficient columnar disk-storage format. Parquet has low-level support for protocol buffers, so we can use our existing data definitions to write optimized tables and do partial deserializations where desired.
From S3, we run computations on our data using Apache Spark, which can use our protocol buffer definitions to define types. We're also building new machine-learning applications with [TensorFlow](https://www.tensorflow.org/). It uses protocol buffers natively and allows us to serve our models as gRPC services with [TensorFlow Serving](https://tensorflow.github.io/serving/).
So far, we've had good luck with gRPC and Protocol Buffers. They don't eliminate every integration headache. However, it's easy to see how they help our engineers avoid writing a lot of boilerplate RPC code, while side-stepping the endless data-quality papercuts that come with looser serialization formats.

---
attribution: Originally written by Lisa Carey with help from others at Google.
date: "2016-03-24T00:00:00Z"
published: true
title: Google Cloud PubSub - with the power of gRPC!
url: blog/pubsub
---
[Google Cloud PubSub](https://cloud.google.com/pubsub/) is Google's scalable real-time messaging service that lets users send and receive messages between independent applications. It's an important part of Google Cloud Platform's big data offering, and is used by customers worldwide to build their own robust, global services. However, until now, the only way to use the Cloud PubSub API was via JSON over HTTP. That's all changed with the release of [PubSub gRPC alpha](https://cloud.google.com/blog/big-data/2016/03/announcing-grpc-alpha-for-google-cloud-pubsub). Now **users can access PubSub via gRPC** and benefit from all the advantages it brings.
<!--more-->
[Alpha instructions and gRPC code](https://cloud.google.com/pubsub/grpc-overview) are now available for gRPC PubSub in Python and Java.
But what if you want to use this service now with gRPC in another language - C#, say, or Ruby? Once you have a Google account, with a little bit of extra work you can do that too! You can use the tools and the instructions on [our site](/docs/) to generate and use your own gRPC client code from the PubSub service's `.proto` file, available from [GitHub](https://github.com/googleapis/googleapis/blob/master/google/pubsub/v1/pubsub.proto).
[Read the full Google Cloud PubSub announcement](https://cloud.google.com/blog/big-data/2016/03/announcing-grpc-alpha-for-google-cloud-pubsub)
[Find out more about using Google Cloud PubSub](https://cloud.google.com/pubsub/docs)

---
attribution: Originally written by Lisa Carey with help from others at Google.
date: "2016-04-04T00:00:00Z"
published: true
title: gRPC - now with easy installation.
url: blog/installation
---
Today we are happy to provide an update that significantly simplifies the getting started experience for gRPC.
* For most languages, **the gRPC runtime can now be installed in a single step via native package managers** such as `npm` for Node.js, `gem` for Ruby and `pip` for Python. Even though our Node, Ruby and Python runtimes are wrapped around gRPC's C core, users now don't need to explicitly pre-install the C core library as a package in most Linux distributions. We autofetch it for you :-).
* **For Java, we have simplified the steps needed to add gRPC support to your build tools** by providing plugins for Maven and Gradle. These let you easily depend on the core runtime to deploy or ship generated libraries into production environments.
* You can also use our Dockerfiles to use these updated packages - deploying microservices built on gRPC should now be a very simple experience.
<!--more-->
The installation story is not yet complete: we are now focused on improving your development experience by packaging our protocol buffer plugins in the same way as the gRPC runtime. This will simplify code generation and setting up your development environment.
### Want to try it?
Here's how to install the gRPC runtime today in all our supported languages:
Language | Platform | Command
---------|----------|--------
Node.js | Linux, Mac, Windows | `npm install grpc`
Python | Linux, Mac, Windows | `pip install grpcio`
Ruby | Linux, Mac, Windows | `gem install grpc`
PHP | Linux, Mac, Windows | `pecl install grpc-beta`
Go | Linux, Mac, Windows | `go get google.golang.org/grpc`
Objective-C | Mac | Runtime source fetched automatically from GitHub by Cocoapods
C# | Windows | Install [gRPC NuGet package](https://www.nuget.org/packages/Grpc/) from your IDE (Visual Studio, Monodevelop, Xamarin Studio)
Java | Linux, Mac, Windows | Use our [Maven and Gradle plugins](https://github.com/grpc/grpc-java/blob/master/README.md) that provide gRPC with [statically linked `boringssl`](https://github.com/grpc/grpc-java/blob/master/SECURITY.md#openssl-statically-linked-netty-tcnative-boringssl-static)
C++ | Linux, Mac, Windows | Currently requires [manual build and install](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/src/cpp/README.md)
You can find out more about installation in our [Getting Started guide](/docs/#install-grpc) and GitHub repositories. Do send us your feedback on our [mailing list](https://groups.google.com/forum/#!forum/grpc-io) or file issues on our issue tracker if you run into any problems.

---
attribution: Originally written by Brandon Phillips with additional content by Lisa
Carey and others at Google.
author: Brandon Phillips
company: CoreOS
company-link: https://coreos.com
date: "2016-05-09T00:00:00Z"
published: true
thumbnail: https://avatars2.githubusercontent.com/u/3730757?v=3&s=200
title: gRPC with REST and Open APIs
url: blog/coreos
---
Our guest post today comes from Brandon Phillips of [CoreOS](https://coreos.com/). CoreOS builds open source projects and products for Linux Containers. Their flagship product for consensus and discovery, [etcd](https://coreos.com/etcd/), and their container engine, [rkt](https://coreos.com/rkt/), are early adopters of gRPC.
One of the key reasons CoreOS chose gRPC is that it uses HTTP/2, enabling applications to present both an HTTP 1.1 REST/JSON API and an efficient gRPC interface on a single TCP port (available for Go). This provides developers with compatibility with the REST web ecosystem, while advancing a new, high-efficiency RPC protocol. With the recent release of Go 1.6, Go ships with a stable `net/http2` package by default.
<!--more-->
Since many CoreOS clients speak HTTP 1.1 with JSON, gRPC's easy interoperability with JSON and the [Open API Specification](https://github.com/OAI/OpenAPI-Specification) (formerly Swagger) was extremely valuable. For users who are more comfortable with HTTP/1.1 + JSON and Open API Spec APIs, the CoreOS team used a combination of open source libraries to make their gRPC services available in both gRPC and HTTP REST flavors, using API multiplexers to give users the best of both worlds. Let's dive into the details and find out how they did it!
*This post was originally published at the [CoreOS blog](https://coreos.com/blog/gRPC-protobufs-swagger.html). We are reproducing it here with some edits.*
## A gRPC application called EchoService
In this post we will build a small proof-of-concept gRPC application from a gRPC API definition, add a REST service gateway, and finally serve it all on a single TLS port. The application is called EchoService, and is the web equivalent of the shell command echo: the service returns, or "echoes", whatever text is sent to it.
First, let's define the arguments to EchoService in a protobuf message called EchoMessage, which includes a single field called value. We will define this message in a protobuf ".proto" file called `service.proto`. Here is our EchoMessage:
```proto
message EchoMessage {
  string value = 1;
}
```
In this same .proto file, we define a gRPC service that takes this data structure and returns it:
```proto
service EchoService {
  rpc Echo(EchoMessage) returns (EchoMessage) {
  }
}
```
Running this `service.proto` file "as is" through the Protocol Buffer compiler `protoc` generates a stub gRPC service in Go, along with clients in various languages. But gRPC alone isn't as useful as a service that also exposes a REST interface, so we won't stop with the gRPC service stub.
Next, we add the gRPC REST Gateway. This library will build a RESTful proxy on top of the gRPC EchoService. To build this gateway, we add metadata to the EchoService .proto to indicate that the Echo RPC maps to a RESTful POST method with all RPC parameters mapped to a JSON body. The gateway can map RPC parameters to URL paths and query parameters, but we omit those complications here for brevity.
```proto
service EchoService {
rpc Echo(EchoMessage) returns (EchoMessage) {
option (google.api.http) = {
post: "/v1/echo"
body: "*"
};
}
}
```
This means the gateway, once generated by `protoc`, can now accept an HTTP request from `curl` like this:
```sh
curl -X POST -k https://localhost:10000/v1/echo -d '{"value": "CoreOS is hiring!"}'
```
The whole system so far looks like this, with a single `service.proto` file generating both a gRPC server and a REST proxy:
<img src="/img/grpc-rest-gateway.png" class="img-responsive" alt="gRPC API with REST gateway">
To bring this all together, the echo service creates a Go `http.Handler` to detect if the protocol is HTTP/2 and the Content-Type is "application/grpc", and sends such requests to the gRPC server. Everything else is routed to the REST gateway. The code looks something like this:
```go
if r.ProtoMajor == 2 && strings.Contains(r.Header.Get("Content-Type"), "application/grpc") {
grpcServer.ServeHTTP(w, r)
} else {
otherHandler.ServeHTTP(w, r)
}
```
To try it out, all you need is a working Go 1.6 development environment and the following simple commands:
```sh
$ go get -u github.com/philips/grpc-gateway-example
$ grpc-gateway-example serve
```
With the server running you can try requests on both HTTP 1.1 and gRPC interfaces:
```sh
grpc-gateway-example echo Take a REST from REST with gRPC
curl -X POST -k https://localhost:10000/v1/echo -d '{"value": "CoreOS is hiring!"}'
```
One last bonus: because we have an Open API specification, you can browse the Open API UI running at `https://localhost:10000/swagger-ui/#!/EchoService/Echo` if you have the server above running on your laptop.
<img src="/img/grpc-swaggerscreen.png" class="img-responsive" alt="gRPC/REST Open API document">
We've taken a look at how to use gRPC to bridge to the world of REST. If you want to take a look at the complete project, check out the [repo on GitHub](https://github.com/philips/grpc-gateway-example). We think this pattern of using a single protobuf to describe an API leads to an easy-to-consume, flexible API framework, and we're excited to leverage it in more of our projects.
---
attribution: Originally written by David Cao with additional content by Makarand and
others at Google.
author: David Cao
company: Google
company-link: https://cloud.google.com
date: "2016-07-26T00:00:00Z"
published: false
thumbnail: ../img/gcp-icon.png?raw=true
title: Mobile Benchmarks
url: blog/mobile-benchmarks
---
As gRPC has become a better and faster RPC framework, we've consistently gotten the question, "How _much_ faster is gRPC?" We already have comprehensive server-side benchmarks, but we don't have mobile benchmarks. Benchmarking a client is a bit different than benchmarking a server. We care more about things such as latency and request size and less about things like queries per second (QPS) and number of concurrent threads. Thus we built an Android app in order to quantify these factors and provide solid numbers behind them.
Specifically what we want to benchmark is client side protobuf vs. JSON serialization/deserialization and gRPC vs. a RESTful HTTP JSON service. For the serialization benchmarks, we want to measure the size of messages and speed at which we serialize and deserialize. For the RPC benchmarks, we want to measure the latency of end-to-end requests and packet size.
## Protobuf vs. JSON
In order to benchmark protobuf and JSON, we ran serializations and deserializations over and over on randomly generated protos, which can be seen [here](https://github.com/david-cao/gRPCBenchmarks/tree/master/protolite_app/app/src/main/proto). These protos varied quite a bit in size and complexity, from just a few bytes to over 100kb. JSON equivalents were created and then also benchmarked. For the protobuf messages, we had three main methods of serializing and deserializing: simply using a byte array, `CodedOutputStream`/`CodedInputStream` (protobuf's own implementation of input and output streams), and Java's `ByteArrayOutputStream` and `ByteArrayInputStream`. For JSON we used `org.json`'s [`JSONObject`](https://developer.android.com/reference/org/json/JSONObject.html), which has only one way to serialize and deserialize: `toString()` and `new JSONObject()`, respectively.
In order to keep benchmarks as accurate as possible, we wrapped the code to be benchmarked in an interface and simply looped it for a set number of iterations. This way we discounted any time spent checking the system time.
```java
interface Action {
    void execute();
}

// Sample benchmark of multiplication
Action a = new Action() {
    @Override
    public void execute() {
        int x = 1000 * 123456; // the operation being timed
    }
}; // semicolon needed: this is an anonymous class assigned to a variable

for (int i = 0; i < 100; ++i) {
    a.execute();
}
```
Before running a benchmark, we ran a warmup to smooth out any erratic behaviour by the JVM, then calculated the number of iterations needed to run for a set time (10 seconds in the protobuf vs. JSON case). To do this, we started with 1 iteration, measured the time that run took, and compared it to a minimum sample time (2 seconds in our case). If the run took at least the minimum sample time, we estimated the number of iterations needed to fill 10 seconds by scaling linearly. Otherwise, we doubled the number of iterations and repeated.
```java
// This can be found in ProtobufBenchmarker.java benchmark()
int iterations = 1;
// Time action simply reports the time it takes to run a certain action for that number of iterations
long elapsed = timeAction(action, iterations);
while (elapsed < MIN_SAMPLE_TIME_MS) {
iterations *= 2;
elapsed = timeAction(action, iterations);
}
// Estimate number of iterations to run for 10 seconds
iterations = (int) ((TARGET_TIME_MS / (double) elapsed) * iterations);
```
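The same calibration strategy can be sketched as a self-contained Go program (the constants are shortened from the original 2 s minimum sample / 10 s target so the demo finishes quickly; `timeAction` and `calibrate` are our names, mirroring the Java above):

```go
package main

import (
	"fmt"
	"time"
)

const (
	minSampleTime = 100 * time.Millisecond // original benchmark used 2 s
	targetTime    = 500 * time.Millisecond // original benchmark used 10 s
)

// timeAction runs action the given number of times and reports the elapsed
// wall-clock time, so no clock checks happen inside the measured loop.
func timeAction(action func(), iterations int) time.Duration {
	start := time.Now()
	for i := 0; i < iterations; i++ {
		action()
	}
	return time.Since(start)
}

// calibrate doubles the iteration count until a run exceeds minSampleTime,
// then scales it linearly to estimate how many iterations fill targetTime.
func calibrate(action func()) int {
	iterations := 1
	elapsed := timeAction(action, iterations)
	for elapsed < minSampleTime {
		iterations *= 2
		elapsed = timeAction(action, iterations)
	}
	return int(float64(targetTime) / float64(elapsed) * float64(iterations))
}

func main() {
	sink := 0
	n := calibrate(func() { sink += 1000 * 123456 }) // cheap action to calibrate
	fmt.Println("estimated iterations for target time:", n, sink != 0)
}
```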
### Results
Benchmarks were run on protobuf, JSON, and gzipped JSON.
We found that regardless of the serialization/deserialization method used for protobuf, it was consistently about 3x faster for serializing than JSON. For deserialization, JSON is actually a bit faster for small messages (<1kb), around 1.5x, but for larger messages (>15kb) protobuf is 2x faster. For gzipped JSON, protobuf is well over 5x faster in serialization, regardless of size. For deserialization, both are about the same at small messages, but protobuf is about 3x faster for larger messages. Results can be explored in more depth and replicated [here](/github_readme).
## gRPC vs. HTTP JSON
To benchmark RPC calls, we want to measure end-to-end latency and bandwidth. To do this, we ping-pong with a server for 60 seconds, using the same message each time, and measure the latency and message size. The message consists of some fields for the server to read and a payload of bytes. We compared gRPC's unary call to a simple RESTful HTTP JSON service. The gRPC benchmark creates a channel and starts a unary call that repeats when it receives a response, until 60 seconds have passed. The response contains a proto with the same payload that was sent.
Similarly for the HTTP JSON benchmarks, it sends a POST request to the server with an equivalent JSON object, and the server sends back a JSON object with the same payload.
```java
// This can be found in AsyncClient.java doUnaryCalls()
// Make stub to send unary call
final BenchmarkServiceStub stub = BenchmarkServiceGrpc.newStub(channel);
stub.unaryCall(request, new StreamObserver<SimpleResponse>() {
long lastCall = System.nanoTime();
// Do nothing on next
@Override
public void onNext(SimpleResponse value) {
}
@Override
public void onError(Throwable t) {
Status status = Status.fromThrowable(t);
System.err.println("Encountered an error in unaryCall. Status is " + status);
t.printStackTrace();
future.cancel(true);
}
// Repeat if time isn't reached
@Override
public void onCompleted() {
long now = System.nanoTime();
// Record the latencies in microseconds
histogram.recordValue((now - lastCall) / 1000);
lastCall = now;
Context prevCtx = Context.ROOT.attach();
try {
if (endTime > now) {
stub.unaryCall(request, this);
} else {
future.done();
}
} finally {
Context.current().detach(prevCtx);
}
}
});
```
Both `HttpURLConnection` and the [OkHttp library](https://square.github.io/okhttp/) were used.
Only gRPC's unary calls were benchmarked against HTTP, since gRPC's streaming calls were over 2x faster than its unary calls, and HTTP has no equivalent of streaming, which is an HTTP/2-specific feature.
### Results
In terms of latency, gRPC is **5x-10x** faster up to the 95th percentile, with averages of around 2 milliseconds for an end-to-end request. For bandwidth, gRPC is about 3x faster for small requests (100-1000 byte payload), and consistently 2x faster for large requests (10kb-100kb payload). To replicate these results or explore in more depth, check out our [repository](/github_readme).
---
author: Miguel Mendez
company: Yik Yak
company-link: https://yikyakapp.com
date: "2017-04-12T00:00:00Z"
published: true
thumbnail: https://cdn-images-1.medium.com/max/1600/0*qYehJ2DvPgFcG_nX.
title: "Migration to Google Cloud Platform: gRPC & grpc-gateway"
url: blog/yygrpc
---
Our guest post today comes from [Miguel Mendez](https://www.linkedin.com/in/miguel-mendez-008231/) of Yik Yak.
_This post was originally a part of the [Yik Yak Engineering Blog](https://medium.com/yik-yak-eng) which focused on sharing the lessons learned as we evolved Yik Yak from early-stage startup code running in Amazon Web Services to an eventual incremental rewrite, re-architecture, and live-migration to Google Cloud Platform._
In our previous blog [post](https://medium.com/yik-yak-eng/migration-to-google-cloud-platform-overview-9b5e5c17c368) we gave an overview of our migration to Google Cloud Platform from Amazon Web Services. In this post we will drill down into the role that [gRPC](/) and [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) played in that migration and share some lessons which we picked up along the way.
<!--more-->
## Most people have REST APIs, don't you? What's the problem?
Yes, we actually still have REST APIs that clients use because migrating the client APIs was out of scope. To be fair, you can make REST APIs work and there are a lot of useful REST APIs out there. Having said that, the issues that we had with REST lie in the details.
### No Canonical REST Specification
There is no single REST specification that is canonical. There are best practices, but no true canon. For that reason, there isn't unanimous agreement on when to use specific HTTP methods and response codes. Beyond that, not all of the possible HTTP methods and response codes are supported across all platforms. This forces REST API implementers to compensate for these deficiencies using techniques that work for them but create more variance in REST APIs across the board. At best, REST APIs are really REST-ish dialects.
### Harder on Developers
REST APIs aren't exactly great from a developer's standpoint either.
First, because REST is tied to HTTP, there is no simple mapping to an API in my language of choice. If I'm using Go or Java, there is no “interface” that I can use in my code to stub it out. I can create one, but it is extra-linguistic to the REST API definition.
Second, REST APIs spread the information necessary to interpret the intent of a request across various components of the request: the HTTP method, the request URI, the request payload, and it can get even more complicated if request headers are involved in the semantics.
Third, it is great that I can use curl from the command line to hit an API, but it comes at the cost of having to shoehorn the API into that ecosystem. Normally that use case only matters for letting people quickly try out an API, and if that is high on your list of requirements then by all means feel free to use REST… Just keep it simple.
### No Declarative REST API Description
The fourth problem with REST APIs is that, at least until [Swagger](https://swagger.io/) arrived on the scene, there was no declarative way to define a REST API and include type information. It may sound pedantic, but there are legitimate reasons to want a proper definition that includes type information. To reinforce the point, look at the lines of PHP server code below, extracted from various files, which set the “hidePin” field on “yak” that was then returned to the client. The actual line of code that executed on the server was a function of multiple parameters, so imagine that the one which ran was chosen basically at random:
```php
// Code omitted…
$yak->hidePin=false;
// Code omitted…
$yak->hidePin=true;
// Code omitted…
$yak->hidePin=0;
// Code omitted…
$yak->hidePin=1;
```
What is the type of the field hidePin? You cannot say for certain. It could be a boolean or an integer or whatever happens to have been written there by the server, but in any case your clients now have to be able to deal with all of these possibilities, which makes them more complicated.
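For contrast, a toy Go sketch (the `Yak` struct and `HidePin` field are hand-written stand-ins for what a protobuf compiler would generate from a boolean field declaration) shows how a typed definition turns that ambiguity into a compile-time error:

```go
package main

import "fmt"

// Yak mirrors the shape a protobuf compiler might generate for a message
// declaring a boolean hide_pin field (hypothetical, for illustration).
type Yak struct {
	HidePin bool
}

func main() {
	y := Yak{HidePin: true}
	// y.HidePin = 1   // would not compile: cannot use 1 (untyped int) as bool
	// y.HidePin = "x" // would not compile either
	fmt.Println(y.HidePin) // the field can only ever be true or false
}
```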
Problems can also arise when the client's definition of a type varies from what the server expects. Have a look at the server code below, which processed a JSON payload sent up by a client:
```php
// Code omitted…
switch ($fieldName) {
// Code omitted…
case "recipientID":
// This is being added because iOS is passing the recipientID
// incorrectly and we still want to capture these events
// … expected fall through …
case "Recipientid":
$this->yakkerEvent->recipientID = $value;
break;
// Code omitted…
}
// Code omitted…
```
In this case, the server had to deal with an iOS client that sent a JSON object whose field name used unexpected casing. Again, not insurmountable but all of these little disconnects compound and work together to steal time away from the problems that really move the ball down the field.
## gRPC can address the issues with REST…
If you're not familiar with gRPC, it's a “high performance, open-source universal remote procedure call (RPC) framework” that uses Google Protocol Buffers as the Interface Description Language (IDL) for describing a service interface as well as the structure of the messages exchanged. This IDL can then be compiled to produce language-specific client and server stubs. In case that seemed a little obtuse, I'll zoom into the aspects that are important.
### gRPC is Declarative, Strongly-Typed, and Language Independent
gRPC descriptions are written using an Interface Description Language that is independent of any specific programming language, yet its concepts map onto the supported languages. This means that you can describe your ideal service API, the messages that it supports, and then use “protoc”, the protocol compiler, to generate client and server stubs for your API. Out of the box, you can produce client and server stubs in C/C++, C#, Node.js, PHP, Ruby, Python, Go and Java. You can also get additional protoc plugins which can create stubs for Objective-C and Swift.
Those issues that we had with the “hidePin” and “recipientID” vs. “Recipientid” fields above go away because we have a single, canonical declaration that establishes the types used, and the language-specific code generation ensures that we don't have typos in the client or server code regardless of their implementation language.
### gRPC Means No hand-rolling of RPC Code is Required
This is a very powerful aspect of the gRPC ecosystem. Oftentimes developers will hand-roll their RPC code because it just seems more straightforward. However, as the number of types of clients that you need to support increases, the carrying costs of this approach also increase non-linearly.
Imagine that you start off with a service that is called from a web browser. At some point down the road, the requirements are updated and now you have to support Android and iOS clients. Your server is likely fine, but the clients now need to be able to speak the same RPC dialect and often times there are differences that creep in. Things can get even worse if the server has to compensate for the differences amongst the clients.
On the other hand, using gRPC you just add the protocol compiler plugins and they generate the Android and iOS client stubs. This cuts out a whole class of problems. As a bonus, if you don't modify the generated code (and you should not have to), then any performance improvements in the generated code will be picked up.
### gRPC has Compact Serialization
gRPC uses Google protocol buffers to serialize messages. This serialization format is very compact because, among other things, field names are not included in the serialized form. Compare this to a JSON object where each instance of an object carries a full copy of its field names, includes extra curly braces, etc. For a low-volume application this may not be an issue, but it can add up quickly.
### gRPC Tooling is Extensible
Another very useful feature of the gRPC framework is that it is extensible. If you need support for a language that is not currently supported, there is a way to create plugins for the protocol compiler that allows you to add what you need.
### gRPC Supports Contract Updates
An often overlooked aspect of service APIs is how they may evolve over time. At best, this is often a secondary consideration. If you are using gRPC, and you adhered to a few basic rules, your messages can be forward and backward compatible.
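The "few basic rules" are the standard protobuf evolution guidelines. A hedged sketch of a compatible update (the field names here are invented for illustration) might look like:

```proto
// Evolving a message without breaking old clients:
message APIPost {
  string messageID = 1;
  string message = 2;
  // Safe: a new field with a fresh tag number. Old clients ignore it;
  // new clients see the default value when talking to old servers.
  int32 likeCount = 3;
  // Never reuse or renumber a released tag; reserve removed ones instead:
  reserved 4;
  reserved "legacyField";
}
```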
## grpc-gateway: because REST will be with us for a while…
You're probably thinking: gRPC is great but I have a ton of REST clients to deal with. Well, there is another tool in this ecosystem and it is called grpc-gateway. Grpc-gateway “generates a reverse-proxy server which translates a RESTful JSON API into gRPC”. So if you want to support REST clients you can, and it doesn't cost you any real extra effort.
If your existing REST clients are pretty far from the normal REST APIs, you can use custom marshallers with grpc-gateway to compensate.
## Migration and gRPC + grpc-gateway
As mentioned previously, we had a lot of PHP code and REST endpoints which we wanted to rework as part of the migration. By using the combination of gRPC and grpc-gateway, we were able to define gRPC versions of the legacy REST APIs and then use grpc-gateway to expose the exact REST endpoints that clients were used to. With these alternative implementations in place we were able to move traffic between the old and new systems using combinations of DNS updates as well as our [Experimentation and Configuration System](https://medium.com/yik-yak-eng/yik-yak-configuration-and-experiment-system-16a5c15ee77c#.7s11d3kqh) without causing any disruption to the existing clients. We were even able to leverage the existing test suites to verify functionality and establish parity between the old and new systems.
Let's walk through the pieces and how they fit together.
### gRPC IDL for “/api/getMessages”
Below is the gRPC IDL that we defined to mimic the legacy Yik Yak API in GCP. We've simplified the example to contain only the “/api/getMessages” endpoint, which clients use to get the set of messages centered around their current location.
```proto
// APIRequest is the message sent by clients.
message APIRequest {
// userID is the ID of the user making the request
string userID = 1;
// Other fields omitted for clarity…
}
// APIFeedResponse contains the set of messages that clients should
// display.
message APIFeedResponse {
repeated APIPost messages = 1;
// Other fields omitted for clarity…
}
// APIPost defines the set of post fields returned to the clients.
message APIPost {
string messageID = 1;
string message = 2;
// Other fields omitted for clarity…
}
// YYAPI service accessed by Android, iOS and Web clients.
service YYAPI {
// Other endpoints omitted…
// APIGetMessages returns the list of messages within a radius of
// the user's current location.
rpc APIGetMessages (APIRequest) returns (APIFeedResponse) {
option (google.api.http) = {
get: "/api/getMessages" // Option tells grpc-gateway that an HTTP
// GET to /api/getMessages should be
// routed to the APIGetMessages gRPC
// endpoint.
};
}
// Other endpoints omitted…
}
```
### Protoc Generated Go Interfaces for YYAPI Service
The IDL above is then compiled by the protoc compiler to produce the Go client proxies and server stubs shown below.
```go
// Client API for YYAPI service
type YYAPIClient interface {
APIGetMessages(ctx context.Context, in *APIRequest, opts ...grpc.CallOption) (*APIFeedResponse, error)
}
// NewYYAPIClient returns an implementation of the YYAPIClient interface which
// clients can use to call the gRPC service.
func NewYYAPIClient(cc *grpc.ClientConn) YYAPIClient {
// Code omitted for clarity..
}
// Server API for YYAPI service
type YYAPIServer interface {
APIGetMessages(context.Context, *APIRequest) (*APIFeedResponse, error)
}
// RegisterYYAPIServer registers an implementation of the YYAPIServer with an
// existing gRPC server instance.
func RegisterYYAPIServer(s *grpc.Server, srv YYAPIServer) {
// Code omitted for clarity..
}
```
### Grpc-gateway Generated Go-code for REST Reverse Proxy of YYAPI Service
By using the google.api.http option in our IDL above, we tell the grpc-gateway system that it should route HTTP GETs for “/api/getMessages” to the APIGetMessages gRPC endpoint. In turn, it creates the HTTP to gRPC reverse proxy and allows you to set it up by calling the generated function below.
```go
// RegisterYYAPIHandler registers the http handlers for service YYAPI to "mux".
// The handlers forward requests to the grpc endpoint over "conn".
func RegisterYYAPIHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error {
// Code omitted for clarity
}
```
So again, from a single gRPC IDL description you can obtain client and server interfaces and implementation stubs in your language of choice as well as REST reverse proxies for free.
## gRPC: I heard there were some rough edges?
We started working with gRPC for Go late in Q1 of 2016 and there were definitely some rough edges at the time.
### Early Adopter Issues
We ran into [Issue 674](https://github.com/grpc/grpc-go/issues/674), a resource leak inside of the Go gRPC client code which could cause gRPC transports to hang when under heavy load. The gRPC team was very responsive and the fix was merged into the master branch within days.
We ran into a resource leak in the generated code for grpc-gateway. However, by the time we found that issue, it had already been fixed by that team and merged into master.
The last early-adopter type issue that we ran into was around Go's gRPC client not supporting the GOAWAY packet that was part of the gRPC protocol spec. Fortunately, this one did not impact us in production. It only manifested during the repro case we had put together for Issue 674.
All in all this was fairly reasonable given how early we were.
### Load Balancing
Now, if you are going to use gRPC this is definitely one area that you need to think through carefully. By default, gRPC uses HTTP/2 instead of HTTP/1.1. HTTP/2 is able to open a connection to a server and reuse it for multiple requests, among other things. If you use it in that mode, you won't distribute requests amongst all of the servers in your load balancing pool. At the time we were executing the migration, existing load balancers didn't handle HTTP/2 traffic very well, if at all.
At the time the gRPC team didn't have a [Load Balancing Proposal](https://github.com/grpc/grpc/blob/master/doc/load-balancing.md), so we burned a lot of cycles trying to force our system to do some type of client-side load balancing. In the end, since most of our raw gRPC communications took place within the data center, and everything was deployed using Kubernetes, it was simpler to dial the remote server every time, thereby forcing the system to spread the load out amongst the servers in the Kubernetes Service. Given our setup it only added about 1 ms to the overall response time, so it was a simple workaround.
So was that the end of the load balancing issues? Not exactly. Once we had our basic gRPC-based system up and running, we started running load tests against it and noticed some interesting behaviors. Below is the per-server CPU load graph over time. Do you notice anything curious about it?
![](/img/yy-cpu-imbalance.png)
The server with the heaviest load was running at around 50% CPU, while the most lightly loaded server was running at around 20% CPU, even after several minutes of warmup. It turned out that even though we were dialing every time, we had an [nghttp2](https://nghttp2.org/) ingress as part of our network topology which would tend to send inbound requests to servers to which it had already connected, causing uneven distribution. After removing the nghttp2 ingress, our CPU graphs showed much less variance in the load distribution.
![](/img/yy-cpu-balanced.png)
## Conclusion
REST APIs have their issues, but they are not going away anytime soon. If you are up for trying something a little cleaner, then definitely consider using gRPC (along with grpc-gateway if you still need to expose a REST API). Even though we hit some issues early on, gRPC was a net gain for us. It gave us a path forward to more tightly defined APIs. It also allowed us to stand up new implementations of the legacy REST APIs in GCP which teed us up to seamlessly migrate traffic from the AWS implementations to the new GCP ones in a controlled manner.
Having discussed our use of Go, gRPC and Google Cloud Platform, we are ready to discuss how we built a new geo store on top of Google Bigtable and the Google S2 Library, the subject of our next post.
---
author: Brian Hardock
company: DEIS
company-link: https://deis.com/
date: "2017-05-15T00:00:00Z"
published: true
thumbnail: https://gabrtv.github.io/deis-dockercon-2014/img/DeisLogo.png
title: gRPC in Helm
url: blog/helmgrpc
---
*Our guest post today comes from Brian Hardock, a software engineer from Deis working on the [Helm](https://helm.sh/) project.*
Helm is the package manager for Kubernetes. Helm provides its users with a customizable mechanism for
managing distributed applications and controlling their deployment.
I have the good fortune to be a member of the phenomenal open-source Kubernetes Helm community, serving as
a core contributor. My first day working with the Helm team was spent prototyping the architecture for
the next generation of Helm. By the end of that day, we had produced the preliminary RPC protocol data model
used to enable communication between Helm and its in-cluster server component, Tiller.
<!--more-->
We chose to use protocol buffers - the default framework gRPC uses for serialization and over-the-air
transmission - as our data definition language. By the end of that first day hacking with the Helm team,
gRPC and protocol buffers proved to be a powerful combination: we had successfully achieved communication
between the Helm client and Tiller server using code generated from the protobuf and gRPC service definitions.
As a personal preference, we found that the protobuf files and resulting generated gRPC
code provided an aesthetic, nearly self-documenting developer experience compared to something like Swagger.
Within a few days, the Helm team was scoping and implementing features for our users. By choosing gRPC/Proto
we had reduced the time typically lost to the bikeshedding that inevitably evolves from API modeling and
churning out boilerplate server code. If we had not reaped the benefits of gRPC/protobuf from day 1, we would
have spent significantly more time pivoting up and down the stack, as opposed to honing our focus on what
matters: the users and the features they requested.
In addition to serving as the Helm/Tiller communication protocol, one of our more interesting applications
of protocol buffers is that we use it to model what's referred to in Kubernetes parlance as a "Chart". Charts
are an encapsulation of Kubernetes manifests that enable you to define, install, and upgrade Kubernetes applications.
For more complex Kubernetes applications, the set of manifests may be large. By virtue of its inherent compression
capabilities, protocol buffers and gRPC allowed us to mitigate the nuisance of transmitting bulky and
sprawling Kubernetes manifests.
For a deeper dive into:
- The Helm proto, see: <https://github.com/kubernetes/helm/tree/master/_proto/hapi>
- Its generated counterpart, see: <https://github.com/kubernetes/helm/tree/master/pkg/proto/hapi>
- The interface to our Helm client, see: <https://github.com/kubernetes/helm/tree/master/pkg/helm>
In summary, protobuf and gRPC provided Helm with:
* Clearly defined message and protocol semantics for client and server communications.
* Increased feature development via a reduction in time spent on boilerplate server code / API modeling.
* High performance transmission of data through generated code and compression.
* Minimized cognitive cycles spent going from 0 to client/server communications.
---
author: makdharma
company: Google
company-link: https://www.google.com
date: "2017-06-15T00:00:00Z"
published: true
title: gRPC Load Balancing
url: blog/loadbalancing
---
This post describes various load balancing scenarios seen when deploying gRPC. If you use [gRPC](/) with multiple backends, this document is for you.
A large scale gRPC deployment typically has a number of identical back-end instances, and a number of clients. Each server has a certain capacity. Load balancing is used for distributing the load from clients optimally across available servers.
<!--more-->
### Why gRPC?
gRPC is a modern RPC protocol implemented on top of HTTP/2. HTTP/2 is a Layer 7 (Application layer) protocol that runs on top of TCP (Layer 4, Transport layer), which runs on top of IP (Layer 3, Network layer). gRPC has many [advantages](https://http2.github.io/faq/#why-is-http2-binary) over traditional HTTP/REST/JSON mechanisms, such as:
1. Binary protocol (HTTP/2)
2. Multiplexing many requests on one connection (HTTP/2)
3. Header compression (HTTP/2)
4. Strongly typed service and message definitions (Protobuf)
5. Idiomatic client/server library implementations in many languages
In addition, gRPC integrates seamlessly with ecosystem components like service discovery, name resolver, load balancer, tracing and monitoring, among others.
## Load balancing options
### Proxy or Client side?
*Note: Proxy load balancing is also known as server-side load balancing in some literature.*
Deciding between proxy versus client-side load balancing is a primary architectural choice. In proxy load balancing, the client issues RPCs to a Load Balancer (LB) proxy. The LB distributes the RPC call to one of the available backend servers that implement the actual logic for serving the call. The LB keeps track of load on each backend and implements algorithms for distributing load fairly. The clients themselves do not know about the backend servers. Clients can be untrusted. This architecture is typically used for user-facing services, where clients from the open internet can connect to servers in a data center, as shown in the picture below. In this scenario, clients make requests to the LB (#1). The LB passes on the request to one of the backends (#2), and the backends report load to the LB (#3).
![image alt text](/img/image_0.png)
In client-side load balancing, the client is aware of multiple backend servers and chooses one to use for each RPC. The client gets load reports from the backend servers and implements the load balancing algorithms itself. In simpler configurations, server load is not considered and the client can just round-robin between available servers. This is shown in the picture below. As you can see, the client makes a request to a specific backend (#1). The backends respond with load information (#2), typically on the same connection on which the client RPC is executed. The client then updates its internal state.
![image alt text](/img/image_1.png)
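The simple round-robin variant of client-side balancing can be sketched in a few lines of Go (backend addresses are placeholders; a real client would plug a policy like this into its channel/name-resolution layer):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin picks backends in strict rotation: the simplest client-side
// policy, where server load reports are not consulted at all.
type roundRobin struct {
	backends []string
	next     atomic.Uint64 // safe for concurrent pickers
}

func (r *roundRobin) pick() string {
	n := r.next.Add(1) - 1
	return r.backends[n%uint64(len(r.backends))]
}

func main() {
	rr := &roundRobin{backends: []string{
		"10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051", // placeholder addresses
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.pick()) // cycles through the three backends
	}
}
```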
The following table outlines the pros and cons of each model.
<table>
<tr>
<td></td>
<td>Proxy</td>
<td>Client Side</td>
</tr>
<tr>
<td style="width:10% !important">Pros</td>
<td>
* Simple client
* No client-side awareness of backends
* Works with untrusted clients
</td>
<td>
* High performance because the extra hop is eliminated
</td>
</tr>
<tr>
<td>Cons</td>
<td>
* LB is in the data path
* Higher latency
* LB throughput may limit scalability
</td>
<td>
* Complex client
* Client keeps track of server load and health
* Client implements load balancing algorithm
* Per-language implementation and maintenance burden
* Client needs to be trusted, or the trust boundary needs to be handled by a lookaside LB.
</td>
</tr>
</table>
### Proxy Load Balancer options
Proxy load balancing can be L3/L4 (transport level) or L7 (application level). In transport-level load balancing, the LB terminates the TCP connection and opens another connection to the backend of choice. The application data (HTTP/2 and gRPC frames) is simply copied from the client connection to the backend connection. An L3/L4 LB by design does very little processing, adds less latency than an L7 LB, and is cheaper because it consumes fewer resources.
In L7 (application level) load balancing, the LB terminates and parses the HTTP/2 protocol. The LB can inspect each request and assign a backend based on the request contents. For example, a session cookie sent as part of an HTTP header can be used to associate a session with a specific backend, so that all requests for that session are served by the same backend. Once the LB has chosen an appropriate backend, it creates a new HTTP/2 connection to that backend. It then forwards the HTTP/2 streams received from the client to the backend(s) of choice. With HTTP/2, the LB can distribute the streams from one client among multiple backends.
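The session-affinity idea can be sketched in a few lines of Go: hash the session cookie to pick a backend, so every request carrying the same cookie maps to the same backend. This is a simplified stand-in for an L7 proxy's routing logic (the backend names are hypothetical); a production LB would typically use consistent hashing so the mapping survives changes to the backend set.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickBackend routes a request to a backend based on its session
// cookie: the same cookie always hashes to the same backend.
func pickBackend(sessionCookie string, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(sessionCookie))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	backends := []string{"backend-a", "backend-b", "backend-c"}
	first := pickBackend("session=42", backends)
	// Repeated requests for the same session land on the same backend.
	fmt.Println(first == pickBackend("session=42", backends)) // true
}
```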
#### L3/L4 (Transport) vs L7 (Application)
<table>
<tr>
<td>
Use case
</td>
<td>
Recommendation
</td>
</tr>
<tr>
<td>RPC load varies a lot among connections</td>
<td>Use Application level LB</td>
</tr>
<tr>
<td>Storage or compute affinity is important</td>
<td>Use Application level LB and use cookies or similar for routing requests to correct backend</td>
</tr>
<tr>
<td>Minimizing resource utilization in proxy is more important than features</td>
<td>Use L3/L4 LB</td>
</tr>
<tr>
<td>Latency is paramount</td>
<td>Use L3/L4 LB</td>
</tr>
</table>
### Client side LB options
#### Thick client
A thick client approach means the load balancing smarts are implemented in the client. The client is responsible for keeping track of available servers, their workload, and the algorithms used for choosing servers. The client typically integrates libraries that communicate with other infrastructures such as service discovery, name resolution, quota management, etc.
#### Lookaside Load Balancing
*Note: A lookaside load balancer is also known as an external load balancer or one-arm load balancer*
With lookaside load balancing, the load-balancing smarts are implemented in a special LB server. Clients query the lookaside LB, and the LB responds with the best server(s) to use. The heavy lifting of tracking server state and implementing the LB algorithm is consolidated in the lookaside LB. Note that clients might still implement simple algorithms on top of the sophisticated ones in the LB. gRPC defines a protocol for communication between the client and the LB using this model; see the gRPC load balancing [doc](https://github.com/grpc/grpc/blob/master/doc/load-balancing.md) for details.
The picture below illustrates this approach. The client gets at least one address from the lookaside LB (#1). The client then uses this address to make an RPC (#2), and the server sends a load report to the LB (#3). The lookaside LB communicates with other infrastructure such as name resolution, service discovery, and so on (#4).
![image alt text](/img/image_2.png)
## Recommendations and best practices
Depending on the particular deployment and its constraints, we suggest the following.
<table>
<tr>
<td>Setup</td>
<td>Recommendation</td>
</tr>
<tr>
<td markdown="1">
* Very high traffic between clients and servers
* Clients can be trusted
</td>
<td markdown="1">
* Thick client-side load balancing
* Client side LB with ZooKeeper/Etcd/Consul/Eureka. [ZooKeeper Example](https://github.com/makdharma/grpc-zookeeper-lb).
</td>
</tr>
<tr>
<td markdown="1">
* Traditional setup - Many clients connecting to services behind a proxy
* Need trust boundary between servers and clients
</td>
<td markdown="1">
* Proxy Load Balancing
* L3/L4 LB with GCLB (if using GCP)
* L3/L4 LB with haproxy - [Config file](https://gist.github.com/thpham/114d20de8472b2cef966)
* Nginx coming soon
* If need session stickiness - L7 LB with Envoy as proxy
</td>
</tr>
<tr>
<td markdown="1">
* Microservices - N clients, M servers in the data center
* Very high performance requirements (low latency, high traffic)
* Client can be untrusted
</td>
<td markdown="1">
* Look-aside Load Balancing
* Client-side LB using the [gRPC-LB protocol](https://github.com/grpc/grpc/blob/master/doc/load-balancing.md). Roll your own implementation (Q2'17); hosted gRPC-LB is in the works.
</td>
</tr>
<tr>
<td markdown="1">
* Existing Service-mesh like setup using Linkerd or Istio
</td>
<td markdown="1">
* Service Mesh
* Use built-in LB with [Istio](https://istio.io/), or [Envoy](https://github.com/lyft/envoy).
</td>
</tr>
</table>

---
author: Jaye Pitzeruse
company: Indeed
company-link: https://www.indeed.com
date: "2017-08-17T00:00:00Z"
published: true
title: 2017-08-17 Community Meeting Update
---
**Next Community Meeting:** Thursday, August 31, 2017 11am Pacific Time (US and Canada)
<!--more-->
## General Announcements
Call for Papers: CloudNativeCon
CloudNativeCon gathers all CNCF (Cloud Native Computing Foundation) projects under a single roof.
Presenters will be talking about their experiences with Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt and CNI.
The call for papers period will be ending on Monday, August 21, 2017.
The conference will be taking place on the 6th and 7th of December, 2017.
If you submit a talk, please add an entry to the spreadsheet linked in the [gRPC Community Meeting Working Doc](https://docs.google.com/document/d/1DTMEbBNmzNbZBh8nOivsnnw3CwUr1Q7WGRe7rNxyHOU/edit#bookmark=id.7qk9qf3ri75m).
Register for [CloudNativeCon](https://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america/attend/register).
## Release Updates
1.6 will be available soon.
The required changes have already been merged and published in the protocol buffer libraries.
Keep your eyes peeled for the upcoming release.
## Platform Updates
No platform updates.
## Language Updates
No language specific updates.

---
author: Wouter van Oortmerssen
company: Google
company-link: https://www.google.com
date: "2017-08-17T00:00:00Z"
published: true
title: Announcing out-of-the-box support for gRPC in the FlatBuffers serialization library
url: blog/flatbuffers
---
The recent release of Flatbuffers [version 1.7](https://github.com/google/flatbuffers/releases) introduced truly zero-copy support for gRPC out of the box.
[Flatbuffers](https://google.github.io/flatbuffers/) is a serialization library that allows you to access serialized data without first unpacking it or allocating any
additional data structures. It was originally designed for games and other resource-constrained applications, but is now finding more general use, both by teams within Google and by other companies such as Netflix and Facebook.
<!--more-->
FlatBuffers enables maximum throughput by directly using gRPC's slice buffers with zero copy for common use cases. An incoming RPC can be processed directly from gRPC's internal buffers, and constructing a new message writes directly to these buffers without intermediate steps.
This is currently fully supported in the C++ implementation of FlatBuffers, with more languages to come. There is also an implementation in Go, which is not entirely zero-copy, but is still very low on allocation cost (see below).
## Example Usage
Let's look at an example of how this works.
### Use Flatbuffers as an IDL
Start with an `.fbs` schema (similar to a `.proto` file, if you are familiar with protocol buffers) that declares an RPC service:
```proto
table HelloReply {
message:string;
}
table HelloRequest {
name:string;
}
table ManyHellosRequest {
name:string;
num_greetings:int;
}
rpc_service Greeter {
SayHello(HelloRequest):HelloReply;
SayManyHellos(ManyHellosRequest):HelloReply (streaming: "server");
}
```
To generate C++ code from this, run: `flatc --cpp --grpc example.fbs`, much like in protocol buffers.
#### Generated Server Implementation
The server implementation is very similar to protocol buffers, except now the request and response messages are of type `flatbuffers::grpc::Message<HelloRequest> *`.
Unlike protocol buffers, where these types represent a tree of C++ objects, here they are merely handles to a flat object in the underlying gRPC slice. You can access the data directly:
```cpp
auto request = request_msg->GetRoot();
auto name = request->name()->str();
```
Building a response is equally simple:
```cpp
auto msg_offset = mb_.CreateString("Hello, " + name);
auto hello_offset = CreateHelloReply(mb_, msg_offset);
mb_.Finish(hello_offset);
*response_msg = mb_.ReleaseMessage<HelloReply>();
```
The client code is the same as that generated by protocol buffers, except for the FlatBuffer access and construction code.
See the full example [here](https://github.com/google/flatbuffers/tree/master/grpc/samples/greeter). To compile it, you need gRPC.
The same repo has a [similar example](https://github.com/google/flatbuffers/blob/master/grpc/tests/go_test.go) for Go.
Read more about using and building FlatBuffers for your platform [on the flatbuffers site](https://google.github.io/flatbuffers/).

---
author: Mahak Mukhi
company: Google
company-link: google.com
date: "2017-08-22T00:00:00Z"
published: true
title: 2017-08-22 gRPC-Go performance Improvements
---
For the past few months we've been working on improving gRPC-Go performance. This includes improving network utilization, optimizing CPU usage, and reducing memory allocations. Most of our recent effort has been focused on revamping gRPC-Go flow control. After several optimizations and new features, we've been able to improve performance quite significantly, especially on high-latency networks. We expect users working with high-latency networks and large messages to see an order-of-magnitude performance gain. Benchmark results are at the end of this post.

This post summarizes the work we have done so far (in chronological order) to improve performance and lays out our near-future plans.
<!--more-->
### Recently Implemented Optimizations
###### Expanding stream window on receiving large messages
[Code link](https://github.com/grpc/grpc-go/pull/1248)
This is an optimization used by the gRPC C core to achieve performance benefits for large messages. The idea is that when there's an active read by the application on the receive side, we can effectively bypass stream-level flow control to request the whole message. This proves very helpful with large messages. Since the application is already committed to reading and has allocated enough memory for it, it makes sense to send a proactive large window update (if necessary) to get the whole message, rather than receiving it in chunks and sending window updates whenever we run low on window.
This optimization alone provided a 10x improvement for large messages on high-latency networks.
###### Decoupling application reads from connection flow control
[Code link](https://github.com/grpc/grpc-go/pull/1265)
After several discussions with the gRPC-Java and gRPC C core teams, we realized that gRPC-Go's connection-level flow control was overly restrictive: window updates on the connection depended on whether the application had read data from it. It makes perfect sense for stream-level flow control to depend on application reads, but much less so for connection-level flow control. The rationale is as follows: a connection is shared by several streams (RPCs). If even one stream read slowly, or didn't read at all, it would hamper performance or completely stall the other streams on that connection, because we wouldn't send window updates on the connection until that slow or inactive stream read data. Therefore, it makes sense to decouple the connection's flow control from application reads.
However, this begs at least two questions:
1. Won't a client be able to send as much data as it wants to the server by creating new streams whenever one runs out of window?
2. Why even have connection-level flow control if the stream-level flow control is enough?
The answer to the first question is short and simple: no. A server has an option to limit the number of streams that it intends to serve concurrently. Therefore, although at first it may seem like a problem, it really is not.
The need for connection-level flow control:
It is true that stream-level flow control is sufficient to throttle a sender from sending too much data. But without connection-level flow control (or with an unlimited connection-level window), whenever things get slower on one stream, opening a new one appears to make things faster. That only goes so far, since the number of streams is limited. Having the connection-level flow-control window set to the Bandwidth Delay Product (BDP) of the network, however, puts an upper bound on how much performance can realistically be squeezed out of the network.
###### Piggyback window updates
[Code link](https://github.com/grpc/grpc-go/pull/1273)
Sending a window update has a cost of its own: a flush operation is necessary, which results in a syscall. Syscalls are blocking and slow. Therefore, when sending out a stream-level window update, it makes sense to also check whether a connection-level window update can be sent using the same flush syscall.
###### BDP estimation and dynamic flow control window
[Code link](https://github.com/grpc/grpc-go/pull/1310)
This feature is the latest and in some ways the most awaited optimization feature that has helped us close the final gap between gRPC and HTTP/1.1 performance on high latency networks.
Bandwidth Delay Product (BDP) is the bandwidth of a network connection times its round-trip latency. This effectively tells us how many bytes can be "on the wire" at a given moment, if full utilization is achieved.
The [algorithm](https://docs.google.com/document/d/1Eq4eBEbNt1rc8EYuwqsduQd1ZfcBOCYt9HVSBa--m-E/pub) to compute the BDP and adapt accordingly was first proposed by @ejona and later implemented by both the gRPC C core and gRPC-Java (note that it isn't enabled in Java yet). The idea is simple and powerful: every time a receiver gets a data frame, it sends out a BDP ping (a ping with unique data used only by the BDP estimator). The receiver then counts the number of bytes it receives (including the ones that triggered the BDP ping) until the ack for that ping arrives. This total, accumulated over about 1.5 RTT (round-trip time), approximates 1.5 times the effective BDP. If it is close to our current window size (say, more than 2/3rds of it), we must increase the window. We set our window sizes (both stream- and connection-level) to twice the sampled BDP (the total sum of all bytes received).
This algorithm by itself could cause the BDP estimate to increase indefinitely: an increase in window size results in sampling more bytes, which in turn causes the window to be increased further. This phenomenon is called [bufferbloat](https://en.wikipedia.org/wiki/Bufferbloat) and was discovered by the earlier implementations in the gRPC C core and gRPC-Java. The solution is to calculate the bandwidth for every sample and check whether it is greater than the maximum bandwidth observed so far; only then do we increase our window sizes. The bandwidth can be calculated by dividing the sample by RTT * 1.5 (remember, the sample spans one and a half round trips). If the bandwidth doesn't increase along with the sampled bytes, that indicates the change is due to an increased window size and doesn't reflect the nature of the network itself.
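The growth rule just described can be sketched as follows. The field names and bookkeeping are illustrative, not gRPC-Go's internals.

```go
package main

import "fmt"

const maxWindow = 4 * 1024 * 1024 // growth bounded at 4 MB, matching TCP's common cap

// bdpEstimator holds the state for the dynamic-window rule.
type bdpEstimator struct {
	window       int     // current flow-control window in bytes
	maxBandwidth float64 // highest bandwidth observed so far, bytes/sec
}

// onPingAck takes the bytes counted between a BDP ping and its ack
// (~1.5 RTT worth of data) and the measured RTT in seconds.
func (b *bdpEstimator) onPingAck(sample int, rtt float64) {
	bw := float64(sample) / (rtt * 1.5)
	// Grow only if the sample presses against the window AND bandwidth
	// actually increased; otherwise a larger sample is just an artifact
	// of a larger window (bufferbloat).
	if bw > b.maxBandwidth && sample > b.window*2/3 {
		b.maxBandwidth = bw
		b.window = 2 * sample
		if b.window > maxWindow {
			b.window = maxWindow
		}
	}
}

func main() {
	e := &bdpEstimator{window: 64 * 1024}
	e.onPingAck(60*1024, 0.1) // sample near the window, bandwidth up: grow
	fmt.Println(e.window)     // 122880
	e.onPingAck(100*1024, 0.5) // more bytes but lower bandwidth: no growth
	fmt.Println(e.window)     // 122880
}
```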
While running experiments on VMs in different continents, we realized that every once in a while a rogue, unnaturally fast ping-ack at the right time (really the wrong time) would cause our window sizes to go up. Such a ping-ack makes us observe a decreased RTT and calculate a high bandwidth value; if that byte sample was then greater than 2/3rds of our window, we would increase the window sizes. But the ping-ack was an aberration and shouldn't have changed our perception of the network RTT altogether. Therefore, we keep a running average of the observed RTTs, weighted by a constant rather than by the total number of samples, so that recent RTTs count more than older ones. This is important because networks might change over time.
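The running average can be sketched as a constant-weight (exponentially weighted) average. The weight below is an illustrative choice, not the value gRPC-Go uses.

```go
package main

import "fmt"

// updateRTT nudges the RTT estimate toward a new sample. Because the
// weight is a constant rather than 1/number-of-samples, recent samples
// count more than old ones, and one freak fast ping-ack cannot
// permanently skew the estimate.
func updateRTT(current, sample float64) float64 {
	const weight = 0.1 // illustrative
	return current*(1-weight) + sample*weight
}

func main() {
	rtt := 100.0              // ms
	rtt = updateRTT(rtt, 2.0) // one rogue, unnaturally fast ack
	fmt.Printf("%.1f\n", rtt) // 90.2: nudged, not overwritten
}
```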
During implementation we experimented with several tuning parameters, such as the multiplier used to compute the window size from the sample size, to find the settings that best balanced growth against accuracy.
Given that we're always bound by the flow control of TCP which for most cases is upper bounded at 4MB, we bound the growth of our window sizes by the same number: 4MB.
BDP estimation and dynamic window adjustment is turned on by default and can be turned off by manually setting the connection and/or stream window sizes.
### Near-Future Efforts
We are now looking into improving our throughput through better CPU utilization; the following efforts are in line with that.
###### Reducing flush syscalls
We noticed a bug in our transport layer that causes us to make a flush syscall for every data frame we write, even when the same goroutine has more data to send. Many of these writes can be batched into a single flush, and doing so will not in fact require a big change to the code itself.
In our efforts to get rid of unnecessary flushes, we recently combined the headers and data write for unary and server-streaming RPCs into one flush on the client side ([code](https://github.com/grpc/grpc-go/pull/1343)).
Another related idea, proposed by one of our users (@petermattic) in [this PR](https://github.com/grpc/grpc-go/pull/1373), was to combine a server's response to a unary RPC into one flush. We are currently looking into that as well.
###### Reducing memory allocation
For every data frame read from the wire, a new memory allocation takes place. The same holds true at the gRPC layer for every new message, for decompressing and for decoding. These allocations result in excessive garbage-collection cycles, which are expensive. Reusing memory buffers can reduce this GC pressure, and we are prototyping approaches to do so. Since requests need buffers of differing sizes, one approach is to maintain separate memory pools of fixed sizes (powers of two). When reading x bytes from the wire, we find the nearest power of 2 greater than or equal to x and reuse a buffer from our cache if one is available, or allocate a new one if need be. We plan to use Go's sync.Pool so we don't have to worry about garbage collection. However, we will need to run sufficient tests before committing to this.
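The pooling approach under consideration can be sketched with the standard library's `sync.Pool`. This is an illustration of the idea, not gRPC-Go's code.

```go
package main

import (
	"fmt"
	"math/bits"
	"sync"
)

// pools[i] serves buffers of capacity 1<<i bytes.
var pools [32]sync.Pool

// getBuf returns a buffer of length n (n must be > 0) whose capacity
// is the nearest power of two >= n, reusing a pooled buffer if one
// is available.
func getBuf(n int) []byte {
	i := bits.Len(uint(n - 1)) // smallest i with 1<<i >= n
	if b, ok := pools[i].Get().([]byte); ok {
		return b[:n]
	}
	return make([]byte, n, 1<<i)
}

// putBuf returns a buffer to its size class for reuse.
func putBuf(b []byte) {
	i := bits.Len(uint(cap(b) - 1))
	if cap(b) == 1<<i { // only pool exact power-of-two capacities
		pools[i].Put(b[:cap(b)])
	}
}

func main() {
	b := getBuf(1500) // e.g. one MTU-sized data frame
	fmt.Println(len(b), cap(b)) // 1500 2048
	putBuf(b) // eligible for reuse instead of garbage collection
}
```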
### Results
* Benchmark on a real network:
* Server and client were launched on two VMs in different continents. RTT of ~152ms.
* Client made an RPC with a payload, and the server responded with an empty message.
* The time taken for each RPC was measured.
* [Code link](https://github.com/grpc/grpc-go/compare/master...MakMukhi:http_greeter)
<table>
<tr><th>Message Size </th><th>GRPC </th><th>HTTP 1.1</th></tr>
<tr><td>1 KB</td><td>~152 ms</td><td>~152 ms</td></tr>
<tr><td>10 KB</td><td>~152 ms</td><td>~152 ms</td></tr>
<tr><td>100 KB</td><td>~152 ms</td><td>~152 ms</td></tr>
<tr><td>1 MB</td><td>~152 ms</td><td>~152 ms</td></tr>
<tr><td>10 MB</td><td>~622 ms</td><td>~630 ms</td></tr>
<tr><td>100 MB</td><td>~5 sec</td><td>~5 sec</td></tr>
</table>
* Benchmark on simulated network:
* Server and client were launched on the same machine and different network latencies were simulated.
* Client made an RPC with a 1 MB payload, and the server responded with an empty message.
* The time taken for each RPC was measured.
* The following tables show the time taken by the first 10 RPCs.
* [Code link](https://github.com/grpc/grpc-go/compare/master...MakMukhi:grpc_vs_http)
##### No Latency Network
| GRPC | HTTP 2.0 | HTTP 1.1 |
| --------------|:-------------:|-----------:|
|5.097809ms|16.107461ms|18.298959ms |
|4.46083ms|4.301808ms|7.715456ms
|5.081421ms|4.076645ms|8.118601ms
|4.338013ms|4.232606ms|6.621028ms
|5.013544ms|4.693488ms|5.83375ms
|3.963463ms|4.558047ms|5.571579ms
|3.509808ms|4.855556ms|4.966938ms
|4.864618ms|4.324159ms|6.576279ms
|3.545933ms|4.61375ms|6.105608ms
|3.481094ms|4.621215ms|7.001607ms
##### Network with RTT of 16ms
| GRPC | HTTP 2.0 | HTTP 1.1 |
| --------------|:-------------:|-----------:|
|118.837625ms|84.453913ms|58.858109ms
|36.801006ms|22.476308ms|20.877585ms
|35.008349ms|21.206222ms|19.793881ms
|21.153461ms|20.940937ms|22.18179ms
|20.640364ms|21.888247ms|21.4666ms
|21.410346ms|21.186008ms|20.925514ms
|19.755766ms|21.818027ms|20.553768ms
|20.388882ms|21.366796ms|21.460029ms
|20.623342ms|20.681414ms|20.586908ms
|20.452023ms|20.781208ms|20.278481ms
##### Network with RTT of 64ms
| GRPC | HTTP 2.0 | HTTP 1.1 |
| --------------|:-------------:|-----------:|
|455.072669ms|275.290241ms|208.826314ms
|195.43357ms|70.386788ms|70.042513ms
|132.215978ms|70.01131ms|71.19429ms
|69.239273ms|70.032237ms|69.479335ms
|68.669903ms|70.192272ms|70.858937ms
|70.458108ms|69.395154ms|71.161921ms
|68.488057ms|69.252731ms|71.374758ms
|68.816031ms|69.628744ms|70.141381ms
|69.170105ms|68.935813ms|70.685521ms
|68.831608ms|69.728349ms|69.45605ms

---
attribution: Mark Mandel, Sandeep Dinesh
date: "2017-09-14T00:00:00Z"
published: true
title: The gRPC Meetup Kit
url: blog/meetup-kit
---
<p>If you have ever wanted to run an event around <a href="http://grpc.io">gRPC</a>, but didn&rsquo;t know where to start or weren&rsquo;t sure what content was available, we have released the <a href="https://github.com/grpc-ecosystem/meetup-kit">gRPC Meetup Kit</a>!</p>
<!--more-->
<p>The meetup kit includes a 15 minute presentation on the basic concepts of gRPC, with accompanying <a href="https://docs.google.com/presentation/d/1dgI09a-_4dwBMLyqfwchvS6iXtbcISQPLAXL6gSYOcc/edit?usp=sharing">slides</a> and <a href="https://www.youtube.com/watch?v=UVsIfSfS6I4">video</a> for either reference or playback, as well as a <a href="https://codelabs.developers.google.com/codelabs/cloud-grpc/index.html">45 minute codelab</a> that takes you through the basics of gRPC in <a href="https://nodejs.org">Node.js</a> and <a href="https://golang.org/">Go</a>. At the end of the codelab participants will have a solid understanding of the fundamentals of gRPC.</p>
<p>If you are thinking about running a gRPC event, make sure to contact us to receive <a href="https://goo.gl/forms/C3TCtFdobz4ippty2">gRPC stickers</a> and/or organise <a href="https://goo.gl/forms/pvxNwWExr5ApbNst2">office hours over Hangouts with the gRPC team</a>! </p>

---
author: Doug Fawley, gRPC-Go TL
company: Google
company-link: google.com
date: "2018-01-22T00:00:00Z"
published: true
title: 2018-01-19 gRPC-Go Engineering Practices
---
It's the start of the new year, and almost the end of my first full year on the
gRPC-Go project, so I'd like to take this opportunity to provide an update on
the state of gRPC-Go development and give some visibility into how we manage the
project. For me, personally, this is the first open source project to which
I've meaningfully contributed, so this year has been a learning experience for
me. Over this year, the team has made constant improvements to our work habits
and communication. I still see room for improvement, but I believe we are in a
considerably better place than we were a year ago.
<!--more-->
## Repo Health
When I first joined the gRPC-Go team, it had been without its previous
technical lead for a few months. At that time, we had 45 open PRs, the oldest
of which was over a year old. As a new team member and maintainer,
the accumulation of stale PRs made it difficult to assess priorities and
understand the state of things. For our contributors, neglecting PRs was both
disrespectful and an inconvenience when we started asking for rebases due to
other commits. To resolve this, we made a concerted effort to either merge or
close all of those PRs, and we now hold weekly meetings to review the status of
every active PR to prevent the situation from reoccurring.
At the same time, we had 103 open issues, many of which were already fixed,
outdated, or untriaged. Since then, we have fixed or closed 85 of those and put in
place a process to ensure we triage and prioritize new issues on a weekly
rotation. Similarly to our PRs, we also review our assigned and high-priority
issues in a weekly meeting.
Our ongoing SLO for new issues and PRs is 1 week to triage and first response.
We also revamped our [labels](https://github.com/grpc/grpc-go/labels) for issues
and PRs to help with organization. We typically apply a priority (P0-P3) and a
type (e.g. Bug, Feature, or Performance) to every issue. We also have a
collection of status labels we apply in various situations. The type labels are
also applied to PRs to aid in generating our release notes.
## Versioning and Backward Compatibility
We have recently documented our [versioning
policy](https://github.com/grpc/grpc-go/blob/master/Documentation/versioning.md).
Our goal is to maintain full backward compatibility except in limited
circumstances, including experimental APIs and mitigating security risks (most
notably [#1392](https://github.com/grpc/grpc-go/pull/1392)). If you notice a
behavior regression, please don't hesitate to [open an
issue](https://github.com/grpc/grpc-go/issues/new) in our repo (please [be
reasonable](https://xkcd.com/1172/)).
## gRFC
The [gRPC proposal repo](https://github.com/grpc/proposal) contains proposals
for substantial feature changes for gRPC that need to be designed upfront,
called gRFCs. The purpose of this process is to provide visibility and solicit
feedback from the community. Each change is discussed on our [mailing
list](https://groups.google.com/forum/#!forum/grpc-io) and debated before the
change is made. We leveraged this before making the
backward-compatibility-breaking metadata change ([gRFC
L7](https://github.com/grpc/proposal/blob/master/L7-go-metadata-api.md)), and
also for designing the new resolver/balancer API ([gRFC
L9](https://github.com/grpc/proposal/pull/30)).
## Regression Testing
Every PR in our repo must pass our unit and end-to-end tests. Our current test
coverage is 85%. Anytime a regression is identified, we add a test that covers
the failing scenario, both to prove to ourselves that the problem is resolved by
the fix, and to prevent it from reoccurring in the future. This helps us
improve our overall coverage numbers as well. We also intend to re-enable
coverage reporting for all PRs, but in a non-blocking fashion ([related
issue](https://github.com/grpc/grpc-go/issues/1676)).
In addition to testing for correctness, any PR that we suspect will impact
performance is run though our benchmarks. We have a set of benchmarks both in
our [open source repo](https://github.com/grpc/grpc-go/tree/master/benchmark)
and also within Google. These comprise a variety of workloads that we believe
are most important for our users, both streaming and unary, and some are
specifically designed to measure our optimal QPS, throughput, or latency.
## Releases
The GA release of gRPC-Go was made in conjunction with the other languages in
July of 2016. The team performed several patch releases between then and the
end of 2016, but none included release notes. Our subsequent releases have
improved in regularity (a minor release is performed every six weeks) and
in the quality of the release notes. We also are responsive with patch
releases, back-porting bug fixes to older releases either on demand or for more
serious issues within a week.
When performing a release, in addition to the tests in our repo, we also run a
full suite of inter-op tests with other gRPC language implementations. This
process has been working well for us, and we will cover more about this in a
future blog post.
## Non-Open Source Work
We have taken an "open source first" approach to developing gRPC. This means
that, wherever possible, gRPC functionality is added directly into the open
source project. However, to work within Google's infrastructure, our team
sometimes needs to provide additional functionality on top of gRPC. This is
typically done through hooks like the [stats
API](https://godoc.org/google.golang.org/grpc/stats#Handler) or
[interceptors](https://godoc.org/google.golang.org/grpc#UnaryClientInterceptor)
or [custom resolvers](https://godoc.org/google.golang.org/grpc/resolver).
To keep Google's internal version of gRPC up-to-date with the open source
version, we do weekly or on-demand imports. Before an import, we run every test
within Google that depends upon gRPC. This gives us another way in which we can
catch problems before performing releases in Open Source.
## Looking Forward
In 2018, we intend to do more of the same, and maintain our SLOs around
addressing issues and accepting contributions to the project. We also would
like to more aggressively tag issues with the ["Help
Wanted"](https://github.com/grpc/grpc-go/labels/Status%3A%20Help%20Wanted) label
for anyone looking to contribute to have a bigger selection of issues to choose
from.
For gRPC itself, one of our main focuses right now is performance, which we hope
will transparently benefit many of our users. In the near-term, we have some
exciting changes we're wrapping up that should provide a 30+% reduction in
latency with high concurrency, resulting in a QPS improvement of ~25%. Once
that work is done, we have a list of other [performance
issues](https://github.com/grpc/grpc-go/issues?q=is%3Aissue+is%3Aopen+label%3A%22Type%3A+Performance%22)
that we'll be tackling next.
On user experience, we want to provide better documentation, and are starting to
improve our godoc with better comments and more examples. We want to improve
the overall experience of using gRPC, so we will be working closely on projects
around distributed tracing, monitoring, and testing to make gRPC services easier
to manage in production. We want to do more, and we are hoping that starting
with these and listening to feedback will help us ship improvements steadily.

---
author: Gráinne Sheerin, Google SRE
company: Google
company-link: https://www.google.com
date: "2018-02-26T00:00:00Z"
published: true
title: gRPC and Deadlines
url: blog/deadlines
---
**TL;DR Always set a deadline**. This post explains why we recommend being deliberate about setting deadlines, with useful code snippets to show you how.
<!--more-->
When you use gRPC, the gRPC library takes care of communication, marshalling, unmarshalling, and deadline enforcement. Deadlines allow gRPC clients to specify how long they are willing to wait for an RPC to complete before the RPC is terminated with the error `DEADLINE_EXCEEDED`. By default this deadline is a very large number, dependent on the language implementation. How deadlines are specified is also language-dependent. Some language APIs work in terms of a **deadline**, a fixed point in time by which the RPC should complete. Others use a **timeout**, a duration of time after which the RPC times out.
In general, when you don't set a deadline, resources will be held for all in-flight requests, and all requests can potentially reach the maximum timeout. This puts the service at risk of running out of resources, like memory, which would increase the latency of the service, or could crash the entire process in the worst case.
To avoid this, services should specify the longest default deadline they technically support, and clients should wait until the response is no longer useful to them. For the service this can be as simple as providing a comment in the .proto file. For the client this involves setting useful deadlines.
There is no single answer to "What is a good deadline/timeout value?". Your service might be as simple as the [Greeter](https://github.com/grpc/grpc/blob/master/examples/protos/helloworld.proto) in our quick start guides, in which case 100 ms would be fine. Your service might be as complex as a globally-distributed and strongly consistent database. The deadline for a client query will be different from how long they should wait for you to drop their table.
So what do you need to consider to make an informed choice of deadline? Factors to take into account include the end-to-end latency of the whole system, which RPCs are serial, and which can be made in parallel. You should be able to put numbers on it, even if it's a rough calculation. Engineers need to understand the service and then set a deliberate deadline for the RPCs between clients and servers.
In gRPC, both the client and server make their own independent and local determination about whether the remote procedure call (RPC) was successful. This means their conclusions may not match! An RPC that finished successfully on the server side can fail on the client side. For example, the server can send the response, but the reply can arrive at the client after their deadline has expired. The client will already have terminated with the status error `DEADLINE_EXCEEDED`. This should be checked for and managed at the application level.
## Setting a deadline
As a client you should always set a deadline for how long you are willing to wait for a reply from the server. Here's an example using the greeting service from our [Quick Start Guides](/docs/quickstart/):
### C++
```cpp
ClientContext context;
std::chrono::system_clock::time_point deadline =
    std::chrono::system_clock::now() + std::chrono::milliseconds(100);
context.set_deadline(deadline);
```
### Go
```go
clientDeadline := time.Now().Add(time.Duration(*deadlineMs) * time.Millisecond)
ctx, cancel := context.WithDeadline(ctx, clientDeadline)
defer cancel() // release the context's resources if the RPC finishes before the deadline
```
### Java
```java
response = blockingStub.withDeadlineAfter(deadlineMs, TimeUnit.MILLISECONDS).sayHello(request);
```
This sets the deadline to 100 ms, measured from when the client initiates the RPC to when the response is picked up by the client.
## Checking deadlines
On the server side, the server can query to see if a particular RPC is no longer wanted. Before a server starts work on a response it is very important to check if there is still a client waiting for it. This is especially important to do before starting expensive processing.
### C++
```cpp
if (context->IsCancelled()) {
  return Status(StatusCode::CANCELLED, "Deadline exceeded or Client cancelled, abandoning.");
}
```
### Go
```go
if ctx.Err() == context.Canceled {
	return nil, status.Error(codes.Canceled, "Client cancelled, abandoning.")
}
```
### Java
```java
if (Context.current().isCancelled()) {
  responseObserver.onError(Status.CANCELLED.withDescription("Cancelled by client").asRuntimeException());
  return;
}
```
Is it useful for a server to continue with the request, when you know your client has reached their deadline? It depends. If the response can be cached in the server, it can be worth processing and caching it; particularly if it's resource heavy, and costs you money for each request. This will make future requests faster as the result will already be available.
## Adjusting deadlines
What if you set a deadline but a new release or server version causes a bad regression? The deadline could be too small, resulting in all your requests timing out with `DEADLINE_EXCEEDED`, or too large and your user tail latency is now massive. You can use a flag to set and adjust the deadline.
### C++
```cpp
#include <gflags/gflags.h>

DEFINE_int32(deadline_ms, 20*1000, "Deadline in milliseconds.");

ClientContext context;
std::chrono::system_clock::time_point deadline =
    std::chrono::system_clock::now() + std::chrono::milliseconds(FLAGS_deadline_ms);
context.set_deadline(deadline);
```
### Go
```go
var deadlineMs = flag.Int("deadline_ms", 20*1000, "Default deadline in milliseconds.")

ctx, cancel := context.WithTimeout(ctx, time.Duration(*deadlineMs)*time.Millisecond)
defer cancel()
```
### Java
```java
@Option(name="--deadline_ms", usage="Deadline in milliseconds.")
private int deadlineMs = 20*1000;
response = blockingStub.withDeadlineAfter(deadlineMs, TimeUnit.MILLISECONDS).sayHello(request);
```
Now the deadline can be adjusted to wait longer to avoid failing, without the need to cherry-pick a release with a different hard coded deadline. This lets you mitigate the issue for users until the regression can be debugged and resolved.
---
author: Carl Mastrangelo
author-link: https://github.com/carl-mastrangelo
company: Google
company-link: https://www.google.com
date: "2018-03-06T00:00:00Z"
published: true
title: So You Want to Optimize gRPC - Part 1
url: blog/optimizing-grpc-part-1
---
A common question with gRPC is how to make it fast. The gRPC library offers users access to high
performance RPCs, but it isn't always clear how to achieve this. Because this question comes up
often, I thought I would show my thought process when tuning programs.
<!--more-->
## Setup
Consider a basic key-value service that is used by multiple other programs. The service needs to
be safe for concurrent access in case multiple updates happen at the same time. It needs to be
able to scale up to use the available hardware. Lastly, it needs to be fast. gRPC is a perfect
fit for this type of service; let's look at the best way to implement it.
For this blog post, I have written an example
[client and server](https://github.com/carl-mastrangelo/kvstore) using gRPC Java. The program is
split into three main classes, and a protobuf file describing the API:
* [KvClient](https://github.com/carl-mastrangelo/kvstore/blob/01-start/src/main/java/io/grpc/examples/KvClient.java)
is a simulated user of the key value system. It randomly creates, retrieves, updates,
and deletes keys and values. The size of keys and values it uses is also randomly decided
using an [exponential distribution](https://en.wikipedia.org/wiki/Exponential_distribution).
* [KvService](https://github.com/carl-mastrangelo/kvstore/blob/01-start/src/main/java/io/grpc/examples/KvService.java)
is an implementation of the key value service. It is installed by the gRPC Server to handle
the requests issued by the client. To simulate storing the keys and values on disk, it adds
short sleeps while handling the request. Reads and writes will experience a 10 and 50
millisecond delay to make the example act more like a persistent database.
* [KvRunner](https://github.com/carl-mastrangelo/kvstore/blob/01-start/src/main/java/io/grpc/examples/KvRunner.java)
orchestrates the interaction between the client and the server. It is the main entry point,
starting both the client and server in process, and waiting for the client to execute its
work. The runner does work for 60 seconds and then records how many RPCs were completed.
* [kvstore.proto](https://github.com/carl-mastrangelo/kvstore/blob/01-start/src/main/proto/kvstore.proto)
is the protocol buffer definition of our service. It describes exactly what clients can expect
from the service. For the sake of simplicity, we will use Create, Retrieve, Update, and Delete
as the operations (commonly known as CRUD). These operations work with keys and values made up
of arbitrary bytes. While they are somewhat REST like, we reserve the right to diverge and
add more complex operations in the future.
[Protocol buffers (protos)](https://developers.google.com/protocol-buffers/) aren't required to use
gRPC, but they are a very convenient way to define service interfaces and generate client and server
code. The generated code acts as glue code between the application logic and the core gRPC
library. We refer to the code called by a gRPC client as the _stub_.
## Starting Point
### Client
Now that we know what the program _should_ do, we can start looking at how the program performs.
As mentioned above, the client makes random RPCs. For example, here is the code that makes the
[creation](https://github.com/carl-mastrangelo/kvstore/blob/f422b1b6e7c69f8c07f96ed4ddba64757242352c/src/main/java/io/grpc/examples/KvClient.java#L80)
request:
```java
private void doCreate(KeyValueServiceBlockingStub stub) {
  ByteString key = createRandomKey();
  try {
    CreateResponse res = stub.create(
        CreateRequest.newBuilder()
            .setKey(key)
            .setValue(randomBytes(MEAN_VALUE_SIZE))
            .build());
    if (!res.equals(CreateResponse.getDefaultInstance())) {
      throw new RuntimeException("Invalid response");
    }
  } catch (StatusRuntimeException e) {
    if (e.getStatus().getCode() == Code.ALREADY_EXISTS) {
      knownKeys.remove(key);
      logger.log(Level.INFO, "Key already existed", e);
    } else {
      throw e;
    }
  }
}
```
A random key is created, along with a random value. The request is sent to the server, and the
client waits for the response. When the response is returned, the code checks that it is as
expected, and if not, throws an exception. While the keys are chosen randomly, they need to be
unique, so we need to make sure that each key isn't already in use. To address this, the code
keeps track of keys it has created, so as not to create the same key twice. However, it's
possible that another client already created a particular key, so we log it and move on.
Otherwise, an exception is thrown.
We use the **blocking** gRPC API here, which issues a request and waits for a response.
This is the simplest gRPC stub, but it blocks the thread while running. This means that at most
**one** RPC can be in progress at a time from the client's point of view.
### Server
On the server side, the request is received by the
[service handler](https://github.com/carl-mastrangelo/kvstore/blob/f422b1b6e7c69f8c07f96ed4ddba64757242352c/src/main/java/io/grpc/examples/KvService.java#L34):
```java
private final Map<ByteBuffer, ByteBuffer> store = new HashMap<>();

@Override
public synchronized void create(
    CreateRequest request, StreamObserver<CreateResponse> responseObserver) {
  ByteBuffer key = request.getKey().asReadOnlyByteBuffer();
  ByteBuffer value = request.getValue().asReadOnlyByteBuffer();
  simulateWork(WRITE_DELAY_MILLIS);
  if (store.putIfAbsent(key, value) == null) {
    responseObserver.onNext(CreateResponse.getDefaultInstance());
    responseObserver.onCompleted();
    return;
  }
  responseObserver.onError(Status.ALREADY_EXISTS.asRuntimeException());
}
```
The service extracts the key and value as `ByteBuffer`s from the request. It acquires the lock
on the service itself to make sure concurrent requests don't corrupt the storage. After
simulating the disk access of a write, it stores it in the `Map` of keys to values.
Unlike the client code, the service handler is **non-blocking**, meaning it doesn't return a
value like a function call would. Instead, it invokes `onNext()` on the `responseObserver` to
send the response back to the client. Note that this call is also non-blocking, meaning that
the message may not yet have been sent. To indicate we are done with the message, `onCompleted()`
is called.
### Performance
Since the code is safe and correct, let's see how it performs. For my measurement I'm using my
Ubuntu system with a 12 core processor and 32 GB of memory. Let's build and run the code:
```sh
$ ./gradlew installDist
$ time ./build/install/kvstore/bin/kvstore
Feb 26, 2018 1:10:07 PM io.grpc.examples.KvRunner runClient
INFO: Starting
Feb 26, 2018 1:11:07 PM io.grpc.examples.KvRunner runClient
INFO: Did 16.55 RPCs/s
real 1m0.927s
user 0m10.688s
sys 0m1.456s
```
Yikes! For such a powerful machine, it can only do about 16 RPCs per second. It hardly used any
of our CPU, and we don't know how much memory it was using. We need to figure out why it's so
slow.
## Optimization
### Analysis
Let's understand what the program is doing before we make any changes. When optimizing, we need
to know where the code is spending its time in order to know what we can optimize. At this early
stage, we don't need profiling tools yet; we can simply reason about the program.
The client is started and serially issues RPCs for about a minute. Each iteration, it [randomly
decides](https://github.com/carl-mastrangelo/kvstore/blob/f422b1b6e7c69f8c07f96ed4ddba64757242352c/src/main/java/io/grpc/examples/KvClient.java#L49)
what operation to do:
```java
void doClientWork(AtomicBoolean done) {
  Random random = new Random();
  KeyValueServiceBlockingStub stub = KeyValueServiceGrpc.newBlockingStub(channel);

  while (!done.get()) {
    // Pick a random CRUD action to take.
    int command = random.nextInt(4);
    if (command == 0) {
      doCreate(stub);
      continue;
    }
    /* ... */
    rpcCount++;
  }
}
```
This means that **at most one RPC can be active at any time**. Each RPC has to wait for the
previous one to complete. And how long does each RPC take to complete? From reading the server
code, most of the operations are doing a write which takes about 50 milliseconds. At top
efficiency, the most operations this code can do per second is about 20:
20 queries = 1000ms / (50 ms / query)
Our code can do about 16 queries in a second, so that seems about right. We can spot check this
assumption by looking at the output of the `time` command used to run the code. The server goes
to sleep when running queries in the
[`simulateWork`](https://github.com/carl-mastrangelo/kvstore/blob/f422b1b6e7c69f8c07f96ed4ddba64757242352c/src/main/java/io/grpc/examples/KvService.java#L88)
method. This implies that the program should be mostly idle while waiting for the RPCs to
complete.
We can confirm this is the case by looking at the `real` and `user` times of the command above.
They say that the amount of _wall clock_ time was 1 minute, while the amount of _cpu_ time
was 10 seconds. My powerful, multicore CPU was only busy 16% of the time. Thus, if we could
get the program to do more work during that time, it seems like we could complete more RPCs.
### Hypothesis
Now we can state clearly what we think is the problem, and propose a solution. One way to speed
up programs is to make sure the CPU is not idling. To do this, we issue work concurrently.
In gRPC Java, there are three types of stubs: blocking, non-blocking, and listenable future. We
have already seen the blocking stub in the client, and the non-blocking stub in the server. The
listenable future API is a compromise between the two, offering both blocking and non-blocking
like behavior. As long as we don't block a thread waiting for work to complete, we can start
new RPCs without waiting for the old ones to complete.
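Before touching the real client, the expected win can be sketched with plain JDK code. The following is a hedged, self-contained sketch (no gRPC; the 50 ms sleep stands in for the simulated write delay, and all names here are mine) comparing serial calls with calls issued concurrently:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ConcurrencyDemo {
  // Stands in for one RPC whose handler sleeps ~50 ms, like KvService's write delay.
  static void simulatedRpc() {
    try {
      Thread.sleep(50);
    } catch (InterruptedException e) {
      throw new RuntimeException(e);
    }
  }

  // Blocking-stub style: each call waits for the previous one to finish.
  static long timeSerialMs(int rpcs) {
    long start = System.nanoTime();
    for (int i = 0; i < rpcs; i++) {
      simulatedRpc();
    }
    return (System.nanoTime() - start) / 1_000_000;
  }

  // Future-stub style: all calls are in flight at the same time.
  static long timeConcurrentMs(int rpcs) {
    ExecutorService pool = Executors.newFixedThreadPool(rpcs);
    long start = System.nanoTime();
    CompletableFuture<?>[] futures = IntStream.range(0, rpcs)
        .mapToObj(i -> CompletableFuture.runAsync(ConcurrencyDemo::simulatedRpc, pool))
        .toArray(CompletableFuture[]::new);
    CompletableFuture.allOf(futures).join();
    long elapsed = (System.nanoTime() - start) / 1_000_000;
    pool.shutdown();
    return elapsed;
  }

  public static void main(String[] args) {
    System.out.println("serial:     " + timeSerialMs(10) + " ms");
    System.out.println("concurrent: " + timeConcurrentMs(10) + " ms");
  }
}
```

With ten 50 ms calls, the serial version takes roughly 500 ms while the concurrent version finishes in little more than 50 ms; the same idea applies to real RPCs issued through the listenable future stub.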
### Experiment
To test our hypothesis, let's modify the client code to use the listenable future API. This
means that we need to think more about concurrency in our code. For example, when keeping track
of known keys client-side, we need to safely read, modify, and write the keys. We also need to
make sure that in case of an error, we stop making new RPCs (proper error handling will be covered
in a future post). Lastly, we need to update the number of RPCs made concurrently, since the
update could happen in another thread.
Making all these changes increases the complexity of the code. This is a trade off you will need
to consider when optimizing your code. In general, code simplicity is at odds with optimization.
Java is not known for being terse. That said, the code below is still readable, and program flow
is still roughly from top to bottom in the function. Here is the
[`doCreate()`](https://github.com/carl-mastrangelo/kvstore/blob/f0113912c01ac4ea48a80bb7a4736ddcb3f21e24/src/main/java/io/grpc/examples/KvClient.java#L92)
method revised:
```java
private void doCreate(KeyValueServiceFutureStub stub, AtomicReference<Throwable> error) {
  ByteString key = createRandomKey();
  ListenableFuture<CreateResponse> res = stub.create(
      CreateRequest.newBuilder()
          .setKey(key)
          .setValue(randomBytes(MEAN_VALUE_SIZE))
          .build());
  res.addListener(() -> rpcCount.incrementAndGet(), MoreExecutors.directExecutor());
  Futures.addCallback(res, new FutureCallback<CreateResponse>() {
    @Override
    public void onSuccess(CreateResponse result) {
      if (!result.equals(CreateResponse.getDefaultInstance())) {
        error.compareAndSet(null, new RuntimeException("Invalid response"));
      }
      synchronized (knownKeys) {
        knownKeys.add(key);
      }
    }

    @Override
    public void onFailure(Throwable t) {
      Status status = Status.fromThrowable(t);
      if (status.getCode() == Code.ALREADY_EXISTS) {
        synchronized (knownKeys) {
          knownKeys.remove(key);
        }
        logger.log(Level.INFO, "Key already existed", t);
      } else {
        error.compareAndSet(null, t);
      }
    }
  });
}
```
The stub has been modified to be a `KeyValueServiceFutureStub`, which produces a `Future` when
called instead of the response itself. gRPC Java uses an extension of this called `ListenableFuture`,
which allows adding a callback when the future completes. For the sake of this program, we are
not as concerned with getting the response. Instead we care more if the RPC succeeded or not.
With that in mind, the code mainly checks for errors rather than processing the response.
The first change made is how the number of RPCs is recorded. Instead of incrementing the counter
in the main loop, we increment it when the RPC completes.
Next, we create a new callback object
for each RPC which handles both the success and failure cases. Because `doCreate()` will already
have completed by the time the RPC callback is invoked, we need a way to propagate errors other than
by throwing. Instead, we try to update a reference atomically. The main loop will occasionally
check if an error has occurred and stop if there is a problem.
Lastly, the code is careful to only add a key to `knownKeys` when the RPC is actually complete,
and only remove it when known to have failed. We synchronize on the variable to make sure two
threads don't conflict. Note: although the access to `knownKeys` is threadsafe, there are still
[race conditions](https://en.wikipedia.org/wiki/Race_condition). It is possible that one thread
could read from `knownKeys`, a second thread delete from `knownKeys`, and then the first thread
issue an RPC using the first key. Synchronizing on the keys only ensures that it is consistent,
not that it is correct. Fixing this properly is outside of the scope of this post, so instead we
just log the event and move on. You will see a few such log statements if you run this program.
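The race can be made concrete with a small, plain-JDK sketch. This is an illustration of my own (the interleaving is forced in a single thread for determinism; `knownKeys` merely mirrors the name in the client code):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class CheckThenActRace {
  public static void main(String[] args) {
    Set<String> knownKeys = Collections.synchronizedSet(new HashSet<>());
    knownKeys.add("key1");

    // "Thread A": picks a key, holding the lock only for the read itself.
    String picked;
    synchronized (knownKeys) {
      picked = knownKeys.iterator().next();
    }

    // "Thread B" (interleaved here deterministically): deletes that same key.
    knownKeys.remove(picked);

    // Thread A now issues an RPC for a key that no longer exists: each
    // individual operation was thread safe, but the check-then-act sequence was not.
    System.out.println("issuing RPC for " + picked
        + ", still known: " + knownKeys.contains(picked));
  }
}
```

Each access is consistent, yet the compound read-then-use sequence still acts on stale information, which is exactly the logged race described above.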
### Running the Code
If you start up this program and run it, you'll notice that it doesn't work:
```sh
WARNING: An exception was thrown by io.grpc.netty.NettyClientStream$Sink$1.operationComplete()
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
...
```
What?! Why would I show you code that fails? The reason is that in real life making a change often
doesn't work on the first try. In this case, the program ran out of memory. Odd things begin to
happen when a program runs out of memory. Often, the root cause is hard to find, and red herrings
abound. A confusing error message says
> unable to create new native thread
even though we didn't create any new threads in our code. Experience, more than debugging, is what
helps in fixing these problems. Since I have debugged many OOMs, I happen to know that Java tells
us about the straw that broke the camel's back. Our program started using far more memory than
before, but the final allocation that failed happened, by chance, to be in thread creation.
So what happened? _There was no pushback to starting new RPCs._ In the blocking version, a new
RPC couldn't start until the last one completed. While slow, it also prevented us from creating
tons of RPCs that we didn't have memory for. We need to account for this in the listenable
future version.
To solve this, we can apply a self-imposed limit on the number of active RPCs. Before starting a
new RPC, we will try to acquire a permit. If we get one, the RPC can start. If not, we will wait
until one is available. When an RPC completes (either in success or failure), we return the
permit. To [accomplish](https://github.com/carl-mastrangelo/kvstore/blob/02-future-client/src/main/java/io/grpc/examples/KvClient.java#L94)
this, we will use a `Semaphore`:
```java
private final Semaphore limiter = new Semaphore(100);

private void doCreate(KeyValueServiceFutureStub stub, AtomicReference<Throwable> error)
    throws InterruptedException {
  limiter.acquire();
  ByteString key = createRandomKey();
  ListenableFuture<CreateResponse> res = stub.create(
      CreateRequest.newBuilder()
          .setKey(key)
          .setValue(randomBytes(MEAN_VALUE_SIZE))
          .build());
  res.addListener(() -> {
    rpcCount.incrementAndGet();
    limiter.release();
  }, MoreExecutors.directExecutor());
  /* ... */
}
```
Now the code runs successfully, and doesn't run out of memory.
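The same permit pattern can be exercised without gRPC at all. Below is a hedged, plain-JDK sketch (the class and method names are mine, not from the kvstore repo) that bounds in-flight asynchronous work with a `Semaphore` and records the highest concurrency it ever observed:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class RpcLimiter {
  private final Semaphore limiter;
  private final ExecutorService pool = Executors.newCachedThreadPool();
  private final AtomicInteger active = new AtomicInteger();
  private final AtomicInteger maxActive = new AtomicInteger();

  public RpcLimiter(int maxInFlight) {
    this.limiter = new Semaphore(maxInFlight);
  }

  // Blocks until a permit is free, then starts a simulated RPC.
  public CompletableFuture<Void> startRpc() throws InterruptedException {
    limiter.acquire();
    return CompletableFuture.runAsync(() -> {
      int now = active.incrementAndGet();
      maxActive.accumulateAndGet(now, Math::max);
      try {
        Thread.sleep(5); // stand-in for the RPC's round trip
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      active.decrementAndGet();
    }, pool).whenComplete((v, t) -> limiter.release()); // return the permit on success or failure
  }

  public int maxObservedConcurrency() {
    return maxActive.get();
  }

  public void shutdown() {
    pool.shutdown();
  }

  public static void main(String[] args) throws InterruptedException {
    RpcLimiter limiter = new RpcLimiter(10);
    List<CompletableFuture<Void>> inFlight = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
      inFlight.add(limiter.startRpc());
    }
    CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
    limiter.shutdown();
    System.out.println("max in flight: " + limiter.maxObservedConcurrency());
  }
}
```

No matter how many tasks are submitted, the caller blocks in `acquire()` once the limit is reached, so at most ten tasks are ever active at once.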
### Results
Building and running the code again looks a lot better:
```sh
$ ./gradlew installDist
$ time ./build/install/kvstore/bin/kvstore
Feb 26, 2018 2:40:47 PM io.grpc.examples.KvRunner runClient
INFO: Starting
Feb 26, 2018 2:41:47 PM io.grpc.examples.KvRunner runClient
INFO: Did 24.283 RPCs/s
real 1m0.923s
user 0m12.772s
sys 0m1.572s
```
Our code does **46%** more RPCs per second than previously. We can also see that we used about 20%
more CPU than previously. As we can see our hypothesis turned out to be correct and the fix
worked. All this happened without making any changes to the server. Also, we were able to
measure without using any special profilers or tracers.
Do the numbers make sense? We expect to issue each mutation (create, update, and delete) RPC with
about 1/4 probability. Reads are also issued 1/4 of the time, but don't take as long. The
mean RPC time should be about the weighted average RPC time:
```
.25 * 50ms (create)
.25 * 10ms (retrieve)
.25 * 50ms (update)
+.25 * 50ms (delete)
------------
40ms
```
At 40ms on average per RPC, we would expect the number of RPCs per second to be:
25 queries = 1000ms / (40 ms / query)
That's approximately what we see with the new code. The server is still serially handling
requests, so it seems like we have more work to do in the future. But for now, our optimizations
seem to have worked.
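The back-of-the-envelope check above is easy to reproduce as code; a tiny sketch:

```java
public class ExpectedQps {
  // Weighted mean RPC latency using the simulated delays from the post:
  // create/update/delete ~50 ms, retrieve ~10 ms, each chosen 1/4 of the time.
  static double meanLatencyMs() {
    return 0.25 * 50 + 0.25 * 10 + 0.25 * 50 + 0.25 * 50;
  }

  static double expectedQps() {
    return 1000.0 / meanLatencyMs();
  }

  public static void main(String[] args) {
    // Prints: mean = 40.0 ms, expected = 25.0 QPS
    System.out.println("mean = " + meanLatencyMs() + " ms, expected = " + expectedQps() + " QPS");
  }
}
```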
## Conclusion
There are a lot of opportunities to optimize your gRPC code. To take advantage of these, you
need to understand what your code is doing, and what your code is supposed to do. This post shows
the very basics of how to approach and think about optimization. Always make sure to measure
before and after your changes, and use these measurements to guide your optimizations.
In [Part 2](/blog/optimizing-grpc-part-2), we will continue optimizing the server part of the code.
---
author: Carl Mastrangelo
author-link: https://carlmastrangelo.com/
company: Google
company-link: https://www.google.com
date: "2018-04-16T00:00:00Z"
published: true
title: So You Want to Optimize gRPC - Part 2
url: blog/optimizing-grpc-part-2
---
How fast is gRPC? Pretty fast if you understand how modern clients and servers are built. In
[part 1](/blog/optimizing-grpc-part-1), I showed how to get an easy **60%** improvement. In this
post I show how to get a **10000%** improvement.
<!--more-->
## Setup
As in [part 1](/blog/optimizing-grpc-part-1), we will start with an existing, Java based,
key-value service. The service will offer concurrent access for creating, reading, updating,
and deleting keys and values. All the code can be seen
[here](https://github.com/carl-mastrangelo/kvstore/tree/03-nonblocking-server) if you want to try
it out.
## Server Concurrency
Let's look at the [KvService](https://github.com/carl-mastrangelo/kvstore/blob/f422b1b6e7c69f8c07f96ed4ddba64757242352c/src/main/java/io/grpc/examples/KvService.java)
class. This service handles the RPCs sent by the client, making sure that none of them
accidentally corrupt the state of storage. To ensure this, the service uses the `synchronized`
keyword so that only one RPC is active at a time:
```java
private final Map<ByteBuffer, ByteBuffer> store = new HashMap<>();

@Override
public synchronized void create(
    CreateRequest request, StreamObserver<CreateResponse> responseObserver) {
  ByteBuffer key = request.getKey().asReadOnlyByteBuffer();
  ByteBuffer value = request.getValue().asReadOnlyByteBuffer();
  simulateWork(WRITE_DELAY_MILLIS);
  if (store.putIfAbsent(key, value) == null) {
    responseObserver.onNext(CreateResponse.getDefaultInstance());
    responseObserver.onCompleted();
    return;
  }
  responseObserver.onError(Status.ALREADY_EXISTS.asRuntimeException());
}
```
While this code is thread safe, it comes at a high price: only one RPC can ever be active! We
need some way of allowing multiple operations to happen safely at the same time. Otherwise,
the program can't take advantage of all the available processors.
### Breaking the Lock
To solve this, we need to know a little more about the _semantics_ of our RPCs. The more we know
about how the RPCs are supposed to work, the more optimizations we can make. For a key-value
service, we notice that _operations to different keys don't interfere with each other_. When
we update key 'foo', it has no bearing on the value stored for key 'bar'. But, our server is
written such that operations to any key must be synchronized with respect to each other. If we
could make operations to different keys happen concurrently, our server could handle a lot more
load.
With the idea in place, we need to figure out how to modify the server. The
`synchronized` keyword causes Java to acquire a lock on `this`, which is the instance of
`KvService`. The lock is acquired when the `create` method is entered, and released on return.
The reason we need synchronization is to protect the `store` Map. Since it is implemented as a
[`HashMap`](https://en.wikipedia.org/wiki/Hash_table), modifications to it change the internal
arrays. Because the internal state of the `HashMap` will be corrupted if not properly
synchronized, we can't just remove the synchronization on the method.
However, Java offers a solution here: `ConcurrentHashMap`. This class offers the ability to
safely access the contents of the map concurrently. For example, in our usage we want to check
if a key is present. If not present, we want to add it, else we want to return an error. The
`putIfAbsent` method atomically checks if a value is present, adds it if not, and tells us if
it succeeded.
Concurrent maps provide stronger guarantees about the safety of `putIfAbsent`, so we can swap the
`HashMap` for a `ConcurrentHashMap` and remove `synchronized`:
```java
private final ConcurrentMap<ByteBuffer, ByteBuffer> store = new ConcurrentHashMap<>();

@Override
public void create(
    CreateRequest request, StreamObserver<CreateResponse> responseObserver) {
  ByteBuffer key = request.getKey().asReadOnlyByteBuffer();
  ByteBuffer value = request.getValue().asReadOnlyByteBuffer();
  simulateWork(WRITE_DELAY_MILLIS);
  if (store.putIfAbsent(key, value) == null) {
    responseObserver.onNext(CreateResponse.getDefaultInstance());
    responseObserver.onCompleted();
    return;
  }
  responseObserver.onError(Status.ALREADY_EXISTS.asRuntimeException());
}
```
```
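The atomic check-and-insert behavior is easy to verify in isolation. A minimal, self-contained sketch with plain strings (not the service's `ByteBuffer` keys):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
  public static void main(String[] args) {
    ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // First insert: no previous mapping, so putIfAbsent stores v1 and returns null.
    String first = store.putIfAbsent("foo", "v1");

    // Second insert for the same key: rejected atomically, the existing value comes back.
    String second = store.putIfAbsent("foo", "v2");

    // Prints: first=null second=v1 stored=v1
    System.out.println("first=" + first + " second=" + second + " stored=" + store.get("foo"));
  }
}
```

A `null` return means "created"; a non-null return maps naturally onto the service's `ALREADY_EXISTS` error.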
### If at First You Don't Succeed
Updating `create` was pretty easy. Doing the same for `retrieve` and `delete` is easy too.
However, the `update` method is a little trickier. Let's take a look at what it's doing:
```java
@Override
public synchronized void update(
    UpdateRequest request, StreamObserver<UpdateResponse> responseObserver) {
  ByteBuffer key = request.getKey().asReadOnlyByteBuffer();
  ByteBuffer newValue = request.getValue().asReadOnlyByteBuffer();
  simulateWork(WRITE_DELAY_MILLIS);
  ByteBuffer oldValue = store.get(key);
  if (oldValue == null) {
    responseObserver.onError(Status.NOT_FOUND.asRuntimeException());
    return;
  }
  store.replace(key, oldValue, newValue);
  responseObserver.onNext(UpdateResponse.getDefaultInstance());
  responseObserver.onCompleted();
}
```
Updating a key to a new value needs two interactions with the `store`:
1. Check to see if the key exists at all.
2. Update the previous value to the new value.
Unfortunately `ConcurrentMap` doesn't have a straightforward method to do this. Since we may not
be the only ones modifying the map, we need to handle the possibility that our assumptions
have changed. We read the old value out, but by the time we replace it, it may have been deleted.
To reconcile this, let's retry if `replace` fails. It returns true if the replace
was successful. (`ConcurrentMap` asserts that the operations will not corrupt the internal
structure, but doesn't say that they will succeed!) We will use a do-while loop:
```java
@Override
public void update(
    UpdateRequest request, StreamObserver<UpdateResponse> responseObserver) {
  // ...
  ByteBuffer oldValue;
  do {
    oldValue = store.get(key);
    if (oldValue == null) {
      responseObserver.onError(Status.NOT_FOUND.asRuntimeException());
      return;
    }
  } while (!store.replace(key, oldValue, newValue));
  responseObserver.onNext(UpdateResponse.getDefaultInstance());
  responseObserver.onCompleted();
}
```
The code only fails if it ever sees `null`, meaning the key is missing; a non-null previous value
never causes a permanent failure. One thing to note is that if _another_ RPC modifies the value
between the `store.get()` call and the `store.replace()` call, the replace will fail. This is a
non-fatal error for us, so we just try again. Once the new value has successfully been put in,
the service can respond back to the user.
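The retry loop can be exercised in isolation. This is a hedged sketch of the same pattern with plain strings instead of `ByteBuffer`s and a boolean return instead of a `StreamObserver`:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReplaceRetryDemo {
  // Retries the read-modify-write until no other writer changed the value
  // between our get() and replace(). Returns false if the key is missing.
  static boolean update(ConcurrentMap<String, String> store, String key, String newValue) {
    String oldValue;
    do {
      oldValue = store.get(key);
      if (oldValue == null) {
        return false; // would map to NOT_FOUND in the service
      }
    } while (!store.replace(key, oldValue, newValue));
    return true;
  }

  public static void main(String[] args) {
    ConcurrentMap<String, String> store = new ConcurrentHashMap<>();
    store.put("k", "v1");

    // A three-argument replace with a stale expected value fails...
    boolean stale = store.replace("k", "not-the-current-value", "v2");

    // ...while the retry loop re-reads the current value and succeeds.
    boolean updated = update(store, "k", "v2");

    // Prints: stale=false updated=true value=v2
    System.out.println("stale=" + stale + " updated=" + updated + " value=" + store.get("k"));
  }
}
```

The three-argument `replace` only succeeds when the map still holds the expected old value, which is exactly the compare-and-swap behavior the do-while loop relies on.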
There is one other possibility that could happen: two RPCs could update the same value and
overwrite each other's work. While this may be okay for some applications, it would not be
suitable for APIs that provide transactionality. It is out of scope for this post to show how to
fix this, but be aware it can happen.
## Measuring the Performance
In the last post, we modified the client to be asynchronous and use the gRPC ListenableFuture API.
To avoid running out of memory, the client was modified to have at most **100** active RPCs at a
time. As we now see from the server code, performance was bottlenecked on acquiring locks.
Since we have removed those, we expect to see a 100x improvement. The same amount of work is done
per RPC, but a lot more are happening at the same time. Let's see if our hypothesis holds:
Before:
```sh
$ ./gradlew installDist
$ time ./build/install/kvstore/bin/kvstore
Apr 16, 2018 10:38:42 AM io.grpc.examples.KvRunner runClient
INFO: Did 24.067 RPCs/s
real 1m0.886s
user 0m9.340s
sys 0m1.660s
```
After:
```sh
Apr 16, 2018 10:36:48 AM io.grpc.examples.KvRunner runClient
INFO: Did 2,449.8 RPCs/s
real 1m0.968s
user 0m52.184s
sys 0m20.692s
```
Wow! From 24 RPCs per second to 2,400 RPCs per second. And we didn't have to change our API or
our client. This is why understanding your code and API semantics is important. By exploiting the
properties of the key-value API, namely the independence of operations on different keys, the code
is now much faster.
One noteworthy artifact of this code is the `user` timing in the results. Previously the user time
was only 9 seconds, meaning that the CPU was active only 9 of the 60 seconds the code was running.
Afterwards, the usage went up by more than 5x to 52 seconds. The reason is that more CPU cores are
active. The `KvServer` is simulating work by sleeping for a few milliseconds. In a real
application, it would be doing useful work and not have such a dramatic change. Rather than
scaling per the number of RPCs, it would scale per the number of cores. Thus, if your machine had
12 cores, you would expect to see a 12x improvement. Still not bad though!
### More Errors
If you run this code yourself, you will see a lot more log spam of the form:
```sh
Apr 16, 2018 10:38:40 AM io.grpc.examples.KvClient$3 onFailure
INFO: Key not found
io.grpc.StatusRuntimeException: NOT_FOUND
```
The reason is that the new version of the code makes API level race conditions more apparent.
With 100 times as many RPCs happening, the chance of updates and deletes colliding with each other
is more likely. To solve this we will need to modify the API definition. Stay tuned for the next
post showing how to fix this.
## Conclusion
There are a lot of opportunities to optimize your gRPC code. To take advantage of these, you
need to understand what your code is doing. This post shows how to convert a lock-based service into
a low-contention, lock-free service. Always make sure to measure before and after your changes.
In Part 3, we will optimize the code even further. 2,400 RPC/s is just the beginning!

---
author: Spencer Fang
author-link: https://github.com/zpencer
company: Google
company-link: https://www.google.com
date: "2018-06-19T00:00:00Z"
published: true
title: gRPC ❤ Kotlin
url: blog/kotlin-gradle-projects
---
Did you know that gRPC Java now has out-of-the-box support for Kotlin projects built with Gradle? [Kotlin](https://kotlinlang.org/) is a modern, statically typed language developed by JetBrains that targets the JVM and Android. It is generally easy for Kotlin programs to interoperate with existing Java libraries. To improve this experience further, we have added support to the [protobuf-gradle-plugin](https://github.com/google/protobuf-gradle-plugin/releases) so that the generated Java libraries are automatically picked up by Kotlin. You can now add the protobuf-gradle-plugin to your Kotlin project, and use gRPC just like you would with a typical Java project.
<!--more-->
The following examples show you how to configure a project for a JVM application and an Android application using Kotlin.
### Kotlin gRPC client and server
The full example can be found [here](https://github.com/grpc/grpc-java/tree/master/examples/example-kotlin).
Configuring gRPC for a Kotlin project is the same as configuring it for a Java project.
Below is a snippet of the example project's `build.gradle` highlighting some Kotlin related sections:
```groovy
apply plugin: 'kotlin'
apply plugin: 'com.google.protobuf'
// Generate IntelliJ IDEA's .idea & .iml project files.
// protobuf-gradle-plugin automatically registers *.proto and the gen output files
// to IntelliJ as sources.
// For best results, install the Protobuf and Kotlin plugins for IntelliJ.
apply plugin: 'idea'
buildscript {
ext.kotlin_version = '1.2.21'
repositories {
mavenCentral()
}
dependencies {
classpath 'com.google.protobuf:protobuf-gradle-plugin:0.8.5'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
dependencies {
compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
// The rest of the project's deps are added below; refer to the example URL
}
// The standard protobuf block, same as normal gRPC Java projects
protobuf {
protoc { artifact = 'com.google.protobuf:protoc:3.5.1-1' }
plugins {
grpc { artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}" }
}
generateProtoTasks {
all()*.plugins { grpc {} }
}
}
```
Now Kotlin source files can use the proto generated messages and gRPC stubs. By default, Kotlin sources should be placed in `src/main/kotlin` and `src/test/kotlin`. If needed, run `./gradlew generateProto generateTestProto` and refresh IntelliJ for the generated sources to appear in the IDE. Finally, run `./gradlew installDist` to build the project, and use `./build/install/examples/bin/hello-world-client` or `./build/install/examples/bin/hello-world-server` to run the example.
You can read more about configuring Kotlin [here](https://kotlinlang.org/docs/reference/using-gradle.html).
### Kotlin Android gRPC application
The full example can be found [here](https://github.com/grpc/grpc-java/tree/master/examples/example-kotlin/android/helloworld).
Configuring gRPC for a Kotlin Android project is the same as configuring it for a normal Android project.
In the top level `build.gradle` file:
```groovy
buildscript {
ext.kotlin_version = '1.2.21'
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
classpath "com.google.protobuf:protobuf-gradle-plugin:0.8.5"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
allprojects {
repositories {
google()
jcenter()
}
}
```
And in the app module's `build.gradle` file:
```groovy
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.google.protobuf'
repositories {
mavenCentral()
}
dependencies {
compile "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
// refer to full example for remaining deps
}
protobuf {
// The normal gRPC configuration for Android goes here
}
android {
// Android Studio 3.1 does not automatically pick up 'src/main/kotlin' as source files
sourceSets {
main.java.srcDirs += 'src/main/kotlin'
}
}
```
Just like the non-Android project, run `./gradlew generateProto generateTestProto` to run the proto code generator and `./gradlew build` to build the project.
Finally, test out the Android app by opening the project in Android Studio and selecting `Run > Run 'app'`.
<img src="/img/kotlin-project-android-app.png" alt="Kotlin Android app example" style="max-width: 404px">
We are excited about improving the gRPC experience for Kotlin developers. Please add enhancement ideas or bugs to the [protobuf-gradle-plugin issue tracker](https://github.com/google/protobuf-gradle-plugin/issues) or the [grpc-java issue tracker](https://github.com/grpc/grpc-java/issues).

---
author: Dapeng Zhang
author-link: https://github.com/dapengzhang0
company: Google
company-link: https://www.google.com
date: "2018-06-26T00:00:00Z"
published: true
title: Gracefully clean up in gRPC JUnit tests
url: blog/gracefully_clean_up_in_grpc_junit_tests
---
It is best practice to always clean up gRPC resources such as client channels, servers, and previously attached Contexts whenever they are no longer needed.
This is even true for JUnit tests, because otherwise leaked resources may not only linger on your machine forever, but also interfere with subsequent tests. A not-so-bad case is that subsequent tests can't pass because of a leaked resource from the previous test. The worst case is that some subsequent tests pass that wouldn't have passed at all if the previously passed test had not leaked a resource.
<!--more-->
So cleanup, cleanup, cleanup... and fail the test if any cleanup is not successful.
A typical example is
```java
public class MyTest {
private Server server;
private ManagedChannel channel;
...
@After
public void tearDown() throws InterruptedException {
// assume channel and server are not null
channel.shutdownNow();
server.shutdownNow();
// fail the test if cleanup is not successful
assert channel.awaitTermination(5, TimeUnit.SECONDS) : "channel failed to shutdown";
assert server.awaitTermination(5, TimeUnit.SECONDS) : "server failed to shutdown";
}
...
}
```
or to be more graceful
```java
public class MyTest {
private Server server;
private ManagedChannel channel;
...
@After
public void tearDown() throws InterruptedException {
// assume channel and server are not null
channel.shutdown();
server.shutdown();
// fail the test if cannot gracefully shutdown
try {
assert channel.awaitTermination(5, TimeUnit.SECONDS) : "channel cannot be gracefully shutdown";
assert server.awaitTermination(5, TimeUnit.SECONDS) : "server cannot be gracefully shutdown";
} finally {
channel.shutdownNow();
server.shutdownNow();
}
}
...
}
```
However, having to add all this to every test so it shuts down gracefully gives you more work to do, as you need to write the shutdown boilerplate by yourself. Because of this, the gRPC testing library has helper rules to make this job less tedious.
Initially, a JUnit rule [`GrpcServerRule`][GrpcServerRule] was introduced to eliminate the shutdown boilerplate. This rule creates an in-process server and channel at the beginning of the test, and shuts them down at the end of the test automatically. However, users found this rule too restrictive: it does not support transports other than the in-process transport, multiple channels to the server, custom channel or server builder options, or configuration inside individual test methods.
A more flexible JUnit rule, [`GrpcCleanupRule`][GrpcCleanupRule], was introduced in gRPC release v1.13, which also eliminates the shutdown boilerplate. Unlike `GrpcServerRule`, however, `GrpcCleanupRule` does not create any server or channel automatically. Users create and start the server themselves, and create channels themselves, just as in plain tests. With this rule, users just need to register every resource (channel or server) that needs to be shut down at the end of the test, and the rule will then shut them down gracefully and automatically.
You can register resources either before running test methods
```java
public class MyTest {
@Rule
public GrpcCleanupRule grpcCleanup = new GrpcCleanupRule();
...
private String serverName = InProcessServerBuilder.generateName();
private Server server = grpcCleanup.register(InProcessServerBuilder
.forName(serverName).directExecutor().addService(myServiceImpl).build().start());
private ManagedChannel channel = grpcCleanup.register(InProcessChannelBuilder
.forName(serverName).directExecutor().build());
...
}
```
or inside each individual test method
```java
public class MyTest {
@Rule
public GrpcCleanupRule grpcCleanup = new GrpcCleanupRule();
...
private String serverName = InProcessServerBuilder.generateName();
private InProcessServerBuilder serverBuilder = InProcessServerBuilder
.forName(serverName).directExecutor();
private InProcessChannelBuilder channelBuilder = InProcessChannelBuilder
.forName(serverName).directExecutor();
...
@Test
public void testFooBar() {
...
grpcCleanup.register(
serverBuilder.addService(myServiceImpl).build().start());
ManagedChannel channel = grpcCleanup.register(
channelBuilder.maxInboundMessageSize(1024).build());
...
}
}
```
Now with [`GrpcCleanupRule`][GrpcCleanupRule] you don't need to worry about graceful shutdown of gRPC servers and channels in JUnit tests. So try it out and clean up in your tests!
[GrpcServerRule]:https://github.com/grpc/grpc-java/blob/v1.1.x/testing/src/main/java/io/grpc/testing/GrpcServerRule.java
[GrpcCleanupRule]:https://github.com/grpc/grpc-java/blob/v1.13.x/testing/src/main/java/io/grpc/testing/GrpcCleanupRule.java

---
author: Jean de Klerk
author-link: https://github.com/jadekler
company: Google
company-link: https://www.google.com
date: "2018-07-13T00:00:00Z"
published: true
title: HTTP/2 Smarter At Scale
url: blog/http2_smarter_at_scale
---
Much of the web today runs on HTTP/1.1. The spec for HTTP/1.1 was published in June of 1999, just shy of 20 years ago. A lot has changed since then, which makes it all the more remarkable that HTTP/1.1 has persisted and flourished for so long. But in some areas it's beginning to show its age; for the most part, in that the designers weren't building for the scale at which HTTP/1.1 would be used and the astonishing amount of traffic that it would come to handle.
<!--more-->
HTTP/2, whose specification was published in May of 2015, seeks to address some of the scalability concerns of its predecessor while still providing a similar experience to users. HTTP/2 improves upon HTTP/1.1's design in a number of ways, perhaps most significantly in providing a semantic mapping over connections. In this post we'll explore the concept of streams and how they can be of substantial benefit to software engineers.
## Semantic Mapping over Connections
There's significant overhead to creating HTTP connections. You must establish a TCP connection, secure that connection using TLS, exchange headers and settings, and so on. HTTP/1.1 simplified this process by treating connections as long-lived, reusable objects. HTTP/1.1 connections are kept idle so that new requests to the same destination can be sent over an existing, idle connection. Though connection reuse mitigates the problem, a connection can only handle one request at a time - requests and connections are coupled 1:1. If there is one large message being sent, new requests must either wait for its completion (resulting in head-of-line blocking) or, more frequently, pay the price of spinning up another connection.
HTTP/2 takes the concept of persistent connections further by providing a semantic layer above connections: streams. Streams can be thought of as a series of semantically connected messages, called frames. A stream may be short-lived, such as a unary stream that requests the status of a user (in HTTP/1.1, this might equate to `GET /users/1234/status`). With increasing frequency, though, streams are long-lived. To use the last example, instead of making individual requests to the `/users/1234/status` endpoint, a receiver might establish a long-lived stream and thereby continuously receive user status messages in real time.
<img src="/img/conn_stream_frame_mapping.png" alt="Mapping of connections, streams, and frames" style="max-width: 800px">
## Streams Provide Concurrency
The primary advantage of streams is connection concurrency, i.e. the ability to interleave messages on a single connection.
To illustrate this point, consider the case of some service A sending HTTP/1.1 requests to some service B about new users, profile updates, and product orders. Product orders tend to be large, and each product order ends up being broken up and sent as 5 TCP packets (to illustrate its size). Profile updates are very small and fit into one packet; new user requests are also small and fit into two packets.
In some snapshot in time, service A has a single idle connection to service B and wants to use it to send some data. Service A wants to send a product order (request 1), a profile update (request 2), and two “new user” requests (requests 3 and 4). Since the product order arrives first, it dominates the single idle connection. The latter three smaller requests must either wait for the large product order to be sent, or some number of new HTTP/1.1 connections must be spun up for the small requests.
<img src="/img/http2_queue_3.png" alt="HTTP/1.1 requests queueing behind a large request" style="max-width: 800px">
Meanwhile, with HTTP/2, streaming allows messages to be sent concurrently on the same connection. Let's imagine that service A creates a connection to service B with three streams: a “new users” stream, a “profile updates” stream, and a “product order” stream. Now, the latter requests don't have to wait for the first-to-arrive large product order request; all requests are sent concurrently.
Concurrency does not mean parallelism, though; we can only send one packet at a time on the connection. So, the sender might round robin sending packets between streams (see below). Alternatively, senders might prioritize certain streams over others; perhaps getting new users signed up is more important to the service!
<img src="/img/http2_round_robin.png" alt="Round-robin packet sending across streams" style="max-width: 800px">
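The round-robin interleaving pictured above can be sketched as a simple scheduler - one queue of pending packets per stream, with the queue contents purely illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of round-robin frame scheduling across streams: on each pass, the
// sender takes at most one packet from every stream that still has data.
public class RoundRobinScheduler {
    static List<String> schedule(List<Queue<String>> streams) {
        List<String> wire = new ArrayList<>();
        boolean sentSomething = true;
        while (sentSomething) {
            sentSomething = false;
            for (Queue<String> stream : streams) {
                String packet = stream.poll();
                if (packet != null) {
                    wire.add(packet);
                    sentSomething = true;
                }
            }
        }
        return wire;
    }

    public static void main(String[] args) {
        Queue<String> order = new ArrayDeque<>(List.of("order-1", "order-2", "order-3"));
        Queue<String> profile = new ArrayDeque<>(List.of("profile-1"));
        Queue<String> users = new ArrayDeque<>(List.of("user-1", "user-2"));
        // The large product order no longer blocks the small requests:
        System.out.println(schedule(List.of(order, profile, users)));
        // [order-1, profile-1, user-1, order-2, user-2, order-3]
    }
}
```

Real senders may also weight streams by priority instead of treating them equally, as noted above.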
## Flow Control
Concurrent streams, however, harbor some subtle gotchas. Consider the following situation: two streams, A and B, on the same connection. Stream A receives a massive amount of data, far more than it can process in a short amount of time. Eventually the receiver's buffer fills up and the TCP receive window limits the sender. This is all fairly standard behavior for TCP, but this situation is bad for streams, as neither stream would receive any more data. Ideally stream B should be unaffected by stream A's slow processing.
HTTP/2 solves this problem by providing a flow control mechanism as part of the stream specification. Flow control is used to limit the amount of outstanding data on a per-stream (and per-connection) basis. It operates as a credit system in which the receiver allocates a certain “budget” and the sender “spends” that budget. More specifically, the receiver allocates some buffer size (the “budget”) and the sender fills (“spends”) the buffer by sending data. The receiver advertises to the sender additional buffer as it is made available, using special-purpose WINDOW_UPDATE frames. When the receiver stops advertising additional buffer, the sender must stop sending messages when the buffer (its “budget”) is exhausted.
Using flow control, concurrent streams are guaranteed independent buffer allocation. Coupled with round robin request sending, streams of all sizes, processing speeds, and duration may be multiplexed on a single connection without having to care about cross-stream problems.
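The budget-and-spend mechanics can be sketched in a few lines. This is a minimal single-stream model with illustrative window sizes; real HTTP/2 additionally tracks a connection-level window:

```java
// Minimal sketch of HTTP/2-style credit-based flow control for one stream:
// the receiver grants a window (the "budget"), the sender spends it, and the
// sender must stop at zero until a WINDOW_UPDATE advertises more buffer.
public class FlowControlWindow {
    private int window;

    FlowControlWindow(int initialWindow) { this.window = initialWindow; }

    // Returns how many bytes the sender may actually put on the wire now.
    int trySend(int bytes) {
        int allowed = Math.min(bytes, window);
        window -= allowed;
        return allowed;
    }

    // Receiver advertises freed buffer via a WINDOW_UPDATE frame.
    void windowUpdate(int increment) { window += increment; }

    public static void main(String[] args) {
        FlowControlWindow stream = new FlowControlWindow(65_535);
        System.out.println(stream.trySend(100_000)); // 65535: budget exhausted
        System.out.println(stream.trySend(1_000));   // 0: must wait
        stream.windowUpdate(16_384);                 // receiver freed buffer
        System.out.println(stream.trySend(1_000));   // 1000
    }
}
```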
## Smarter Proxies
The concurrency properties of HTTP/2 allow proxies to be more performant. As an example, consider an HTTP/1.1 load balancer that accepts and forwards spiky traffic: when a spike occurs, the proxy spins up more connections to handle the load or queues the requests. The former - new connections - are typically preferred (to a point); the downside to these new connections is paid not just in time waiting for syscalls and sockets, but also in time spent underutilizing the connection whilst TCP slow-start occurs.
In contrast, consider an HTTP/2 proxy that is configured to multiplex 100 streams per connection. A spike of some amount of requests will still cause new connections to be spun up, but only 1/100th as many as its HTTP/1.1 counterpart would need. More generally speaking: if n HTTP/1.1 requests are sent to a proxy, n HTTP/1.1 requests must go out; each request is a single, meaningful request/payload of data, and requests are 1:1 with connections. In contrast, with HTTP/2, n requests sent to a proxy require n streams, but there is no requirement of n connections!
The proxy has room to make a wide variety of smart interventions. It may, for example:
- Measure the bandwidth delay product (BDP) between itself and the service and then transparently create the minimum number of connections necessary to support the incoming streams.
- Kill idle streams without affecting the underlying connection.
- Load balance streams across connections to evenly spread traffic across those connections, ensuring maximum connection utilization.
- Measure processing speed based on WINDOW_UPDATE frames and use weighted load balancing to prioritize sending messages from streams on which messages are processed faster.
## HTTP/2 Is Smarter At Scale
HTTP/2 has many advantages over HTTP/1.1 that dramatically reduce the network cost of large-scale, real-time systems. Streams present one of the biggest flexibility and performance improvements that users will see, but HTTP/2 also provides semantics around graceful close (see: GOAWAY), header compression, server push, pinging, stream priority, and more. Check out the HTTP/2 spec if you're interested in digging in more - it is long but rather easy reading.
To get going with HTTP/2 right away, check out gRPC, a high-performance, open-source universal RPC framework that uses HTTP/2. In a future post we'll dive into gRPC and explore how it makes use of the mechanics provided by HTTP/2 to provide incredibly performant communication at scale.

---
author: Kailash Sethuraman
author-link: https://github.com/hsaliak
company: Google
company-link: https://www.google.com
date: "2018-08-14T00:00:00Z"
published: true
title: Take the gRPC Survey!
url: blog/take-the-grpc-survey
---
## The gRPC Project wants your feedback!
The gRPC project is looking for feedback to improve the gRPC experience. To do this, we are running a [gRPC user survey](http://bit.ly/gRPC18survey). We invite you to participate and provide input that will help us better plan and prioritize.
<!--more-->
## gRPC User Survey
**Who** : If you currently use gRPC, have used gRPC in the past, or have any interest in it, we would love to hear from you.
**Where**: Please take this 15-minute survey by Friday, 24 August.
**Why**: gRPC is a broadly applicable project with a variety of use cases. We want to use [this survey](http://bit.ly/gRPC18survey) to help us understand what works well, and what needs to be fixed.
## Spread the word!
Please help us spread the word about this survey by posting it on your social networks and sharing it with your friends. Every piece of feedback is precious, and we would like as much of it as possible!
Survey short link: [http://bit.ly/gRPC18survey](http://bit.ly/gRPC18survey)

---
author: Carl Mastrangelo
author-link: https://carlmastrangelo.com
company: Google
company-link: https://www.google.com
date: "2018-08-15T00:00:00Z"
published: true
title: gRPC + JSON
url: blog/grpc-with-json
---
So you've bought into this whole RPC thing and want to try it out, but aren't quite sure about Protocol Buffers. Your existing code encodes your own objects, or perhaps you have code that needs a particular encoding. What to do?
Fortunately, gRPC is encoding agnostic! You can still get a lot of the benefits of gRPC without using Protobuf. In this post we'll go through how to make gRPC work with other encodings and types. Let's try using JSON.
<!--more-->
gRPC is actually a collection of technologies with high cohesion, rather than a singular, monolithic framework. This means it's possible to swap out parts of gRPC and still take advantage of its benefits. [Gson](https://github.com/google/gson) is a popular Java library for JSON encoding. Let's remove all the Protobuf-related things and replace them with Gson:
```diff
- Protobuf wire encoding
- Protobuf generated message types
- gRPC generated stub types
+ JSON wire encoding
+ Gson message types
```
Previously, Protobuf and gRPC were generating code for us, but we would like to use our own types. Additionally, we are going to be using our own encoding too. Gson allows us to bring our own types in our code, but provides a way of serializing those types into bytes.
Let's continue with the [Key-Value](https://github.com/carl-mastrangelo/kvstore/tree/04-gson-marshaller) store service. We will be modifying the code used in my previous [So You Want to Optimize gRPC](/blog/optimizing-grpc-part-2) post.
## What is a Service Anyways?
From the point of view of gRPC, a _Service_ is a collection of _Methods_. In Java, a method is represented as a [`MethodDescriptor`](https://grpc.io/grpc-java/javadoc/io/grpc/MethodDescriptor.html). Each `MethodDescriptor` includes the name of the method, a `Marshaller` for encoding requests, and a `Marshaller` for encoding responses. They also include additional detail, such as if the call is streaming or not. For simplicity, we'll stick with unary RPCs which have a single request and single response.
Since we won't be generating any code, we'll need to write the message classes ourselves. There are four methods, each of which has a request and a response type. This means we need to make eight messages:
```java
static final class CreateRequest {
byte[] key;
byte[] value;
}
static final class CreateResponse {
}
static final class RetrieveRequest {
byte[] key;
}
static final class RetrieveResponse {
byte[] value;
}
static final class UpdateRequest {
byte[] key;
byte[] value;
}
static final class UpdateResponse {
}
static final class DeleteRequest {
byte[] key;
}
static final class DeleteResponse {
}
```
Because Gson uses reflection to determine how the fields in our classes map to the serialized JSON, we don't need to annotate the messages.
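To see why no annotations are needed, here is a toy sketch of the kind of reflection Gson performs, in which field names become JSON keys. This is an illustration only (a hypothetical `String`-valued message; no nesting, arrays, or escaping), not Gson's actual implementation:

```java
import java.lang.reflect.Field;
import java.util.StringJoiner;

// Toy illustration of reflection-driven JSON serialization: each declared
// field's name becomes a key, and its value becomes the JSON value.
public class ReflectiveJson {
    static final class RetrieveResponse {
        String value; // simplified to String for the sketch
    }

    static String toJson(Object obj) throws IllegalAccessException {
        StringJoiner json = new StringJoiner(",", "{", "}");
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            json.add("\"" + field.getName() + "\":\"" + field.get(obj) + "\"");
        }
        return json.toString();
    }

    public static void main(String[] args) throws IllegalAccessException {
        RetrieveResponse res = new RetrieveResponse();
        res.value = "hello";
        System.out.println(toJson(res)); // {"value":"hello"}
    }
}
```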
Our client and server logic will use the request and response types, but gRPC needs to know how to produce and consume these messages. To do this, we need to implement a [`Marshaller`](https://grpc.io/grpc-java/javadoc/io/grpc/MethodDescriptor.Marshaller.html). A marshaller knows how to convert from an arbitrary type to an `InputStream`, which is then passed down into the gRPC core library. It is also capable of doing the reverse transformation when decoding data from the network. For Gson, here is what the marshaller looks like:
```java
static <T> Marshaller<T> marshallerFor(Class<T> clz) {
Gson gson = new Gson();
return new Marshaller<T>() {
@Override
public InputStream stream(T value) {
return new ByteArrayInputStream(gson.toJson(value, clz).getBytes(StandardCharsets.UTF_8));
}
@Override
public T parse(InputStream stream) {
return gson.fromJson(new InputStreamReader(stream, StandardCharsets.UTF_8), clz);
}
};
}
```
Given a `Class` object for some request or response, this function will produce a marshaller. Using the marshallers, we can compose a full `MethodDescriptor` for each of the four CRUD methods. Here is an example of the method descriptor for _Create_:
```java
static final MethodDescriptor<CreateRequest, CreateResponse> CREATE_METHOD =
MethodDescriptor.newBuilder(
marshallerFor(CreateRequest.class),
marshallerFor(CreateResponse.class))
.setFullMethodName(
MethodDescriptor.generateFullMethodName(SERVICE_NAME, "Create"))
.setType(MethodType.UNARY)
.build();
```
Note that if we were using Protobuf, we would use the existing Protobuf marshaller, and the
[method descriptors](https://github.com/carl-mastrangelo/kvstore/blob/03-nonblocking-server/build/generated/source/proto/main/grpc/io/grpc/examples/proto/KeyValueServiceGrpc.java#L44)
would be generated automatically.
## Sending RPCs
Now that we can marshal JSON requests and responses, we need to update our
[`KvClient`](https://github.com/carl-mastrangelo/kvstore/blob/b225d28c7c2f3c356b0f3753384b3329f2ab5911/src/main/java/io/grpc/examples/KvClient.java#L98),
the gRPC client used in the previous post, to use our MethodDescriptors. Additionally, since we won't be using any Protobuf types, the code needs to use `ByteBuffer` rather than `ByteString`. That said, we can still use the `grpc-stub` package on Maven to issue the RPC. Using the _Create_ method again as an example, here's how to make an RPC:
```java
ByteBuffer key = createRandomKey();
ClientCall<CreateRequest, CreateResponse> call =
chan.newCall(KvGson.CREATE_METHOD, CallOptions.DEFAULT);
KvGson.CreateRequest req = new KvGson.CreateRequest();
req.key = key.array();
req.value = randomBytes(MEAN_VALUE_SIZE).array();
ListenableFuture<CreateResponse> res = ClientCalls.futureUnaryCall(call, req);
// ...
```
As you can see, we create a new `ClientCall` object from the `MethodDescriptor`, create the request, and then send it using `ClientCalls.futureUnaryCall` in the stub library. gRPC takes care of the rest for us. You can also make blocking stubs or async stubs instead of future stubs.
## Receiving RPCs
To update the server, we need to create a key-value service and implementation. Recall that in gRPC, a _Server_ can handle one or more _Services_. Again, what Protobuf would normally have generated for us we need to write ourselves. Here is what the base service looks like:
```java
static abstract class KeyValueServiceImplBase implements BindableService {
public abstract void create(
KvGson.CreateRequest request, StreamObserver<CreateResponse> responseObserver);
public abstract void retrieve(/*...*/);
public abstract void update(/*...*/);
public abstract void delete(/*...*/);
/* Called by the Server to wire up methods to the handlers */
@Override
public final ServerServiceDefinition bindService() {
ServerServiceDefinition.Builder ssd = ServerServiceDefinition.builder(SERVICE_NAME);
ssd.addMethod(CREATE_METHOD, ServerCalls.asyncUnaryCall(
(request, responseObserver) -> create(request, responseObserver)));
ssd.addMethod(RETRIEVE_METHOD, /*...*/);
ssd.addMethod(UPDATE_METHOD, /*...*/);
ssd.addMethod(DELETE_METHOD, /*...*/);
return ssd.build();
}
}
```
`KeyValueServiceImplBase` will serve as both the service definition (which describes which methods the server can handle) and as the implementation (which describes what to do for each method). It serves as the glue between gRPC and our application logic. Practically no changes are needed to swap from Protobuf to Gson in the server code:
```java
final class KvService extends KvGson.KeyValueServiceImplBase {
@Override
public void create(
KvGson.CreateRequest request, StreamObserver<KvGson.CreateResponse> responseObserver) {
ByteBuffer key = ByteBuffer.wrap(request.key);
ByteBuffer value = ByteBuffer.wrap(request.value);
// ...
}
```
After implementing all the methods on the server, we now have a fully functioning JSON-encoded gRPC Java RPC system. And to show you there is nothing up my sleeve:
```sh
$ ./gradlew :dependencies | grep -i proto
$ # no proto deps!
```
## Optimizing the Code
While Gson is not as fast as Protobuf, there's no sense in leaving the low-hanging fruit unpicked. Running the code, we see that the performance is pretty slow:
```sh
./gradlew installDist
time ./build/install/kvstore/bin/kvstore
INFO: Did 215.883 RPCs/s
```
What happened? In the previous [optimization](/blog/optimizing-grpc-part-2) post, we saw the Protobuf version do nearly _2,500 RPCs/s_. JSON is slow, but not _that_ slow. We can see what the problem is by printing out the JSON data as it goes through the marshaller:
```json
{"key":[4,-100,-48,22,-128,85,115,5,56,34,-48,-1,-119,60,17,-13,-118]}
```
That's not right! Looking at a `RetrieveRequest`, we see that the key bytes are being encoded as an array of numbers, rather than as a byte string. The wire size is much larger than it needs to be, and may not be compatible with other JSON code. To fix this, let's tell Gson to encode and decode this data as base64 encoded bytes:
```java
private static final Gson gson =
new GsonBuilder().registerTypeAdapter(byte[].class, new TypeAdapter<byte[]>() {
@Override
public void write(JsonWriter out, byte[] value) throws IOException {
out.value(Base64.getEncoder().encodeToString(value));
}
@Override
public byte[] read(JsonReader in) throws IOException {
return Base64.getDecoder().decode(in.nextString());
}
}).create();
```
Using this in our marshallers, we can see a dramatic performance difference:
```sh
./gradlew installDist
time ./build/install/kvstore/bin/kvstore
INFO: Did 2,202.2 RPCs/s
```
Almost **10x** faster than before! We can still take advantage of gRPC's efficiency while bringing our own encoders and messages.
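To make the wire-size difference concrete, here is a rough comparison using only the JDK's `Base64`. The sample 16-byte key is arbitrary, but the inflation of the number-array form relative to base64 is typical:

```java
import java.util.Base64;

// Compare the two JSON encodings of a byte[] key: an array of signed
// numbers (Gson's default) vs. a quoted base64 string.
public class EncodingSize {
    public static void main(String[] args) {
        byte[] key = new byte[16];
        for (int i = 0; i < key.length; i++) {
            key[i] = (byte) (i * 17 - 100); // mix of negative and positive values
        }

        // Default encoding: every byte rendered as a decimal number.
        StringBuilder asArray = new StringBuilder("[");
        for (int i = 0; i < key.length; i++) {
            if (i > 0) asArray.append(",");
            asArray.append(key[i]);
        }
        asArray.append("]");

        // TypeAdapter encoding: one compact quoted base64 string.
        String asBase64 = "\"" + Base64.getEncoder().encodeToString(key) + "\"";

        System.out.println(asArray + " -> " + asArray.length() + " chars");
        System.out.println(asBase64 + " -> " + asBase64.length() + " chars");
    }
}
```

For 16 bytes, base64 always costs 24 characters plus quotes, while the number array grows with each byte's digit count.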
## Conclusion
gRPC lets you use encoders other than Protobuf. It has no dependency on Protobuf and was specially made to work with a wide variety of environments. We can see that with a little extra boilerplate, we can use any encoder we want. While this post only covered JSON, gRPC is compatible with Thrift, Avro, Flatbuffers, Capn Proto, and even raw bytes! gRPC lets you be in control of how your data is handled. (We still recommend Protobuf though due to strong backwards compatibility, type checking, and performance it gives you.)
All the code is available on [GitHub](https://github.com/carl-mastrangelo/kvstore/tree/04-gson-marshaller) if you would like to see a fully working implementation.

---
author: Jean de Klerk
author-link: https://github.com/jadekler
date: "2018-08-20T00:00:00Z"
published: true
title: "gRPC on HTTP/2: Engineering a Robust, High Performance Protocol"
url: blog/grpc_on_http2
---
In a [previous article](/blog/http2_smarter_at_scale), we explored how HTTP/2 dramatically increases network efficiency and enables real-time communication by providing a framework for long-lived connections. In this article, we'll look at how gRPC builds on HTTP/2's long-lived connections to create a performant, robust platform for inter-service communication. We will explore the relationship between gRPC and HTTP/2, how gRPC manages HTTP/2 connections, and how gRPC uses HTTP/2 to keep connections alive, healthy, and utilized.
<!--more-->
## gRPC Semantics
To begin, lets dive into how gRPC concepts relate to HTTP/2 concepts. gRPC introduces three new concepts: *channels* [1], *remote procedure calls* (RPCs), and *messages*. The relationship between the three is simple: each channel may have many RPCs while each RPC may have many messages.
<img src="/img/channels_mapping_2.png" title="Channel Mapping" alt="Channel Mapping" style="max-width: 800px">
Lets take a look at how gRPC semantics relate to HTTP/2:
<img src="/img/grpc_on_http2_mapping_2.png" title="gRPC on HTTP/2" alt="gRPC on HTTP/2" style="max-width: 800px">
Channels are a key concept in gRPC. Streams in HTTP/2 enable multiple concurrent conversations on a single connection; channels extend this concept by enabling multiple streams over multiple concurrent connections. On the surface, channels provide an easy interface for users to send messages into; underneath the hood, though, an incredible amount of engineering goes into keeping these connections alive, healthy, and utilized.
Channels represent virtual connections to an endpoint, which in reality may be backed by many HTTP/2 connections. RPCs are associated with a connection (this association is described further on). RPCs are in practice plain HTTP/2 streams. Messages are associated with RPCs and get sent as HTTP/2 data frames. To be more specific, messages are _layered_ on top of data frames. A data frame may have many gRPC messages, or if a gRPC message is quite large [2] it might span multiple data frames.
## Resolvers and Load Balancers
In order to keep connections alive, healthy, and utilized, gRPC utilizes a number of components, foremost among them *name resolvers* and *load balancers*. The resolver turns names into addresses and then hands these addresses to the load balancer. The load balancer is in charge of creating connections from these addresses and load balancing RPCs between connections.
<img src="/img/dns_to_load_balancer_mapping_3.png" title="Resolvers and Load Balancers" alt="Resolvers and Load Balancers" style="max-width: 800px">
<img src="/img/load_balance_round_robins_2.png" alt="Round Robin Load Balancer" style="max-width: 800px">
A DNS resolver, for example, might resolve some host name to 13 IP addresses, and then a RoundRobin balancer might create 13 connections - one to each address - and round robin RPCs across each connection. A simpler balancer might simply create a connection to the first address. Alternatively, a user who wants multiple connections but knows that the host name will only resolve to one address might have their balancer create connections against each address 10 times to ensure that multiple connections are used.
Resolvers and load balancers solve small but crucial problems in a gRPC system. This design is intentional: reducing the problem space to a few small, discrete problems helps users build custom components. These components can be used to fine-tune gRPC to fit each systems individual needs.
## Connection Management
Once configured, gRPC will keep the pool of connections - as defined by the resolver and balancer - healthy, alive, and utilized.
When a connection fails, the load balancer will begin to reconnect using the last known list of addresses [3]. Meanwhile, the resolver will begin attempting to re-resolve the list of host names. This is useful in a number of scenarios. If the proxy is no longer reachable, for example, wed want the resolver to update the list of addresses to not include that proxys address. To take another example: DNS entries might change over time, and so the list of addresses might need to be periodically updated. In this manner and others, gRPC is designed for long-term resiliency.
Once resolution is finished, the load balancer is informed of the new addresses. If addresses have changed, the load balancer may spin down connections to addresses not present in the new list or create connections to addresses that werent previously there.
## Identifying Failed Connections
The effectiveness of gRPC's connection management hinges upon its ability to identify failed connections. There are generally two types of connection failures: clean failures, in which the failure is communicated, and the less-clean failure, in which the failure is not communicated.
Lets consider a clean, easy-to-observe failure. Clean failures can occur when an endpoint intentionally kills the connection. For example, the endpoint may have gracefully shut down, or a timer may have been exceeded, prompting the endpoint to close the connection. When connections close cleanly, TCP semantics suffice: closing a connection causes the [FIN handshake](https://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm) to occur. This ends the HTTP/2 connection, which ends the gRPC connection. gRPC will immediately begin reconnecting (as described above). This is quite clean and requires no additional HTTP/2 or gRPC semantics.
The less clean version is where the endpoint dies or hangs without informing the client. In this case, TCP might undergo retry for as long as 10 minutes before the connection is considered failed. Of course, failing to recognize that the connection is dead for 10 minutes is unacceptable. gRPC solves this problem using HTTP/2 semantics: when configured using KeepAlive, gRPC will periodically send [HTTP/2 PING frames](https://http2.github.io/http2-spec/#PING). These frames bypass flow control and are used to establish whether the connection is alive. If a PING response does not return within a timely fashion, gRPC will consider the connection failed, close the connection, and begin reconnecting (as described above).
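In gRPC-Go, for example, this PING-based health checking is enabled through keepalive parameters at dial time. A fragment, not a full program (it assumes the `google.golang.org/grpc` and `google.golang.org/grpc/keepalive` packages and a `target` address; the durations are illustrative):

```go
// conn will send an HTTP/2 PING after 10s of inactivity and consider
// the connection dead if no ack arrives within 2s.
conn, err := grpc.Dial(target,
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time:                10 * time.Second,
		Timeout:             2 * time.Second,
		PermitWithoutStream: true, // ping even when no RPC is active
	}),
)
```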
In this way, gRPC keeps a pool of connections healthy and uses HTTP/2 to ascertain the health of connections periodically. All of this behavior is opaque to the user, and message redirecting happens automatically and on the fly. Users simply send messages on a seemingly always-healthy pool of connections.
## Keeping Connections Alive
As mentioned above, KeepAlive provides a valuable benefit: periodically checking the health of the connection by sending an HTTP/2 PING to determine whether the connection is still alive. However, it has another equally useful benefit: signaling liveness to proxies.
Consider a client sending data to a server through a proxy. The client and server may be happy to keep a connection alive indefinitely, sending data as necessary. Proxies, on the other hand, are often quite resource constrained and may kill idle connections to save resources. Google Cloud Platform (GCP) load balancers disconnect apparently-idle connections after [10 minutes](https://cloud.google.com/compute/docs/troubleshooting#communicatewithinternet), and Amazon Web Services Elastic Load Balancers (AWS ELBs) disconnect them after [60 seconds](https://aws.amazon.com/articles/1636185810492479).
With gRPC periodically sending HTTP/2 PING frames on its connections, the connections never appear idle, so endpoints applying the idle-timeout rules mentioned above leave them open.
## A Robust, High Performance Protocol
HTTP/2 provides a foundation for long-lived, real-time communication streams. gRPC builds on top of this foundation with connection pooling, health semantics, efficient use of data frames and multiplexing, and KeepAlive.
Developers choosing protocols must choose those that meet todays demands as well as tomorrows. They are well served by choosing gRPC, whether it be for resiliency, performance, long-lived or short-lived communication, customizability, or simply knowing that their protocol will scale to extraordinarily massive traffic while remaining efficient all the way. To get going with gRPC and HTTP/2 right away, check out [gRPC's Getting Started guides](https://grpc.io/docs/).
## Footnotes
1. In Go, a gRPC channel is called ClientConn because the word “channel” has a language-specific meaning.
2. gRPC uses the HTTP/2 default maximum data frame size of 16 KB. A message over 16 KB may span multiple data frames, whereas a message below that size may share a data frame with some number of other messages.
3. This is the behavior of the RoundRobin balancer, but not every load balancer does or must behave this way.

---
author: Yuxuan Li
author-link: https://github.com/lyuxuan
date: "2018-09-05T00:00:00Z"
published: true
title: A short introduction to Channelz
url: blog/a_short_introduction_to_channelz
---
Channelz is a tool that provides comprehensive runtime info about connections at
different levels in gRPC. It is designed to help debug live programs, which may
be suffering from network, performance, configuration issues, etc. The
[gRFC](https://github.com/grpc/proposal/blob/master/A14-channelz.md) provides a
detailed explanation of channelz design and is the canonical reference for all
channelz implementations across languages. The purpose of this blog is to
familiarize readers with channelz service and how to use it for debugging
issues. The context of this post is set in
[gRPC-Go](https://github.com/grpc/grpc-go), but the overall idea should be
applicable across languages. At the time of writing, channelz is available for
[gRPC-Go](https://github.com/grpc/grpc-go) and
[gRPC-Java](https://github.com/grpc/grpc-java). Support for
[C++](https://github.com/grpc/grpc) and wrapped languages is coming soon.
<!--more-->
Let's learn channelz through a simple example which uses channelz to help debug
an issue. The
[helloworld](https://github.com/grpc/grpc-go/tree/master/examples/helloworld)
example from our repo is slightly modified to set up a buggy scenario. You can
find the full source code here:
[client](https://gist.github.com/lyuxuan/515fa6da7e0924b030e29b8be56fd90a),
[server](https://gist.github.com/lyuxuan/81dd08ca649a6c78a61acc7ab05e0fef).
********************************************************************************
> **Client setup:**
> The client will make 100 SayHello RPCs to a specified target and load balance
> the workload with the round robin policy. Each call has a 150ms timeout. RPC
> responses and errors are logged for debugging purposes.
********************************************************************************
Running the program, we notice in the log that there are intermittent errors
with error code **DeadlineExceeded** (as shown in Figure 1).
However, there's no clue about what is causing the deadline exceeded error and
there are many possibilities:
* network issue, e.g. connection lost
* proxy issue, e.g. dropped requests/responses in the middle
* server issue, e.g. lost requests or just slow to respond
<img src="/img/log.png" style="max-width: 947px">
<p style="text-align: center"> Figure 1. Program log
screenshot</p>
Let's turn on grpc INFO logging for more debug info and see if we can find
something helpful.
<img src="/img/logWithInfo.png" style="max-width: 997px">
<p style="text-align: center"> Figure 2.
gRPC INFO log</p>
As shown in Figure 2, the info log indicates that all three connections to the
server are connected and ready for transmitting RPCs. No suspicious event shows
up in the log. One thing that can be inferred from the info log is that all
connections are up all the time, therefore the lost connection hypothesis can be
ruled out.
To further narrow down the root cause of the issue, we will ask channelz for
help.
Channelz provides gRPC internal networking machinery stats through a gRPC
service. To enable channelz, users just need to register the channelz service to
a gRPC server in their program and start the server. The code snippet below
shows the API for registering channelz service to a
[grpc.Server](https://godoc.org/google.golang.org/grpc#Server). Note that this
has already been done for our example client.
```go
import "google.golang.org/grpc/channelz/service"
// s is a *grpc.Server
service.RegisterChannelzServiceToServer(s)
// call s.Serve() to serve channelz service
```
A web tool called
[grpc-zpages](https://github.com/grpc/grpc-experiments/tree/master/gdebug)
has been developed to conveniently serve channelz data through a web page.
First, configure the web app to connect to the gRPC port that's serving the
channelz service (see instructions from the previous link). Then, open the
channelz web page in the browser. You should see a web page like Figure 3. Now
we can start querying channelz!
<p align="center">
<img src="/img/mainpage.png" style="max-width: 935px">
</p>
<p style="text-align: center"> Figure 3.
Channelz main page</p>
As the error is on the client side, let's first click on
[TopChannels](https://github.com/grpc/proposal/blob/master/A14-channelz.md#gettopchannels).
TopChannels is a collection of root channels which don't have parents. In
gRPC-Go, a top channel is a
[ClientConn](https://godoc.org/google.golang.org/grpc#ClientConn) created by the
user through [Dial](https://godoc.org/google.golang.org/grpc#Dial) or
[DialContext](https://godoc.org/google.golang.org/grpc#DialContext), and used
for making RPC calls. Top channels are of
[Channel](https://github.com/grpc/grpc-proto/blob/9b13d199cc0d4703c7ea26c9c330ba695866eb23/grpc/channelz/v1/channelz.proto#L37)
type in channelz, which is an abstraction of a connection that an RPC can be
issued to.
<p align="center">
<img src="/img/topChan1.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 4.
TopChannels result</p>
So we click on the TopChannels, and a page like Figure 4 appears, which lists
all the live top channel(s) with related info.
As shown in Figure 5, there is only one top channel, with id = 2. (Note that the text in square brackets is the reference name of the in-memory channel object, which may vary across languages.)
Looking at the **Data** section, we can see that 15 of the 100 calls on this channel have failed.
<p align="center">
<img src="/img/topChan2.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 5.
Top Channel (id = 2)</p>
On the right hand side, it shows the channel has no child **Channels**, 3
**Subchannels** (as highlighted in Figure 6), and 0 **Sockets**.
<p align="center">
<img src="/img/topChan3.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 6.
Subchannels owned by the Channel (id = 2)</p>
A
[Subchannel](https://github.com/grpc/grpc-proto/blob/9b13d199cc0d4703c7ea26c9c330ba695866eb23/grpc/channelz/v1/channelz.proto#L61)
is an abstraction over a connection and used for load balancing. For example,
you want to send requests to "google.com". The resolver resolves "google.com" to
multiple backend addresses that serve "google.com". In this example, the client
is set up with the round robin load balancer, so all live backends are sent
equal traffic. Then the (logical) connection to each backend is represented as a
Subchannel. In gRPC-Go, a
[SubConn](https://godoc.org/google.golang.org/grpc/balancer#SubConn) can be seen
as a Subchannel.
The three Subchannels owned by the parent Channel mean that there are three connections to three different backends for sending RPCs. Let's look inside each of them for more info.
So we click on the first Subchannel ID (i.e. "4\[\]") listed, and a page like
Figure 7 renders. We can see that all calls on this Subchannel have succeeded.
Thus it's unlikely this Subchannel is related to the issue we are having.
<p align="center">
<img src="/img/subChan4.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 7.
Subchannel (id = 4)</p>
So we go back, and click on Subchannel 5 (i.e. "5\[\]"). Again, the web page
indicates that Subchannel 5 also never had any failed calls.
<p align="center">
<img src="/img/subChan6_1.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 8.
Subchannel (id = 6)</p>
And finally, we click on Subchannel 6. This time, there's something different.
As we can see in Figure 8, there are 15 out of 34 RPC calls failed on this
Subchannel. And remember that the parent Channel also has exactly 15 failed
calls. Therefore, Subchannel 6 is where the issue comes from. The state of the
Subchannel is **READY**, which means it is connected and is ready to transmit
RPCs. That rules out network connection problems. To dig up more info, let's
look at the Socket owned by this Subchannel.
A
[Socket](https://github.com/grpc/grpc-proto/blob/9b13d199cc0d4703c7ea26c9c330ba695866eb23/grpc/channelz/v1/channelz.proto#L227)
is roughly equivalent to a file descriptor, and can be generally regarded as the
TCP connection between two endpoints. In grpc-go,
[http2Client](https://github.com/grpc/grpc-go/blob/ce4f3c8a89229d9db3e0c30d28a9f905435ad365/internal/transport/http2_client.go#L46)
and
[http2Server](https://github.com/grpc/grpc-go/blob/ce4f3c8a89229d9db3e0c30d28a9f905435ad365/internal/transport/http2_server.go#L61)
correspond to Socket. Note that a network listener is also considered a Socket,
and will show up in the channelz Server info.
<p align="center">
<img src="/img/subChan6_2.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 9.
Subchannel (id = 6) owns Socket (id = 8)</p>
We click on Socket 8, which is at the bottom of the page (see Figure 9). And we
now see a page like Figure 10.
The page provides comprehensive info about the socket like the security
mechanism in use, stream count, message count, keepalives, flow control numbers,
etc. The socket options info is not shown in the screenshot, as there are a lot of them and they are not related to the issue we are investigating.
The **Remote Address** field suggests that the backend we are having a problem
with is **"127.0.0.1:10003"**. The stream counts here correspond perfectly to
the call counts of the parent Subchannel. From this, we know that the server is not actively sending DeadlineExceeded errors: if the server had returned the DeadlineExceeded error itself, the streams would all have been successful. A client-side stream's success is independent of whether the call succeeds; it is determined by whether an HTTP/2 frame with the EOS bit set has been received (refer to the
[gRFC](https://github.com/grpc/proposal/blob/master/A14-channelz.md#socket-data)
for more info). Also, the number of messages sent is 34, which equals the number of calls; this rules out the possibility that the client got stuck and caused the deadline to be exceeded. In summary, we can narrow the issue down to the server serving on 127.0.0.1:10003. It may be that the server is slow to respond, or that some proxy in front of it is dropping requests.
<p align="center">
<img src="/img/socket8.png" style="max-width: 815px">
</p>
<p style="text-align: center"> Figure 10. Socket
(id = 8)</p>
As you see, channelz has helped us pinpoint the potential root cause of the
issue with just a few clicks. You can now concentrate on what's happening with
the pinpointed server. And again, channelz may help expedite the debugging at
the server side too.
We will stop here and let readers explore the server side channelz, which is
simpler than the client side. In channelz, a
[Server](https://github.com/grpc/grpc-proto/blob/9b13d199cc0d4703c7ea26c9c330ba695866eb23/grpc/channelz/v1/channelz.proto#L199)
is also an RPC entry point like a Channel, where incoming RPCs arrive and get
processed. In grpc-go, a
[grpc.Server](https://godoc.org/google.golang.org/grpc#Server) corresponds to a
channelz Server. Unlike Channel, Server only has Sockets (both listen socket(s)
and normal connected socket(s)) as its children.
Here are some hints for the readers:
* Look for the server with the address (127.0.0.1:10003).
* Look at the call counts.
* Go to the Socket(s) owned by the server.
* Look at the Socket stream counts and message counts.
You should notice that the number of messages received by the server socket is the same as the number sent by the client socket (Socket 8), which rules out a misbehaving proxy dropping requests in the middle. And the number of messages sent by the server socket equals the number received at the client side, which means the server was not able to send back the response before the deadline. You may now look at the
[server](https://gist.github.com/lyuxuan/81dd08ca649a6c78a61acc7ab05e0fef) code
to verify whether it is indeed the cause.
********************************************************************************
> **Server setup:**
> The server side program starts up three GreeterServers, with two of them using
> an implementation
> ([server{}](https://gist.github.com/lyuxuan/81dd08ca649a6c78a61acc7ab05e0fef#file-main-go-L42))
> that imposes no delay when responding to the client, and one using an
> implementation
> ([slowServer{}](https://gist.github.com/lyuxuan/81dd08ca649a6c78a61acc7ab05e0fef#file-main-go-L50))
> which injects a variable delay of 100ms - 200ms before sending the response.
********************************************************************************
As you can see through this demo, channelz helped us quickly narrow down the
possible causes of an issue and is easy to use. For more resources, see the
detailed channelz
[gRFC](https://github.com/grpc/proposal/blob/master/A14-channelz.md). Find us on
github at [https://github.com/grpc/grpc-go](https://github.com/grpc/grpc-go).

---
author: Luc Perkins - CNCF, Stanley Cheung - Google, Kailash Sethuraman - Google
date: "2018-10-23T00:00:00Z"
published: true
title: gRPC-Web is Generally Available
url: blog/grpc-web-ga
---
We are excited to announce the GA release of
[gRPC-Web](https://www.npmjs.com/package/grpc-web), a JavaScript client library
that enables web apps to communicate directly with gRPC backend services,
without requiring an HTTP server to act as an intermediary. "GA" means that
gRPC-Web is now Generally Available: stable and qualified for production use.
<!--more-->
With gRPC-Web, you can now easily build truly end-to-end gRPC application
architectures by defining your client *and* server-side data types and service
interfaces with Protocol Buffers. This has been a hotly requested feature for a
while, and we are finally happy to say that it is now ready for production use.
In addition, being able to access gRPC services opens up new and exciting
possibilities for [web based
tooling](https://github.com/grpc/grpc-experiments/tree/master/gdebug) around gRPC.
## The Basics
gRPC-Web, just like gRPC, lets you define the service "contract" between client
(web) and backend gRPC services using Protocol Buffers. The client can then be
auto-generated. To do this, you have a choice between the [Closure](https://developers.google.com/closure/compiler/) compiler and the more widely used [CommonJS](https://requirejs.org/docs/commonjs.html) import style.
This development process removes the need to manage concerns such as creating
custom JSON serialization and deserialization logic, wrangling HTTP status codes
(which can vary across REST APIs), managing content type negotiation, etc.
From a broader architectural perspective, gRPC-Web enables end-to-end gRPC. The diagram below illustrates this:
<img src="/img/grpc-web-arch.png" style="max-width: 947px">
<p style="text-align: center"> Figure 1.
gRPC with gRPC-Web (left) and gRPC with REST (right)</p>
In the gRPC-Web universe on the left, a client application speaks Protocol Buffers to a gRPC backend server that speaks Protocol Buffers to other gRPC backend services. In the REST universe on the right, the web app speaks HTTP to a backend REST API server that then speaks Protocol Buffers to backend services.
## Advantages of using gRPC-Web
gRPC-Web will offer an ever-broader feature set over time, but heres whats in 1.0 today:
* **End-to-end gRPC** — Enables you to craft your entire RPC pipeline using Protocol Buffers. Imagine a scenario in which a client request goes to an HTTP server, which then interacts with 5 backend gRPC services. Theres a good chance that youll spend as much time building the HTTP interaction layer as you will building the entire rest of the pipeline.
* **Tighter coordination between frontend and backend teams** — With the entire RPC pipeline defined using Protocol Buffers, you no longer need to have your “microservices teams” alongside your “client team.” The client-backend interaction is just one more gRPC layer amongst others.
* **Easily generate client libraries** — With gRPC-Web, the server that interacts with the “outside” world, i.e. the membrane connecting your backend stack to the internet, is now a gRPC server instead of an HTTP server. That means that all of your services' client libraries can be gRPC libraries. Need client libraries for Ruby, Python, Java, and 4 other languages? You no longer need to write HTTP clients for all of them.
## A gRPC-Web example
The previous section illustrated some of the high-level advantages of gRPC-Web for large-scale applications. Now lets get closer to the metal with an example: a simple TODO app. In gRPC-Web you can start with a simple ``todos.proto`` definition like this:
```proto
syntax = "proto3";
package todos;
message Todo {
string content = 1;
bool finished = 2;
}
message GetTodoRequest {
int32 id = 1;
}
service TodoService {
rpc GetTodoById (GetTodoRequest) returns (Todo);
}
```
CommonJS client-side code can be generated from this ``.proto`` definition with the following command:
```sh
protoc todos.proto \
--js_out=import_style=commonjs:./output \
--grpc-web_out=import_style=commonjs:./output
```
Now, fetching a list of TODOs from a backend gRPC service is as simple as:
```js
const {GetTodoRequest} = require('./todos_pb.js');
const {TodoServiceClient} = require('./todos_grpc_web_pb.js');

const todoService = new TodoServiceClient('http://localhost:8080');

const todoId = 1234;

const getTodoRequest = new GetTodoRequest();
getTodoRequest.setId(todoId);

const metadata = {};

todoService.getTodoById(getTodoRequest, metadata, (err, response) => {
  if (err) {
    console.log(err);
  } else if (response == null) {
    console.log(`A TODO with the ID ${todoId} wasn't found`);
  } else {
    console.log(`Fetched TODO with ID ${todoId}: ${response.getContent()}`);
  }
});
```
Once you declare the data types and a service interface, gRPC-Web abstracts away all the boilerplate, leaving you with a clean and human-friendly API (essentially the same API as the current [Node.js gRPC API](/docs/tutorials/basic/node/), just brought to the client).
On the backend, the gRPC server can be written in any language that supports gRPC, such as Go, Java, C++, Ruby, Node.js, and many others. The last piece of the puzzle is the service proxy. From the get-go, gRPC-Web will support [Envoy](https://envoyproxy.io) as the default service proxy, which has a built-in [envoy.grpc_web filter](https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_web_filter#config-http-filters-grpc-web) that you can apply with just a few lines of copy-and-pastable configuration.
## Next Steps
Going GA means that the core building blocks are firmly in place and ready for usage in production web applications. But theres still much more to come for gRPC-Web. Check out the [official roadmap](https://github.com/grpc/grpc-web/blob/master/ROADMAP.md) to see what the core team envisions for the near future.
If youre interested in contributing to gRPC-Web, there are a few things we would love community help with:
* **Front-end framework integration** — Commonly used front-end frameworks like [React](https://reactjs.org), [Vue](https://vuejs.org) and [Angular](https://angularjs.org) don't yet offer official support for gRPC-Web. But we would love to see these frameworks support it since the integration between these frontend frameworks and gRPC-Web can be a vehicle to deliver user-perceivable performance benefits to applications. If you are interested in building out support for these frontend frameworks, let us know on the [gRPC.io mailing list](https://groups.google.com/forum/#!forum/grpc-io), [filing a feature request on github](https://github.com/grpc/grpc-web/issues) or via the feature survey form below.
* **Language-specific proxy support** — As of the GA release, [Envoy](https://envoyproxy.io) is the default proxy for gRPC-Web, offering support via a special module. NGINX is also [supported](https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/nginx). But we'd also love to see the development of in-process proxies for specific languages, since they obviate the need for a separate proxy such as Envoy or NGINX and would make using gRPC-Web even easier.
We'd also love to get feature requests from the community. Currently, the best way to make a feature request is to fill out the [gRPC-Web roadmap features survey](https://docs.google.com/forms/d/1NjWpyRviohn5jaPntosBHXRXZYkh_Ffi4GxJZFibylM/viewform?edit_requested=true). When filling out the form, list the features you'd like to see, and let us know if you'd like to contribute to their development in the **I'd like to contribute to** section. The gRPC-Web engineers will be sure to take that information to heart over the course of the project's development.
Most importantly, we want to thank all the Alpha and Beta users who have given us feedback, bug reports, and pull request contributions over the course of the past year. We certainly hope to maintain this momentum and make sure this project brings tangible benefits to the developer community.

---
author: Carl Mastrangelo
author-link: https://carlmastrangelo.com/
date: "2018-12-11T00:00:00Z"
published: true
title: Visualizing gRPC Language Stacks
url: blog/grpc-stacks
---
Here is a high-level overview of the gRPC language stacks. Each of the **10** default languages supported by gRPC has multiple layers, allowing you to customize what pieces you want in your application.
<!--more-->
There are three main stacks in gRPC: C-core, Go, and Java. Most of the languages are thin wrappers on top of the [C-based](https://github.com/grpc/grpc/tree/master/src/core) gRPC core library:
### Wrapped Languages:
<p><img src="/img/grpc-core-stack.svg" alt="gRPC Core Stack" style="max-width: 800px" /></p>
For example, a Python application calls into the generated Python stubs. These calls pass through interceptors, and into the wrapping library where the calls are translated into C calls. The gRPC C-core will encode the RPC as HTTP/2, optionally encrypt the data with TLS, and then write it to the network.
One of the cool things about gRPC is that you can swap these pieces out. For example, you could use C# instead, and use an In-Process transport. This would save you from having to go all the way down to the OS network layer. Another example is trying out the QUIC protocol, which allows you to open new connections quickly. Being able to run over a variety of transports based on the environment makes gRPC really flexible.
For each of the wrapped languages, the default HTTP/2 implementation is built into the C-core library, so there is no need to include an outside one. However, as you can see, it is possible to bring your own (such as with Cronet, the Chrome networking library).
### Go
In [gRPC-Go](https://github.com/grpc/grpc-go), the stack is much simpler, due to not having to support so many configurations. Here is a high level overview of the Go stack:
<p><img src="/img/grpc-go-stack.svg" alt="gRPC Go Stack" style="max-width: 800px" /></p>
The structure is a little different here. Since there is only one language, the flow from the top of the stack to the bottom is more linear. Unlike wrapped languages, gRPC Go can use either its own HTTP/2 implementation, or the Go `net/http` package.
### Java
Here is a high level overview of the [gRPC-Java](https://github.com/grpc/grpc-java) stack:
<p><img src="/img/grpc-java-stack.svg" alt="gRPC Java Stack" style="max-width: 800px" /></p>
Again, the structure is a little different. Java supports HTTP/2, QUIC, and In-Process transports like the C-core. Unlike the C-core, though, applications can commonly bypass the generated stubs and interceptors and speak directly to the Java core library. Each structure is slightly different based on the needs of each language implementation of gRPC. Also unlike the wrapped languages, gRPC-Java separates the HTTP/2 implementation into pluggable libraries (such as Netty, OkHttp, or Cronet).

---
author: Kirill 'kkm' Katsnelson
author-link: https://github.com/kkm000
date: "2018-12-18T00:00:00Z"
published: true
title: "gRPC Meets .NET SDK And Visual Studio: Automatic Codegen On Build"
url: blog/grpc-dotnet-build
---
As part of Microsoft's move towards its cross-platform .NET offering, it has
greatly simplified the project file format and allowed tight integration of
third-party code generators with .NET projects. We have been listening, and are
now proud to introduce integrated compilation of Protocol Buffer and gRPC
service `.proto` files in .NET C# projects, starting with version 1.17 of the
Grpc.Tools NuGet package, now available from NuGet.org.
You no longer need hand-written scripts to generate code from `.proto` files:
the .NET build magic handles this for you. The integrated tools locate the
proto compiler and gRPC plugin, find the standard Protocol Buffer imports, and
track dependencies before invoking the code generators, so that the generated
C# source files are never out of date while regeneration is kept to the minimum
required. In essence, `.proto` files are treated as first-class sources in a
.NET C# project.
<!--more-->
## A Walkthrough
In this blog post, we'll walk through the simplest and probably the most common
scenario of creating a library from `.proto` files using the cross-platform
`dotnet` command. We will implement essentially a clone of the `Greeter`
library, shared by client and server projects in the [C# `Helloworld` example
directory
](https://github.com/grpc/grpc/tree/master/examples/csharp/Helloworld/Greeter).
### Create a new project
Let's start by creating a new library project.
```sh
~/work$ dotnet new classlib -o MyGreeter
The template "Class library" was created successfully.
~/work$ cd MyGreeter
~/work/MyGreeter$ ls -lF
total 12
-rw-rw-r-- 1 kkm kkm 86 Nov 9 16:10 Class1.cs
-rw-rw-r-- 1 kkm kkm 145 Nov 9 16:10 MyGreeter.csproj
drwxrwxr-x 2 kkm kkm 4096 Nov 9 16:10 obj/
```
Observe that the `dotnet new` command has created the file `Class1.cs`, which
we won't need, so remove it. Also, we need some `.proto` files to compile. For
this exercise, we'll copy an example file [`examples/protos/helloworld.proto`
](https://github.com/grpc/grpc/blob/master/examples/protos/helloworld.proto)
from the gRPC distribution.
```sh
~/work/MyGreeter$ rm Class1.cs
~/work/MyGreeter$ wget -q https://raw.githubusercontent.com/grpc/grpc/master/examples/protos/helloworld.proto
```
(On Windows, use `del Class1.cs`; if you do not have the `wget` command, just
[open the above URL
](https://raw.githubusercontent.com/grpc/grpc/master/examples/protos/helloworld.proto)
and use the *Save As...* command in your Web browser.)
Next, add required NuGet packages to the project:
```sh
~/work/MyGreeter$ dotnet add package Grpc
info : PackageReference for package 'Grpc' version '1.17.0' added to file '/home/kkm/work/MyGreeter/MyGreeter.csproj'.
~/work/MyGreeter$ dotnet add package Grpc.Tools
info : PackageReference for package 'Grpc.Tools' version '1.17.0' added to file '/home/kkm/work/MyGreeter/MyGreeter.csproj'.
~/work/MyGreeter$ dotnet add package Google.Protobuf
info : PackageReference for package 'Google.Protobuf' version '3.6.1' added to file '/home/kkm/work/MyGreeter/MyGreeter.csproj'.
```
### Add `.proto` files to the project
**Next comes an important part.** First, by default, a `.csproj` project file
automatically finds all `.cs` files in its directory. However,
[Microsoft now recommends suppressing this globbing
behavior](https://docs.microsoft.com/dotnet/core/tools/csproj#recommendation),
so we decided against globbing `.proto` files too. Thus the `.proto` files must
be added to the project explicitly.
Second, it is important to add the `PrivateAssets="All"` attribute to the
Grpc.Tools package reference, so that the package is not needlessly fetched by
consumers of your new library. This makes sense, as the package only contains
compilers, code generators, and import files, which are not needed outside of
the project where the `.proto` files are compiled. While not strictly required
in this simple walkthrough, you should make this your standard practice.
So edit the file `MyGreeter.csproj` to add `helloworld.proto` so that it will
be compiled, and add the `PrivateAssets` attribute to the Grpc.Tools package
reference. Your project file should now look like this:
```xml
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Google.Protobuf" Version="3.6.1" />
<PackageReference Include="Grpc" Version="1.17.0" />
<!-- The Grpc.Tools package generates C# sources from .proto files during
project build, but is not needed by projects using the built library.
It's IMPORTANT to add the 'PrivateAssets="All"' to this reference: -->
<PackageReference Include="Grpc.Tools" Version="1.17.0" PrivateAssets="All" />
<!-- Explicitly include our helloworld.proto file by adding this line: -->
<Protobuf Include="helloworld.proto" />
</ItemGroup>
</Project>
```
### Build it!
At this point you can build the project with the `dotnet build` command to
compile the `.proto` file and the library assembly. For this walkthrough, we'll
add the logging switch `-v:n` to the command, so we can see that the command
compiling `helloworld.proto` was in fact run. It's a good idea to do this the
very first time you compile a new project!
Note that many output lines are omitted below, as the build output is quite
verbose.
```sh
~/work/MyGreeter$ dotnet build -v:n
Build started 11/9/18 5:33:44 PM.
1:7>Project "/home/kkm/work/MyGreeter/MyGreeter.csproj" on node 1 (Build target(s)).
1>_Protobuf_CoreCompile:
/home/kkm/.nuget/packages/grpc.tools/1.17.0/tools/linux_x64/protoc
--csharp_out=obj/Debug/netstandard2.0
--plugin=protoc-gen-grpc=/home/kkm/.nuget/packages/grpc.tools/1.17.0/tools/linux_x64/grpc_csharp_plugin
--grpc_out=obj/Debug/netstandard2.0 --proto_path=/home/kkm/.nuget/packages/grpc.tools/1.17.0/build/native/include
--proto_path=. --dependency_out=obj/Debug/netstandard2.0/da39a3ee5e6b4b0d_helloworld.protodep helloworld.proto
CoreCompile:
[ ... skipping long output ... ]
MyGreeter -> /home/kkm/work/MyGreeter/bin/Debug/netstandard2.0/MyGreeter.dll
Build succeeded.
```
If at this point you invoke the `dotnet build -v:n` command again, `protoc`
will not be invoked, and no C# sources will be compiled. But if you change the
`helloworld.proto` source, its outputs will be regenerated and then recompiled
by the C# compiler during the build. This is the regular dependency-tracking
behavior you expect when modifying any source file.
Of course, you can also add `.cs` files to the same project: It is a regular C#
project building a .NET library, after all. This is done in our [RouteGuide
](https://github.com/grpc/grpc/tree/master/examples/csharp/RouteGuide/RouteGuide)
example.
### Where are the generated files?
You may wonder where the proto compiler and gRPC plugin place the generated C#
files. By default, they go into the same directory as other generated files,
such as objects (termed the "intermediate output" directory in .NET build
parlance), under the `obj/` directory. This is regular practice in .NET builds,
so that autogenerated files do not clutter the working directory or
accidentally end up under source control. They remain accessible to tools such
as the debugger, however. You can see other autogenerated sources in that
directory, too:
```sh
~/work/MyGreeter$ find obj -name '*.cs'
obj/Debug/netstandard2.0/MyGreeter.AssemblyInfo.cs
obj/Debug/netstandard2.0/Helloworld.cs
obj/Debug/netstandard2.0/HelloworldGrpc.cs
```
(use `dir /s obj\*.cs` if you are following this walkthrough from a Windows
command prompt).
## There Is More To It
While the simplest default behavior is adequate in many cases, there are many
ways to fine-tune your `.proto` compilation process in a large project. We
encourage you to read the [documentation file BUILD-INTEGRATION.md
](https://github.com/grpc/grpc/blob/master/src/csharp/BUILD-INTEGRATION.md)
for available options if you find that the default arrangement does not suit
your workflow. The package also extends Visual Studio's Properties window,
so you may set some options per file in the Visual Studio interface.
"Classic" `.csproj` projects and Mono are also supported.
## Share Your Experience
As with any initial release of a complex feature, we are thrilled to receive
your feedback. Did something not work as expected? Do you have a scenario that
is not easy to cover with the new tools? Do you have an idea how to improve the
workflow in general? Please read the documentation carefully, and then [open an
issue](https://github.com/grpc/grpc/issues) in the gRPC code repository on
GitHub. Your feedback is important to determine the future direction for our
build integration work!

---
author: Johan Brandhorst
author-link: https://jbrandhorst.com/
date: "2019-01-08T00:00:00Z"
published: true
title: The state of gRPC in the browser
url: blog/state-of-grpc-web
---
_This is a guest post by_
_[Johan Brandhorst](https://jbrandhorst.com), Software Engineer at_
_[InfoSum](https://www.infosum.com)._
gRPC 1.0 was released in August 2016 and has since grown to become one of the
premier technical solutions for application communications. It has been adopted
by startups, enterprise companies, and open source projects worldwide.
Its support for polyglot environments, its focus on performance, type safety,
and developer productivity have transformed the way developers design their
architectures.
So far the benefits have largely only been available to mobile
app and backend developers, whilst frontend developers have had to continue to
rely on JSON REST interfaces as their primary means of information exchange.
However, with the release of gRPC-Web, gRPC is poised to become a valuable
addition in the toolbox of frontend developers.
In this post, I'll describe some of the history of gRPC in the browser, explore
the state of the world today, and share some thoughts on the future.
<!--more-->
# Beginnings
In the summer of 2016, both a team at Google and
Improbable<sup id="a1">[1](#f1)</sup> independently started working on
implementing something that could be called "gRPC for the browser". They soon
discovered each other's existence and got together to define a
spec<sup id="a2">[2](#f2)</sup> for the new protocol.
## The gRPC-Web Spec
It is currently impossible to implement the HTTP/2 gRPC
spec<sup id="a3">[3](#f3)</sup> in the browser, as there is simply no browser
API with enough fine-grained control over the requests. For example: there is
no way to force the use of HTTP/2, and even if there was, raw HTTP/2 frames are
inaccessible in browsers. The gRPC-Web spec starts from the point of view of the
HTTP/2 spec, and then defines the differences. These notably include:
- Supporting both HTTP/1.1 and HTTP/2.
- Sending of gRPC trailers at the very end of request/response bodies as
indicated by a new bit in the gRPC message header<sup id="a4">[4](#f4)</sup>.
- A mandatory proxy for translating between gRPC-Web requests and gRPC HTTP/2
responses.
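The trailer mechanism in the second bullet can be sketched directly from the spec: trailers arrive as one final length-prefixed frame whose flag byte has its most-significant bit (0x80) set, with the trailers themselves encoded as HTTP/1-style `key: value` lines. A rough Python illustration of just that framing (not a real client; the message bytes are placeholders):

```python
import struct

TRAILER_FLAG = 0x80  # MSB set marks a frame as trailers rather than message data

def frame(flags: int, payload: bytes) -> bytes:
    # gRPC-Web keeps the usual 5-byte prefix: flags byte + 4-byte big-endian length
    return struct.pack(">BI", flags, len(payload)) + payload

def web_response_body(message: bytes, trailers: dict[str, str]) -> bytes:
    """Lay out a gRPC-Web response body: data frame(s), then one trailer frame."""
    trailer_block = "".join(f"{k}: {v}\r\n" for k, v in trailers.items()).encode()
    return frame(0x00, message) + frame(TRAILER_FLAG, trailer_block)

body = web_response_body(b"\x0a\x02hi", {"grpc-status": "0"})
assert body.endswith(frame(TRAILER_FLAG, b"grpc-status: 0\r\n"))
```

Because the trailers are just the last frame of the body, a browser client can read them with Fetch or XHR, with no access to HTTP/2 trailers required.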
## The Tech
The basic idea is to have the browser send normal HTTP requests (with Fetch or
XHR) and have a small proxy in front of the gRPC server to translate the
requests and responses to something the browser can use.
<p><img src="/img/grpc-web-proxy.png"
alt="The role of the gRPC-Web proxy" style="max-width: 800px" /></p>
# The Two Implementations
The teams at Google and Improbable both went on to implement the spec in two
different repositories<sup id="a5">[5](#f5),</sup><sup id="a6">[6](#f6)</sup>,
and with slightly different implementations, such that neither was entirely
conformant to the spec, and for a long time neither was compatible with the
other's proxy<sup id="a7">[7](#f7),</sup><sup id="a8">[8](#f8)</sup>.
The Improbable gRPC-Web client<sup id="a9">[9](#f9)</sup> is implemented in
TypeScript and available on npm as `@improbable-eng/grpc-web`<sup id="a10">[10](#f10)</sup>.
There is also a Go proxy available, both as a package that can be imported into
existing Go gRPC servers<sup id="a11">[11](#f11)</sup>, and as a standalone
proxy that can be used to expose an arbitrary gRPC server to a gRPC-Web
frontend<sup id="a12">[12](#f12)</sup>.
The Google gRPC-Web client<sup id="a13">[13](#f13)</sup> is implemented in
JavaScript using the Google Closure library<sup id="a14">[14](#f14)</sup> base.
It is available on npm as `grpc-web`<sup id="a15">[15](#f15)</sup>. It originally
shipped with a proxy implemented as an NGINX
extension<sup id="a16">[16](#f16)</sup>, but has since doubled down on an Envoy
proxy HTTP filter<sup id="a17">[17](#f17)</sup>, which is available in all
versions since v1.4.0.
## Feature Sets
The gRPC HTTP/2 implementations all support the four method types: unary,
server-side, client-side, and bi-directional streaming. However, the gRPC-Web
spec does not mandate any client-side or bi-directional streaming support
specifically, only that it will be implemented once WHATWG
Streams<sup id="a18">[18](#f18)</sup> are implemented in browsers.
The Google client supports unary and server-side streaming, but only when used
with the `grpcwebtext` mode. Only unary requests are fully supported in the
`grpcweb` mode. These two modes specify different ways to encode the protobuf
payload in the requests and responses.
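The practical difference between the two modes is the body encoding: `grpcweb` (content-type `application/grpc-web+proto`) sends the binary frames as-is, while `grpcwebtext` (`application/grpc-web-text`) base64-encodes the body. A rough sketch of the relationship between the two encodings (the payload bytes are placeholders, not real protobuf output):

```python
import base64
import struct

def frame(payload: bytes, flags: int = 0) -> bytes:
    # Standard 5-byte gRPC message prefix: flags byte + big-endian length
    return struct.pack(">BI", flags, len(payload)) + payload

binary_body = frame(b"\x0a\x02hi")         # application/grpc-web+proto: raw bytes
text_body = base64.b64encode(binary_body)  # application/grpc-web-text: base64 of the same

assert base64.b64decode(text_body) == binary_body
```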
The Improbable client supports both unary and server-side streaming, and has an
implementation that automatically chooses between XHR and Fetch based on the
browser capabilities.
Here's a table that summarizes the different features supported:
| Client / Feature | Transport | Unary | Server-side streams | Client-side & bi-directional streaming |
| ---------------------- | ------------ | ----- | -------------------------------- | -------------------------------------- |
| Improbable | Fetch/XHR | ✔️ | ✔️ | ❌<sup id="a19">[19](#f19)</sup> |
| Google (`grpcwebtext`) | XHR | ✔️ | ✔️ | ❌ |
| Google (`grpcweb`) | XHR | ✔️ | ❌<sup id="a20">[20](#f20)</sup> | ❌ |
For more information on this table, please see
[my compatibility test repo on GitHub](https://github.com/johanbrandhorst/grpc-web-compatibility-test).
The compatibility tests may evolve into some automated test framework to enforce
and document the various compatibilities in the future.
## Compatibility Issues
Of course, with two different proxies also come compatibility issues.
Fortunately, these have recently been ironed out, so you can expect to use
either client with either proxy.
# The Future
The Google implementation announced version 1.0 and general availability in
October 2018<sup id="a21">[21](#f21)</sup> and has published a roadmap of future
goals<sup id="a22">[22](#f22)</sup>, including:
- An efficient JSON-like message encoding
- In-process proxies for Node, Python, Java and more
- Integration with popular frameworks (React, Angular, Vue)
- Fetch API transport for memory efficient streaming
- Bi-directional streaming support
Google is looking for feedback on what features are important to the community,
so if you think any of these are particularly valuable to you, then please fill
in their survey<sup id="a23">[23](#f23)</sup>.
In recent talks, the two projects agreed to promote the Google client and
Envoy proxy as the preferred solutions for new users. The Improbable client and
proxy will remain as alternative implementations of the spec without the
Google Closure dependency, but should be considered experimental. A migration
guide will be produced for existing users to move to the Google client, and the
teams are working together to converge the generated APIs.
# Conclusion
The Google client will continue to have new features and fixes implemented at a
steady pace, with a team dedicated to its success, as it is the official gRPC
client. It doesn't have Fetch API support like the Improbable client, but if
this is an important feature for the community, it will be added. The Google
team and the greater community are collaborating on the official client to the
benefit of the gRPC community at large. Since the GA announcement, community
contributions to the Google gRPC-Web repo have increased dramatically.
When choosing between the two proxies, there's no difference in capability, so
it becomes a matter of your deployment model. Envoy will suit some
scenarios, while an in-process Go proxy has its own advantages.
If you're getting started with gRPC-Web today, first try the Google client. It
has strict API compatibility guarantees and is built on the rock-solid Google
Closure library base used by Gmail and Google Maps. If you _need_ Fetch API
memory efficiency or experimental websocket client-side and bi-directional
streaming, the Improbable client is a good choice, and it will continue to be
used and maintained by Improbable for the foreseeable future.
Either way, gRPC-Web is an excellent choice for web developers. It brings the
portability, performance, and engineering of a sophisticated protocol into the
browser, and marks an exciting time for frontend developers!
## References
1. <div id="f1"></div> <a href="https://improbable.io/games/blog/grpc-web-moving-past-restjson-towards-type-safe-web-apis">https://improbable.io/games/blog/grpc-web-moving-past-restjson-towards-type-safe-web-apis</a> [](#a1)
2. <div id="f2"></div> <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md">https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md</a> [](#a2)
3. <div id="f3"></div> <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md">https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md</a> [](#a3)
4. <div id="f4"></div> <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md#protocol-differences-vs-grpc-over-http2">https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md#protocol-differences-vs-grpc-over-http2</a> [](#a4)
5. <div id="f5"></div> <a href="https://github.com/improbable-eng/grpc-web">https://github.com/improbable-eng/grpc-web</a> [](#a5)
6. <div id="f6"></div> <a href="https://github.com/grpc/grpc-web">https://github.com/grpc/grpc-web</a> [](#a6)
7. <div id="f7"></div> <a href="https://github.com/improbable-eng/grpc-web/issues/162">https://github.com/improbable-eng/grpc-web/issues/162</a> [](#a7)
8. <div id="f8"></div> <a href="https://github.com/grpc/grpc-web/issues/91">https://github.com/grpc/grpc-web/issues/91</a> [](#a8)
9. <div id="f9"></div> <a href="https://github.com/improbable-eng/grpc-web/tree/master/ts">https://github.com/improbable-eng/grpc-web/tree/master/ts</a> [](#a9)
10. <div id="f10"></div> <a href="https://www.npmjs.com/package/@improbable-eng/grpc-web">https://www.npmjs.com/package/@improbable-eng/grpc-web</a> [](#a10)
11. <div id="f11"></div> <a href="https://github.com/improbable-eng/grpc-web/tree/master/go/grpcweb">https://github.com/improbable-eng/grpc-web/tree/master/go/grpcweb</a> [](#a11)
12. <div id="f12"></div> <a href="https://github.com/improbable-eng/grpc-web/tree/master/go/grpcwebproxy">https://github.com/improbable-eng/grpc-web/tree/master/go/grpcwebproxy</a> [](#a12)
13. <div id="f13"></div> <a href="https://github.com/grpc/grpc-web/tree/master/javascript/net/grpc/web">https://github.com/grpc/grpc-web/tree/master/javascript/net/grpc/web</a> [](#a13)
14. <div id="f14"></div> <a href="https://developers.google.com/closure/">https://developers.google.com/closure/</a> [](#a14)
15. <div id="f15"></div> <a href="https://www.npmjs.com/package/grpc-web">https://www.npmjs.com/package/grpc-web</a> [](#a15)
16. <div id="f16"></div> <a href="https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway">https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway</a> [](#a16)
17. <div id="f17"></div> <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_web_filter">https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_web_filter</a> [](#a17)
18. <div id="f18"></div> <a href="https://streams.spec.whatwg.org/">https://streams.spec.whatwg.org/</a> [](#a18)
19. <div id="f19"></div>The Improbable client supports client-side and
bi-directional streaming with an experimental websocket transport. This is
not part of the gRPC-Web spec, and is not recommended for production use. [](#a19)
20. <div id="f20"></div>`grpcweb` allows server streaming methods to be called, but
it doesn't return data until the stream has closed. [](#a20)
21. <div id="f21"></div> <a href="https://grpc.io/blog/grpc-web-ga">https://grpc.io/blog/grpc-web-ga</a> [](#a21)
22. <div id="f22"></div> <a href="https://github.com/grpc/grpc-web/blob/master/ROADMAP.md">https://github.com/grpc/grpc-web/blob/master/ROADMAP.md</a> [](#a22)
23. <div id="f23"></div> <a href="https://docs.google.com/forms/d/1NjWpyRviohn5jaPntosBHXRXZYkh_Ffi4GxJZFibylM">https://docs.google.com/forms/d/1NjWpyRviohn5jaPntosBHXRXZYkh_Ffi4GxJZFibylM</a> [](#a23)

---
author: April Nassi
author-link: https://www.thisisnotapril.com/
date: "2019-03-08T00:00:00Z"
published: true
title: Dear gRPC
url: blog/hello-pancakes
---
Dear gRPC,
We messed up. We are so sorry that we missed your birthday this year. Last year we celebrated [with cake](https://twitter.com/grpcio/status/968618209803931648) and fanfare, but this year we dropped the ball. Please don't think that we love you any less.
You're 4 now, and that's a big milestone! You're part of so much amazing technology at companies like Salesforce, Netflix, Spotify, Fanatics, and of course, Google. In fact, just this week, the [biggest API Google has](https://ads-developers.googleblog.com/2019/03/upgrade-to-new-google-ads-api-to-get.html) went production-ready with gRPC.
We're proud of you, gRPC, and we're going to make this up to you. For starters - we got you a puppy! He's an adorable **G**olden **R**etriever and his name is **P**an**C**akes. He loves to run back and forth with toys, packets, or messages. He's super active and no matter how much we train him, we just can't get him to REST. PanCakes is going to be your best friend, and ambassador.
<img src="https://raw.githubusercontent.com/grpc/grpc-community/master/PanCakes/Pancakes_Birthday.png" alt="gRPC Mascot PanCakes" style="max-width: 547px">
Even though it's a bit late, we still want to throw you a party, gRPC. Our friends at CNCF have planned a [big event](https://events.linuxfoundation.org/events/grpconf-2019/) for you on March 21, and there's going to be lots of people there! They'll be sharing stories about the cool things they've built, and meeting new people. It's an entire day all about you, and everyone there is going to learn so much. There will be other puppies who can play with PanCakes! Some of the amazing dogs from [Canine Companions for Independence](http://www.cci.org/) will be there to greet conference attendees and share how they help their humans live a more independent life.
We are so excited to see what this year holds for you, gRPC!
~ gRPC Maintainers
<img src="https://raw.githubusercontent.com/grpc/grpc-community/master/PanCakes/Pancakes_Birthday_4.png" alt="gRPC Mascot PanCakes" style="max-width: 547px">

---
title: Community
---
<div class="container">
<div class="row">
<p class="lead" style="margin-top:2%">gRPC has an active community of developers who are using, enhancing and building valuable integrations with other software projects. Wed love your help to improve and extend the project. You can reach us via <a href="https://groups.google.com/forum/#!forum/grpc-io">Mailing list</a>, <a href="https://gitter.im/grpc/grpc"> Gitter channel</a>, <a href="https://twitter.com/grpcio">Twitter</a> to start engaging with the project and its members.
</p>
<section class="community-section">
<h4 class="community-contribute community-title">Contribute on Github</h4>
<p>gRPC has an active community of developers who are using, enhancing and building valuable integrations with other software projects. We are always looking for active contributors in gRPC and gRPC Ecosystem. Here are a few areas where we would love community contribution in grpc project. Be sure to follow our <a href="/contribute/">community addition guidelines</a>.</p>
<div class="contribute-wrapper">
<div class="contribute-item">
<div class="item-content">
<a class="item-box" href="https://github.com/grpc/grpc/labels/disposition%2Fhelp%20wanted">
<p class="item-link">gRPC C-based</p>
</a>
<p class="item-desc">Or shortcut to: <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fcore+label%3A%22disposition%2Fhelp+wanted%22">C</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2Fc%2B%2B">C++</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2Fnode">Node.js</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2FPython">Python</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2Fruby">Ruby</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2FObjC">Objective-C</a>, <a href="https://github.com/grpc/grpc/issues?q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2Fphp">PHP</a>, and <a href="https://github.com/grpc/grpc/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+label%3A%22disposition%2Fhelp+wanted%22+label%3Alang%2Fc%23+">C#</a>
</p>
</div>
</div>
<div class="contribute-item">
<div class="item-content">
<a class="item-box" href="https://github.com/grpc/grpc-java/labels/help%20wanted">
<p class="item-link">gRPC Java</p>
</a>
<p class="item-desc">For Android Java and Java.</p>
</div>
</div>
<div class="contribute-item">
<div class="item-content">
<a class="item-box" href="https://github.com/grpc/grpc-go/labels/Status:%20help%20wanted">
<p class="item-link">gRPC Go</p>
</a>
<p class="item-desc">For the Go implementation</p>
</div>
</div>
</div>
<a href="/contribute/"><span>More on how to contribute to gRPC Documentation ></span></a>
</section>
<section class="community-section">
<h4 class="community-mailing community-title">Mailing List</h4>
<p>Any questions or suggestions? Just want to stay in the loop on what is going on with the project? Join the <a href="https://groups.google.com/forum/#!forum/grpc-io">mailing list</a>.</p>
</section>
<section class="community-section">
<h4 class="community-irc community-title">Join gRPC Ecosystem</h4>
<p>We have an organization for all valuable projects around gRPC: the <a href="https://github.com/grpc-ecosystem">gRPC Ecosystem</a>. The goal is for all projects around gRPC (showing integrations with other projects or building utilities on top of gRPC) to be showcased there. If you have a new project you would like to add to the gRPC Ecosystem, please fill out the <a href="https://docs.google.com/a/google.com/forms/d/119zb79XRovQYafE9XKjz9sstwynCWcMpoJwHgZJvK74/edit">gRPC Ecosystem Project Request</a> form. Please read the <a href="https://github.com/grpc/grpc-contrib/blob/master/CONTRIBUTING.md">contribution guidelines</a> for the gRPC Ecosystem before submitting.</p>
</section>
<section class="community-section" style="width:100%;margin-bottom:5%;">
<h4 class="community-irc community-title">Gitter Channel</h4>
<p>Join other developers and users on the <a href="https://gitter.im/grpc/grpc">gRPC Gitter channel</a></p>
</section>
<section class="community-section">
<h4 style="margin-bottom:15%" class="community-reddit community-title">Reddit</h4>
<p style="margin-top:5%">Join the <a href="https://www.reddit.com/r/grpc/">subreddit</a></p>
</section>
<section class="community-section">
<h4 class="community-meetings community-title">Community Meetings</h4>
<p class="community-text">We hold a community video conference every other week. It's a way to discuss the status of work and show off things the community is working on. Meeting information and notes can be found at <a href="https://bit.ly/grpcmeetings">bit.ly/grpcmeetings</a>.</p>
</section>
</div>
</div>

---
title: Contribute
---
<div class="container markdown">
<div class="row">
To contribute to gRPC documentation, please fork the&nbsp;<a href="https://github.com/grpc/grpc.github.io">gRPC GitHub repository</a>&nbsp;and start submitting pull requests.
<div id="MainRepoInstructions">
<h2>Contribution guidelines for gRPC</h2>
<p>We welcome contributions to any of our three core repositories: <a href="https://github.com/grpc/grpc">gRPC</a>, <a href="https://github.com/grpc/grpc-java">gRPC Java</a>, and <a href="https://github.com/grpc/grpc-go">gRPC Go</a>.</p>
<button class="btn btn-grpc waves-effect waves-light"><a href="https://github.com/grpc/grpc/blob/master/CONTRIBUTING.md">View Guidelines</a></button>
</div>
<div id="EcosystemInstructions">
<h2>Contribution guidelines for gRPC Ecosystem</h2>
<p><a href="https://github.com/grpc-ecosystem/">gRPC Ecosystem</a> is a different organization where we collect and curate valuable integrations of other projects with gRPC. You can propose a new project for it by filling up the <a hre="https://docs.google.com/a/google.com/forms/d/119zb79XRovQYafE9XKjz9sstwynCWcMpoJwHgZJvK74/edit">Propose new project form</a>.</p>
<button class="btn btn-grpc waves-effect waves-light"><a href="https://github.com/grpc/grpc-contrib/blob/master/CONTRIBUTING.md">View Guidelines</a></button>
<div id="generalInstructions">
<h2>Edit our site on GitHub</h2>
<p>Click the button below to visit the repository for our site. You can then click the "Fork" button in the upper-right area of the screen to create a copy of the site, called a "fork", on your GitHub account. Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.</p>
<button class="btn btn-grpc waves-effect waves-light"><a href="https://github.com/grpc/grpc.github.io">Browse this site's source code</a></button>
</div>
<div id="githubOrganization">
<h2>Being a member of the gRPC organization on GitHub</h2>
<p>Organization membership is not required for the vast majority of contributions. Membership is required for certain administrative tasks, such as accepting a pull request or closing issues. If you wish to be part of the gRPC organization on GitHub, please <a href="https://grpc.io/community/">get in touch with us</a>. Please note that in order to be part of the organization, your GitHub account needs to have <a href="https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/">two-factor authentication enabled</a>.</p>
</div>
</div>
</div>
</div>
</div>

content/docs/_index.html
---
title: "Documentation"
date: 2018-09-11T15:45:50+07:00
draft: false
---
<div class="docssection2">
Welcome to the developer documentation for gRPC.
Here you can learn about key gRPC concepts, find quick starts, reference
material, and tutorials for all our supported languages, and more. If you're new to gRPC, we recommend that you read <a href="guides/"><b>What is
gRPC?</b></a> to find out more about our system and how it works. Or, if you want to see gRPC in action first, visit the <a href="quickstart/">QuickStart</a> for your favourite language.
</div>
<div class="doclangsection">
<h4 style="text-align:center;"> gRPC by Language</h4>
<div class="doccols">
<div class="docscol1">
<h3>C++</h3>
<a href="/docs/quickstart/cpp/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/c/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/cpp/index.html">API Reference</a>
<br>
<h3 style="margin-top:3%;">Go</h3>
<a href="/docs/quickstart/go/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/go/">gRPC Basics Tutorial</a><br>
<a href="https://godoc.org/google.golang.org/grpc">API Reference</a><br>
<a href="/docs/reference/go/generated-code/">Generated Code Reference</a>
<br>
<h3 style="margin-top:3%;">Node.js</h3>
<a href="/docs/quickstart/node/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/node/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/node/">API Reference</a>
<br>
<h3 style="margin-top:3%;">PHP</h3>
<a href="/docs/quickstart/php/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/php/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/php/namespace-Grpc.html">API Reference</a>
</div>
<div class="docscol2">
<h3>Java</h3>
<a href="/docs/quickstart/java/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/java/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc-java/javadoc/index.html">API Reference</a><br>
<a href="/docs/reference/java/generated-code/">Generated Code Reference</a>
<br>
<h3 style="margin-top:3%;">Ruby</h3>
<a href="/docs/quickstart/ruby/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/ruby/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/ruby/">API Reference</a>
<br>
<h3 style="margin-top:3%;">Android Java</h3>
<a href="/docs/quickstart/android/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/android/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc-java/javadoc/index.html">API Reference</a><br>
<a href="/docs/reference/java/generated-code/">Generated Code Reference</a>
<br>
<h3 style="margin-top:3%;">Dart</h3>
<a href="/docs/quickstart/dart/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/dart/">gRPC Basics Tutorial</a><br>
</div>
<div class="docscol3">
<h3>Python</h3>
<a href="/docs/quickstart/python/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/python/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/python/">API Reference</a><br>
<a href="/docs/reference/python/generated-code/">Generated Code Reference</a>
<br>
<h3 style="margin-top:3%;">C#</h3>
<a href="/docs/quickstart/csharp/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/csharp/">gRPC Basics Tutorial</a><br>
<a href="https://grpc.io/grpc/csharp/api/Grpc.Core.html">API Reference</a>
<br>
<h3 style="margin-top:3%;">Objective-C</h3>
<a href="/docs/quickstart/objective-c/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/objective-c/">gRPC Basics Tutorial</a><br>
<a href="http://cocoadocs.org/docsets/gRPC/">API Reference</a>
<br>
<h3 style="margin-top:3%;">Web</h3>
<a href="/docs/quickstart/web/">Quick Start Guide</a><br>
<a href="/docs/tutorials/basic/web/">gRPC Basics Tutorial</a><br>
</div>
</div>
</div>
<div class="docusecasesection">
<h4 style="text-align:center;">gRPC by Use Cases</h4>
<div class="doccols" style="background-color:white;margin-top:0px">
<div class="docstext">
gRPC is used in the last mile of computing in mobile and web clients, since it can generate libraries for iOS and Android and uses standards-based HTTP/2 as its transport, allowing it to easily traverse proxies and firewalls. There is also work underway to develop a JS library for use in browsers. Beyond that, it is ideal as a microservices interconnect, not just because the core protocol is very efficient but also because the framework has pluggable authentication, load balancing, and more. Google itself is also transitioning to using it to connect microservices.
</div>
<div class="docscol1">
<h3 style="padding-bottom:5%;">Last Mile of Computing</h3>
<p><a href="/docs/tutorials/basic/android/">Mobile: Example RouteGuide Client on Android</a></p>
<p><a href="https://github.com/grpc/grpc-web">Web: Browser Client</a></p>
<br>
</div>
<div class="docscol2" >
<h3 style="padding-bottom:5%;">APIs</h3>
<p>
<a href="https://github.com/GoogleCloudPlatform/cloud-bigtable-client">Google Cloud BigTable Client APIs</a></p>
<p><a href="https://cloud.google.com/blog/big-data/2016/03/announcing-grpc-alpha-for-google-cloud-pubsub">Google Cloud PubSub APIs</a></p>
<p><a href="https://cloud.google.com/speech/reference/rpc/">Google Cloud Speech APIs</a></p>
<br>
</div>
<div class="docscol3">
<h3 style="padding-bottom:5%;">Microservices</h3>
<p><a href="https://github.com/dinowernli/java-grpc-prometheus">Monitoring gRPC services using Prometheus in Java</a></p>
<p><a href="https://github.com/mwitkow/go-grpc-prometheus">Monitoring gRPC services using Prometheus in Go</a></p>
<p><a href="https://github.com/grpc/grpc/blob/master/doc/load-balancing.md">Load Balancing in gRPC</a></p>
<br>
</div>
</div>
</div>
<div class="docprojectsection">
<h4 style="text-align:center;">gRPC in Other Projects</h4>
<div class="doccols">
<div class="docstext">
gRPC now has a vibrant community of companies, projects, and developers who are extending and building around it. Here are some popular projects.
</div>
<div class="docscol1">
<h3 style="padding-bottom:5%;">Popular Projects</h3>
<a href="https://github.com/grpc-ecosystem/grpc-gateway" style="display:block">gRPC Gateway</a>
<a href="https://github.com/grpc-ecosystem/polyglot" style="display:block">Polyglot: gRPC command line client</a>
<a href="https://github.com/google/flatbuffers/tree/master/grpc" style="display:block">FlatBuffer for gRPC</a>
<a href="https://github.com/go-kit/kit/tree/master/transport/grpc" style="display:block">Go kit with gRPC transport</a>
<a href="https://github.com/etcd-io/etcd" style="display:block">etcd</a>
<a href="https://github.com/cockroachdb/cockroach" style="display:block">CockroachDB</a>
<a href="https://github.com/openzipkin/brave/tree/master/archive/brave-grpc" style="display:block">Zipkin for gRPC</a>
<a href="https://github.com/pingcap/tidb" style="display:block">TiDB</a>
<a href="https://github.com/apache/bookkeeper" style="display:block">Apache BookKeeper</a>
<br>
</div>
</div>
</div>


@ -0,0 +1,107 @@
---
layout: guides
title: Guides
---
This document introduces you to gRPC and protocol buffers. gRPC can use
protocol buffers as both its Interface Definition Language (IDL) and as its underlying message
interchange format. If you're new to gRPC and/or protocol buffers, read this!
If you just want to dive in and see gRPC in action first,
see our [Quick Starts](../quickstart).
<div id="toc" class="toc mobile-toc"></div>
### Overview
In gRPC, a client application can directly call methods on a server application on a different machine as if it were a local object, making it easier for you to create distributed applications and services. As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub (referred to as just a client in some languages) that provides the same methods as the server.
![Concept Diagram](../../img/landing-2.svg)
gRPC clients and servers can run and talk to each other in a variety of environments - from servers inside Google to your own desktop - and can be written in any of gRPC's supported languages. So, for example, you can easily create a gRPC server in Java with clients in Go, Python, or Ruby. In addition, the latest Google APIs will have gRPC versions of their interfaces, letting you easily build Google functionality into your applications.
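The stub/server relationship described above can be sketched in a few lines of Python (an in-process illustration only, not the gRPC runtime; the class and method names here are invented for the example):

```python
# Toy sketch of the stub/servicer split: the stub exposes the same methods
# as the server and forwards calls to it (in real gRPC, over a channel).
class GreeterServicer:
    """Server-side implementation of the service interface."""
    def say_hello(self, name):
        return f"Hello, {name}!"

class GreeterStub:
    """Client-side stub exposing the same methods as the server."""
    def __init__(self, servicer):
        self._servicer = servicer  # stands in for the network channel
    def say_hello(self, name):
        return self._servicer.say_hello(name)

stub = GreeterStub(GreeterServicer())
assert stub.say_hello("world") == "Hello, world!"
```

In a real gRPC application the stub would serialize the request and send it over an HTTP/2 channel rather than calling the servicer directly.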
### Working with Protocol Buffers
By default, gRPC uses [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview), Google's
mature open source mechanism for serializing structured data (although it
can be used with other data formats such as JSON). Here's a quick intro to how
it works. If you're already familiar with protocol buffers, feel free to skip
ahead to the next section.
The first step when working with protocol buffers is to define the structure
for the data you want to serialize in a *proto file*: this is an ordinary text
file with a `.proto` extension. Protocol buffer data is structured as
*messages*, where each message is a small logical record of information
containing a series of name-value pairs called *fields*. Here's a simple
example:
```proto
message Person {
string name = 1;
int32 id = 2;
bool has_ponycopter = 3;
}
```
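To make the numbered fields concrete, here is a small sketch (a detail beyond this guide, but part of the standard protocol buffers wire format) of how a field number and wire type combine into the key byte that precedes each field on the wire:

```python
def field_key(field_number: int, wire_type: int) -> int:
    # Protocol buffers precede each serialized field with a key:
    # (field_number << 3) | wire_type. Wire type 0 = varint (int32, bool),
    # wire type 2 = length-delimited (string, bytes).
    return (field_number << 3) | wire_type

# `string name = 1;` -> key byte 0x0A; `int32 id = 2;` -> key byte 0x10;
# `bool has_ponycopter = 3;` -> key byte 0x18.
assert field_key(1, 2) == 0x0A
assert field_key(2, 0) == 0x10
assert field_key(3, 0) == 0x18
```

This is also why the field *numbers*, rather than the field names, are what must stay stable across versions of a message.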
Then, once you've specified your data structures, you use the protocol buffer
compiler `protoc` to generate data access classes in your preferred language(s)
from your proto definition. These provide simple accessors for each field
(like `name()` and `set_name()`) as well as methods to serialize/parse
the whole structure to/from raw bytes so, for instance, if your chosen
language is C++, running the compiler on the above example will generate a
class called `Person`. You can then use this class in your application to
populate, serialize, and retrieve Person protocol buffer messages.
As you'll see in more detail in our examples, you define gRPC services
in ordinary proto files, with RPC method parameters and return types specified as
protocol buffer messages:
```proto
// The greeter service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
gRPC also uses `protoc` with a special gRPC plugin to
generate code from your proto file. However, with the gRPC plugin, you get
generated gRPC client and server code, as well as the regular protocol buffer
code for populating, serializing, and retrieving your message types. We'll
look at this example in more detail below.
You can find out lots more about protocol buffers in the [Protocol Buffers
documentation](https://developers.google.com/protocol-buffers/docs/overview),
and find out how to get and install `protoc` with gRPC plugins in your chosen
language's Quickstart.
#### Protocol buffer versions
While protocol buffers have been available for open source users for some
time, our examples use a new flavor of protocol buffers called proto3, which
has a slightly simplified syntax, some useful new features, and supports
lots more languages. This is currently available in Java, C++, Python,
Objective-C, C#, a lite-runtime (Android Java), Ruby, and JavaScript from the
[protocol buffers GitHub repo](https://github.com/google/protobuf/releases),
as well as a Go language generator from the [golang/protobuf GitHub
repo](https://github.com/golang/protobuf), with more languages
in development. You can find out more in the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and the
[reference documentation](https://developers.google.com/protocol-buffers/docs/reference/overview)
available for each language. The reference documentation also includes a
[formal specification](https://developers.google.com/protocol-buffers/docs/reference/proto3-spec)
for the `.proto` file format.
In general, while you can use proto2 (the current default protocol buffers
version), we recommend that you use proto3 with gRPC as it lets you use the
full range of gRPC-supported languages, as well as avoiding compatibility
issues with proto2 clients talking to proto3 servers and vice versa.

628
content/docs/guides/auth.md Normal file

@ -0,0 +1,628 @@
---
layout: guides
title: Authentication
aliases: [/docs/guides/auth.html]
---
<p class="lead">This document provides an overview of gRPC authentication,
including our built-in supported auth mechanisms, how to plug in your own
authentication systems, and examples of how to use gRPC auth in our supported
languages.</p>
<div id="toc" class="toc mobile-toc"></div>
### Overview
gRPC is designed to work with a variety of authentication mechanisms, making it
easy to safely use gRPC to talk to other systems. You can use our supported
mechanisms - SSL/TLS with or without Google token-based authentication - or you
can plug in your own authentication system by extending our provided code.
gRPC also provides a simple authentication API that lets you provide all the
necessary authentication information as `Credentials` when creating a channel or
making a call.
### Supported auth mechanisms
The following authentication mechanisms are built into gRPC:
- **SSL/TLS**: gRPC has SSL/TLS integration and promotes the use of SSL/TLS
to authenticate the server, and to encrypt all the data exchanged between
the client and the server. Optional mechanisms are available for clients to
provide certificates for mutual authentication.
- **Token-based authentication with Google**: gRPC provides a generic
mechanism (described below) to attach metadata based credentials to requests
and responses. Additional support for acquiring access tokens
(typically OAuth2 tokens) while accessing Google APIs through gRPC is
provided for certain auth flows: you can see how this works in our code
examples below. In general this mechanism must be used *as well as* SSL/TLS
on the channel - Google will not allow connections without SSL/TLS, and
most gRPC language implementations will not let you send credentials on an
unencrypted channel.
<p class="note"> <strong>WARNING</strong>: Google credentials should only
be used to connect to Google services. Sending a Google-issued OAuth2 token
to a non-Google service could result in this token being stolen and used to
impersonate the client to Google services.</p>
### Authentication API
gRPC provides a simple authentication API based around the unified concept of
Credentials objects, which can be used when creating an entire gRPC channel or
an individual call.
#### Credential types
Credentials can be of two types:
- **Channel credentials**, which are attached to a `Channel`, such as SSL
credentials.
- **Call credentials**, which are attached to a call (or `ClientContext` in
C++).
You can also combine these in a `CompositeChannelCredentials`, allowing you to
specify, for example, SSL details for the channel along with call credentials
for each call made on the channel. A `CompositeChannelCredentials` associates a
`ChannelCredentials` and a `CallCredentials` to create a new
`ChannelCredentials`. The result will send the authentication data associated
with the composed `CallCredentials` with every call made on the channel.
For example, you could create a `ChannelCredentials` from an `SslCredentials`
and an `AccessTokenCredentials`. The result when applied to a `Channel` would
send the appropriate access token for each call on this channel.
Individual `CallCredentials` can also be composed using
`CompositeCallCredentials`. The resulting `CallCredentials` when used in a call
will trigger the sending of the authentication data associated with the two
`CallCredentials`.
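The composition behavior described above can be modeled in a few lines of plain Python (a simplified sketch only, not the real gRPC API; the class and function names are invented for illustration):

```python
class CallCredentials:
    """Toy stand-in for gRPC call credentials: a bag of auth metadata."""
    def __init__(self, metadata):
        self.metadata = dict(metadata)

def composite_call_credentials(first, second):
    # Composing two CallCredentials yields credentials whose metadata
    # includes the auth data of both, as the text above describes.
    merged = dict(first.metadata)
    merged.update(second.metadata)
    return CallCredentials(merged)

token = CallCredentials({"authorization": "Bearer my-access-token"})
ticket = CallCredentials({"x-custom-auth-ticket": "super-secret-ticket"})
combined = composite_call_credentials(token, ticket)
# Every call made with `combined` would send both pieces of metadata.
assert combined.metadata == {
    "authorization": "Bearer my-access-token",
    "x-custom-auth-ticket": "super-secret-ticket",
}
```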
#### Using client-side SSL/TLS
Now let's look at how `Credentials` work with one of our supported auth
mechanisms. This is the simplest authentication scenario, where a client just
wants to authenticate the server and encrypt all data. The example is in C++,
but the API is similar for all languages: you can see how to enable SSL/TLS in
more languages in our Examples section below.
```cpp
// Create a default SSL ChannelCredentials object.
auto channel_creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
// Create a channel using the credentials created in the previous step.
auto channel = grpc::CreateChannel(server_name, channel_creds);
// Create a stub on the channel.
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
// Make actual RPC calls on the stub.
grpc::Status s = stub->SayHello(&context, *request, response);
```
For advanced use cases such as modifying the root CA or using client certs,
the corresponding options can be set in the `SslCredentialsOptions` parameter
passed to the factory method.
#### Using Google token-based authentication
gRPC applications can use a simple API to create a credential that works for
authentication with Google in various deployment scenarios. Again, our example
is in C++ but you can find examples in other languages in our Examples section.
```cpp
auto creds = grpc::GoogleDefaultCredentials();
// Create a channel, stub and make RPC calls (same as in the previous example)
auto channel = grpc::CreateChannel(server_name, creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
grpc::Status s = stub->SayHello(&context, *request, response);
```
This channel credentials object works for applications using Service Accounts as
well as for applications running in [Google Compute Engine
(GCE)](https://cloud.google.com/compute/). In the former case, the service
account's private keys are loaded from the file named in the environment
variable `GOOGLE_APPLICATION_CREDENTIALS`. The keys are used to generate bearer
tokens that are attached to each outgoing RPC on the corresponding channel.
For applications running in GCE, a default service account and corresponding
OAuth2 scopes can be configured during VM setup. At run-time, this credential
handles communication with the authentication systems to obtain OAuth2 access
tokens and attaches them to each outgoing RPC on the corresponding channel.
#### Extending gRPC to support other authentication mechanisms
The Credentials plugin API allows developers to plug in their own type of
credentials. This consists of:
- The `MetadataCredentialsPlugin` abstract class, which contains the pure virtual
`GetMetadata` method that needs to be implemented by a sub-class created by
the developer.
- The `MetadataCredentialsFromPlugin` function, which creates a `CallCredentials`
from the `MetadataCredentialsPlugin`.
Here is an example of a simple credentials plugin that sets an authentication
ticket in a custom header:
```cpp
class MyCustomAuthenticator : public grpc::MetadataCredentialsPlugin {
public:
MyCustomAuthenticator(const grpc::string& ticket) : ticket_(ticket) {}
grpc::Status GetMetadata(
grpc::string_ref service_url, grpc::string_ref method_name,
const grpc::AuthContext& channel_auth_context,
std::multimap<grpc::string, grpc::string>* metadata) override {
metadata->insert(std::make_pair("x-custom-auth-ticket", ticket_));
return grpc::Status::OK;
}
private:
grpc::string ticket_;
};
auto call_creds = grpc::MetadataCredentialsFromPlugin(
std::unique_ptr<grpc::MetadataCredentialsPlugin>(
new MyCustomAuthenticator("super-secret-ticket")));
```
A deeper integration can be achieved by plugging in a gRPC credentials
implementation at the core level. gRPC internals also allow switching out
SSL/TLS with other encryption mechanisms.
### Examples
These authentication mechanisms are available in all of gRPC's supported
languages. The following sections demonstrate how the authentication and
authorization features described above appear in each language; more languages
are coming soon.
#### Go
##### Base case - no encryption or authentication
Client:
``` go
conn, _ := grpc.Dial("localhost:50051", grpc.WithInsecure())
// error handling omitted
client := pb.NewGreeterClient(conn)
// ...
```
Server:
``` go
s := grpc.NewServer()
lis, _ := net.Listen("tcp", "localhost:50051")
// error handling omitted
s.Serve(lis)
```
##### With server authentication SSL/TLS
Client:
``` go
creds, _ := credentials.NewClientTLSFromFile(certFile, "")
conn, _ := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(creds))
// error handling omitted
client := pb.NewGreeterClient(conn)
// ...
```
Server:
``` go
creds, _ := credentials.NewServerTLSFromFile(certFile, keyFile)
s := grpc.NewServer(grpc.Creds(creds))
lis, _ := net.Listen("tcp", "localhost:50051")
// error handling omitted
s.Serve(lis)
```
##### Authenticate with Google
``` go
pool, _ := x509.SystemCertPool()
// error handling omitted
creds := credentials.NewClientTLSFromCert(pool, "")
perRPC, _ := oauth.NewServiceAccountFromFile("service-account.json", scope)
conn, _ := grpc.Dial(
"greeter.googleapis.com",
grpc.WithTransportCredentials(creds),
grpc.WithPerRPCCredentials(perRPC),
)
// error handling omitted
client := pb.NewGreeterClient(conn)
// ...
```
#### Ruby
##### Base case - no encryption or authentication
```ruby
stub = Helloworld::Greeter::Stub.new('localhost:50051', :this_channel_is_insecure)
...
```
##### With server authentication SSL/TLS
```ruby
creds = GRPC::Core::ChannelCredentials.new(load_certs) # load_certs typically loads a CA roots file
stub = Helloworld::Greeter::Stub.new('myservice.example.com', creds)
```
##### Authenticate with Google
```ruby
require 'googleauth' # from http://www.rubydoc.info/gems/googleauth/0.1.0
...
ssl_creds = GRPC::Core::ChannelCredentials.new(load_certs) # load_certs typically loads a CA roots file
authentication = Google::Auth.get_application_default()
call_creds = GRPC::Core::CallCredentials.new(authentication.updater_proc)
combined_creds = ssl_creds.compose(call_creds)
stub = Helloworld::Greeter::Stub.new('greeter.googleapis.com', combined_creds)
```
#### C++
##### Base case - no encryption or authentication
```cpp
auto channel = grpc::CreateChannel("localhost:50051", InsecureChannelCredentials());
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...
```
##### With server authentication SSL/TLS
```cpp
auto channel_creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
auto channel = grpc::CreateChannel("myservice.example.com", channel_creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...
```
##### Authenticate with Google
```cpp
auto creds = grpc::GoogleDefaultCredentials();
auto channel = grpc::CreateChannel("greeter.googleapis.com", creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...
```
#### C&#35;
##### Base case - no encryption or authentication
```csharp
var channel = new Channel("localhost:50051", ChannelCredentials.Insecure);
var client = new Greeter.GreeterClient(channel);
...
```
##### With server authentication SSL/TLS
```csharp
var channelCredentials = new SslCredentials(File.ReadAllText("roots.pem")); // Load a custom roots file.
var channel = new Channel("myservice.example.com", channelCredentials);
var client = new Greeter.GreeterClient(channel);
```
##### Authenticate with Google
```csharp
using Grpc.Auth; // from Grpc.Auth NuGet package
...
// Loads Google Application Default Credentials with publicly trusted roots.
var channelCredentials = await GoogleGrpcCredentials.GetApplicationDefaultAsync();
var channel = new Channel("greeter.googleapis.com", channelCredentials);
var client = new Greeter.GreeterClient(channel);
...
```
##### Authenticate a single RPC call
```csharp
var channel = new Channel("greeter.googleapis.com", new SslCredentials()); // Use publicly trusted roots.
var client = new Greeter.GreeterClient(channel);
...
var googleCredential = await GoogleCredential.GetApplicationDefaultAsync();
var result = client.SayHello(request, new CallOptions(credentials: googleCredential.ToCallCredentials()));
...
```
#### Python
##### Base case - No encryption or authentication
```python
import grpc
import helloworld_pb2
channel = grpc.insecure_channel('localhost:50051')
stub = helloworld_pb2.GreeterStub(channel)
```
##### With server authentication SSL/TLS
Client:
```python
import grpc
import helloworld_pb2
with open('roots.pem', 'rb') as f:
creds = grpc.ssl_channel_credentials(f.read())
channel = grpc.secure_channel('myservice.example.com:443', creds)
stub = helloworld_pb2.GreeterStub(channel)
```
Server:
```python
import grpc
import helloworld_pb2
from concurrent import futures
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
with open('key.pem', 'rb') as f:
private_key = f.read()
with open('chain.pem', 'rb') as f:
certificate_chain = f.read()
server_credentials = grpc.ssl_server_credentials( ( (private_key, certificate_chain), ) )
# Adding GreeterServicer to server omitted
server.add_secure_port('myservice.example.com:443', server_credentials)
server.start()
# Server sleep omitted
```
##### Authenticate with Google using a JWT
```python
import grpc
import helloworld_pb2
from google import auth as google_auth
from google.auth import jwt as google_auth_jwt
from google.auth.transport import grpc as google_auth_transport_grpc
credentials, _ = google_auth.default()
jwt_creds = google_auth_jwt.OnDemandCredentials.from_signing_credentials(
credentials)
channel = google_auth_transport_grpc.secure_authorized_channel(
jwt_creds, None, 'greeter.googleapis.com:443')
stub = helloworld_pb2.GreeterStub(channel)
```
##### Authenticate with Google using an OAuth2 token
```python
import grpc
import helloworld_pb2
from google import auth as google_auth
from google.auth.transport import grpc as google_auth_transport_grpc
from google.auth.transport import requests as google_auth_transport_requests
credentials, _ = google_auth.default(scopes=(scope,))
request = google_auth_transport_requests.Request()
channel = google_auth_transport_grpc.secure_authorized_channel(
credentials, request, 'greeter.googleapis.com:443')
stub = helloworld_pb2.GreeterStub(channel)
```
#### Java
##### Base case - no encryption or authentication
```java
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
    .usePlaintext()
.build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
```
##### With server authentication SSL/TLS
In Java we recommend that you use OpenSSL when using gRPC over TLS. You can find
details about installing and using OpenSSL and other required libraries for both
Android and non-Android Java in the gRPC Java
[Security](https://github.com/grpc/grpc-java/blob/master/SECURITY.md#transport-security-tls)
documentation.
To enable TLS on a server, a certificate chain and private key need to be
specified in PEM format. The private key must not be protected by a password.
The order of certificates in the chain matters: the certificate
at the top has to be the host's certificate, while the one at the very bottom
has to be the root CA. The standard TLS port is 443, but we use 8443 below to
avoid needing extra permissions from the OS.
```java
Server server = ServerBuilder.forPort(8443)
// Enable TLS
.useTransportSecurity(certChainFile, privateKeyFile)
.addService(TestServiceGrpc.bindService(serviceImplementation))
.build();
server.start();
```
If the issuing certificate authority is not known to the client then a properly
configured `SslContext` or `SSLSocketFactory` should be provided to the
`NettyChannelBuilder` or `OkHttpChannelBuilder`, respectively.
On the client side, server authentication with SSL/TLS looks like this:
```java
// With server authentication SSL/TLS
ManagedChannel channel = ManagedChannelBuilder.forAddress("myservice.example.com", 443)
.build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
// With server authentication SSL/TLS; custom CA root certificates; not on Android
ManagedChannel channel = NettyChannelBuilder.forAddress("myservice.example.com", 443)
.sslContext(GrpcSslContexts.forClient().trustManager(new File("roots.pem")).build())
.build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
```
##### Authenticate with Google
The following code snippet shows how you can call the [Google Cloud PubSub
API](https://cloud.google.com/pubsub/overview) using gRPC with a service
account. The credentials are loaded from a key stored in a well-known location
or by detecting that the application is running in an environment that can
provide one automatically, e.g. Google Compute Engine. While this example is
specific to Google and its services, similar patterns can be followed for other
service providers.
```java
GoogleCredentials creds = GoogleCredentials.getApplicationDefault();
ManagedChannel channel = ManagedChannelBuilder.forTarget("greeter.googleapis.com")
.build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel)
.withCallCredentials(MoreCallCredentials.from(creds));
```
#### Node.js
##### Base case - No encryption/authentication
```js
var stub = new helloworld.Greeter('localhost:50051', grpc.credentials.createInsecure());
```
##### With server authentication SSL/TLS
```js
var ssl_creds = grpc.credentials.createSsl(root_certs);
var stub = new helloworld.Greeter('myservice.example.com', ssl_creds);
```
##### Authenticate with Google
```js
// Authenticating with Google
var GoogleAuth = require('google-auth-library'); // from https://www.npmjs.com/package/google-auth-library
...
var ssl_creds = grpc.credentials.createSsl(root_certs);
(new GoogleAuth()).getApplicationDefault(function(err, auth) {
var call_creds = grpc.credentials.createFromGoogleCredential(auth);
var combined_creds = grpc.credentials.combineChannelCredentials(ssl_creds, call_creds);
  var stub = new helloworld.Greeter('greeter.googleapis.com', combined_creds);
});
```
##### Authenticate with Google using an OAuth2 token (legacy approach)
```js
var GoogleAuth = require('google-auth-library'); // from https://www.npmjs.com/package/google-auth-library
...
var ssl_creds = grpc.credentials.createSsl(root_certs); // root_certs is typically loaded from a CA roots file
var scope = 'https://www.googleapis.com/auth/grpc-testing';
(new GoogleAuth()).getApplicationDefault(function(err, auth) {
if (auth.createScopeRequired()) {
auth = auth.createScoped(scope);
}
var call_creds = grpc.credentials.createFromGoogleCredential(auth);
var combined_creds = grpc.credentials.combineChannelCredentials(ssl_creds, call_creds);
  var stub = new helloworld.Greeter('greeter.googleapis.com', combined_creds);
});
```
#### PHP
##### Base case - No encryption/authentication
```php
$client = new helloworld\GreeterClient('localhost:50051', [
'credentials' => Grpc\ChannelCredentials::createInsecure(),
]);
...
```
##### Authenticate with Google
```php
function updateAuthMetadataCallback($context)
{
$auth_credentials = ApplicationDefaultCredentials::getCredentials();
return $auth_credentials->updateMetadata($metadata = [], $context->service_url);
}
$channel_credentials = Grpc\ChannelCredentials::createComposite(
Grpc\ChannelCredentials::createSsl(file_get_contents('roots.pem')),
Grpc\CallCredentials::createFromPlugin('updateAuthMetadataCallback')
);
$opts = [
'credentials' => $channel_credentials
];
$client = new helloworld\GreeterClient('greeter.googleapis.com', $opts);
```
##### Authenticate with Google using an OAuth2 token (legacy approach)
```php
// the environment variable "GOOGLE_APPLICATION_CREDENTIALS" needs to be set
$scope = "https://www.googleapis.com/auth/grpc-testing";
$auth = Google\Auth\ApplicationDefaultCredentials::getCredentials($scope);
$opts = [
    'credentials' => Grpc\ChannelCredentials::createSsl(file_get_contents('roots.pem')),
'update_metadata' => $auth->getUpdateMetadataFunc(),
];
$client = new helloworld\GreeterClient('greeter.googleapis.com', $opts);
```
#### Dart
##### Base case - no encryption or authentication
```dart
final channel = new ClientChannel('localhost',
port: 50051,
options: const ChannelOptions(
credentials: const ChannelCredentials.insecure()));
final stub = new GreeterClient(channel);
```
##### With server authentication SSL/TLS
```dart
// Load a custom roots file.
final trustedRoot = new File('roots.pem').readAsBytesSync();
final channelCredentials =
new ChannelCredentials.secure(certificates: trustedRoot);
final channelOptions = new ChannelOptions(credentials: channelCredentials);
final channel = new ClientChannel('myservice.example.com',
options: channelOptions);
final client = new GreeterClient(channel);
```
##### Authenticate with Google
```dart
// Uses publicly trusted roots by default.
final channel = new ClientChannel('greeter.googleapis.com');
final serviceAccountJson =
new File('service-account.json').readAsStringSync();
final credentials = new JwtServiceAccountAuthenticator(serviceAccountJson);
final client =
new GreeterClient(channel, options: credentials.toCallOptions);
```
##### Authenticate a single RPC call
```dart
// Uses publicly trusted roots by default.
final channel = new ClientChannel('greeter.googleapis.com');
final client = new GreeterClient(channel);
...
final serviceAccountJson =
new File('service-account.json').readAsStringSync();
final credentials = new JwtServiceAccountAuthenticator(serviceAccountJson);
final response =
await client.sayHello(request, options: credentials.toCallOptions);
```
---
layout: guides
title: Benchmarking
aliases: [/docs/guides/benchmarking.html]
---
<p class="lead">gRPC is designed to support high-performance
open-source RPCs in many languages. This document describes the
performance benchmarking tools, the scenarios considered by the tests,
and the testing infrastructure.</p>
<div id="toc"></div>
<a name="Overview"></a>
### Overview
gRPC is designed for both high-performance and high-productivity
development of distributed applications. Continuous performance
benchmarking is a critical part of the gRPC development
workflow. Multi-language performance tests run hourly against
the master branch, and these numbers are reported to a dashboard for
visualization.
* [Multi-language performance dashboard @latest_release (latest available stable release)](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5636470266134528)
* [Multi-language performance dashboard @master (latest dev version)](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584)
* [C++ detailed performance dashboard @master (latest dev version)](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5685265389584384)
Additional benchmarking provides fine-grained insights into where
CPU time is spent.
* [C++ full-stack microbenchmarks](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5684961520648192)
* [C Core filter benchmarks](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5740240702537728)
* [C Core shared component benchmarks](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5641826627223552&container=789696829&widget=512792852)
* [C Core HTTP/2 microbenchmarks](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5732910535540736)
### Performance testing design
Each language implements a performance testing worker that implements
a gRPC
[WorkerService](https://github.com/grpc/grpc/blob/master/src/proto/grpc/testing/worker_service.proto). This
service directs the worker to act as either a client or a server for
the actual benchmark test, represented as
[BenchmarkService](https://github.com/grpc/grpc/blob/master/src/proto/grpc/testing/benchmark_service.proto). That
service has two methods:
* UnaryCall - a unary RPC of a simple request that specifies the number of bytes to return in the response
* StreamingCall - a streaming RPC that allows repeated ping-pongs of request and response messages akin to the UnaryCall
![gRPC performance testing worker diagram](/img/testing_framework.png)
These workers are controlled by a
[driver](https://github.com/grpc/grpc/blob/master/test/cpp/qps/qps_json_driver.cc)
that takes as input a scenario description (in JSON format) and an
environment variable specifying the host:port of each worker process.
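The driver's two inputs can be sketched in a few lines. Everything below is illustrative: the real scenario schema lives in the gRPC repository, and the environment variable name is an assumption for the sketch.

```python
import json
import os

# Illustrative scenario description -- the field names here are assumptions,
# not the actual schema used by the qps_json_driver.
scenario = {
    "name": "cpp_protobuf_sync_streaming",
    "client_config": {"rpc_type": "STREAMING", "outstanding_rpcs_per_channel": 100},
    "server_config": {"server_type": "SYNC_SERVER"},
}
scenarios_json = json.dumps({"scenarios": [scenario]})

# The driver locates its workers through an environment variable holding
# host:port pairs, one per worker process (variable name assumed).
os.environ["QPS_WORKERS"] = "10.0.0.1:10000,10.0.0.2:10010"
workers = os.environ["QPS_WORKERS"].split(",")
print(workers)
```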
<a name="Languages under test"></a>
### Languages under test
The following languages have continuous performance testing as both
clients and servers at master:
* C++
* Java
* Go
* C#
* Node.js
* Python
* Ruby
Additionally, all languages derived from C core have limited
performance testing (smoke testing) conducted at every pull request.
In addition to running as both the client-side and server-side of
performance tests, all languages are tested as clients against a C++
server, and as servers against a C++ client. This test aims to provide
the current upper bound of performance for a given language's client or
server implementation without testing the other side.
Although PHP and mobile environments do not support a gRPC server
(which is needed for our performance tests), their client-side
performance can be benchmarked using a proxy WorkerService written in
another language. This code is implemented for PHP but is not yet in
continuous testing mode.
<a name="Scenarios under test"></a>
### Scenarios under test
There are several important scenarios under test and displayed in the dashboards
above, including the following:
* Contentionless latency - the median and tail response latencies seen with only 1 client sending a single message at a time using StreamingCall
* QPS - the messages/second rate when there are 2 clients and a total of 64 channels, each of which has 100 outstanding messages at a time sent using StreamingCall
* Scalability (for selected languages) - the number of messages/second per server core
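As a sanity check on the QPS scenario shape described above, the load in flight follows directly from the numbers given (a sketch of the arithmetic, not code from the test suite):

```python
# QPS scenario from the description above: 2 clients share a total of
# 64 channels, each channel keeping 100 StreamingCall messages outstanding.
clients = 2
total_channels = 64
outstanding_per_channel = 100

channels_per_client = total_channels // clients
messages_in_flight = total_channels * outstanding_per_channel
print(channels_per_client, messages_in_flight)
```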
Most performance testing uses secure communication and
protobufs. Some C++ tests additionally use insecure communication and
the generic (non-protobuf) API to display peak performance. Additional
scenarios may be added in the future.
<a name="Testing infrastructure"></a>
### Testing infrastructure
All performance benchmarks are run as instances in GCE through our
Jenkins testing infrastructure. In addition to the gRPC performance
scenarios described above, we also run baseline [netperf
TCP_RR](http://www.netperf.org) latency numbers in order to understand
the underlying network characteristics. These numbers are present on
our dashboard and sometimes vary depending on where our instances
happen to be allocated within GCE.
Most test instances are 8-core systems, and these are used for both
latency and QPS measurement. For C++ and Java, we additionally support
QPS testing on 32-core systems. All QPS tests use 2 identical client machines
for each server, to make sure that QPS measurement is not client-limited.
---
layout: guides
title: gRPC Concepts
aliases: [/docs/guides/concepts.html]
---
<p class="lead">This document introduces some key gRPC concepts with an overview
of gRPC's architecture and RPC life cycle.</p>
It assumes that you've read [What is gRPC?](/docs/guides). For
language-specific details, see the Quick Start, tutorial, and reference
documentation for your chosen language(s), where available (complete reference
docs are coming soon).
<div id="toc" class="toc mobile-toc"></div>
### Overview
#### Service definition
Like many RPC systems, gRPC is based around the idea of defining a service,
specifying the methods that can be called remotely with their parameters and
return types. By default, gRPC uses [protocol
buffers](https://developers.google.com/protocol-buffers/) as the Interface
Definition Language (IDL) for describing both the service interface and the
structure of the payload messages. It is possible to use other alternatives if
desired.
```proto
service HelloService {
rpc SayHello (HelloRequest) returns (HelloResponse);
}
message HelloRequest {
string greeting = 1;
}
message HelloResponse {
string reply = 1;
}
```
gRPC lets you define four kinds of service method:
- Unary RPCs where the client sends a single request to the server and gets a
single response back, just like a normal function call.
```proto
rpc SayHello(HelloRequest) returns (HelloResponse){
}
```
- Server streaming RPCs where the client sends a request to the server and gets
a stream to read a sequence of messages back. The client reads from the
returned stream until there are no more messages. gRPC guarantees message
ordering within an individual RPC call.
```proto
rpc LotsOfReplies(HelloRequest) returns (stream HelloResponse){
}
```
- Client streaming RPCs where the client writes a sequence of messages and sends
them to the server, again using a provided stream. Once the client has
finished writing the messages, it waits for the server to read them and return
its response. Again gRPC guarantees message ordering within an individual RPC
call.
```proto
rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse) {
}
```
- Bidirectional streaming RPCs where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved.
```proto
rpc BidiHello(stream HelloRequest) returns (stream HelloResponse){
}
```
We'll look at the different types of RPC in more detail in the RPC life cycle section below.
#### Using the API surface
Starting from a service definition in a .proto file, gRPC provides protocol
buffer compiler plugins that generate client- and server-side code. gRPC users
typically call these APIs on the client side and implement the corresponding API
on the server side.
- On the server side, the server implements the methods declared by the service
and runs a gRPC server to handle client calls. The gRPC infrastructure decodes
incoming requests, executes service methods, and encodes service responses.
- On the client side, the client has a local object known as a *stub* (for some
languages, the preferred term is *client*) that implements the same methods as
the service. The client can then just call those methods on the local object,
wrapping the parameters for the call in the appropriate protocol buffer
message type - gRPC looks after sending the request(s) to the server and
returning the server's protocol buffer response(s).
#### Synchronous vs. asynchronous
Synchronous RPC calls that block until a response arrives from the server are
the closest approximation to the abstraction of a procedure call that RPC
aspires to. On the other hand, networks are inherently asynchronous and in many
scenarios it's useful to be able to start RPCs without blocking the current
thread.
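The distinction can be illustrated with plain Python; there is no gRPC here, and `fake_rpc` merely stands in for a generated stub method:

```python
import concurrent.futures
import time

def fake_rpc(name):
    """Stand-in for a stub method; a real stub would perform network I/O."""
    time.sleep(0.01)
    return f"Hello {name}"

# Synchronous flavor: the call blocks until the response arrives.
sync_reply = fake_rpc("world")

# Asynchronous flavor: start the RPC, keep working, collect the result later.
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(fake_rpc, "world")
    # ... the current thread is free to do other work here ...
    async_reply = future.result()

print(sync_reply, async_reply)
```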
The gRPC programming surface in most languages comes in both synchronous and
asynchronous flavors. You can find out more in each language's tutorial and
reference documentation (complete reference docs are coming soon).
### RPC life cycle
Now let's take a closer look at what happens when a gRPC client calls a gRPC
server method. We won't look at implementation details; you can find out more
about these in our language-specific pages.
#### Unary RPC
First let's look at the simplest type of RPC, where the client sends a single request and gets back a single response.
- Once the client calls the method on the stub/client object, the server is
notified that the RPC has been invoked with the client's [metadata](#metadata)
for this call, the method name, and the specified [deadline](#deadlines) if
applicable.
- The server can then either send back its own initial metadata (which must be
sent before any response) straight away, or wait for the client's request
message - which happens first is application-specific.
- Once the server has the client's request message, it does whatever work is
necessary to create and populate its response. The response is then returned
(if successful) to the client together with status details (status code and
optional status message) and optional trailing metadata.
- If the status is OK, the client then gets the response, which completes the
call on the client side.
#### Server streaming RPC
A server-streaming RPC is similar to our simple example, except the server sends
back a stream of responses after getting the client's request message. After
sending back all its responses, the server's status details (status code and
optional status message) and optional trailing metadata are sent back to
complete on the server side. The client completes once it has all the server's
responses.
#### Client streaming RPC
A client-streaming RPC is also similar to our simple example, except the client
sends a stream of requests to the server instead of a single request. The server
sends back a single response, typically but not necessarily after it has
received all the client's requests, along with its status details and optional
trailing metadata.
#### Bidirectional streaming RPC
In a bidirectional streaming RPC, again the call is initiated by the client
calling the method and the server receiving the client metadata, method name,
and deadline. Again the server can choose to send back its initial metadata or
wait for the client to start sending requests.
What happens next depends on the application, as the client and server can read
and write in any order - the streams operate completely independently. So, for
example, the server could wait until it has received all the client's messages
before writing its responses, or the server and client could "ping-pong": the
server gets a request, then sends back a response, then the client sends another
request based on the response, and so on.
<a name="deadlines"></a>
#### Deadlines/Timeouts
gRPC allows clients to specify how long they are willing to wait for an RPC to
complete before the RPC is terminated with the error `DEADLINE_EXCEEDED`. On
the server side, the server can query to see if a particular RPC has timed out,
or how much time is left to complete the RPC.
How the deadline or timeout is specified varies from language to language - for
example, not all languages have a default deadline, some language APIs work in
terms of a deadline (a fixed point in time), and some language APIs work in
terms of timeouts (durations of time).
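The deadline/timeout distinction reduces to a point in time versus a duration. A stdlib sketch (not a gRPC API) of how the two relate and what a server-side "time remaining" query might look like:

```python
import time

# Timeout-style API: a duration. Deadline-style API: a fixed point in time.
TIMEOUT_SECONDS = 5.0
deadline = time.monotonic() + TIMEOUT_SECONDS

def time_remaining(deadline):
    """What a server handler might query before starting expensive work."""
    return max(0.0, deadline - time.monotonic())

def expired(deadline):
    # In gRPC this condition surfaces as the DEADLINE_EXCEEDED error.
    return time_remaining(deadline) == 0.0

print(expired(deadline))
```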
#### RPC termination
In gRPC, both the client and server make independent and local determinations of
the success of the call, and their conclusions may not match. This means that,
for example, you could have an RPC that finishes successfully on the server side
("I have sent all my responses!") but fails on the client side ("The responses
arrived after my deadline!"). It's also possible for a server to decide to
complete before a client has sent all its requests.
#### Cancelling RPCs
Either the client or the server can cancel an RPC at any time. A cancellation
terminates the RPC immediately so that no further work is done. It is *not* an
"undo": changes made before the cancellation will not be rolled back.
<a name="metadata"></a>
#### Metadata
Metadata is information about a particular RPC call (such as <a href="/docs/guides/auth/">authentication details</a>) in the
form of a list of key-value pairs, where the keys are strings and the values are
typically strings (but can be binary data). Metadata is opaque to gRPC itself -
it lets the client provide information associated with the call to the server
and vice versa.
Access to metadata is language-dependent.
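A metadata list can be modeled as ordered key-value pairs, as described above. The validation sketch below also assumes gRPC's wire convention that binary-valued keys carry a `-bin` suffix; treat that detail, and the helper itself, as illustrative rather than an actual gRPC API:

```python
# Metadata: a list of key-value pairs; keys are strings, values are
# strings or (for "-bin" keys, by assumed wire convention) raw bytes.
metadata = [
    ("authorization", "Bearer some-token"),
    ("trace-context-bin", b"\x01\x02\x03"),
]

def validate(metadata):
    for key, value in metadata:
        assert key == key.lower(), "metadata keys are lowercase"
        if isinstance(value, bytes):
            assert key.endswith("-bin"), "binary values need a -bin key"
        else:
            assert isinstance(value, str), "values are strings or bytes"
    return True

print(validate(metadata))
```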
#### Channels
A gRPC channel provides a connection to a gRPC server on a specified host and
port and is used when creating a client stub (or just "client" in some
languages). Clients can specify channel arguments to modify gRPC's default
behaviour, such as switching on and off message compression. A channel has
state, including <code>connected</code> and <code>idle</code>.
How gRPC deals with closing down channels is language-dependent. Some languages
also permit querying channel state.
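Channel state can be sketched as a small state machine. The state names below follow gRPC's connectivity semantics (a richer set than the two states named above); the transition table is illustrative, not a gRPC API:

```python
import enum

class ChannelState(enum.Enum):
    IDLE = "idle"
    CONNECTING = "connecting"
    READY = "ready"  # "connected" in the prose above
    TRANSIENT_FAILURE = "transient_failure"
    SHUTDOWN = "shutdown"

# Illustrative transitions; SHUTDOWN is terminal.
TRANSITIONS = {
    ChannelState.IDLE: {ChannelState.CONNECTING, ChannelState.SHUTDOWN},
    ChannelState.CONNECTING: {ChannelState.READY, ChannelState.TRANSIENT_FAILURE, ChannelState.SHUTDOWN},
    ChannelState.READY: {ChannelState.IDLE, ChannelState.TRANSIENT_FAILURE, ChannelState.SHUTDOWN},
    ChannelState.TRANSIENT_FAILURE: {ChannelState.CONNECTING, ChannelState.SHUTDOWN},
    ChannelState.SHUTDOWN: set(),
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]

print(can_transition(ChannelState.IDLE, ChannelState.CONNECTING))
```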
---
layout: guides
title: Contribution Guidelines
---
# Contribution Guidelines
Coming soon!
---
layout: guides
title: Error Handling
aliases: [/docs/guides/error.html]
---
<p class="lead"> This page describes how gRPC deals with errors, including gRPC's built-in error codes. Example code in different languages can be found <a href="https://github.com/avinassh/grpc-errors">here</a>.</p>
<div id="toc" class="toc mobile-toc"></div>
### Error model
As you'll have seen in our concepts document and examples, when a gRPC call
completes successfully the server returns an `OK` status to the client
(depending on the language the `OK` status may or may not be directly used in
your code). But what happens if the call isn't successful?
If an error occurs, gRPC returns one of its error status codes instead, with an
optional string error message that provides further details about what happened.
Error information is available to gRPC clients in all supported languages.
### Error status codes
Errors are raised by gRPC under various circumstances, from network failures to
unauthenticated connections, each of which is associated with a particular
status code. The following error status codes are supported in all gRPC
languages.
#### General errors
Case | Status code
-----|-----------
Client application cancelled the request | GRPC&#95;STATUS&#95;CANCELLED
Deadline expired before server returned status | GRPC&#95;STATUS&#95;DEADLINE_EXCEEDED
Method not found on server | GRPC&#95;STATUS&#95;UNIMPLEMENTED
Server shutting down | GRPC&#95;STATUS&#95;UNAVAILABLE
Server threw an exception (or did something other than returning a status code to terminate the RPC) | GRPC&#95;STATUS&#95;UNKNOWN
<br>
#### Network failures
Case | Status code
-----|-----------
No data transmitted before deadline expires. Also applies to cases where some data is transmitted and no other failures are detected before the deadline expires | GRPC&#95;STATUS&#95;DEADLINE_EXCEEDED
Some data transmitted (for example, the request metadata has been written to the TCP connection) before the connection breaks | GRPC&#95;STATUS&#95;UNAVAILABLE
<br>
#### Protocol errors
Case | Status code
-----|-----------
Could not decompress but compression algorithm supported | GRPC&#95;STATUS&#95;INTERNAL
Compression mechanism used by client not supported by the server | GRPC&#95;STATUS&#95;UNIMPLEMENTED
Flow-control resource limits reached | GRPC&#95;STATUS&#95;RESOURCE_EXHAUSTED
Flow-control protocol violation | GRPC&#95;STATUS&#95;INTERNAL
Error parsing returned status | GRPC&#95;STATUS&#95;UNKNOWN
Unauthenticated: credentials failed to get metadata | GRPC&#95;STATUS&#95;UNAUTHENTICATED
Invalid host set in authority metadata | GRPC&#95;STATUS&#95;UNAUTHENTICATED
Error parsing response protocol buffer | GRPC&#95;STATUS&#95;INTERNAL
Error parsing request protocol buffer | GRPC&#95;STATUS&#95;INTERNAL
---
title: Quick Start
layout: quickstart
---
<p class="lead">
Get started with gRPC
</p>
<div id="toc" class="toc mobile-toc"></div>
These pages show you how to get up and running as quickly as possible in gRPC,
including installing all the tools you'll need.
There is a Quick Start for each gRPC supported language with accompanying sample
code for a simple `Hello World` example for you to explore and update.
For an overview of some of the core concepts in gRPC, see [gRPC Concepts](/docs/guides/concepts/).
For more tutorials and examples, see our [Tutorials](/docs/tutorials).
You can read more about gRPC in general in [What is gRPC?](/docs/guides).
---
layout: quickstart
title: Android Java Quickstart
aliases: [/docs/quickstart/android.html]
---
<p class="lead">This guide gets you started with gRPC in Android Java with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
* `JDK`: version 7 or higher
* Android SDK: API level 14 or higher
* An Android device set up for [USB
debugging](https://developer.android.com/studio/command-line/adb.html#Enabling)
or an [Android Virtual
Device](https://developer.android.com/studio/run/managing-avds.html)
Note: gRPC Java does not support running a server on an Android device. For this
quickstart, the Android client app will connect to a server running on your
local (non-Android) computer.
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository at the latest release to get the example code:
$ git clone -b {{< param grpc_java_release_tag >}} https://github.com/grpc/grpc-java
$ # Navigate to the Java examples:
$ cd grpc-java/examples
```
### Run a gRPC application
1. Compile the server
```sh
$ ./gradlew installDist
```
2. Run the server
```sh
$ ./build/install/examples/bin/hello-world-server
```
3. In another terminal, compile and install the client to your device
```sh
$ cd android/helloworld
$ ../../gradlew installDebug
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: Android Java](/docs/tutorials/basic/android/). For now all you need to know is that both the
server and the client "stub" have a `SayHello` RPC method that takes a
`HelloRequest` parameter from the client and returns a `HelloResponse` from the
server, and that this method is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`src/main/proto/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Update and run the application
When we recompile the example, normal compilation will regenerate
`GreeterGrpc.java`, which contains our generated gRPC client and server classes.
This also regenerates classes for populating, serializing, and retrieving our
request and response types.
However, we still need to implement and call the new method in the human-written
parts of our example application.
#### Update the server
Check out the Java quickstart [here](/docs/quickstart/java/#update-the-server).
#### Update the client
In the same directory, open
`app/src/main/java/io/grpc/helloworldexample/HelloworldActivity.java`. Call the new
method like this:
```java
try {
HelloRequest message = HelloRequest.newBuilder().setName(mMessage).build();
HelloReply reply = stub.sayHello(message);
reply = stub.sayHelloAgain(message);
} catch (Exception e) {
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw);
e.printStackTrace(pw);
pw.flush();
return "Failed... : " + System.lineSeparator() + sw;
}
```
#### Run!
Just like we did before, from the `examples` directory:
1. Compile the server
```sh
$ ./gradlew installDist
```
2. Run the server
```sh
$ ./build/install/examples/bin/hello-world-server
```
3. In another terminal, compile and install the client to your device
```sh
$ cd android/helloworld
$ ../../gradlew installDebug
```
#### Connecting to the Hello World server via USB
To run the application on a physical device via USB debugging, you must
configure USB port forwarding to allow the device to communicate with the server
running on your computer. This is done via the `adb` command line tool as
follows:
```sh
adb reverse tcp:8080 tcp:50051
```
This sets up port forwarding from port `8080` on the device to port `50051` on
the connected computer, which is the port that the Hello World server is
listening on.
Now you can run the Android Hello World app on your device, using `localhost`
and `8080` as the `Host` and `Port`.
#### Connecting to the Hello World server from an Android Virtual Device
To run the Hello World app on an Android Virtual Device, you don't need to
enable port forwarding. Instead, the emulator can use the IP address
`10.0.2.2` to refer to the host machine. Inside the Android Hello World app,
enter `10.0.2.2` and `50051` as the `Host` and `Port`.
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Android Java](/docs/tutorials/basic/android/)
- Explore the gRPC Java core API in its [reference
documentation](/grpc-java/javadoc/)
---
layout: quickstart
title: C++ Quickstart
aliases: [/docs/quickstart/cpp.html]
---
<p class="lead">This guide gets you started with gRPC in C++ with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Install gRPC
To install gRPC on your system, follow the [instructions to install gRPC C++ via make](https://github.com/grpc/grpc/blob/master/src/cpp/README.md#make).
To run the example code, please ensure `pkg-config` is installed on your
machine before you build and install gRPC in the previous step, since the
example `Makefile`s try to look up the installed gRPC path using `pkg-config`.
On Debian-based systems like Ubuntu, this can usually be done via
`sudo apt-get install pkg-config`.
#### Install Protocol Buffers v3
While not mandatory to use gRPC, gRPC applications usually leverage Protocol
Buffers v3 for service definitions and data serialization, and our example code
uses Protocol Buffers as well as gRPC. If you don't already have it installed on
your system, you can install the version cloned alongside gRPC. First ensure
that you are running these commands in the gRPC tree you just built in the
previous step.
```sh
$ cd third_party/protobuf
$ make && sudo make install
```
### Build the example
Assuming you have gRPC properly installed, go into the example's
directory:
```sh
$ cd examples/cpp/helloworld/
```
Let's build the example client and server:
```sh
$ make
```
Most failures at this point are a result of a faulty installation (or of having
installed gRPC to a non-standard location); check out the [installation
instructions](https://github.com/grpc/grpc/blob/master/src/cpp/README.md#make) for details.
### Try it!
From the `examples/cpp/helloworld` directory, run the server, which will listen
on port 50051:
```sh
$ ./greeter_server
```
From a different terminal, run the client:
```sh
$ ./greeter_client
```
If things go smoothly, you will see `Greeter received: Hello world` in the
client-side output.
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [What is gRPC?](/docs/guides/) and [gRPC Basics:
C++](/docs/tutorials/basic/c/). For now all you need to know is that both the server and the client
"stub" have a `SayHello` RPC method that takes a `HelloRequest` parameter from
the client and returns a `HelloResponse` from the server, and that this method
is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` (from the root of the cloned repository) and
update it with a new `SayHelloAgain` method, with the same request and response
types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition. From the `examples/cpp/helloworld` directory:
```sh
$ make
```
This regenerates `helloworld.pb.{h,cc}` and `helloworld.grpc.pb.{h,cc}`, which
contain our generated client and server classes, as well as classes for
populating, serializing, and retrieving our request and response types.
### Update and run the application
We now have new generated server and client code, but we still need to implement
and call the new method in the human-written parts of our example application.
#### Update the server
In the same directory, open `greeter_server.cc`. Implement the new method like
this:
```c++
class GreeterServiceImpl final : public Greeter::Service {
Status SayHello(ServerContext* context, const HelloRequest* request,
HelloReply* reply) override {
// ... (pre-existing code)
}
Status SayHelloAgain(ServerContext* context, const HelloRequest* request,
HelloReply* reply) override {
std::string prefix("Hello again ");
reply->set_message(prefix + request->name());
return Status::OK;
}
};
```
#### Update the client
A new `SayHelloAgain` method is now available in the stub. We'll follow the same
pattern as for the already present `SayHello` and add a new `SayHelloAgain`
method to `GreeterClient`:
```c++
class GreeterClient {
public:
// ...
std::string SayHello(const std::string& user) {
// ...
}
std::string SayHelloAgain(const std::string& user) {
// Follows the same pattern as SayHello.
HelloRequest request;
request.set_name(user);
HelloReply reply;
ClientContext context;
// Here we can use the stub's newly available method we just added.
Status status = stub_->SayHelloAgain(&context, request, &reply);
if (status.ok()) {
return reply.message();
} else {
std::cout << status.error_code() << ": " << status.error_message()
<< std::endl;
return "RPC failed";
}
}
```
Finally, we exercise this new method in `main`:
```c++
int main(int argc, char** argv) {
// ...
std::string reply = greeter.SayHello(user);
std::cout << "Greeter received: " << reply << std::endl;
reply = greeter.SayHelloAgain(user);
std::cout << "Greeter received: " << reply << std::endl;
return 0;
}
```
#### Run!
Just like we did before, from the `examples/cpp/helloworld` directory:
1. Build the client and server after having made changes:
```sh
$ make
```
2. Run the server
```sh
$ ./greeter_server
```
3. On a different terminal, run the client
```sh
$ ./greeter_client
```
You should see the updated output:
```sh
$ ./greeter_client
Greeter received: Hello world
Greeter received: Hello again world
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: C++](/docs/tutorials/basic/c/)
- Explore the gRPC C++ core API in its [reference
documentation](/grpc/cpp/)
---
title: C# Quick Start
layout: quickstart
aliases: [/docs/quickstart/csharp.html]
---
<p class="lead">This guide gets you started with gRPC in C# with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
Whether you're using Windows, OS X, or Linux, you can follow this
example by using either an IDE and its build tools,
or by using the .NET Core SDK command-line tools.
First, make sure you have installed the
[gRPC C# prerequisites](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/src/csharp/README.md#prerequisites).
You will also need Git to download the sample code.
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository to get the example code:
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
```
This document will walk you through the "Hello World" example.
The projects and source files can be found in the `examples/csharp/Helloworld` directory.
The example in this walkthrough already adds the necessary
dependencies for you (`Grpc`, `Grpc.Tools` and `Google.Protobuf` NuGet packages).
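For reference, those dependencies appear in the project file as `PackageReference` entries along the following lines (the versions shown here are illustrative floating versions, not the ones pinned by the example):

```xml
<!-- Illustrative NuGet references in Greeter.csproj; the example pins concrete versions. -->
<ItemGroup>
  <PackageReference Include="Google.Protobuf" Version="3.*" />
  <PackageReference Include="Grpc" Version="1.*" />
  <PackageReference Include="Grpc.Tools" Version="1.*" PrivateAssets="All" />
</ItemGroup>
```

`PrivateAssets="All"` keeps the build-time tooling from flowing to projects that reference `Greeter`.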
### Build the example
#### Using Visual Studio (or Visual Studio for Mac)
* Open the solution `Greeter.sln` with Visual Studio
* Build the solution
#### Using .NET Core SDK from the command line
From the `examples/csharp/Helloworld` directory:
```sh
> dotnet build Greeter.sln
```
*NOTE: If you want to use gRPC C# from a project that uses the "classic" .csproj files (supported by Visual Studio 2013, 2015 and older versions of Mono), please refer to the
[Greeter using "classic" .csproj](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/csharp/HelloworldLegacyCsproj/README.md) example.*
### Run a gRPC application
From the `examples/csharp/Helloworld` directory:
* Run the server
```sh
> cd GreeterServer
> dotnet run -f netcoreapp2.1
```
* In another terminal, run the client
```sh
> cd GreeterClient
> dotnet run -f netcoreapp2.1
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: C#](/docs/tutorials/basic/csharp/). For now all you need to know is that both the
server and the client "stub" have a `SayHello` RPC method that takes a
`HelloRequest` parameter from the client and returns a `HelloReply` from the
server, and that this method is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new service definition.
The `Grpc.Tools` NuGet package contains the `protoc` compiler and the C# protobuf plugin binaries needed
to generate the code. Starting from version 1.17, the package also integrates with
MSBuild to provide [automatic C# code generation](https://github.com/grpc/grpc/blob/master/src/csharp/BUILD-INTEGRATION.md)
from `.proto` files.
This example project already depends on the `Grpc.Tools.{{< param grpc_release_tag_no_v >}}` NuGet package so just re-building the solution
is enough to regenerate the code from our modified `.proto` file.
You can rebuild just like we first built the original
example by running `dotnet build Greeter.sln` or by clicking "Build" in Visual Studio.
The build regenerates the following files
under the `Greeter/obj/Debug/TARGET_FRAMEWORK` directory:
* `Helloworld.cs` contains all the protocol buffer code to populate,
serialize, and retrieve our request and response message types
* `HelloworldGrpc.cs` provides generated client and server classes,
including:
* an abstract class `Greeter.GreeterBase` to inherit from when defining
Greeter service implementations
* a class `Greeter.GreeterClient` that can be used to access remote Greeter
instances
### Update and run the application
We now have new generated server and client code, but we still need to implement
and call the new method in the human-written parts of our example application.
#### Update the server
With the `Greeter.sln` open in your IDE, open `GreeterServer/Program.cs`.
Implement the new method by editing the `GreeterImpl` class like this:
```C#
class GreeterImpl : Greeter.GreeterBase
{
// Server side handler of the SayHello RPC
public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
{
return Task.FromResult(new HelloReply { Message = "Hello " + request.Name });
}
// Server side handler for the SayHelloAgain RPC
public override Task<HelloReply> SayHelloAgain(HelloRequest request, ServerCallContext context)
{
return Task.FromResult(new HelloReply { Message = "Hello again " + request.Name });
}
}
```
#### Update the client
With the same `Greeter.sln` open in your IDE, open `GreeterClient/Program.cs`.
Call the new method like this:
```C#
public static void Main(string[] args)
{
Channel channel = new Channel("127.0.0.1:50051", ChannelCredentials.Insecure);
var client = new Greeter.GreeterClient(channel);
String user = "you";
var reply = client.SayHello(new HelloRequest { Name = user });
Console.WriteLine("Greeting: " + reply.Message);
var secondReply = client.SayHelloAgain(new HelloRequest { Name = user });
Console.WriteLine("Greeting: " + secondReply.Message);
channel.ShutdownAsync().Wait();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
}
```
#### Rebuild the modified example
Rebuild the newly modified example just like we first built the original
example by running `dotnet build Greeter.sln` or by clicking "Build" in Visual Studio.
#### Run!
Just like we did before, from the `examples/csharp/Helloworld` directory:
* Run the server
```sh
> cd GreeterServer
> dotnet run -f netcoreapp2.1
```
* In another terminal, run the client
```sh
> cd GreeterClient
> dotnet run -f netcoreapp2.1
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: C#](/docs/tutorials/basic/csharp/)
- Explore the gRPC C# core API in its [reference
documentation](/grpc/csharp/api/Grpc.Core.html)

---
layout: quickstart
title: Dart Quickstart
aliases: [/docs/quickstart/dart.html]
---
<p class="lead">This guide gets you started with gRPC in Dart with a simple
working example.</p>
<div id="toc"></div>
### Prerequisites
#### Dart SDK
gRPC requires Dart SDK version 2.0 or higher. Dart gRPC supports the Flutter and server platforms.
For installation instructions, follow this guide: [Install Dart](https://www.dartlang.org/install)
#### Install Protocol Buffers v3
While not mandatory to use gRPC, gRPC applications usually leverage Protocol
Buffers v3 for service definitions and data serialization, and our example code
uses Protocol Buffers as well as gRPC.
The simplest way to install the protoc compiler is to download pre-compiled
binaries for your operating system (`protoc-<version>-<os>.zip`) from here:
[https://github.com/google/protobuf/releases](https://github.com/google/protobuf/releases)
* Unzip this file.
* Update the environment variable `PATH` to include the path to the protoc
binary file.
Next, install the protoc plugin for Dart:
```sh
$ pub global activate protoc_plugin
```
The compiler plugin, `protoc-gen-dart`, is installed in `$HOME/.pub-cache/bin`.
It must be in your `$PATH` for the protocol compiler, `protoc`, to find it.
```sh
$ export PATH=$PATH:$HOME/.pub-cache/bin
```
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository at the latest release to get the example code:
$ git clone https://github.com/grpc/grpc-dart
$ # Navigate to the "Hello World" Dart example:
$ cd grpc-dart/example/helloworld
```
### Run a gRPC application
From the `example/helloworld` directory:
1. Download package dependencies
```sh
$ pub get
```
2. Run the server
```sh
$ dart bin/server.dart
```
3. In another terminal, run the client
```sh
$ dart bin/client.dart
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: Dart](/docs/tutorials/basic/dart/). For now all you need to know is that both the
server and the client "stub" have a `SayHello` RPC method that takes a
`HelloRequest` parameter from the client and returns a `HelloReply` from the
server, and that this method is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition.
From the `example/helloworld` directory, run:
```sh
$ protoc --dart_out=grpc:lib/src/generated -Iprotos protos/helloworld.proto
```
This regenerates the files in `lib/src/generated` which contain our generated
request and response classes, and client and server classes.
### Update and run the application
We now have new generated server and client code, but we still need to implement
and call the new method in the human-written parts of our example application.
#### Update the server
In the same directory, open `bin/server.dart`. Implement the new method like
this:
```dart
class GreeterService extends GreeterServiceBase {
@override
Future<HelloReply> sayHello(ServiceCall call, HelloRequest request) async {
return new HelloReply()..message = 'Hello, ${request.name}!';
}
@override
Future<HelloReply> sayHelloAgain(
ServiceCall call, HelloRequest request) async {
return new HelloReply()..message = 'Hello again, ${request.name}!';
}
}
...
```
#### Update the client
In the same directory, open `bin/client.dart`. Call the new method like this:
```dart
Future<Null> main(List<String> args) async {
final channel = new ClientChannel('localhost',
port: 50051,
options: const ChannelOptions(
credentials: const ChannelCredentials.insecure()));
final stub = new GreeterClient(channel);
final name = args.isNotEmpty ? args[0] : 'world';
try {
var response = await stub.sayHello(new HelloRequest()..name = name);
print('Greeter client received: ${response.message}');
response = await stub.sayHelloAgain(new HelloRequest()..name = name);
print('Greeter client received: ${response.message}');
} catch (e) {
print('Caught error: $e');
}
await channel.shutdown();
}
```
#### Run!
Just like we did before, from the `example/helloworld` directory:
1. Run the server
```sh
$ dart bin/server.dart
```
2. In another terminal, run the client
```sh
$ dart bin/client.dart
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Dart](/docs/tutorials/basic/dart/)
### Reporting issues
Should you encounter an issue, please help us out by
<a href="https://github.com/grpc/grpc-dart/issues/new">filing an issue</a>
in our issue tracker.

---
title: Go Quick Start
layout: quickstart
aliases: [/docs/quickstart/go.html]
---
<p class="lead">This guide gets you started with gRPC in Go with a simple
working example.</p>
<div id="toc"></div>
### Prerequisites
#### Go version
gRPC requires Go 1.6 or higher.
```sh
$ go version
```
For installation instructions, follow this guide: [Getting Started - The Go Programming Language](https://golang.org/doc/install)
#### Install gRPC
Use the following command to install gRPC.
```sh
$ go get -u google.golang.org/grpc
```
#### Install Protocol Buffers v3
Install the protoc compiler that is used to generate gRPC service code. The simplest way to do this is to download pre-compiled binaries for your platform (`protoc-<version>-<platform>.zip`) from here: [https://github.com/google/protobuf/releases](https://github.com/google/protobuf/releases)
* Unzip this file.
* Update the environment variable `PATH` to include the path to the protoc binary file.
Next, install the protoc plugin for Go:
```sh
$ go get -u github.com/golang/protobuf/protoc-gen-go
```
The compiler plugin, `protoc-gen-go`, will be installed in `$GOBIN`, defaulting to `$GOPATH/bin`. It must be in your `$PATH` for the protocol compiler, `protoc`, to find it.
```sh
$ export PATH=$PATH:$GOPATH/bin
```
### Download the example
The gRPC code that was fetched with `go get google.golang.org/grpc` also contains the examples. They can be found in the examples directory: `$GOPATH/src/google.golang.org/grpc/examples`.
### Build the example
Change to the example directory
```sh
$ cd $GOPATH/src/google.golang.org/grpc/examples/helloworld
```
gRPC services are defined in a `.proto` file, which is used to generate a corresponding `.pb.go` file. The `.pb.go` file is generated by compiling the `.proto` file using the protocol compiler: `protoc`.
For the purpose of this example, the `helloworld.pb.go` file has already been generated (by compiling `helloworld.proto`), and can be found in this directory: `$GOPATH/src/google.golang.org/grpc/examples/helloworld/helloworld`
This `helloworld.pb.go` file contains:
* Generated client and server code.
* Code for populating, serializing, and retrieving our `HelloRequest` and `HelloReply` message types.
### Try it!
To compile and run the server and client code, the `go run` command can be used.
In the examples directory:
```sh
$ go run greeter_server/main.go
```
From a different terminal:
```sh
$ go run greeter_client/main.go
```
If things go smoothly, you will see `Greeting: Hello world` in the client-side output.
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [What is gRPC?](/docs/guides) and [gRPC Basics:
Go](/docs/tutorials/basic/go/). For now all you need to know is that both the server and the client
"stub" have a `SayHello` RPC method that takes a `HelloRequest` parameter from
the client and returns a `HelloReply` from the server, and that this method
is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Make sure you are in the same examples dir as above (`$GOPATH/src/google.golang.org/grpc/examples/helloworld`)
Edit `helloworld/helloworld.proto` and update it with a new `SayHelloAgain` method, with the same request and response
types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition. From the same examples dir as above (`$GOPATH/src/google.golang.org/grpc/examples/helloworld`)
```sh
$ protoc -I helloworld/ helloworld/helloworld.proto --go_out=plugins=grpc:helloworld
```
This regenerates `helloworld.pb.go` with our new changes. The `plugins=grpc` option tells the Go plugin to generate gRPC client and server code in addition to the message types, and the path after the colon (`helloworld`) is the output directory.
### Update and run the application
We now have new generated server and client code, but we still need to implement
and call the new method in the human-written parts of our example application.
#### Update the server
Edit `greeter_server/main.go` and add the following function to it:
```go
func (s *server) SayHelloAgain(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
return &pb.HelloReply{Message: "Hello again " + in.Name}, nil
}
```
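If you want to sanity-check the handler's logic on its own, it can be exercised without the gRPC runtime by standing in plain structs for the generated `pb` types (a sketch for illustration, not part of the example):

```go
package main

import "fmt"

// Stand-ins for the generated pb.HelloRequest and pb.HelloReply types,
// so the handler logic can run without the gRPC runtime.
type HelloRequest struct{ Name string }
type HelloReply struct{ Message string }

// Mirrors the body of SayHelloAgain: build a reply from the request name.
func sayHelloAgain(in *HelloRequest) *HelloReply {
	return &HelloReply{Message: "Hello again " + in.Name}
}

func main() {
	fmt.Println(sayHelloAgain(&HelloRequest{Name: "world"}).Message)
}
```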
#### Update the client
Edit `greeter_client/main.go` to add the following code to the main function.
```go
r, err = c.SayHelloAgain(ctx, &pb.HelloRequest{Name: name})
if err != nil {
log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.Message)
```
#### Run!
Run the server
```sh
$ go run greeter_server/main.go
```
On a different terminal, run the client
```sh
$ go run greeter_client/main.go
```
You should see the updated output:
```sh
$ go run greeter_client/main.go
Greeting: Hello world
Greeting: Hello again world
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Go](/docs/tutorials/basic/go/)
- Explore the gRPC Go core API in its [reference
documentation](https://godoc.org/google.golang.org/grpc)

---
layout: quickstart
title: Java Quickstart
aliases: [/docs/quickstart/java.html]
---
<p class="lead">This guide gets you started with gRPC in Java with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
* `JDK`: version 7 or higher
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository at the latest release to get the example code:
$ git clone -b {{< param grpc_java_release_tag >}} https://github.com/grpc/grpc-java
$ # Navigate to the Java examples:
$ cd grpc-java/examples
```
### Run a gRPC application
From the `examples` directory:
1. Compile the client and server
```sh
$ ./gradlew installDist
```
2. Run the server
```sh
$ ./build/install/examples/bin/hello-world-server
```
3. In another terminal, run the client
```sh
$ ./build/install/examples/bin/hello-world-client
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: Java](/docs/tutorials/basic/java/). For now all you need to know is that both the
server and the client "stub" have a `SayHello` RPC method that takes a
`HelloRequest` parameter from the client and returns a `HelloReply` from the
server, and that this method is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`src/main/proto/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Update and run the application
When we recompile the example, normal compilation will regenerate
`GreeterGrpc.java`, which contains our generated gRPC client and server classes.
This also regenerates classes for populating, serializing, and retrieving our
request and response types.
However, we still need to implement and call the new method in the human-written
parts of our example application.
#### Update the server
In the same directory, open
`src/main/java/io/grpc/examples/helloworld/HelloWorldServer.java`. Implement the
new method like this:
```java
private class GreeterImpl extends GreeterGrpc.GreeterImplBase {
@Override
public void sayHello(HelloRequest req, StreamObserver<HelloReply> responseObserver) {
HelloReply reply = HelloReply.newBuilder().setMessage("Hello " + req.getName()).build();
responseObserver.onNext(reply);
responseObserver.onCompleted();
}
@Override
public void sayHelloAgain(HelloRequest req, StreamObserver<HelloReply> responseObserver) {
HelloReply reply = HelloReply.newBuilder().setMessage("Hello again " + req.getName()).build();
responseObserver.onNext(reply);
responseObserver.onCompleted();
}
}
...
```
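The server-side pattern here, building a reply, handing it to the response observer, then signaling completion, can be seen in isolation with a minimal stand-in for `io.grpc.stub.StreamObserver` (a sketch under that assumption, not the real gRPC interface):

```java
// Minimal stand-in for io.grpc.stub.StreamObserver, to show the handler pattern.
interface StreamObserver<T> {
    void onNext(T value);
    void onCompleted();
}

public class HandlerSketch {
    // Mirrors the shape of sayHelloAgain: build the reply, emit it, complete.
    static void sayHelloAgain(String name, StreamObserver<String> responseObserver) {
        responseObserver.onNext("Hello again " + name);
        responseObserver.onCompleted();
    }

    public static void main(String[] args) {
        sayHelloAgain("world", new StreamObserver<String>() {
            public void onNext(String value) { System.out.println(value); }
            public void onCompleted() { System.out.println("done"); }
        });
    }
}
```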
#### Update the client
In the same directory, open
`src/main/java/io/grpc/examples/helloworld/HelloWorldClient.java`. Call the new
method like this:
```java
public void greet(String name) {
logger.info("Will try to greet " + name + " ...");
HelloRequest request = HelloRequest.newBuilder().setName(name).build();
HelloReply response;
try {
response = blockingStub.sayHello(request);
} catch (StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Greeting: " + response.getMessage());
try {
response = blockingStub.sayHelloAgain(request);
} catch (StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Greeting: " + response.getMessage());
}
```
#### Run!
Just like we did before, from the `examples` directory:
1. Compile the client and server
```sh
$ ./gradlew installDist
```
2. Run the server
```sh
$ ./build/install/examples/bin/hello-world-server
```
3. In another terminal, run the client
```sh
$ ./build/install/examples/bin/hello-world-client
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Java](/docs/tutorials/basic/java/)
- Explore the gRPC Java core API in its [reference
documentation](/grpc-java/javadoc/)

---
title: Node Quick Start
layout: quickstart
aliases: [/docs/quickstart/node.html]
---
<p class="lead">This guide gets you started with gRPC in Node with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
* `node`: version 4.0.0 or higher
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository to get the example code
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ # Navigate to the dynamic codegen "hello, world" Node example:
$ cd grpc/examples/node/dynamic_codegen
$ # Install the example's dependencies
$ npm install
```
### Run a gRPC application
From the `examples/node/dynamic_codegen` directory:
1. Run the server
```sh
$ node greeter_server.js
```
2. In another terminal, run the client
```sh
$ node greeter_client.js
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: Node](/docs/tutorials/basic/node/). For now all you need
to know is that both the server and the client "stub" have a `SayHello` RPC
method that takes a `HelloRequest` parameter from the client and returns a
`HelloReply` from the server, and that this method is defined like this:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Update and run the application
We now have a new service definition, but we still need to implement and call
the new method in the human-written parts of our example application.
#### Update the server
In the same directory, open `greeter_server.js`. Implement the new method like
this:
```js
function sayHello(call, callback) {
callback(null, {message: 'Hello ' + call.request.name});
}
function sayHelloAgain(call, callback) {
callback(null, {message: 'Hello again, ' + call.request.name});
}
function main() {
var server = new grpc.Server();
server.addProtoService(hello_proto.Greeter.service,
{sayHello: sayHello, sayHelloAgain: sayHelloAgain});
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure());
server.start();
}
...
```
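Because each handler is just a function of `(call, callback)`, its logic can be tried in plain Node without a server by passing a stubbed call object (a sketch for illustration, not part of the example):

```javascript
// Stand-alone sketch: exercise the handler with a fake call object,
// simulating what gRPC does when a request arrives.
function sayHelloAgain(call, callback) {
  callback(null, {message: 'Hello again, ' + call.request.name});
}

sayHelloAgain({request: {name: 'world'}}, function(err, response) {
  console.log('Greeting:', response.message);
});
```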
#### Update the client
In the same directory, open `greeter_client.js`. Call the new method like this:
```js
function main() {
var client = new hello_proto.Greeter('localhost:50051',
grpc.credentials.createInsecure());
client.sayHello({name: 'you'}, function(err, response) {
console.log('Greeting:', response.message);
});
client.sayHelloAgain({name: 'you'}, function(err, response) {
console.log('Greeting:', response.message);
});
}
```
#### Run!
Just like we did before, from the `examples/node/dynamic_codegen` directory:
1. Run the server
```sh
$ node greeter_server.js
```
2. In another terminal, run the client
```sh
$ node greeter_client.js
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Node](/docs/tutorials/basic/node/)
- Explore the gRPC Node core API in its [reference
documentation](/grpc/node/)
- There is more than one gRPC implementation for Node.js. [Learn about the pros and cons of each here](https://github.com/grpc/grpc-node/blob/master/PACKAGE-COMPARISON.md).

---
layout: quickstart
title: Objective-C Quickstart
aliases: [/docs/quickstart/objective-c.html]
---
<p class="lead">This guide gets you started with gRPC on the iOS platform in
Objective-C with a simple working example.</p>
<div id="toc"></div>
### Before you begin
#### System requirement
The minimum deployment iOS version for gRPC is 7.0.
OS X El Capitan (version 10.11) or above is required to build and run this
Quickstart.
#### Prerequisites
* `CocoaPods`: version 1.0 or higher
* Check status and version of CocoaPods on your system with command `pod
--version`.
* If CocoaPods is not installed, follow the install instructions on CocoaPods
[website](https://cocoapods.org).
* `Xcode`: version 7.2 or higher
* Check your Xcode version by running Xcode from Launchpad, then select
"Xcode->About Xcode" in the menu.
* Make sure the command line developer tools are installed:
```sh
[sudo] xcode-select --install
```
* `Homebrew`
* Check status and version of Homebrew on your system with command `brew
--version`.
* If Homebrew is not installed, install with:
```sh
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
* `autoconf`, `automake`, `libtool`, `pkg-config`
* Install with Homebrew
```sh
brew install autoconf automake libtool pkg-config
```
### Download the example
You'll need a local copy of the sample app source code to work through this
Quickstart. Copy the source code from GitHub
[repository](https://github.com/grpc/grpc):
```sh
$ git clone --recursive -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc.git
```
### Install gRPC plugins and libraries
```sh
$ cd grpc
$ make
$ [sudo] make install
```
### Install protoc compiler
```sh
$ brew tap grpc/grpc
$ brew install protobuf
```
### Run the server
For this sample app, we need a gRPC server running on the local machine. The gRPC
Objective-C API supports creating gRPC clients but not servers, so instead we
build and run the C++ server from the same repository:
```sh
$ cd examples/cpp/helloworld
$ make
$ ./greeter_server &
```
### Run the client
#### Generate client libraries and dependencies
Have CocoaPods generate and install the client library from our `.proto` files, as
well as install several dependencies:
```sh
$ cd ../../objective-c/helloworld
$ pod install
```
(This might have to compile OpenSSL, which takes around 15 minutes if CocoaPods
doesn't yet have it in your computer's cache.)
#### Run the client app
Open the Xcode workspace created by CocoaPods:
```sh
$ open HelloWorld.xcworkspace
```
This will open the app project in Xcode. Run the app in an iOS simulator
by pressing the Run button in the top left corner of the Xcode window. You can check
the calling code in `main.m` and see the results in Xcode's console.
The code sends a `HLWHelloRequest` containing the string "Objective-C" to a
local server. The server responds with a `HLWHelloResponse`, which contains a
string "Hello Objective-C" that is then output to the console.
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using Protocol
Buffers; you can find out lots more about how to define a service in a `.proto`
file in Protocol Buffers
[website](https://developers.google.com/protocol-buffers/). For now all you
need to know is that both the server and the client "stub" have a `SayHello`
RPC method that takes a `HelloRequest` parameter from the client and returns a
`HelloReply` from the server, and that this method is defined like this:
```protobuf
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Update the client and server
We now have a new gRPC service definition, but we still need to implement and
call the new method in the human-written parts of our example application.
#### Update the server
As you remember, gRPC doesn't provide a server API for Objective-C. Instead, we
need to update the C++ sample server. Open
`examples/cpp/helloworld/greeter_server.cc`. Implement the new method like this:
```c++
class GreeterServiceImpl final : public Greeter::Service {
Status SayHello(ServerContext* context, const HelloRequest* request,
HelloReply* reply) override {
std::string prefix("Hello ");
reply->set_message(prefix + request->name());
return Status::OK;
}
Status SayHelloAgain(ServerContext* context, const HelloRequest* request,
HelloReply* reply) override {
std::string prefix("Hello again ");
reply->set_message(prefix + request->name());
return Status::OK;
}
};
```
#### Update the client
Edit `examples/objective-c/helloworld/main.m` to call the new method like this:
```objective-c
int main(int argc, char * argv[]) {
@autoreleasepool {
[GRPCCall useInsecureConnectionsForHost:kHostAddress];
[GRPCCall setUserAgentPrefix:@"HelloWorld/1.0" forHost:kHostAddress];
HLWGreeter *client = [[HLWGreeter alloc] initWithHost:kHostAddress];
HLWHelloRequest *request = [HLWHelloRequest message];
request.name = @"Objective-C";
[client sayHelloWithRequest:request handler:^(HLWHelloReply *response, NSError *error) {
NSLog(@"%@", response.message);
}];
[client sayHelloAgainWithRequest:request handler:^(HLWHelloReply *response, NSError *error) {
NSLog(@"%@", response.message);
}];
return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
}
}
```
### Build and run
First terminate the server process already running in the background:
```sh
$ pkill greeter_server
```
Then in directory `examples/cpp/helloworld`, build and run the updated server
with the following commands:
```sh
$ make
$ ./greeter_server &
```
Change directory to `examples/objective-c/helloworld`, then clean up and
reinstall Pods for the client app with the following commands:
```sh
$ rm -Rf Pods
$ rm Podfile.lock
$ rm -Rf HelloWorld.xcworkspace
$ pod install
```
This regenerates files in `Pods/HelloWorld` based on the new proto file we wrote
above. Open the client Xcode project in Xcode:
```sh
$ open HelloWorld.xcworkspace
```
and run the client app. If you look at the console messages, you should see two
RPC calls, one to `SayHello` and one to `SayHelloAgain`.
### Troubleshooting
**When installing CocoaPods, error prompt `activesupport requires Ruby version >= 2.2.2.`**
Install an older version of `activesupport`, then install CocoaPods:
```sh
[sudo] gem install activesupport -v 4.2.6
[sudo] gem install cocoapods
```
**When installing dependencies with CocoaPods, error prompt `Unable to find a specification for !ProtoCompiler-gRPCPlugin`**
Update the local clone of the spec repo by running `pod repo update`.
**Compiler error when compiling `objective_c_plugin.cc`**
Removing the `protobuf` package with Homebrew before building gRPC may solve
this problem. We are working on a more elegant fix.
**When building HelloWorld, error prompt `ld: unknown option: --no-as-needed`**
This problem is due to the `ld` linker in Apple LLVM not supporting the
`--no-as-needed` option. We are working on a fix and will merge it soon.
**When building grpc, error prompt `cannot find install-sh, install.sh, or shtool`**
It is likely that some auto-generated files are corrupt. Remove the gRPC
directory, clone a fresh copy, and build again.
**When building grpc, error prompt `Can't exec "aclocal"`**
The `automake` package is missing. Installing `automake` should solve this problem.
**When building grpc, error prompt `possibly undefined macro: AC_PROG_LIBTOOL`**
The `libtool` package is missing. Installing `libtool` should solve this problem.
**Cannot find `protoc` when building HelloWorld**
Run `brew install protobuf` to get the `protoc` compiler.
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Objective-C](/docs/tutorials/basic/objective-c/)
- Explore the Objective-C core API in its [reference
documentation](http://cocoadocs.org/docsets/gRPC/)

---
layout: quickstart
title: PHP Quickstart
aliases: [/docs/quickstart/php.html]
---
<p class="lead">This guide gets you started with gRPC in PHP with a simple
working example.</p>
<div id="toc"></div>
### Prerequisites
* `php`: 5.5 or above, or 7.0 or above
* `pecl`
* `composer`
* `phpunit` (optional)
**Install PHP and PECL on Ubuntu/Debian:**
For PHP5:
```sh
$ sudo apt-get install php5 php5-dev php-pear phpunit
```
For PHP7:
```sh
$ sudo apt-get install php7.0 php7.0-dev php-pear phpunit
```
or
```sh
$ sudo apt-get install php php-dev php-pear phpunit
```
**Install PHP and PECL on CentOS/RHEL 7:**
```sh
$ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ sudo rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
$ sudo yum install php56w php56w-devel php-pear phpunit gcc zlib-devel
```
**Install PHP and PECL on Mac:**
```sh
$ brew install homebrew/php/php56-grpc
$ curl -O http://pear.php.net/go-pear.phar
$ sudo php -d detect_unicode=0 go-pear.phar
```
**Install Composer (Linux or Mac):**
```sh
$ curl -sS https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/local/bin/composer
```
**Install PHPUnit (Linux or Mac):**
```sh
$ wget https://phar.phpunit.de/phpunit-old.phar
$ chmod +x phpunit-old.phar
$ sudo mv phpunit-old.phar /usr/bin/phpunit
```
### Install the gRPC PHP extension
There are two ways to install the gRPC PHP extension:
* via `pecl`
* by building from source
#### Using PECL
```sh
sudo pecl install grpc
```
or a specific version:
```sh
sudo pecl install grpc-1.7.0
```
Note: for users on CentOS/RHEL 6, unfortunately this step won't work.
Please follow the instructions below to compile the PECL extension from source.
##### Install on Windows
You can download the pre-compiled gRPC extension from the PECL
[website](https://pecl.php.net/package/grpc).
#### Build from Source with gRPC C core library
Clone the gRPC repository:
```sh
$ git clone -b $(curl -L https://grpc.io/release) https://github.com/grpc/grpc
```
##### Build and install the gRPC C core library
```sh
$ cd grpc
$ git submodule update --init
$ make
$ sudo make install
```
##### Build and install gRPC PHP extension
Compile the gRPC PHP extension
```sh
$ cd grpc/src/php/ext/grpc
$ phpize
$ ./configure
$ make
$ sudo make install
```
This compiles and installs the gRPC PHP extension into the
standard PHP extension directory. With the extension installed, you should be
able to run the [unit tests](#unit-tests).
#### Update php.ini
After installing the gRPC extension, make sure you add this line
to your `php.ini` file (e.g. `/etc/php5/cli/php.ini`,
`/etc/php5/apache2/php.ini`, or `/usr/local/etc/php/5.6/php.ini`,
depending on where your PHP installation is):
```sh
extension=grpc.so
```
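If you script this edit, it's easy to append the line twice. A guarded append keeps the edit idempotent; the sketch below runs against a scratch file (`./php.ini.sample` is a stand-in, so substitute your real `php.ini` path):

```shell
# Stand-in path; point this at your actual php.ini in practice.
PHP_INI="./php.ini.sample"
touch "$PHP_INI"
# Append extension=grpc.so only if no such exact line exists yet.
grep -qx 'extension=grpc.so' "$PHP_INI" || echo 'extension=grpc.so' >> "$PHP_INI"
grep -c 'extension=grpc.so' "$PHP_INI"   # prints 1, no matter how often this runs
```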
**Add the gRPC PHP library as a Composer dependency**
You need to add this to your project's `composer.json` file:
```json
"require": {
"grpc/grpc": "v1.7.0"
}
```
To run tests with stub code generated from `.proto` files, you will also
need the `composer` and `protoc` binaries. You can find out how to get these below.
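For a project whose only dependency is gRPC, the complete file can be as small as this (a minimal sketch; the version constraint is the one this guide assumes):

```json
{
  "require": {
    "grpc/grpc": "v1.7.0"
  }
}
```

Run `composer install` afterward to download the library into `vendor/` and set up autoloading.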
### Install other prerequisites for both Mac OS X and Linux
* `protoc`: protobuf compiler
* `protobuf.so`: protobuf runtime library
* `grpc_php_plugin`: generates a PHP gRPC service interface from protobuf IDL
#### Install Protobuf compiler
If you don't have it already, you need to install the protobuf compiler
`protoc`, version 3.4.0+ (the newer the better), for the current gRPC version.
If you have already installed it, make sure the protobuf version is compatible
with the gRPC version you installed. If you built `grpc.so` from source, you can
check the gRPC version in the `package.xml` file.
The compatibility between gRPC and protobuf versions is listed in the table below:
grpc | protobuf
--- | ---
v1.0.0 | 3.0.0(GA)
v1.0.1 | 3.0.2
v1.1.0 | 3.1.0
v1.2.0 | 3.2.0
v1.3.4 | 3.3.0
v1.3.5 | 3.2.0
v1.4.0 | 3.3.0
v1.6.0 | 3.4.0
If `protoc` hasn't been installed, you can download the `protoc` binaries from
[the protocol buffers GitHub repository](https://github.com/google/protobuf/releases).
Then unzip this file and update the `PATH` environment variable to include the
path to the protoc binary.
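The `PATH` update can be done once per shell session (or in your shell profile). In the sketch below, `$HOME/protoc` is an assumed extraction directory, so substitute wherever you actually unzipped the release:

```shell
# Assumed location of the unzipped protoc release; adjust as needed.
PROTOC_HOME="${PROTOC_HOME:-$HOME/protoc}"
# Prepend the bin directory only if it is not already on PATH (idempotent).
case ":$PATH:" in
  *":$PROTOC_HOME/bin:"*) ;;
  *) PATH="$PROTOC_HOME/bin:$PATH" ;;
esac
export PATH
```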
If you really must compile `protoc` from source, you can run the following
commands, but this is risky because there is no easy way to uninstall or
upgrade to a newer release.
```sh
$ cd grpc/third_party/protobuf
$ ./autogen.sh && ./configure && make
$ sudo make install
```
#### Protobuf Runtime library
There are two protobuf runtime libraries to choose from. They are identical
in terms of the APIs offered. The C implementation provides better performance,
while the native PHP implementation is easier to install. Make sure the
installed protobuf version works with your gRPC version.
##### 1. C implementation (for better performance)
``` sh
$ sudo pecl install protobuf
```
or a specific version:
``` sh
$ sudo pecl install protobuf-3.4.0
```
After the protobuf extension is installed, update your `php.ini` file by adding
this line (e.g. in `/etc/php5/cli/php.ini`, `/etc/php5/apache2/php.ini`, or
`/usr/local/etc/php/5.6/php.ini`, depending on where your PHP installation is):
```sh
extension=protobuf.so
```
##### 2. PHP implementation (for easier installation)
Add this to your `composer.json` file:
```json
"require": {
"google/protobuf": "^v3.3.0"
}
```
#### PHP Protoc Plugin
You need the gRPC PHP protoc plugin to generate the client stub classes.
It can generate server and client code from `.proto` service definitions.
It should already have been compiled when you ran `make` from the root directory
of this repo. The plugin can be found in the `bins/opt` directory. We are
planning to provide a better way to download and install the plugin
in the future.
You can also just build the gRPC PHP protoc plugin by running:
```sh
$ git clone -b $(curl -L https://grpc.io/release) https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
$ make grpc_php_plugin
```
The plugin may use features from newer protobuf versions, so please also make
sure that the installed protobuf version is compatible with the gRPC version
against which you build this plugin.
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
Note that currently you can only create clients in PHP for gRPC services;
you can find out how to create gRPC servers in our other tutorials,
e.g. [Node.js](/docs/tutorials/basic/node/).
```sh
$ # Clone the repository to get the example code:
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ # Build grpc_php_plugin to generate stubs from proto files, if not built before
$ cd grpc && git submodule update --init && make grpc_php_plugin
$ # Navigate to the "hello, world" PHP example:
$ cd examples/php
$ ./greeter_proto_gen.sh
$ composer install
```
### Run a gRPC application
From the `examples/node` directory:
1. Run the server
```sh
$ npm install
$ cd dynamic_codegen
$ node greeter_server.js
```
In another terminal, from the `examples/php` directory:
1. Run the client
```sh
$ ./run_greeter_client.sh
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: PHP](/docs/tutorials/basic/php/). For now all you need to know is that both the
server and the client "stub" have a `SayHello` RPC method that takes a
`HelloRequest` parameter from the client and returns a `HelloReply` from
the server, and that this method is defined like this:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition. From the `grpc` root directory:
```sh
$ protoc --proto_path=examples/protos \
--php_out=examples/php \
--grpc_out=examples/php \
--plugin=protoc-gen-grpc=bins/opt/grpc_php_plugin \
./examples/protos/helloworld.proto
```
or run the helper script under the `grpc/examples/php` directory if you built
`grpc_php_plugin` from source:
```sh
$ ./greeter_proto_gen.sh
```
This regenerates the protobuf files, which contain our generated client classes,
as well as classes for populating, serializing, and retrieving our request and
response types.
### Update and run the application
We now have new generated client code, but we still need to implement and call
the new method in the human-written parts of our example application.
#### Update the server
In the `examples/node/dynamic_codegen` directory, open `greeter_server.js`.
Implement the new method like this:
```js
function sayHello(call, callback) {
callback(null, {message: 'Hello ' + call.request.name});
}
function sayHelloAgain(call, callback) {
callback(null, {message: 'Hello again, ' + call.request.name});
}
function main() {
var server = new grpc.Server();
server.addProtoService(hello_proto.Greeter.service,
{sayHello: sayHello, sayHelloAgain: sayHelloAgain});
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure());
server.start();
}
...
```
#### Update the client
In the same directory, open `greeter_client.php`. Call the new method like this:
```php
$request = new Helloworld\HelloRequest();
$request->setName($name);
list($reply, $status) = $client->SayHello($request)->wait();
$message = $reply->getMessage();
list($reply, $status) = $client->SayHelloAgain($request)->wait();
$message = $reply->getMessage();
```
#### Run!
Just like we did before, from the `examples/node/dynamic_codegen` directory:
1. Run the server
```sh
$ node greeter_server.js
```
In another terminal, from the `examples/php` directory:
2. Run the client
```sh
$ ./run_greeter_client.sh
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: PHP](/docs/tutorials/basic/php/)
- Explore the gRPC PHP core API in its [reference
documentation](/grpc/php/namespace-Grpc.html)

---
layout: quickstart
title: Python Quickstart
aliases: [/docs/quickstart/python.html]
---
<p class="lead">This guide gets you started with gRPC in Python with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
gRPC Python is supported for use with Python 2.7 or Python 3.4 or higher.
Ensure you have `pip` version 9.0.1 or higher:
```sh
$ python -m pip install --upgrade pip
```
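Note that version strings compare numerically, not lexically: "10.0" is newer than "9.0.1" even though it sorts lower as a string. A minimal sketch of the comparison (the 9.0.1 threshold comes from this guide; real-world version strings can carry suffixes this simple parser doesn't handle):

```python
# Compare dotted version strings numerically, field by field.
def meets_minimum(installed: str, required: str = "9.0.1") -> bool:
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

print(meets_minimum("9.0.1"))   # True
print(meets_minimum("8.1.2"))   # False
print(meets_minimum("10.0"))    # True: (10, 0) outranks (9, 0, 1)
```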
If you cannot upgrade `pip` due to a system-owned installation, you can
run the example in a virtualenv:
```sh
$ python -m pip install virtualenv
$ virtualenv venv
$ source venv/bin/activate
$ python -m pip install --upgrade pip
```
#### Install gRPC
Install gRPC:
```sh
$ python -m pip install grpcio
```
Or, to install it system wide:
```sh
$ sudo python -m pip install grpcio
```
On OS X El Capitan, you may get the following error:
```sh
OSError: [Errno 1] Operation not permitted: '/tmp/pip-qwTLbI-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
```
You can work around this using:
```sh
$ python -m pip install grpcio --ignore-installed
```
#### Install gRPC tools
Python's gRPC tools include the protocol buffer compiler `protoc` and the
special plugin for generating server and client code from `.proto` service
definitions. For the first part of our quickstart example, we've already
generated the server and client stubs from
[helloworld.proto](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/protos/helloworld.proto),
but you'll need the tools for the rest of our quickstart, as well as later
tutorials and your own projects.
To install gRPC tools, run:
```sh
$ python -m pip install grpcio-tools
```
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository to get the example code:
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ # Navigate to the "hello, world" Python example:
$ cd grpc/examples/python/helloworld
```
### Run a gRPC application
From the `examples/python/helloworld` directory:
1. Run the server
```sh
$ python greeter_server.py
```
2. In another terminal, run the client
```sh
$ python greeter_client.py
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [What is gRPC?](/docs/guides/) and [gRPC Basics: Python](/docs/tutorials/basic/python/). For now all you need
to know is that both the server and the client "stub" have a `SayHello` RPC
method that takes a `HelloRequest` parameter from the client and returns a
`HelloReply` from the server, and that this method is defined like this:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition.
From the `examples/python/helloworld` directory, run:
```sh
$ python -m grpc_tools.protoc -I../../protos --python_out=. --grpc_python_out=. ../../protos/helloworld.proto
```
This regenerates `helloworld_pb2.py` which contains our generated request and
response classes and `helloworld_pb2_grpc.py` which contains our generated
client and server classes.
### Update and run the application
We now have new generated server and client code, but we still need to implement
and call the new method in the human-written parts of our example application.
#### Update the server
In the same directory, open `greeter_server.py`. Implement the new method like
this:
```py
class Greeter(helloworld_pb2_grpc.GreeterServicer):
def SayHello(self, request, context):
return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)
def SayHelloAgain(self, request, context):
return helloworld_pb2.HelloReply(message='Hello again, %s!' % request.name)
...
```
#### Update the client
In the same directory, open `greeter_client.py`. Call the new method like this:
```py
def run():
channel = grpc.insecure_channel('localhost:50051')
stub = helloworld_pb2_grpc.GreeterStub(channel)
response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
print("Greeter client received: " + response.message)
response = stub.SayHelloAgain(helloworld_pb2.HelloRequest(name='you'))
print("Greeter client received: " + response.message)
```
#### Run!
Just like we did before, from the `examples/python/helloworld` directory:
1. Run the server
```sh
$ python greeter_server.py
```
2. In another terminal, run the client
```sh
$ python greeter_client.py
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Python](/docs/tutorials/basic/python/)
- Explore the gRPC Python core API in its [reference
documentation](/grpc/python/)

---
title: Ruby Quick Start
layout: quickstart
aliases: [/docs/quickstart/ruby.html]
---
<p class="lead">This guide gets you started with gRPC in Ruby with a simple
working example.</p>
<div id="toc"></div>
### Before you begin
#### Prerequisites
* `ruby`: version 2 or higher
#### Install gRPC
```sh
$ gem install grpc
```
#### Install gRPC tools
Ruby's gRPC tools include the protocol buffer compiler `protoc` and the special
plugin for generating server and client code from the `.proto` service
definitions. For the first part of our quickstart example, we've already
generated the server and client stubs from
[helloworld.proto](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/protos/helloworld.proto),
but you'll need the tools for the rest of our quickstart, as well as later
tutorials and your own projects.
To install gRPC tools, run:
```sh
$ gem install grpc-tools
```
### Download the example
You'll need a local copy of the example code to work through this quickstart.
Download the example code from our GitHub repository (the following command
clones the entire repository, but you just need the examples for this quickstart
and other tutorials):
```sh
$ # Clone the repository to get the example code:
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ # Navigate to the "hello, world" Ruby example:
$ cd grpc/examples/ruby
```
### Run a gRPC application
From the `examples/ruby` directory:
1. Run the server
```sh
$ ruby greeter_server.rb
```
2. In another terminal, run the client
```sh
$ ruby greeter_client.rb
```
Congratulations! You've just run a client-server application with gRPC.
### Update a gRPC service
Now let's look at how to update the application with an extra method on the
server for the client to call. Our gRPC service is defined using protocol
buffers; you can find out lots more about how to define a service in a `.proto`
file in [gRPC Basics: Ruby](/docs/tutorials/basic/ruby/). For now all you need
to know is that both the server and the client "stub" have a `SayHello` RPC
method that takes a `HelloRequest` parameter from the client and returns a
`HelloReply` from the server, and that this method is defined like this:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
Let's update this so that the `Greeter` service has two methods. Edit
`examples/protos/helloworld.proto` and update it with a new `SayHelloAgain`
method, with the same request and response types:
```proto
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
```
(Don't forget to save the file!)
### Generate gRPC code
Next we need to update the gRPC code used by our application to use the new
service definition. From the `examples/ruby/` directory:
```sh
$ grpc_tools_ruby_protoc -I ../protos --ruby_out=lib --grpc_out=lib ../protos/helloworld.proto
```
This regenerates `lib/helloworld_services_pb.rb`, which contains our generated
client and server classes.
#### Update the server
In the same directory, open `greeter_server.rb`. Implement the new method like this:
```rb
class GreeterServer < Helloworld::Greeter::Service
def say_hello(hello_req, _unused_call)
Helloworld::HelloReply.new(message: "Hello #{hello_req.name}")
end
def say_hello_again(hello_req, _unused_call)
Helloworld::HelloReply.new(message: "Hello again, #{hello_req.name}")
end
end
...
```
#### Update the client
In the same directory, open `greeter_client.rb`. Call the new method like this:
```rb
def main
stub = Helloworld::Greeter::Stub.new('localhost:50051', :this_channel_is_insecure)
user = ARGV.size > 0 ? ARGV[0] : 'world'
message = stub.say_hello(Helloworld::HelloRequest.new(name: user)).message
p "Greeting: #{message}"
message = stub.say_hello_again(Helloworld::HelloRequest.new(name: user)).message
p "Greeting: #{message}"
end
```
#### Run!
Just like we did before, from the `examples/ruby` directory:
1. Run the server
```sh
$ ruby greeter_server.rb
```
2. In another terminal, run the client
```sh
$ ruby greeter_client.rb
```
### What's next
- Read a full explanation of how gRPC works in [What is gRPC?](/docs/guides/)
and [gRPC Concepts](/docs/guides/concepts/)
- Work through a more detailed tutorial in [gRPC Basics: Ruby](/docs/tutorials/basic/ruby/)
- Explore the gRPC Ruby core API in its [reference
documentation](http://www.rubydoc.info/gems/grpc)

---
title: Web Quick Start
layout: quickstart
aliases: [/docs/quickstart/web.html]
---
<p class="lead">This guide gets you started with gRPC-Web with a simple
working example from the browser.</p>
<div id="toc"></div>
### Prerequisites
* `docker` and `docker-compose`
This demo requires a Docker Compose file at
[version 3](https://docs.docker.com/compose/compose-file/). Please refer to the
[Docker website](https://docs.docker.com/compose/install/#install-compose) for how to install Docker.
### Run an Echo example from the browser!
```sh
$ git clone https://github.com/grpc/grpc-web
$ cd grpc-web
$ docker-compose pull
$ docker-compose up -d node-server envoy commonjs-client
```
Open a browser tab, and go to:
```sh
http://localhost:8081/echotest.html
```
To shutdown, run `docker-compose down`.
### What is Happening?
In this demo, there are three key components:
1. `node-server`: This is a standard gRPC Server, implemented in Node.
This server listens at port `:9090` and implements the service's business
logic.
2. `envoy`: This is the Envoy proxy. It listens at `:8080` and forwards the
browser's gRPC-Web requests to port `:9090`. This is done via a config file
`envoy.yaml`.
3. `commonjs-client`: This component generates the client stub class using
the `protoc-gen-grpc-web` protoc plugin, compiles all the JS dependencies
using `webpack`, and hosts the static content `echotest.html` and
`dist/main.js` using a simple web server at port `:8081`. Once the user
interacts with the webpage, it sends a gRPC-Web request to the Envoy proxy
endpoint at `:8080`.
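The Envoy piece of the picture (component 2) boils down to one browser-facing listener and one upstream cluster. The fragment below is a heavily trimmed sketch of that idea, not the full file; consult `envoy.yaml` in the grpc-web repository for the authoritative configuration, including the complete filter chain.

```yaml
# Trimmed sketch only; the HTTP connection manager, gRPC-Web, CORS, and
# router filters are elided. See envoy.yaml in the grpc-web repo.
static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }  # browser-facing
      # filter chain here translates gRPC-Web requests into gRPC and
      # routes them to the echo_service cluster below
  clusters:
    - name: echo_service
      # upstream gRPC server (the node-server container) on port 9090
```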
### What's next
- Work through a more detailed tutorial in [gRPC Basics: Web](/docs/tutorials/basic/web/)

---
bodyclass: docs
headline: 'Language Specific API Reference'
layout: docs
title: Reference
type: markdown
---
<p class="lead">Links to the language specific automatically generated API reference documentation.</p>
<ul>
<li><a target="_blank" href="/grpc/cpp/index.html">C++ API</a></li>
<li><a target="_blank" href="/grpc-java/javadoc/index.html">Java API</a></li>
<li><a target="_blank" href="/grpc/python/">Python API</a></li>
<li><a target="_blank" href="http://www.rubydoc.info/gems/grpc">Ruby API</a></li>
<li><a target="_blank" href="/grpc/node/">Node.js API</a></li>
<li><a target="_blank" href="/grpc/csharp/api/Grpc.Core.html">C# API</a></li>
<li><a target="_blank" href="https://godoc.org/google.golang.org/grpc">Go API</a></li>
<li><a target="_blank" href="/grpc/php/namespace-Grpc.html">PHP API</a></li>
<li><a target="_blank" href="http://cocoadocs.org/docsets/gRPC/">Objective-C API</a></li>
<li><a target="_blank" href="/grpc/core/">gRPC Core Library (for wrapped languages)</a></li>
</ul>

---
bodyclass: docs
headline: C++ Client Reference
layout: docs
title: C++ Client Reference
---
<p class="lead">Being familiar with these will go a long way.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>

---
bodyclass: docs
headline: C++ Server Reference
layout: docs
title: C++ Server Reference
---
<p class="lead">Being familiar with these will go a long way.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>

---
bodyclass: docs
headline: Go Generated Code Reference
layout: docs
aliases: [/docs/reference/go/generated-code.html]
---
# Go Generated Code Reference
This guide describes the code generated by the [grpc plugin](https://godoc.org/github.com/golang/protobuf/protoc-gen-go/grpc) for `protoc-gen-go`
when compiling `.proto` files with `protoc`.
You can find out how to define a gRPC service in a `.proto` file in [Service Definitions](/docs/guides/concepts/#service-definition).
<p class="note"><strong>Thread-safety</strong>: note that client-side RPC invocations and server-side RPC handlers <i>are thread-safe</i> and are meant
to be run on concurrent goroutines. But also note that for <i>individual streams</i>, incoming and outgoing data is bi-directional but serial;
so e.g. <i>individual streams</i> do not support <i>concurrent reads</i> or <i>concurrent writes</i> (but reads are safely concurrent <i>with</i> writes).
</p>
## Methods on generated server interfaces
On the server side, each `service Bar` in the `.proto` file results in the function:
`func RegisterBarServer(s *grpc.Server, srv BarServer)`
The application can define a concrete implementation of the `BarServer` interface and register it with a `grpc.Server` instance
(before starting the server instance) by using this function.
### Unary methods
These methods have the following signature on the generated service interface:
`Foo(context.Context, *MsgA) (*MsgB, error)`
In this context, `MsgA` is the protobuf message sent from the client, and `MsgB` is the protobuf message sent back from the server.
### Server-streaming methods
These methods have the following signature on the generated service interface:
`Foo(*MsgA, <ServiceName>_FooServer) error`
In this context, `MsgA` is the single request from the client, and the `<ServiceName>_FooServer` parameter represents the server-to-client stream
of `MsgB` messages.
`<ServiceName>_FooServer` has an embedded `grpc.ServerStream` and the following interface:
```go
type <ServiceName>_FooServer interface {
Send(*MsgB) error
grpc.ServerStream
}
```
The server-side handler can send a stream of protobuf messages to the client through this parameter's `Send` method. End-of-stream for the server-to-client
stream is caused by the `return` of the handler method.
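For illustration (a sketch using stand-in types and a mock in place of the generated stream, which also embeds `grpc.ServerStream`), a server-streaming handler pushes messages via `Send` and ends the stream by returning:

```go
package main

import "fmt"

// Stand-in message types for this sketch.
type MsgA struct{ Name string }
type MsgB struct{ Text string }

// Stand-in for the generated <ServiceName>_FooServer interface
// (minus the embedded grpc.ServerStream).
type fooServerStream interface {
	Send(*MsgB) error
}

// Foo sends several messages; returning ends the server-to-client stream.
func Foo(in *MsgA, stream fooServerStream) error {
	for i := 0; i < 3; i++ {
		if err := stream.Send(&MsgB{Text: fmt.Sprintf("%s #%d", in.Name, i)}); err != nil {
			return err
		}
	}
	return nil
}

// mockStream records sent messages so the handler can be exercised
// without a real transport.
type mockStream struct{ sent []*MsgB }

func (m *mockStream) Send(b *MsgB) error { m.sent = append(m.sent, b); return nil }

func main() {
	m := &mockStream{}
	Foo(&MsgA{Name: "msg"}, m)
	fmt.Println(len(m.sent)) // 3
}
```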
### Client-streaming methods
These methods have the following signature on the generated service interface:
`Foo(<ServiceName>_FooServer) error`
In this context, `<ServiceName>_FooServer` can be used both to read the client-to-server message stream and to send the single server response message.
`<ServiceName>_FooServer` has an embedded `grpc.ServerStream` and the following interface:
```go
type <ServiceName>_FooServer interface {
SendAndClose(*MsgB) error
Recv() (*MsgA, error)
grpc.ServerStream
}
```
The server-side handler can repeatedly call `Recv` on this parameter in order to receive the full stream of
messages from the client. `Recv` returns `(nil, io.EOF)` once it has reached the end of the stream.
The single response message from the server is sent by calling the `SendAndClose` method on this `<ServiceName>_FooServer` parameter.
Note that `SendAndClose` must be called once and only once.
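Sketched with stand-in types (`Req` for a client-to-server message, `Resp` for the single response) and a mock stream in place of the generated interface, the `Recv`-until-`io.EOF`-then-`SendAndClose` pattern looks like:

```go
package main

import (
	"fmt"
	"io"
)

// Stand-in types; real code uses the generated message structs and stream
// interface (which also embeds grpc.ServerStream).
type Req struct{ N int }
type Resp struct{ Sum int }

type fooStream interface {
	Recv() (*Req, error)
	SendAndClose(*Resp) error
}

// Foo reads the client stream until io.EOF, then sends exactly one
// response via SendAndClose.
func Foo(stream fooStream) error {
	sum := 0
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return stream.SendAndClose(&Resp{Sum: sum})
		}
		if err != nil {
			return err
		}
		sum += in.N
	}
}

// mockStream feeds a fixed set of requests and records the response.
type mockStream struct {
	reqs []*Req
	resp *Resp
}

func (m *mockStream) Recv() (*Req, error) {
	if len(m.reqs) == 0 {
		return nil, io.EOF
	}
	r := m.reqs[0]
	m.reqs = m.reqs[1:]
	return r, nil
}

func (m *mockStream) SendAndClose(r *Resp) error { m.resp = r; return nil }

func main() {
	m := &mockStream{reqs: []*Req{{N: 1}, {N: 2}, {N: 3}}}
	Foo(m)
	fmt.Println(m.resp.Sum) // 6
}
```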
### Bidi-streaming methods
These methods have the following signature on the generated service interface:
`Foo(<ServiceName>_FooServer) error`
In this context, `<ServiceName>_FooServer` can be used to access both the client-to-server message stream and the server-to-client message stream.
`<ServiceName>_FooServer` has an embedded `grpc.ServerStream` and the following interface:
```go
type <ServiceName>_FooServer interface {
Send(*MsgB) error
Recv() (*MsgA, error)
grpc.ServerStream
}
```
The server-side handler can repeatedly call `Recv` on this parameter in order to read the client-to-server message stream.
`Recv` returns `(nil, io.EOF)` once it has reached the end of the client-to-server stream.
The response server-to-client message stream is sent by repeatedly calling the `Send` method on this `<ServiceName>_FooServer` parameter.
End-of-stream for the server-to-client stream is indicated by the `return` of the bidi method handler.
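A minimal echo handler illustrates this flow (stand-in `In`/`Out` types and a mock stream replace the generated interface, which also embeds `grpc.ServerStream`):

```go
package main

import (
	"fmt"
	"io"
)

// Stand-in message types for this sketch.
type In struct{ Text string }
type Out struct{ Text string }

// Stand-in for the generated bidi stream interface.
type bidiStream interface {
	Recv() (*In, error)
	Send(*Out) error
}

// Foo echoes every incoming message; returning nil ends the
// server-to-client stream.
func Foo(stream bidiStream) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&Out{Text: "echo: " + in.Text}); err != nil {
			return err
		}
	}
}

// mockStream supplies canned inputs and records outputs.
type mockStream struct {
	in  []*In
	out []*Out
}

func (m *mockStream) Recv() (*In, error) {
	if len(m.in) == 0 {
		return nil, io.EOF
	}
	r := m.in[0]
	m.in = m.in[1:]
	return r, nil
}

func (m *mockStream) Send(o *Out) error { m.out = append(m.out, o); return nil }

func main() {
	m := &mockStream{in: []*In{{Text: "a"}, {Text: "b"}}}
	Foo(m)
	fmt.Println(m.out[0].Text, m.out[1].Text) // echo: a echo: b
}
```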
## Methods on generated client interfaces
For client-side usage, each `service Bar` in the `.proto` file also results in the function `func NewBarClient(cc *grpc.ClientConn) BarClient`, which
returns a concrete implementation of the `BarClient` interface (this concrete implementation also lives in the generated `.pb.go` file).
### Unary Methods
These methods have the following signature on the generated client stub:
`Foo(ctx context.Context, in *MsgA, opts ...grpc.CallOption) (*MsgB, error)`
In this context, `MsgA` is the single request from client to server, and `MsgB` contains the response sent back from the server.
### Server-Streaming methods
These methods have the following signature on the generated client stub:
`Foo(ctx context.Context, in *MsgA, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)`
In this context, `<ServiceName>_FooClient` represents the server-to-client `stream` of `MsgB` messages.
This stream has an embedded `grpc.ClientStream` and the following interface:
```go
type <ServiceName>_FooClient interface {
Recv() (*MsgB, error)
grpc.ClientStream
}
```
The stream begins when the client calls the `Foo` method on the stub.
The client can then repeatedly call the `Recv` method on the returned `<ServiceName>_FooClient` <i>stream</i> in order to read the server-to-client response stream.
This `Recv` method returns `(nil, io.EOF)` once the server-to-client stream has been completely read through.
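That read loop can be sketched as follows (with a stand-in `MsgB` type and a mock playing back canned responses in place of the generated stream, which also embeds `grpc.ClientStream`):

```go
package main

import (
	"fmt"
	"io"
)

// Stand-in for the generated response struct.
type MsgB struct{ Text string }

// Stand-in for the generated <ServiceName>_FooClient interface.
type fooClient interface {
	Recv() (*MsgB, error)
}

// collect drains the server-to-client stream until Recv reports io.EOF.
func collect(stream fooClient) ([]string, error) {
	var texts []string
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return texts, nil
		}
		if err != nil {
			return nil, err
		}
		texts = append(texts, msg.Text)
	}
}

// mockClient plays back canned responses, then io.EOF.
type mockClient struct{ msgs []*MsgB }

func (m *mockClient) Recv() (*MsgB, error) {
	if len(m.msgs) == 0 {
		return nil, io.EOF
	}
	r := m.msgs[0]
	m.msgs = m.msgs[1:]
	return r, nil
}

func main() {
	texts, _ := collect(&mockClient{msgs: []*MsgB{{Text: "one"}, {Text: "two"}}})
	fmt.Println(texts) // [one two]
}
```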
### Client-Streaming methods
These methods have the following signature on the generated client stub:
`Foo(ctx context.Context, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)`
In this context, `<ServiceName>_FooClient` represents the client-to-server `stream` of `MsgA` messages.
`<ServiceName>_FooClient` has an embedded `grpc.ClientStream` and the following interface:
```go
type <ServiceName>_FooClient interface {
Send(*MsgA) error
CloseAndRecv() (*MsgB, error)
grpc.ClientStream
}
```
The stream begins when the client calls the `Foo` method on the stub.
The client can then repeatedly call the `Send` method on the returned `<ServiceName>_FooClient` stream in order to send the client-to-server message stream.
The `CloseAndRecv` method on this stream must be called once and only once, in order to both close the client-to-server stream
and receive the single response message from the server.
### Bidi-Streaming methods
These methods have the following signature on the generated client stub:
`Foo(ctx context.Context, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)`
In this context, `<ServiceName>_FooClient` represents both the client-to-server and server-to-client message streams.
`<ServiceName>_FooClient` has an embedded `grpc.ClientStream` and the following interface:
```go
type <ServiceName>_FooClient interface {
Send(*MsgA) error
Recv() (*MsgB, error)
grpc.ClientStream
}
```
The stream begins when the client calls the `Foo` method on the stub.
The client can then repeatedly call the `Send` method on the returned `<ServiceName>_FooClient` stream in order to send the
client-to-server message stream. The client can also repeatedly call `Recv` on this stream in order to
receive the full server-to-client message stream.
End-of-stream for the server-to-client stream is indicated by a return value of `(nil, io.EOF)` on the `Recv` method of the stream.
End-of-stream for the client-to-server stream can be indicated from the client by calling the `CloseSend` method on the stream.
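A common client-side pattern is to send on a separate goroutine, call `CloseSend`, and read replies until `io.EOF`. A sketch with stand-in types and a mock echo stream (real code uses the generated stream, which also embeds `grpc.ClientStream`):

```go
package main

import (
	"fmt"
	"io"
)

// Stand-ins for the generated message structs.
type MsgA struct{ Text string }
type MsgB struct{ Text string }

// Stand-in for the generated bidi client stream interface.
type fooBidiClient interface {
	Send(*MsgA) error
	Recv() (*MsgB, error)
	CloseSend() error
}

// run sends all inputs on a goroutine, signals end-of-stream with
// CloseSend, and reads replies until Recv reports io.EOF.
func run(stream fooBidiClient, inputs []string) ([]string, error) {
	go func() {
		for _, s := range inputs {
			stream.Send(&MsgA{Text: s})
		}
		stream.CloseSend()
	}()
	var replies []string
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return replies, nil
		}
		if err != nil {
			return nil, err
		}
		replies = append(replies, in.Text)
	}
}

// mockStream echoes each sent message back over a channel; closing the
// channel models end-of-stream.
type mockStream struct{ ch chan string }

func (m *mockStream) Send(a *MsgA) error { m.ch <- "echo: " + a.Text; return nil }
func (m *mockStream) CloseSend() error   { close(m.ch); return nil }

func (m *mockStream) Recv() (*MsgB, error) {
	text, ok := <-m.ch
	if !ok {
		return nil, io.EOF
	}
	return &MsgB{Text: text}, nil
}

func main() {
	replies, _ := run(&mockStream{ch: make(chan string)}, []string{"a", "b"})
	fmt.Println(replies) // [echo: a echo: b]
}
```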
## Packages and Namespaces
When the `protoc` compiler is invoked with `--go_out=plugins=grpc:`, the `proto package` to Go package translation
works the same as when the `protoc-gen-go` plugin is used without the `grpc` plugin.
So, for example, if `foo.proto` declares itself to be in `package foo`, then the generated `foo.pb.go` file will also be in
the Go package `foo`.

View File

@ -0,0 +1,15 @@
---
bodyclass: docs
layout: docs
title: Java Client Reference
---
<h2 class="page-header">Java Client Reference</h2>
<p class="lead">Being familiar with these will go a long way.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>

View File

@ -0,0 +1,322 @@
---
bodyclass: docs
title: Java Generated Code Reference
layout: docs
aliases: [/docs/reference/java/generated-code.html]
---
# Java Generated Code Reference
## Packages
For each service defined in a .proto file, the Java code generation produces a
Java class. The class name is the service's name suffixed by `Grpc`. The package
for the generated code is specified in the .proto file using the `java_package`
option.
For example, if `ServiceName` is defined in a .proto file containing the
following:
```protobuf
package grpcexample;
option java_package = "io.grpc.examples";
```
Then the generated class will be `io.grpc.examples.ServiceNameGrpc`.
If `java_package` is not specified, the generated class will use the `package`
as specified in the .proto file. This should be avoided, as proto packages
usually do not begin with a reversed domain name.
## Service Stub
The generated Java code contains an inner abstract class suffixed with
`ImplBase`, such as `ServiceNameImplBase`. This class defines one Java method
for each method in the service definition. It is up to the service implementer
to extend this class and implement the functionality of these methods. Without
being overridden, the methods return an error to the client saying the method is
unimplemented.
The signatures of the stub methods in `ServiceNameImplBase` vary depending on
the type of RPCs it handles. There are four types of gRPC service methods:
unary, server-streaming, client-streaming, and bidirectional-streaming.
### Unary
The service stub signature for a unary RPC method `unaryExample`:
```java
public void unaryExample(
RequestType request,
StreamObserver<ResponseType> responseObserver)
```
### Server-streaming
The service stub signature for a server-streaming RPC method
`serverStreamingExample`:
```java
public void serverStreamingExample(
RequestType request,
StreamObserver<ResponseType> responseObserver)
```
Notice that the signatures for unary and server-streaming RPCs are the same. A
single `RequestType` is received from the client, and the service implementation
sends its response(s) by invoking `responseObserver.onNext(ResponseType
response)`.
### Client-streaming
The service stub signature for a client-streaming RPC method
`clientStreamingExample`:
```java
public StreamObserver<RequestType> clientStreamingExample(
StreamObserver<ResponseType> responseObserver)
```
### Bidirectional-streaming
The service stub signature for a bidirectional-streaming RPC method
`bidirectionalStreamingExample`:
```java
public StreamObserver<RequestType> bidirectionalStreamingExample(
StreamObserver<ResponseType> responseObserver)
```
The signatures for client and bidirectional-streaming RPCs are the same. Since
the client can send multiple messages to the service, the service implementation
is responsible for returning a `StreamObserver<RequestType>` instance. This
`StreamObserver` is invoked whenever additional messages are received from the
client.
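As an illustrative sketch (the `StreamObserver` interface below is a local stand-in for `io.grpc.stub.StreamObserver`, and the `Integer` request/response types are hypothetical), a client-streaming implementation can accumulate requests and emit a single response when the client completes:

```java
// Stand-in for io.grpc.stub.StreamObserver so the sketch is self-contained.
interface StreamObserver<T> {
    void onNext(T value);
    void onError(Throwable t);
    void onCompleted();
}

// Hypothetical service: sums a stream of Integer requests into one response.
class SumService {
    public StreamObserver<Integer> clientStreamingExample(
            final StreamObserver<Integer> responseObserver) {
        return new StreamObserver<Integer>() {
            private int sum = 0;

            @Override public void onNext(Integer value) { sum += value; }

            @Override public void onError(Throwable t) {
                responseObserver.onError(t);
            }

            @Override public void onCompleted() {
                // The client has finished streaming: emit the single response.
                responseObserver.onNext(sum);
                responseObserver.onCompleted();
            }
        };
    }
}
```

The runtime calls `onNext` on the returned observer for each client message, and `onCompleted` when the client finishes.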
## Client Stubs
The generated class also contains stubs for use by gRPC clients to call methods
defined by the service. Each stub wraps a `Channel`, supplied by the user of the
generated code. The stub uses this channel to send RPCs to the service.
gRPC Java generates code for three types of stubs: asynchronous, blocking, and
future. Each type of stub has a corresponding class in the generated code, such
as `ServiceNameStub`, `ServiceNameBlockingStub`, and `ServiceNameFutureStub`.
### Asynchronous Stub
RPCs made via an asynchronous stub operate entirely through callbacks on
`StreamObserver`.
The asynchronous stub contains one Java method for each method from the service
definition.
A new asynchronous stub is instantiated via the `ServiceNameGrpc.newStub(Channel
channel)` static method.
#### Unary
The asynchronous stub signature for a unary RPC method `unaryExample`:
```java
public void unaryExample(
RequestType request,
StreamObserver<ResponseType> responseObserver)
```
#### Server-streaming
The asynchronous stub signature for a server-streaming RPC method
`serverStreamingExample`:
```java
public void serverStreamingExample(
RequestType request,
StreamObserver<ResponseType> responseObserver)
```
#### Client-streaming
The asynchronous stub signature for a client-streaming RPC method
`clientStreamingExample`:
```java
public StreamObserver<RequestType> clientStreamingExample(
StreamObserver<ResponseType> responseObserver)
```
#### Bidirectional-streaming
The asynchronous stub signature for a bidirectional-streaming RPC method
`bidirectionalStreamingExample`:
```java
public StreamObserver<RequestType> bidirectionalStreamingExample(
StreamObserver<ResponseType> responseObserver)
```
### Blocking Stub
RPCs made through a blocking stub, as the name implies, block until the response
from the service is available.
The blocking stub contains one Java method for each unary and server-streaming
method in the service definition. Blocking stubs do not support client-streaming
or bidirectional-streaming RPCs.
A new blocking stub is instantiated via the
`ServiceNameGrpc.newBlockingStub(Channel channel)` static method.
#### Unary
The blocking stub signature for a unary RPC method `unaryExample`:
```java
public ResponseType unaryExample(RequestType request)
```
#### Server-streaming
The blocking stub signature for a server-streaming RPC method
`serverStreamingExample`:
```java
public Iterator<ResponseType> serverStreamingExample(RequestType request)
```
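For illustration only (the iterator below is a canned stand-in; a real blocking stub's iterator blocks in `hasNext()` until each response arrives), consuming a server-streaming call looks like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class BlockingStubSketch {
    // Stand-in for blockingStub.serverStreamingExample(request).
    static Iterator<String> serverStreamingExample(String request) {
        return Arrays.asList(request + ":1", request + ":2").iterator();
    }

    // Drain the stream exactly as you would with the generated stub.
    static List<String> drain(String request) {
        List<String> responses = new ArrayList<>();
        Iterator<String> it = serverStreamingExample(request);
        while (it.hasNext()) {
            responses.add(it.next());
        }
        return responses;
    }

    public static void main(String[] args) {
        System.out.println(drain("req")); // [req:1, req:2]
    }
}
```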
### Future Stub
RPCs made via a future stub wrap the return value of the asynchronous stub in a
`GrpcFuture<ResponseType>`, which implements the
`com.google.common.util.concurrent.ListenableFuture` interface.
The future stub contains one Java method for each unary method in the service
definition. Future stubs do not support streaming calls.
A new future stub is instantiated via the `ServiceNameGrpc.newFutureStub(Channel
channel)` static method.
#### Unary
The future stub signature for a unary RPC method `unaryExample`:
```java
public ListenableFuture<ResponseType> unaryExample(RequestType request)
```
## Codegen
Typically the build system handles creation of the gRPC generated code.
For protobuf-based codegen, you can put your `.proto` files in the `src/main/proto`
and `src/test/proto` directories along with an appropriate plugin.
A typical [protobuf-maven-plugin][] configuration for generating gRPC and Protocol
Buffers code would look like the following:
```xml
<build>
<extensions>
<extension>
<groupId>kr.motd.maven</groupId>
<artifactId>os-maven-plugin</artifactId>
<version>1.4.1.Final</version>
</extension>
</extensions>
<plugins>
<plugin>
<groupId>org.xolstice.maven.plugins</groupId>
<artifactId>protobuf-maven-plugin</artifactId>
<version>0.5.0</version>
<configuration>
<protocArtifact>com.google.protobuf:protoc:3.3.0:exe:${os.detected.classifier}</protocArtifact>
<pluginId>grpc-java</pluginId>
<pluginArtifact>io.grpc:protoc-gen-grpc-java:1.4.0:exe:${os.detected.classifier}</pluginArtifact>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>compile-custom</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
Eclipse and NetBeans users should also look at `os-maven-plugin`'s
[IDE documentation](https://github.com/trustin/os-maven-plugin#issues-with-eclipse-m2e-or-other-ides).
[protobuf-maven-plugin]: https://www.xolstice.org/protobuf-maven-plugin/
A typical [protobuf-gradle-plugin][] configuration would look like the following:
```gradle
apply plugin: 'java'
apply plugin: 'com.google.protobuf'
buildscript {
repositories {
mavenCentral()
}
dependencies {
// ASSUMES GRADLE 2.12 OR HIGHER. Use plugin version 0.7.5 with earlier
// gradle versions
classpath 'com.google.protobuf:protobuf-gradle-plugin:0.8.0'
}
}
protobuf {
protoc {
artifact = "com.google.protobuf:protoc:3.2.0"
}
plugins {
grpc {
artifact = 'io.grpc:protoc-gen-grpc-java:1.4.0'
}
}
generateProtoTasks {
all()*.plugins {
grpc {}
}
}
}
```
[protobuf-gradle-plugin]: https://github.com/google/protobuf-gradle-plugin
Bazel developers can use the
[`java_grpc_library`](https://github.com/grpc/grpc-java/blob/master/java_grpc_library.bzl)
rule, typically as follows:
```python
load("@grpc_java//:java_grpc_library.bzl", "java_grpc_library")
proto_library(
name = "helloworld_proto",
srcs = ["src/main/proto/helloworld.proto"],
)
java_proto_library(
name = "helloworld_java_proto",
deps = [":helloworld_proto"],
)
java_grpc_library(
name = "helloworld_java_grpc",
srcs = [":helloworld_proto"],
deps = [":helloworld_java_proto"],
)
```
Android developers should see [generating client code for Android](/docs/tutorials/basic/android/#generating-client-code) for reference.
If you wish to invoke the protobuf plugin for gRPC Java directly,
the command-line syntax is as follows:
```sh
$ protoc --plugin=protoc-gen-grpc-java \
--grpc-java_out="$OUTPUT_FILE" --proto_path="$DIR_OF_PROTO_FILE" "$PROTO_FILE"
```

View File

@ -0,0 +1,14 @@
---
bodyclass: docs
layout: docs
title: Java Server Reference
headline: Java Server Reference
---
<p class="lead">Being familiar with these will go a long way.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>

View File

@ -0,0 +1,156 @@
---
bodyclass: docs
layout: docs
headline: Python Generated Code Reference
aliases: [/docs/reference/python/generated-code.html]
---
# Python Generated Code Reference
## Introduction
gRPC Python relies on the protocol buffers compiler (`protoc`) to generate
code. It uses a plugin to supplement the generated code by plain `protoc`
with gRPC-specific code. For a `.proto` service description containing
gRPC services, the plain `protoc` generated code is synthesized in
a `_pb2.py` file, and the gRPC-specific code lands in a `_pb2_grpc.py` file.
The latter Python module imports the former. In this guide, we focus
on the gRPC-specific subset of the generated code.
## Illustrative Example
Let's look at the following `FortuneTeller` proto service:
```proto
service FortuneTeller {
// Returns the horoscope and zodiac sign for the given month and day.
rpc TellFortune(HoroscopeRequest) returns (HoroscopeResponse) {
// errors: invalid month or day, fortune unavailable
}
// Replaces the fortune for the given zodiac sign with the provided one.
rpc SuggestFortune(SuggestionRequest) returns (SuggestionResponse) {
// errors: invalid zodiac sign
}
}
```
The gRPC `protoc` plugin will synthesize code elements along the lines
of what follows in the corresponding `_pb2_grpc.py` file:
```python
import grpc
import fortune_pb2
class FortuneTellerStub(object):
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.TellFortune = channel.unary_unary(
'/example.FortuneTeller/TellFortune',
request_serializer=fortune_pb2.HoroscopeRequest.SerializeToString,
response_deserializer=fortune_pb2.HoroscopeResponse.FromString,
)
self.SuggestFortune = channel.unary_unary(
'/example.FortuneTeller/SuggestFortune',
request_serializer=fortune_pb2.SuggestionRequest.SerializeToString,
response_deserializer=fortune_pb2.SuggestionResponse.FromString,
)
class FortuneTellerServicer(object):
def TellFortune(self, request, context):
"""Returns the horoscope and zodiac sign for the given month and day.
errors: invalid month or day, fortune unavailable
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def SuggestFortune(self, request, context):
"""Replaces the fortune for the given zodiac sign with the provided
one.
errors: invalid zodiac sign
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_FortuneTellerServicer_to_server(servicer, server):
rpc_method_handlers = {
'TellFortune': grpc.unary_unary_rpc_method_handler(
servicer.TellFortune,
request_deserializer=fortune_pb2.HoroscopeRequest.FromString,
response_serializer=fortune_pb2.HoroscopeResponse.SerializeToString,
),
'SuggestFortune': grpc.unary_unary_rpc_method_handler(
servicer.SuggestFortune,
request_deserializer=fortune_pb2.SuggestionRequest.FromString,
response_serializer=fortune_pb2.SuggestionResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'example.FortuneTeller', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
```
## Code Elements
The gRPC generated code starts by importing the `grpc` package and the plain
`_pb2` module, synthesized by `protoc`, which defines non-gRPC-specific code
elements, like the classes corresponding to protocol buffers messages and
descriptors used by reflection.
For each service `Foo` in the `.proto` file, three primary elements are
generated:
- [**Stub**](#stub): `FooStub` used by the client to connect to a gRPC service.
- [**Servicer**](#servicer): `FooServicer` used by the server to implement a
gRPC service.
- [**Registration Function**](#registration-function):
`add_FooServicer_to_server` function used to register a servicer with a
`grpc.Server` object.
### Stub
<a name="stub"></a>The generated `Stub` class is used by the gRPC clients. It
will have a constructor that takes a `grpc.Channel` object and initializes the
stub. For each method in the service, the initializer adds a corresponding
attribute to the stub object with the same name. Depending on the RPC type
(*i.e.* unary or streaming), the value of that attribute will be callable
objects of type
[UnaryUnaryMultiCallable](/grpc/python/grpc.html?#grpc.UnaryUnaryMultiCallable),
[UnaryStreamMultiCallable](/grpc/python/grpc.html?#grpc.UnaryStreamMultiCallable),
[StreamUnaryMultiCallable](/grpc/python/grpc.html?#grpc.StreamUnaryMultiCallable),
or
[StreamStreamMultiCallable](/grpc/python/grpc.html?#grpc.StreamStreamMultiCallable).
### Servicer
<a name="servicer"></a>For each service, a `Servicer` class is generated. This
class is intended to serve as the superclass of a service implementation. For
each method in the service, a corresponding function in the `Servicer` class
will be synthesized, which is intended to be overridden in the actual service
implementation. Comments associated with code elements
in the `.proto` file will be transferred over as docstrings in
the generated Python code.
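As a sketch (the base class below is a stand-in for the one generated in `fortune_pb2_grpc.py`, and the dict request stands in for a protobuf message), a service implementation subclasses the generated Servicer and overrides its methods:

```python
class FortuneTellerServicer:
    """Stand-in for the generated base class."""
    def TellFortune(self, request, context):
        raise NotImplementedError('Method not implemented!')

class FortuneTeller(FortuneTellerServicer):
    """Actual implementation overriding the generated method."""
    def TellFortune(self, request, context):
        # A real implementation would build a HoroscopeResponse message.
        return {'fortune': 'Great things await, %s!' % request['sign']}

teller = FortuneTeller()
print(teller.TellFortune({'sign': 'Leo'}, None))
# {'fortune': 'Great things await, Leo!'}
```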
### Registration Function
<a name="registration-function"></a>For each service, a function will be
generated that registers a `Servicer` object implementing it on a `grpc.Server`
object, so that the server can route incoming queries to
the respective servicer. This function takes an object that implements the
`Servicer`, typically an instance of a subclass of the generated `Servicer`
code element described above, and a
[`grpc.Server`](/grpc/python/_modules/grpc.html#Server)
object.

View File

@ -0,0 +1,13 @@
---
bodyclass: docs
headline: 'gRPC Samples'
layout: docs
title: Samples
type: markdown
---
<p class="lead">Here are some sample apps that show how to build specific functionality with gRPC.</p>
<ul>
<li><a target="_blank" href="https://github.com/GoogleCloudPlatform/ios-docs-samples/tree/master/speech/Objective-C/Speech-gRPC-Streaming">Bidirectional streaming iOS client using Cloud Speech API</a></li>
<li><a target="_blank" href="https://github.com/david-cao/gRPCBenchmarks">Android app benchmarking JSON/HTTP/1.1 and gRPC</a></li>
</ul>

View File

@ -0,0 +1,25 @@
---
layout: docs
title: Presentations & Talks
type: markdown
---
<p class="lead">gRPC has been presented at many conferences and sessions. Here are a few notable talks:</p>
{{< youtube OZ_Qmklc4zE >}}<br><br>
{{< youtube F2znfxn_5Hg >}}<br><br>
{{< youtube S7WIYLcPS1Y >}}<br><br>
{{< youtube F2WYEFLTKEw >}}<br><br>
{{< youtube UZcvnApm81U >}}<br><br>
{{< youtube UOIJNygDNlE >}}<br><br>
{{< youtube RvUP7vX2P4s >}}<br><br>
{{< youtube nz-LcdoMYWA >}}<br><br>
{{< youtube sZx3oZt7LVg >}}<br><br>
{{< vimeo 190648663 >}}
<h3>Slides only</h3>
<ul>
<li><a target="_blank" href="https://www.slideshare.net/VarunTalwar4/grpc-overview">gRPC Overview: Talk at Slack, Feb 2016</a></li>
<li><a target="_blank" href="https://www.slideshare.net/sujatatibre/g-rpc-talk-with-intel-3">Google and Intel speak on NFV and SFC Service Delivery</a></li>
<li><a target="_blank" href="https://www.slideshare.net/VarunTalwar4/grpc-design-and-implementation">gRPC Design and Implementation, Stanford Platforms Lab, March 2016</a></li>
</ul>

View File

@ -0,0 +1,12 @@
---
layout: tutorials
title: Tutorials
---
This section contains tutorials for each of our supported languages. They
introduce you to gRPC's API and associated concepts, and the different RPC types
that are available. If you just want to dive straight in with a working example
first, see our [Quickstarts](/docs/quickstart).
We also have a growing number of tutorials on follow-on topics, with more in the
pipeline.

View File

@ -0,0 +1,216 @@
---
layout: tutorials
title: Asynchronous Basics - C++
aliases: [docs/tutorials/async/helloasync-cpp.html]
---
This tutorial shows you how to write a simple server and client in C++ using
gRPC's asynchronous/non-blocking APIs. It assumes you are already familiar with
writing simple synchronous gRPC code, as described in [gRPC Basics:
C++](/docs/tutorials/basic/c/). The example used in this tutorial follows on
from the basic [Greeter example](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/cpp/helloworld) we used in the
[overview](/docs/). You'll find it along with installation
instructions in
[grpc/examples/cpp/helloworld](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/cpp/helloworld).
<div id="toc"></div>
### Overview
gRPC uses the
[`CompletionQueue`](/grpc/cpp/classgrpc_1_1_completion_queue.html)
API for asynchronous operations. The basic workflow
is as follows:
- bind a `CompletionQueue` to an RPC call
- do something like a read or write, presenting it with a unique `void*` tag
- call `CompletionQueue::Next` to wait for operations to complete. If a tag
appears, it indicates that the corresponding operation is complete.
### Async client
To use an asynchronous client to call a remote method, you first create a
channel and stub, just as you do in a [synchronous
client](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/cpp/helloworld/greeter_client.cc). Once you have your stub, you do
the following to make an asynchronous call:
- Initiate the RPC and create a handle for it. Bind the RPC to a
`CompletionQueue`.
```c
CompletionQueue cq;
std::unique_ptr<ClientAsyncResponseReader<HelloReply> > rpc(
stub_->AsyncSayHello(&context, request, &cq));
```
- Ask for the reply and final status, with a unique tag
```c
Status status;
rpc->Finish(&reply, &status, (void*)1);
```
- Wait for the completion queue to return the next tag. The reply and status are
ready once the tag passed into the corresponding `Finish()` call is returned.
```c
void* got_tag;
bool ok = false;
cq.Next(&got_tag, &ok);
if (ok && got_tag == (void*)1) {
// check reply and status
}
```
You can see the complete client example in
[greeter&#95;async&#95;client.cc](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/cpp/helloworld/greeter_async_client.cc).
### Async server
The server implementation requests an RPC call with a tag and then waits for the
completion queue to return the tag. The basic flow for handling an RPC
asynchronously is:
- Build a server exporting the async service
```c
helloworld::Greeter::AsyncService service;
ServerBuilder builder;
builder.AddListeningPort("0.0.0.0:50051", InsecureServerCredentials());
builder.RegisterService(&service);
auto cq = builder.AddCompletionQueue();
auto server = builder.BuildAndStart();
```
- Request one RPC, providing a unique tag
```c
ServerContext context;
HelloRequest request;
ServerAsyncResponseWriter<HelloReply> responder;
service.RequestSayHello(&context, &request, &responder, &cq, &cq, (void*)1);
```
- Wait for the completion queue to return the tag. The context, request and
responder are ready once the tag is retrieved.
```c
HelloReply reply;
Status status;
void* got_tag;
bool ok = false;
cq.Next(&got_tag, &ok);
if (ok && got_tag == (void*)1) {
// set reply and status
responder.Finish(reply, status, (void*)2);
}
```
- Wait for the completion queue to return the tag. The RPC is finished when the
tag is back.
```c
void* got_tag;
bool ok = false;
cq.Next(&got_tag, &ok);
if (ok && got_tag == (void*)2) {
// clean up
}
```
This basic flow, however, doesn't take into account the server handling multiple
requests concurrently. To deal with this, our complete async server example uses
a `CallData` object to maintain the state of each RPC, and uses the address of
this object as the unique tag for the call.
```c
class CallData {
public:
// Take in the "service" instance (in this case representing an asynchronous
// server) and the completion queue "cq" used for asynchronous communication
// with the gRPC runtime.
CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
: service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
// Invoke the serving logic right away.
Proceed();
}
void Proceed() {
if (status_ == CREATE) {
// As part of the initial CREATE state, we *request* that the system
// start processing SayHello requests. In this request, "this" acts as
// the tag uniquely identifying the request (so that different CallData
// instances can serve different requests concurrently), in this case
// the memory address of this CallData instance.
service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
this);
// Make this instance progress to the PROCESS state.
status_ = PROCESS;
} else if (status_ == PROCESS) {
// Spawn a new CallData instance to serve new clients while we process
// the one for this CallData. The instance will deallocate itself as
// part of its FINISH state.
new CallData(service_, cq_);
// The actual processing.
std::string prefix("Hello ");
reply_.set_message(prefix + request_.name());
// And we are done! Let the gRPC runtime know we've finished, using the
// memory address of this instance as the uniquely identifying tag for
// the event.
responder_.Finish(reply_, Status::OK, this);
status_ = FINISH;
} else {
GPR_ASSERT(status_ == FINISH);
// Once in the FINISH state, deallocate ourselves (CallData).
delete this;
}
}
private:
Greeter::AsyncService* service_;
ServerCompletionQueue* cq_;
ServerContext ctx_;
HelloRequest request_;
HelloReply reply_;
ServerAsyncResponseWriter<HelloReply> responder_;
enum CallStatus { CREATE, PROCESS, FINISH };
CallStatus status_;  // The current serving state.
};
```
For simplicity the server only uses one completion queue for all events, and
runs a main loop in `HandleRpcs` to query the queue:
```c
void HandleRpcs() {
// Spawn a new CallData instance to serve new clients.
new CallData(&service_, cq_.get());
void* tag; // uniquely identifies a request.
bool ok;
while (true) {
// Block waiting to read the next event from the completion queue. The
// event is uniquely identified by its tag, which in this case is the
// memory address of a CallData instance.
cq_->Next(&tag, &ok);
GPR_ASSERT(ok);
static_cast<CallData*>(tag)->Proceed();
}
}
```
#### Shutting Down the Server
We've been using a completion queue to get the async notifications. Care must be
taken to shut it down *after* the server has also been shut down.
Remember we got our completion queue instance `cq_` in `ServerImpl::Run()` by
running `cq_ = builder.AddCompletionQueue()`. Looking at
`ServerBuilder::AddCompletionQueue`'s documentation we see that
> ... Caller is required to shutdown the server prior to shutting down the
> returned completion queue.
Refer to `ServerBuilder::AddCompletionQueue`'s full docstring for more details.
What this means in our example is that `ServerImpl`'s destructor looks like:
```cpp
~ServerImpl() {
server_->Shutdown();
// Always shutdown the completion queue after the server.
cq_->Shutdown();
}
```
You can see our complete server example in
[greeter&#95;async&#95;server.cc](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/cpp/helloworld/greeter_async_server.cc).
---
layout: tutorials
title: OAuth2 on gRPC - Objective-C
aliases: [/docs/tutorials/auth/oauth2-objective-c.html]
---
This example demonstrates how to use OAuth2 on gRPC to make
authenticated API calls on behalf of a user.
By walking through it you'll also learn how to use the Objective-C gRPC API to:
- Initialize and configure a remote call object before the RPC is started.
- Set request metadata elements on a call, which are semantically equivalent to
HTTP request headers.
- Read response metadata from a call, which is equivalent to HTTP response
headers and trailers.
It assumes you know the basics of how to make gRPC API calls using the
Objective-C client library, as shown in [gRPC Basics:
Objective-C](/docs/tutorials/basic/objective-c/) and the
[overview](/docs/), and are familiar with OAuth2 concepts like _access
token_.
<div id="toc"></div>
<a name="setup"></a>
### Example code and setup
The example code for our tutorial is in
[grpc/examples/objective-c/auth_sample](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/objective-c/auth_sample). To
download the example, clone this repository by running the following commands:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
```
Then change your current directory to `examples/objective-c/auth_sample`:
```sh
$ cd examples/objective-c/auth_sample
```
Our example is a simple application with two views. The first view lets a user
sign in and out using the OAuth2 flow of Google's [iOS SignIn
library](https://developers.google.com/identity/sign-in/ios/). (Google's library
is used in this example because the test gRPC service we are going to call
expects Google account credentials, but neither gRPC nor the Objective-C client
library is tied to any specific OAuth2 provider). The second view makes a gRPC
request to the test server, using the access token obtained by the first view.
Note: OAuth2 libraries need the application to register and obtain an ID from
the identity provider (in the case of this example app, Google). The app's Xcode
project is configured using that ID, so you shouldn't copy this project "as is"
for your own app: it would result in your app being identified in the consent
screen as "gRPC-AuthSample", and not having access to real Google services.
Instead, configure your own Xcode project following the [instructions
here](https://developers.google.com/identity/sign-in/ios/).
As with the other Objective-C examples, you should also have
[CocoaPods](https://cocoapods.org/#install) installed, as well as the relevant
tools to generate the client library code. You can obtain the latter by
following [these setup instructions](https://github.com/grpc/homebrew-grpc).
<a name="try"></a>
### Try it out!
To try the sample app, first have CocoaPods generate and install the client library for our .proto
files:
```sh
$ pod install
```
(This might have to compile OpenSSL, which takes around 15 minutes if CocoaPods
doesn't yet have it cached on your computer.)
Finally, open the Xcode workspace created by CocoaPods, and run the app.
The first view, `SelectUserViewController.h/m`, asks you to sign in with your
Google account, and to give the "gRPC-AuthSample" app the following permissions:
- View your email address.
- View your basic profile info.
- "Test scope for access to the Zoo service".
This last permission, corresponding to the scope
`https://www.googleapis.com/auth/xapi.zoo`, doesn't grant any real capability:
it's only used for testing. You can log out at any time.
The second view, `MakeRPCViewController.h/m`, makes a gRPC request to a test
server at https://grpc-test.sandbox.google.com, sending the access token along
with the request. The test service simply validates the token and writes in its
response which user it belongs to, and which scopes it gives access to. (The
client application already knows those two values; it's a way to verify that
everything went as expected).
The next sections guide you step-by-step through how the gRPC call in
`MakeRPCViewController` is performed. You can see the complete code in
[MakeRPCViewController.m](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/objective-c/auth_sample/MakeRPCViewController.m).
<a name="rpc-object"></a>
### Create an RPC object
The other basic tutorials show how to invoke an RPC by calling an asynchronous
method in a generated client object. However, to make an authenticated call you
need to initialize an object that represents the RPC, and configure it _before_
starting the network request. First let's look at how to create the RPC object.
Assume you have a proto service definition like this:
```protobuf
option objc_class_prefix = "AUTH";
service TestService {
rpc UnaryCall(Request) returns (Response);
}
```
A `unaryCallWithRequest:handler:` method, with which you're already familiar, is
generated for the `AUTHTestService` class:
```objective-c
[client unaryCallWithRequest:request handler:^(AUTHResponse *response, NSError *error) {
...
}];
```
In addition, an `RPCToUnaryCallWithRequest:handler:` method is generated, which returns a
not-yet-started RPC object:
```objective-c
#import <ProtoRPC/ProtoRPC.h>
ProtoRPC *call =
[client RPCToUnaryCallWithRequest:request handler:^(AUTHResponse *response, NSError *error) {
...
}];
```
You can start the RPC represented by this object at any later time like this:
```objective-c
[call start];
```
<a name="request-metadata"></a>
### Setting request metadata: Auth header with an access token
Now let's look at how to configure some settings on the RPC object. The
`ProtoRPC` class has a `requestHeaders` property (inherited from `GRPCCall`)
defined like this:
```objective-c
@property(atomic, readonly) id<GRPCRequestHeaders> requestHeaders;
```
You can think of the `GRPCRequestHeaders` protocol as equivalent to the
`NSMutableDictionary` class. Setting elements of this dictionary of metadata
keys and values means this metadata will be sent on the wire when the call is
started. gRPC metadata are pieces of information about the call sent by the
client to the server (and vice versa). They take the form of key-value pairs and
are essentially opaque to gRPC itself.
For convenience, the property is initialized with an empty
`NSMutableDictionary`, so that request metadata elements can be set like this:
```objective-c
call.requestHeaders[@"My-Header"] = @"Value for this header";
call.requestHeaders[@"Another-Header"] = @"Its value";
```
A typical use of metadata is for authentication details, as in our example. If
you have an access token, OAuth2 specifies it is to be sent in this format:
```objective-c
call.requestHeaders[@"Authorization"] = [@"Bearer " stringByAppendingString:accessToken];
```
<a name="response-metadata"></a>
### Getting response metadata: Auth challenge header
The `ProtoRPC` class also inherits a pair of properties, `responseHeaders` and
`responseTrailers`, analogous to the request metadata we just looked at but sent
back by the server to the client. They are defined like this:
```objective-c
@property(atomic, readonly) NSDictionary *responseHeaders;
@property(atomic, readonly) NSDictionary *responseTrailers;
```
In OAuth2, if there's an authentication error the server will send back a
challenge header. This is returned in the RPC's response headers. To access
this, as in our example's error-handling code, you write:
```objective-c
call.responseHeaders[@"www-authenticate"]
```
Note that, as gRPC metadata elements are mapped to HTTP/2 headers (or trailers),
the keys of the response metadata are always ASCII strings in lowercase.
Many use cases of response metadata involve getting more details about an RPC
error. For convenience, when an `NSError` instance is passed to an RPC handler
block, the response headers and trailers dictionaries can also be accessed this
way:
```objective-c
error.userInfo[kGRPCHeadersKey] == call.responseHeaders
error.userInfo[kGRPCTrailersKey] == call.responseTrailers
```
---
layout: tutorials
title: gRPC Basics - Android Java
aliases: [/docs/tutorials/basic/android.html]
---
This tutorial provides a basic Android Java programmer's introduction to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate client code using the protocol buffer compiler.
- Use the Java gRPC API to write a simple mobile client for your service.
It assumes that you have read the [Overview](/docs/) and are familiar with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
This guide also does not cover anything on the server side. You can check the [Java guide](/docs/tutorials/basic/java/) for more information.
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get information about features on their route, create a summary of their route, and exchange route information such as traffic updates with the server and other clients.
With gRPC we can define our service once in a .proto file and implement clients and servers in any of gRPC's supported languages, which in turn can be run in environments ranging from servers inside Google to your own tablet - all the complexity of communication between different languages and environments is handled for you by gRPC. We also get all the advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating.
### Example code and setup
The example code for our tutorial is in [grpc-java's examples/android](https://github.com/grpc/grpc-java/tree/{{< param grpc_release_tag >}}/examples/android). To download the example, clone the `grpc-java` repository by running the following command:
```sh
$ git clone -b {{< param grpc_java_release_tag >}} https://github.com/grpc/grpc-java.git
```
Then change your current directory to `grpc-java/examples/android`:
```sh
$ cd grpc-java/examples/android
```
You also should have the relevant tools installed to generate the client interface code - if you don't already, follow the setup instructions in [the Java README](https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/README.md).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to define the gRPC *service* and the method *request* and *response* types using [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). You can see the complete .proto file in [`routeguide/app/src/main/proto/route_guide.proto`](https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/examples/android/routeguide/app/src/main/proto/route_guide.proto).
As we're generating Java code in this example, we've specified a `java_package` file option in our .proto:
```proto
option java_package = "io.grpc.examples";
```
This specifies the package we want to use for our generated Java classes. If no explicit `java_package` option is given in the .proto file, then by default the proto package (specified using the "package" keyword) will be used. However, proto packages generally do not make good Java packages since proto packages are not expected to start with reverse domain names. If we generate code in another language from this .proto, the `java_package` option has no effect.
To define a service, we specify a named `service` in the .proto file:
```proto
service RouteGuide {
...
}
```
Then we define `rpc` methods inside our service definition, specifying their request and response types. gRPC lets you define four kinds of service method, all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub and waits for a response to come back, just like a normal function call.
```proto
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. As you can see in our example, you specify a server-side streaming method by placing the `stream` keyword before the *response* type.
```proto
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them all and return its response. You specify a client-side streaming method by placing the `stream` keyword before the *request* type.
```proto
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages using a read-write stream. The two streams operate independently, so clients and servers can read and write in whatever order they like: for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes. The order of messages in each stream is preserved. You specify this type of method by placing the `stream` keyword before both the request and the response.
```proto
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the `Point` message type:
```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
### Generating client code
Next we need to generate the gRPC client interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC Java plugin. You need to use the
[proto3](https://github.com/google/protobuf/releases) compiler (which supports
both proto2 and proto3 syntax) in order to generate gRPC services.
The build system for this example is part of the gRPC Java build itself. You
can refer to the <a
href="https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/README.md">README</a> and
<a href="https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/examples/android/routeguide/app/build.gradle#L26">build.gradle</a> for
how to generate code from your own .proto files.
Note that for Android we use protobuf lite, which is optimized for the mobile use case.
The following classes are generated from our service definition:
- `Feature.java`, `Point.java`, `Rectangle.java`, and others which contain
all the protocol buffer code to populate, serialize, and retrieve our request
and response message types.
- `RouteGuideGrpc.java` which contains (along with some other useful code):
- a base class for `RouteGuide` servers to implement,
`RouteGuideGrpc.RouteGuideImplBase`, with all the methods defined in the `RouteGuide`
service.
- *stub* classes that clients can use to talk to a `RouteGuide` server.
### Creating the client
In this section, we'll look at creating a Java client for our `RouteGuide` service. You can see our complete example client code in [`routeguide/app/src/main/java/io/grpc/routeguideexample/RouteGuideActivity.java`](https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/examples/android/routeguide/app/src/main/java/io/grpc/routeguideexample/RouteGuideActivity.java).
#### Creating a stub
To call service methods, we first need to create a *stub*, or rather, two stubs:
- a *blocking/synchronous* stub: this means that the RPC call waits for the server to respond, and will either return a response or raise an exception.
- a *non-blocking/asynchronous* stub that makes non-blocking calls to the server, where the response is returned asynchronously. You can make certain types of streaming call only using the asynchronous stub.
First we need to create a gRPC *channel* for our stub, specifying the server
address and port we want to connect to. We use a `ManagedChannelBuilder` to
create the channel:
```java
mChannel = ManagedChannelBuilder.forAddress(host, port).usePlaintext(true).build();
```
Now we can use the channel to create our stubs using the `newStub` and `newBlockingStub` methods provided in the `RouteGuideGrpc` class we generated from our .proto.
```java
blockingStub = RouteGuideGrpc.newBlockingStub(mChannel);
asyncStub = RouteGuideGrpc.newStub(mChannel);
```
#### Calling service methods
Now let's look at how we call our service methods.
##### Simple RPC
Calling the simple RPC `GetFeature` on the blocking stub is as straightforward as calling a local method.
```java
Point request = Point.newBuilder().setLatitude(lat).setLongitude(lon).build();
Feature feature = blockingStub.getFeature(request);
```
We create and populate a request protocol buffer object (in our case `Point`), pass it to the `getFeature()` method on our blocking stub, and get back a `Feature`.
##### Server-side streaming RPC
Next, let's look at a server-side streaming call to `ListFeatures`, which returns a stream of geographical `Feature`s:
```java
Rectangle request =
Rectangle.newBuilder()
.setLo(Point.newBuilder().setLatitude(lowLat).setLongitude(lowLon).build())
.setHi(Point.newBuilder().setLatitude(hiLat).setLongitude(hiLon).build()).build();
Iterator<Feature> features = blockingStub.listFeatures(request);
```
As you can see, it's very similar to the simple RPC we just looked at, except instead of returning a single `Feature`, the method returns an `Iterator` that the client can use to read all the returned `Feature`s.
##### Client-side streaming RPC
Now for something a little more complicated: the client-side streaming method `RecordRoute`, where we send a stream of `Point`s to the server and get back a single `RouteSummary`. For this method we need to use the asynchronous stub. If you've already read [Creating the server](/docs/tutorials/basic/java/#creating-the-server) some of this may look very familiar - asynchronous streaming RPCs are implemented in a similar way on both sides.
```java
private String recordRoute(List<Point> points, int numPoints, RouteGuideStub asyncStub)
throws InterruptedException, RuntimeException {
// Note: `failed` is a Throwable field on the enclosing class; the anonymous
// StreamObserver below cannot assign to a local variable.
final StringBuffer logs = new StringBuffer();
appendLogs(logs, "*** RecordRoute");
final CountDownLatch finishLatch = new CountDownLatch(1);
StreamObserver<RouteSummary> responseObserver = new StreamObserver<RouteSummary>() {
@Override
public void onNext(RouteSummary summary) {
appendLogs(logs, "Finished trip with {0} points. Passed {1} features. "
+ "Travelled {2} meters. It took {3} seconds.", summary.getPointCount(),
summary.getFeatureCount(), summary.getDistance(),
summary.getElapsedTime());
}
@Override
public void onError(Throwable t) {
failed = t;
finishLatch.countDown();
}
@Override
public void onCompleted() {
appendLogs(logs, "Finished RecordRoute");
finishLatch.countDown();
}
};
StreamObserver<Point> requestObserver = asyncStub.recordRoute(responseObserver);
try {
// Send numPoints points randomly selected from the points list.
Random rand = new Random();
for (int i = 0; i < numPoints; ++i) {
int index = rand.nextInt(points.size());
Point point = points.get(index);
appendLogs(logs, "Visiting point {0}, {1}", RouteGuideUtil.getLatitude(point),
RouteGuideUtil.getLongitude(point));
requestObserver.onNext(point);
// Sleep for a bit before sending the next one.
Thread.sleep(rand.nextInt(1000) + 500);
if (finishLatch.getCount() == 0) {
// RPC completed or errored before we finished sending.
// Sending further requests won't error, but they will just be thrown away.
break;
}
}
} catch (RuntimeException e) {
// Cancel RPC
requestObserver.onError(e);
throw e;
}
// Mark the end of requests
requestObserver.onCompleted();
// Receiving happens asynchronously
if (!finishLatch.await(1, TimeUnit.MINUTES)) {
throw new RuntimeException(
"Could not finish rpc within 1 minute, the server is likely down");
}
if (failed != null) {
throw new RuntimeException(failed);
}
return logs.toString();
}
```
As you can see, to call this method we need to create a `StreamObserver`, which implements a special interface for the server to call with its `RouteSummary` response. In our `StreamObserver` we:
- Override the `onNext()` method to print out the returned information when the server writes a `RouteSummary` to the message stream.
- Override the `onCompleted()` method (called when the *server* has completed the call on its side) to count down the `CountDownLatch` that we can check to see if the server has finished writing.
We then pass the `StreamObserver` to the asynchronous stub's `recordRoute()` method and get back our own `StreamObserver` request observer to write our `Point`s to send to the server. Once we've finished writing points, we use the request observer's `onCompleted()` method to tell gRPC that we've finished writing on the client side. Once we're done, we wait on our `CountDownLatch` to confirm that the server has completed on its side.
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```java
private String routeChat(RouteGuideStub asyncStub) throws InterruptedException,
RuntimeException {
final StringBuffer logs = new StringBuffer();
appendLogs(logs, "*** RouteChat");
final CountDownLatch finishLatch = new CountDownLatch(1);
StreamObserver<RouteNote> requestObserver =
asyncStub.routeChat(new StreamObserver<RouteNote>() {
@Override
public void onNext(RouteNote note) {
appendLogs(logs, "Got message \"{0}\" at {1}, {2}", note.getMessage(),
note.getLocation().getLatitude(),
note.getLocation().getLongitude());
}
@Override
public void onError(Throwable t) {
failed = t;
finishLatch.countDown();
}
@Override
public void onCompleted() {
appendLogs(logs,"Finished RouteChat");
finishLatch.countDown();
}
});
try {
RouteNote[] requests =
{newNote("First message", 0, 0), newNote("Second message", 0, 1),
newNote("Third message", 1, 0), newNote("Fourth message", 1, 1)};
for (RouteNote request : requests) {
appendLogs(logs, "Sending message \"{0}\" at {1}, {2}", request.getMessage(),
request.getLocation().getLatitude(),
request.getLocation().getLongitude());
requestObserver.onNext(request);
}
} catch (RuntimeException e) {
// Cancel RPC
requestObserver.onError(e);
throw e;
}
// Mark the end of requests
requestObserver.onCompleted();
// Receiving happens asynchronously
if (!finishLatch.await(1, TimeUnit.MINUTES)) {
throw new RuntimeException(
"Could not finish rpc within 1 minute, the server is likely down");
}
if (failed != null) {
throw new RuntimeException(failed);
}
return logs.toString();
}
```
As with our client-side streaming example, we both get and return a `StreamObserver` response observer, except this time we send values via our method's response observer while the server is still writing messages to *their* message stream. The syntax for reading and writing here is exactly the same as for our client-streaming method. Although each side will always get the other's messages in the order they were written, both the client and server can read and write in any order — the streams operate completely independently.
### Try it out!
Follow the instructions in the example directory [README](https://github.com/grpc/grpc-java/blob/{{< param grpc_release_tag >}}/examples/android/README.md) to build and run the client and server.
---
layout: tutorials
title: gRPC Basics - C++
aliases: [/docs/tutorials/basic/c.html]
---
This tutorial provides a basic C++ programmer's introduction to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the C++ gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the proto3 version of the protocol
buffers language: you can find out more in
the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and [C++
generated code
guide](https://developers.google.com/protocol-buffers/docs/reference/cpp-generated).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/cpp/route_guide](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/cpp/route_guide). To
download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
```
Then change your current directory to `examples/cpp/route_guide`:
```sh
$ cd examples/cpp/route_guide
```
You also should have the relevant tools installed to generate the server and
client interface code - if you don't already, follow the setup instructions in
[the C++ quick start guide](/docs/quickstart/cpp).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
You can see the complete .proto file in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```proto
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```proto
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```proto
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```proto
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```proto
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC C++ plugin.
For simplicity, we've provided a [Makefile](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/cpp/route_guide/Makefile)
that runs `protoc` for you with the appropriate plugin, input, and output (if
you want to run this yourself, make sure you've installed protoc and followed
the gRPC code [installation instructions](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/src/cpp/README.md#make) first):
```sh
$ make route_guide.grpc.pb.cc route_guide.pb.cc
```
which actually runs:
```sh
$ protoc -I ../../protos --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` ../../protos/route_guide.proto
$ protoc -I ../../protos --cpp_out=. ../../protos/route_guide.proto
```
Running this command generates the following files in your current directory:
- `route_guide.pb.h`, the header which declares your generated message classes
- `route_guide.pb.cc`, which contains the implementation of your message classes
- `route_guide.grpc.pb.h`, the header which declares your generated service
classes
- `route_guide.grpc.pb.cc`, which contains the implementation of your service
classes
These contain:
- All the protocol buffer code to populate, serialize, and retrieve our request
and response message types
- A class called `RouteGuide` that contains
- a remote interface type (or *stub*) for clients to call with the methods
defined in the `RouteGuide` service.
- two abstract interfaces for servers to implement, also with the methods
defined in the `RouteGuide` service.
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
service responses.
You can find our example `RouteGuide` server in
[examples/cpp/route_guide/route_guide_server.cc](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/cpp/route_guide/route_guide_server.cc).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `RouteGuideImpl` class that implements the
generated `RouteGuide::Service` interface:
```cpp
class RouteGuideImpl final : public RouteGuide::Service {
...
};
```
In this case we're implementing the *synchronous* version of `RouteGuide`, which
provides our default gRPC server behaviour. It's also possible to implement an
asynchronous interface, `RouteGuide::AsyncService`, which allows you to further
customize your server's threading behaviour, though we won't look at this in
this tutorial.
`RouteGuideImpl` implements all our service methods. Let's look at the simplest
type first, `GetFeature`, which just gets a `Point` from the client and returns
the corresponding feature information from its database in a `Feature`.
```cpp
Status GetFeature(ServerContext* context, const Point* point,
Feature* feature) override {
feature->set_name(GetFeatureName(*point, feature_list_));
feature->mutable_location()->CopyFrom(*point);
return Status::OK;
}
```
The method is passed a context object for the RPC, the client's `Point` protocol
buffer request, and a `Feature` protocol buffer to fill in with the response
information. In the method we populate the `Feature` with the appropriate
information, and then `return` with an `OK` status to tell gRPC that we've
finished dealing with the RPC and that the `Feature` can be returned to the
client.
Note that all service methods can (and will!) be called from multiple threads at
the same time. You have to make sure that your method implementations are
thread safe. In our example, `feature_list_` is never changed after
construction, so it is safe by design. But if `feature_list_` could change
during the lifetime of the service, we would need to synchronize access to this
member.
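If the feature list did need to change at runtime, one common approach is to guard it with a `std::mutex`. The following is a sketch under that assumption, not code from the example (the `FeatureList` class and its methods are hypothetical):

```cpp
#include <mutex>
#include <string>
#include <vector>

// Hypothetical thread-safe wrapper around a mutable feature list.
// The real example avoids this by never mutating feature_list_.
class FeatureList {
 public:
  void Add(const std::string& name) {
    std::lock_guard<std::mutex> lock(mu_);
    features_.push_back(name);
  }

  // Return a copy so callers can iterate without holding the lock.
  std::vector<std::string> Snapshot() const {
    std::lock_guard<std::mutex> lock(mu_);
    return features_;
  }

 private:
  mutable std::mutex mu_;
  std::vector<std::string> features_;
};
```

Each service method would then call `Snapshot()` (or do its work under the lock) instead of touching the container directly.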
Now let's look at something a bit more complicated - a streaming RPC.
`ListFeatures` is a server-side streaming RPC, so we need to send back multiple
`Feature`s to our client.
```cpp
Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
ServerWriter<Feature>* writer) override {
auto lo = rectangle->lo();
auto hi = rectangle->hi();
long left = std::min(lo.longitude(), hi.longitude());
long right = std::max(lo.longitude(), hi.longitude());
long top = std::max(lo.latitude(), hi.latitude());
long bottom = std::min(lo.latitude(), hi.latitude());
for (const Feature& f : feature_list_) {
if (f.location().longitude() >= left &&
f.location().longitude() <= right &&
f.location().latitude() >= bottom &&
f.location().latitude() <= top) {
writer->Write(f);
}
}
return Status::OK;
}
```
As you can see, instead of getting simple request and response objects in our
method parameters, this time we get a request object (the `Rectangle` in which
our client wants to find `Feature`s) and a special `ServerWriter` object. In the
method, we populate as many `Feature` objects as we need to return, writing them
to the `ServerWriter` using its `Write()` method. Finally, as in our simple RPC,
we `return Status::OK` to tell gRPC that we've finished writing responses.
If you look at the client-side streaming method `RecordRoute` you'll see it's
quite similar, except this time we get a `ServerReader` instead of a request
object and a single response. We use the `ServerReader`'s `Read()` method to
repeatedly read in our client's requests to a request object (in this case a
`Point`) until there are no more messages: the server needs to check the return
value of `Read()` after each call. If `true`, the stream is still good and it
can continue reading; if `false` the message stream has ended.
```cpp
while (stream->Read(&point)) {
  ... // process client input
}
```
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```cpp
Status RouteChat(ServerContext* context,
ServerReaderWriter<RouteNote, RouteNote>* stream) override {
std::vector<RouteNote> received_notes;
RouteNote note;
while (stream->Read(&note)) {
for (const RouteNote& n : received_notes) {
if (n.location().latitude() == note.location().latitude() &&
n.location().longitude() == note.location().longitude()) {
stream->Write(n);
}
}
received_notes.push_back(note);
}
return Status::OK;
}
```
This time we get a `ServerReaderWriter` that can be used to read *and* write
messages. The syntax for reading and writing here is exactly the same as for our
client-streaming and server-streaming methods. Although each side will always
get the other's messages in the order they were written, both the client and
server can read and write in any order — the streams operate completely
independently.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```cpp
void RunServer(const std::string& db_path) {
std::string server_address("0.0.0.0:50051");
RouteGuideImpl service(db_path);
ServerBuilder builder;
builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
builder.RegisterService(&service);
std::unique_ptr<Server> server(builder.BuildAndStart());
std::cout << "Server listening on " << server_address << std::endl;
server->Wait();
}
```
As you can see, we build and start our server using a `ServerBuilder`. To do this, we:
1. Create an instance of our service implementation class `RouteGuideImpl`.
2. Create an instance of the factory `ServerBuilder` class.
3. Specify the address and port we want to use to listen for client requests
using the builder's `AddListeningPort()` method.
4. Register our service implementation with the builder.
5. Call `BuildAndStart()` on the builder to create and start an RPC server for
our service.
6. Call `Wait()` on the server to do a blocking wait until the process is
   killed or `Shutdown()` is called.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a C++ client for our `RouteGuide`
service. You can see our complete example client code in
[examples/cpp/route_guide/route_guide_client.cc](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/cpp/route_guide/route_guide_client.cc).
#### Creating a stub
To call service methods, we first need to create a *stub*.
First we need to create a gRPC *channel* for our stub, specifying the server
address and port we want to connect to - in our case we'll use no SSL:
```cpp
grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
```
Note: to set additional options for the *channel*, use the `grpc::CreateCustomChannel()` API with any special channel arguments (`grpc::ChannelArguments`).
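For example, a custom channel might cap the maximum receive message size. This is only a sketch of what `ChannelArguments` can carry; the 4 MB limit and the `MakeChannel` helper are illustrative, not from the example code:

```cpp
#include <grpcpp/grpcpp.h>

std::shared_ptr<grpc::Channel> MakeChannel() {
  // Hypothetical channel arguments; the 4 MB receive limit is only an
  // illustration of the kind of option ChannelArguments can carry.
  grpc::ChannelArguments args;
  args.SetMaxReceiveMessageSize(4 * 1024 * 1024);
  return grpc::CreateCustomChannel(
      "localhost:50051", grpc::InsecureChannelCredentials(), args);
}
```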
Now we can use the channel to create our stub using the `NewStub` method provided in the `RouteGuide` class we generated from our .proto.
```cpp
public:
RouteGuideClient(std::shared_ptr<ChannelInterface> channel,
const std::string& db)
: stub_(RouteGuide::NewStub(channel)) {
...
}
```
#### Calling service methods
Now let's look at how we call our service methods. Note that in this tutorial
we're calling the *blocking/synchronous* versions of each method: this means
that the RPC call waits for the server to respond, and will either return a
response or raise an exception.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local method.
```cpp
Point point;
Feature feature;
point = MakePoint(409146138, -746188906);
GetOneFeature(point, &feature);
...
bool GetOneFeature(const Point& point, Feature* feature) {
ClientContext context;
Status status = stub_->GetFeature(&context, point, feature);
...
}
```
As you can see, we create and populate a request protocol buffer object (in our
case `Point`), and create a response protocol buffer object for the server to
fill in. We also create a `ClientContext` object for our call - you can
optionally set RPC configuration values on this object, such as deadlines,
though for now we'll use the default settings. Note that you cannot reuse this
object between calls. Finally, we call the method on the stub, passing it the
context, request, and response. If the method returns `OK`, then we can read the
response information from the server from our response object.
```cpp
std::cout << "Found feature called " << feature->name() << " at "
<< feature->location().latitude()/kCoordFactor_ << ", "
<< feature->location().longitude()/kCoordFactor_ << std::endl;
```
##### Streaming RPCs
Now let's look at our streaming methods. If you've already read [Creating the
server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides. Here's where we call the server-side
streaming method `ListFeatures`, which returns a stream of geographical
`Feature`s:
```cpp
std::unique_ptr<ClientReader<Feature> > reader(
stub_->ListFeatures(&context, rect));
while (reader->Read(&feature)) {
std::cout << "Found feature called "
<< feature.name() << " at "
<< feature.location().latitude()/kCoordFactor_ << ", "
<< feature.location().longitude()/kCoordFactor_ << std::endl;
}
Status status = reader->Finish();
```
Instead of passing the method a context, request, and response, we pass it a
context and request and get a `ClientReader` object back. The client can use the
`ClientReader` to read the server's responses. We use the `ClientReader`'s
`Read()` method to repeatedly read in the server's responses to a response
protocol buffer object (in this case a `Feature`) until there are no more
messages: the client needs to check the return value of `Read()` after each
call. If `true`, the stream is still good and it can continue reading; if
`false` the message stream has ended. Finally, we call `Finish()` on the stream
to complete the call and get our RPC status.
The client-side streaming method `RecordRoute` is similar, except there we pass
the method a context and response object and get back a `ClientWriter`.
```cpp
std::unique_ptr<ClientWriter<Point> > writer(
stub_->RecordRoute(&context, &stats));
for (int i = 0; i < kPoints; i++) {
const Feature& f = feature_list_[feature_distribution(generator)];
std::cout << "Visiting point "
<< f.location().latitude()/kCoordFactor_ << ", "
<< f.location().longitude()/kCoordFactor_ << std::endl;
if (!writer->Write(f.location())) {
// Broken stream.
break;
}
std::this_thread::sleep_for(std::chrono::milliseconds(
delay_distribution(generator)));
}
writer->WritesDone();
Status status = writer->Finish();
if (status.ok()) {
std::cout << "Finished trip with " << stats.point_count() << " points\n"
<< "Passed " << stats.feature_count() << " features\n"
<< "Travelled " << stats.distance() << " meters\n"
<< "It took " << stats.elapsed_time() << " seconds"
<< std::endl;
} else {
std::cout << "RecordRoute rpc failed." << std::endl;
}
```
Once we've finished writing our client's requests to the stream using `Write()`,
we need to call `WritesDone()` on the stream to let gRPC know that we've
finished writing, then `Finish()` to complete the call and get our RPC status.
If the status is `OK`, our response object that we initially passed to
`RecordRoute()` will be populated with the server's response.
Finally, let's look at our bidirectional streaming RPC `RouteChat()`. In this
case, we just pass a context to the method and get back a `ClientReaderWriter`,
which we can use to both write and read messages.
```cpp
std::shared_ptr<ClientReaderWriter<RouteNote, RouteNote> > stream(
stub_->RouteChat(&context));
```
The syntax for reading and writing here is exactly the same as for our
client-streaming and server-streaming methods. Although each side will always
get the other's messages in the order they were written, both the client and
server can read and write in any order — the streams operate completely
independently.
### Try it out!
Build client and server:
```sh
$ make
```
Run the server, which will listen on port 50051:
```sh
$ ./route_guide_server
```
Run the client (in a different terminal):
```sh
$ ./route_guide_client
```
---
layout: tutorials
title: gRPC Basics - C#
aliases: [/docs/tutorials/basic/csharp.html]
---
This tutorial provides a basic C# programmer's introduction to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the C# gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). Note that the
example in this tutorial uses the proto3 version of the protocol buffers
language: you can find out more in the
[proto3 language guide](https://developers.google.com/protocol-buffers/docs/proto3) and
[C# generated code reference](https://developers.google.com/protocol-buffers/docs/reference/csharp-generated).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/csharp/RouteGuide](https://github.com/grpc/grpc/tree/
{{< param grpc_release_tag >}}/examples/csharp/RouteGuide). To
download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
```
All the files for this tutorial are in the directory
`examples/csharp/RouteGuide`. Open the solution
`examples/csharp/RouteGuide/RouteGuide.sln` from Visual Studio (Windows or Mac) or Visual Studio Code.
For additional installation details, see the [How to use
instructions](https://github.com/grpc/grpc/tree/
{{< param grpc_release_tag >}}/src/csharp#how-to-use).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
You can see the complete .proto file in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the client
object and waits for a response to come back, just like a normal function
call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. This can be done by invoking the protocol buffer compiler `protoc` with
a special gRPC C# plugin from the command line, but starting from version
1.17 the `Grpc.Tools` NuGet package integrates with MSBuild to provide [automatic C# code generation](https://github.com/grpc/grpc/blob/master/src/csharp/BUILD-INTEGRATION.md)
from `.proto` files, which gives a much better developer experience by running
the right commands for you as part of the build.
This example already has a dependency on the `Grpc.Tools` NuGet package, and the
`route_guide.proto` file has already been added to the project, so the only thing
needed to generate the client and server code is to build the solution.
That can be done by running `dotnet build RouteGuide.sln` or building directly
in Visual Studio.
The build regenerates the following files
under the `RouteGuide/obj/Debug/TARGET_FRAMEWORK` directory:
- `RouteGuide.cs` contains all the protocol buffer code to populate,
serialize, and retrieve our request and response message types
- `RouteGuideGrpc.cs` provides generated client and server classes,
including:
- an abstract class `RouteGuide.RouteGuideBase` to inherit from when defining
RouteGuide service implementations
- a class `RouteGuide.RouteGuideClient` that can be used to access remote
RouteGuide instances
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service functionality by inheriting from the base class
generated from our service definition: doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
service responses.
You can find our example `RouteGuide` server in
[examples/csharp/RouteGuide/RouteGuideServer/RouteGuideImpl.cs](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/csharp/RouteGuide/RouteGuideServer/RouteGuideImpl.cs).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `RouteGuideImpl` class that inherits from the
generated `RouteGuide.RouteGuideBase`:
```csharp
// RouteGuideImpl provides an implementation of the RouteGuide service.
public class RouteGuideImpl : RouteGuide.RouteGuideBase
```
##### Simple RPC
`RouteGuideImpl` implements all our service methods. Let's look at the simplest
type first, `GetFeature`, which just gets a `Point` from the client and returns
the corresponding feature information from its database in a `Feature`.
```csharp
public override Task<Feature> GetFeature(Point request, Grpc.Core.ServerCallContext context)
{
return Task.FromResult(CheckFeature(request));
}
```
The method is passed a context for the RPC (which is empty in the alpha
release), the client's `Point` protocol buffer request, and returns a `Feature`
protocol buffer. In the method we create the `Feature` with the appropriate
information, and then return it. To allow asynchronous implementation, the
method returns `Task<Feature>` rather than just `Feature`. You are free to
perform your computations synchronously and return the result once you've
finished, just as we do in the example.
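If the lookup were genuinely asynchronous (say, backed by a database call), the handler could use `async`/`await` instead of `Task.FromResult`. The following is a sketch with placeholder types standing in for the generated protobuf classes; the `LookupFeatureAsync` helper is hypothetical, not part of the example:

```csharp
using System.Threading.Tasks;

// Placeholder types standing in for the generated protobuf classes;
// they exist only to make this sketch self-contained.
public class Point { public int Latitude; public int Longitude; }
public class Feature { public string Name = ""; public Point Location; }

public static class FeatureLookup
{
    // Hypothetical async data source (e.g. a database query).
    public static Task<Feature> LookupFeatureAsync(Point request) =>
        Task.FromResult(new Feature { Name = "BerkshireValleyPark", Location = request });

    // An async-style handler: await the lookup rather than blocking.
    public static async Task<Feature> GetFeature(Point request)
    {
        Feature feature = await LookupFeatureAsync(request);
        return feature;
    }
}
```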
##### Server-side streaming RPC
Now let's look at something a bit more complicated - a streaming RPC.
`ListFeatures` is a server-side streaming RPC, so we need to send back multiple
`Feature` protocol buffers to our client.
```csharp
// in RouteGuideImpl
public override async Task ListFeatures(Rectangle request,
Grpc.Core.IServerStreamWriter<Feature> responseStream,
Grpc.Core.ServerCallContext context)
{
var responses = features.FindAll( (feature) => feature.Exists() && request.Contains(feature.Location) );
foreach (var response in responses)
{
await responseStream.WriteAsync(response);
}
}
```
As you can see, here the request object is a `Rectangle` in which our client
wants to find `Feature`s, but instead of returning a simple response we need to
write responses to an asynchronous stream `IServerStreamWriter` using async
method `WriteAsync`.
##### Client-side streaming RPC
Similarly, the client-side streaming method `RecordRoute` uses an
[IAsyncEnumerator](https://github.com/Reactive-Extensions/Rx.NET/blob/master/Ix.NET/Source/System.Interactive.Async/IAsyncEnumerator.cs),
to read the stream of requests using the async method `MoveNext` and the
`Current` property.
```csharp
public override async Task<RouteSummary> RecordRoute(Grpc.Core.IAsyncStreamReader<Point> requestStream,
Grpc.Core.ServerCallContext context)
{
int pointCount = 0;
int featureCount = 0;
int distance = 0;
Point previous = null;
var stopwatch = new Stopwatch();
stopwatch.Start();
while (await requestStream.MoveNext())
{
var point = requestStream.Current;
pointCount++;
if (CheckFeature(point).Exists())
{
featureCount++;
}
if (previous != null)
{
distance += (int) previous.GetDistance(point);
}
previous = point;
}
stopwatch.Stop();
return new RouteSummary
{
PointCount = pointCount,
FeatureCount = featureCount,
Distance = distance,
ElapsedTime = (int)(stopwatch.ElapsedMilliseconds / 1000)
};
}
```
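The `GetDistance` helper used above isn't shown here; a haversine-style implementation over raw E7 coordinates might look like the sketch below. This is an assumption about how such a helper could work; the real example's helper may differ:

```csharp
using System;

public static class GeoMath
{
    const double EarthRadiusMeters = 6371000;
    const double E7 = 1e7;

    // Approximate great-circle distance (haversine formula) between two
    // points given in the E7 representation (degrees * 10^7).
    public static double DistanceMeters(int lat1E7, int lon1E7, int lat2E7, int lon2E7)
    {
        double lat1 = lat1E7 / E7 * Math.PI / 180;
        double lat2 = lat2E7 / E7 * Math.PI / 180;
        double dLat = lat2 - lat1;
        double dLon = (lon2E7 - lon1E7) / E7 * Math.PI / 180;

        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(lat1) * Math.Cos(lat2) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusMeters * Math.Asin(Math.Sqrt(a));
    }
}
```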
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat`.
```csharp
public override async Task RouteChat(Grpc.Core.IAsyncStreamReader<RouteNote> requestStream,
Grpc.Core.IServerStreamWriter<RouteNote> responseStream,
Grpc.Core.ServerCallContext context)
{
while (await requestStream.MoveNext())
{
var note = requestStream.Current;
List<RouteNote> prevNotes = AddNoteForLocation(note.Location, note);
foreach (var prevNote in prevNotes)
{
await responseStream.WriteAsync(prevNote);
}
}
}
```
Here the method receives both `requestStream` and `responseStream` arguments.
Reading the requests is done the same way as in the client-side streaming method
`RecordRoute`. Writing the responses is done the same way as in the server-side
streaming method `ListFeatures`.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```csharp
var features = RouteGuideUtil.ParseFeatures(RouteGuideUtil.DefaultFeaturesFile);
Server server = new Server
{
Services = { RouteGuide.BindService(new RouteGuideImpl(features)) },
Ports = { new ServerPort("localhost", Port, ServerCredentials.Insecure) }
};
server.Start();
Console.WriteLine("RouteGuide server listening on port " + Port);
Console.WriteLine("Press any key to stop the server...");
Console.ReadKey();
server.ShutdownAsync().Wait();
```
As you can see, we build and start our server using `Grpc.Core.Server` class. To
do this, we:
1. Create an instance of `Grpc.Core.Server`.
1. Create an instance of our service implementation class `RouteGuideImpl`.
1. Register our service implementation by adding its service definition to the
   `Services` collection (we obtain the service definition from the generated
   `RouteGuide.BindService` method).
1. Specify the address and port we want to use to listen for client requests.
This is done by adding `ServerPort` to the `Ports` collection.
1. Call `Start` on the server instance to start an RPC server for our service.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a C# client for our `RouteGuide`
service. You can see our complete example client code in
[examples/csharp/RouteGuide/RouteGuideClient/Program.cs](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/csharp/RouteGuide/RouteGuideClient/Program.cs).
#### Creating a client object
To call service methods, we first need to create a client object (also referred
to as a *stub* in other gRPC languages).
First, we need to create a gRPC client channel that will connect to the gRPC server.
Then, we create an instance of the `RouteGuide.RouteGuideClient` class generated
from our .proto, passing the channel as an argument.
```csharp
Channel channel = new Channel("127.0.0.1:50052", ChannelCredentials.Insecure);
var client = new RouteGuide.RouteGuideClient(channel);
// YOUR CODE GOES HERE
channel.ShutdownAsync().Wait();
```
#### Calling service methods
Now let's look at how we call our service methods. gRPC C# provides asynchronous
versions of each of the supported method types. For convenience, gRPC C# also
provides a synchronous method stub, but only for simple (single request/single
response) RPCs.
##### Simple RPC
Calling the simple RPC `GetFeature` in a synchronous way is nearly as
straightforward as calling a local method.
```csharp
Point request = new Point { Latitude = 409146138, Longitude = -746188906 };
Feature feature = client.GetFeature(request);
```
As you can see, we create and populate a request protocol buffer object (in our
case `Point`), and call the desired method on the client object, passing it the
request. If the RPC finishes with success, the response protocol buffer (in our
case `Feature`) is returned. Otherwise, an exception of type `RpcException` is
thrown, indicating the status code of the problem.
Alternatively, if you are in an async context, you can call an asynchronous
version of the method and use the `await` keyword to await the result:
```csharp
Point request = new Point { Latitude = 409146138, Longitude = -746188906 };
Feature feature = await client.GetFeatureAsync(request);
```
##### Streaming RPCs
Now let's look at our streaming methods. If you've already read [Creating the
server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides. The difference with respect to a
simple call is that the client methods return an instance of a call object. This
provides access to request/response streams and/or the asynchronous result,
depending on the streaming type you are using.
Here's where we call the server-side streaming method `ListFeatures`, which has
the property `ResponseStream` of type `IAsyncEnumerator<Feature>`:
```csharp
using (var call = client.ListFeatures(request))
{
while (await call.ResponseStream.MoveNext())
{
Feature feature = call.ResponseStream.Current;
Console.WriteLine("Received " + feature.ToString());
}
}
```
The client-side streaming method `RecordRoute` is similar, except we use the
property `RequestStream` to write the requests one by one using `WriteAsync`,
and eventually signal that no more requests will be sent using `CompleteAsync`.
The method result can be obtained through the property `ResponseAsync`.
```csharp
using (var call = client.RecordRoute())
{
foreach (var point in points)
{
await call.RequestStream.WriteAsync(point);
}
await call.RequestStream.CompleteAsync();
RouteSummary summary = await call.ResponseAsync;
}
```
Finally, let's look at our bidirectional streaming RPC `RouteChat`. In this
case, we write the request to `RequestStream` and receive the responses from
`ResponseStream`. As you can see from the example, the streams are independent
of each other.
```csharp
using (var call = client.RouteChat())
{
var responseReaderTask = Task.Run(async () =>
{
while (await call.ResponseStream.MoveNext())
{
var note = call.ResponseStream.Current;
Console.WriteLine("Received " + note);
}
});
foreach (RouteNote request in requests)
{
await call.RequestStream.WriteAsync(request);
}
await call.RequestStream.CompleteAsync();
await responseReaderTask;
}
```
### Try it out!
#### Build the client and server:
##### Using Visual Studio (or Visual Studio For Mac)
- Open the solution `examples/csharp/RouteGuide/RouteGuide.sln` and select **Build**.
##### Using "dotnet" command line tool
- Run `dotnet build RouteGuide.sln` from the `examples/csharp/RouteGuide` directory.
See the [quickstart](../../quickstart/csharp.html) for additional instructions on building
the gRPC example with the `dotnet` command line tool.
Run the server, which will listen on port 50052:
```sh
> cd RouteGuideServer/bin/Debug/netcoreapp2.1
> dotnet exec RouteGuideServer.dll
```
Run the client (in a different terminal):
```sh
> cd RouteGuideClient/bin/Debug/netcoreapp2.1
> dotnet exec RouteGuideClient.dll
```
You can also run the server and client directly from Visual Studio.
---
layout: tutorials
title: gRPC Basics - Dart
aliases: [/docs/tutorials/basic/dart.html]
---
This tutorial provides a basic Dart programmer's introduction to
working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the Dart gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). Note that the
example in this tutorial uses the proto3 version of the protocol buffers
language: you can find out more in the
[proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc-dart/example/route_guide](https://github.com/grpc/grpc-dart/tree/master/example/route_guide).
To download the example, clone the `grpc-dart` repository by running the following
command:
```sh
$ git clone https://github.com/grpc/grpc-dart.git
```
Then change your current directory to `grpc-dart/example/route_guide`:
```sh
$ cd grpc-dart/example/route_guide
```
You also should have the relevant tools installed to generate the server and client interface code - if you don't already, follow the setup instructions in [the Dart quick start guide](/docs/quickstart/dart/).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). You can see the
complete .proto file in
[`example/route_guide/protos/route_guide.proto`](https://github.com/grpc/grpc-dart/blob/master/example/route_guide/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```proto
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```proto
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```proto
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```proto
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```proto
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the `Point` message type:
```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
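The E7 representation described in the comment above is easy to sanity-check: multiply degrees by 10^7 and round to get the integer form, and divide to get degrees back. A small standalone sketch (in Go; the helper names here are ours for illustration, not part of the example code):

```go
package main

import (
	"fmt"
	"math"
)

// toE7 encodes decimal degrees in the E7 representation used by
// the Point message: degrees multiplied by 10^7 and rounded.
func toE7(degrees float64) int32 {
	return int32(math.Round(degrees * 1e7))
}

// fromE7 decodes an E7 integer back into decimal degrees.
func fromE7(e7 int32) float64 {
	return float64(e7) / 1e7
}

func main() {
	lat := toE7(40.9146138) // a latitude that appears later in this tutorial
	fmt.Println(lat)
	fmt.Println(fromE7(lat))
}
```

Since an `int32` holds values up to about 2.1 billion, E7 comfortably covers the full ±90/±180 degree ranges.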
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special Dart plugin.
This is similar to what we did in the [quickstart guide](/docs/quickstart/dart/).
From the `route_guide` example directory, run:
```sh
protoc -I protos/ protos/route_guide.proto --dart_out=grpc:lib/src/generated
```
Running this command generates the following files in the `lib/src/generated`
directory under the `route_guide` example directory:
- `route_guide.pb.dart`
- `route_guide.pbenum.dart`
- `route_guide.pbgrpc.dart`
- `route_guide.pbjson.dart`
These files contain:
- All the protocol buffer code to populate, serialize, and retrieve our request
and response message types
- An interface type (or *stub*) for clients to call with the methods defined in
the `RouteGuide` service.
- An interface type for servers to implement, also with the methods defined in
the `RouteGuide` service.
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and dispatch them to
the right service implementation.
You can find our example `RouteGuide` server in
[grpc-dart/example/route_guide/lib/src/server.dart](https://github.com/grpc/grpc-dart/tree/master/example/route_guide/lib/src/server.dart).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `RouteGuideService` class that extends the
generated abstract `RouteGuideServiceBase` class:
```dart
class RouteGuideService extends RouteGuideServiceBase {
Future<Feature> getFeature(grpc.ServiceCall call, Point request) async {
...
}
Stream<Feature> listFeatures(
grpc.ServiceCall call, Rectangle request) async* {
...
}
Future<RouteSummary> recordRoute(
grpc.ServiceCall call, Stream<Point> request) async {
...
}
Stream<RouteNote> routeChat(
grpc.ServiceCall call, Stream<RouteNote> request) async* {
...
}
...
}
```
##### Simple RPC
`RouteGuideService` implements all our service methods. Let's look at the
simplest type first, `GetFeature`, which just gets a `Point` from the client and
returns the corresponding feature information from its database in a `Feature`.
```dart
/// GetFeature handler. Returns a feature for the given location.
/// The [context] object provides access to client metadata, cancellation, etc.
@override
Future<Feature> getFeature(grpc.ServiceCall call, Point request) async {
return featuresDb.firstWhere((f) => f.location == request,
orElse: () => new Feature()..location = request);
}
```
The method is passed a context object for the RPC and the client's `Point`
protocol buffer request. It returns a `Feature` protocol buffer object with the
response information. In the method we populate the `Feature` with the appropriate
information, and then `return` it to the gRPC framework, which sends it back to
the client.
##### Server-side streaming RPC
Now let's look at one of our streaming RPCs. `ListFeatures` is a server-side
streaming RPC, so we need to send back multiple `Feature`s to our client.
```dart
/// ListFeatures handler. Returns a stream of features within the given
/// rectangle.
@override
Stream<Feature> listFeatures(
grpc.ServiceCall call, Rectangle request) async* {
final normalizedRectangle = _normalize(request);
// For each feature, check if it is in the given bounding box
for (var feature in featuresDb) {
if (feature.name.isEmpty) continue;
final location = feature.location;
if (_contains(normalizedRectangle, location)) {
yield feature;
}
}
}
```
As you can see, instead of getting and returning simple request and response
objects in our method, this time we get a request object (the `Rectangle` in
which our client wants to find `Feature`s) and return a `Stream` of `Feature`
objects.
In the method, we populate as many `Feature` objects as we need to return,
adding them to the returned stream using `yield`. The stream is automatically
closed when the method returns, telling gRPC that we have finished writing
responses.
Should any error happen in this call, the error will be added as an exception
to the stream, and the gRPC layer will translate it into an appropriate RPC
status to be sent on the wire.
##### Client-side streaming RPC
Now let's look at something a little more complicated: the client-side
streaming method `RecordRoute`, where we get a stream of `Point`s from the
client and return a single `RouteSummary` with information about their trip. As
you can see, this time the request parameter is a stream, which the server can
use to read request messages from the client. The server returns its single
response just as in the simple RPC case.
```dart
/// RecordRoute handler. Gets a stream of points, and responds with statistics
/// about the "trip": number of points, number of known features visited,
/// total distance traveled, and total time spent.
@override
Future<RouteSummary> recordRoute(
grpc.ServiceCall call, Stream<Point> request) async {
int pointCount = 0;
int featureCount = 0;
double distance = 0.0;
Point previous;
final timer = new Stopwatch();
await for (var location in request) {
if (!timer.isRunning) timer.start();
pointCount++;
final feature = featuresDb.firstWhere((f) => f.location == location,
orElse: () => null);
if (feature != null) {
featureCount++;
}
// For each point after the first, add the incremental distance from the
// previous point to the total distance value.
if (previous != null) distance += _distance(previous, location);
previous = location;
}
timer.stop();
return new RouteSummary()
..pointCount = pointCount
..featureCount = featureCount
..distance = distance.round()
..elapsedTime = timer.elapsed.inSeconds;
}
```
In the method body we use `await for` on the request stream to repeatedly read
in our client's requests (in this case `Point` objects) until there are no more
messages. Once the request stream is done, the server can return its
`RouteSummary`.
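The `_distance` helper used above isn't shown in the tutorial; a typical implementation computes the great-circle (haversine) distance between consecutive points after decoding their E7 coordinates. A rough standalone sketch, written in Go for brevity — the function name and formula are our assumption, not the example's actual code:

```go
package main

import (
	"fmt"
	"math"
)

// haversineMeters returns the great-circle distance in meters between
// two points given in E7 coordinates (degrees * 10^7). An assumed
// implementation of the tutorial's unshown distance helper.
func haversineMeters(lat1, lon1, lat2, lon2 int32) float64 {
	const earthRadiusM = 6371000.0
	rad := func(e7 int32) float64 {
		return float64(e7) / 1e7 * math.Pi / 180
	}
	phi1, phi2 := rad(lat1), rad(lat2)
	dPhi := phi2 - phi1
	dLambda := rad(lon2) - rad(lon1)
	a := math.Sin(dPhi/2)*math.Sin(dPhi/2) +
		math.Cos(phi1)*math.Cos(phi2)*math.Sin(dLambda/2)*math.Sin(dLambda/2)
	return 2 * earthRadiusM * math.Asin(math.Sqrt(a))
}

func main() {
	// One degree of latitude is roughly 111 km.
	fmt.Printf("%.0f m\n", haversineMeters(0, 0, 10000000, 0))
}
```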
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```dart
/// RouteChat handler. Receives a stream of message/location pairs, and
/// responds with a stream of all previous messages at each of those
/// locations.
@override
Stream<RouteNote> routeChat(
grpc.ServiceCall call, Stream<RouteNote> request) async* {
await for (var note in request) {
final notes = routeNotes.putIfAbsent(note.location, () => <RouteNote>[]);
for (var note in notes) yield note;
notes.add(note);
}
}
```
This time we get a stream of `RouteNote`s that, as in our client-side streaming
example, we can read messages from. However, this time we return values via
our method's returned stream while the client is still writing messages to
*their* message stream.
The syntax for reading and writing here is the same as our client-streaming and
server-streaming methods. Although each side will always get the other's messages
in the order they were written, both the client and server can read and write in
any order — the streams operate completely independently.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```dart
Future<Null> main(List<String> args) async {
final server =
new grpc.Server([new RouteGuideService()]);
await server.serve(port: 8080);
print('Server listening...');
}
```
To build and start a server, we:
1. Create an instance of the gRPC server using `new grpc.Server()`,
giving a list of service implementations.
1. Call `serve()` on the server to start listening for requests, optionally passing
in the address and port to listen on. The server will continue to serve requests
asynchronously until `shutdown()` is called on it.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a Dart client for our `RouteGuide`
service. You can see our complete example client code in
[grpc-dart/example/route_guide/lib/src/client.dart](https://github.com/grpc/grpc-dart/tree/master/example/route_guide/lib/src/client.dart).
#### Creating a stub
To call service methods, we first need to create a gRPC *channel* to communicate
with the server. We create this by passing the server address and port number to
`new ClientChannel()` as follows:
```dart
final channel = new ClientChannel('127.0.0.1',
port: 8080,
options: const ChannelOptions(
credentials: const ChannelCredentials.insecure()));
```
You can use `ChannelOptions` to set TLS options (e.g., trusted certificates) for
the channel, if necessary.
Once the gRPC *channel* is set up, we need a client *stub* to perform RPCs. We
get one by creating a new instance of the `RouteGuideClient` class provided by
the package we generated from our .proto.
```dart
final stub = new RouteGuideClient(channel,
options: new CallOptions(timeout: new Duration(seconds: 30)));
```
You can use `CallOptions` to set the auth credentials (e.g., GCE credentials,
JWT credentials) if the service you request requires that - however, we don't
need to do this for our `RouteGuide` service.
#### Calling service methods
Now let's look at how we call our service methods. Note that in gRPC-Dart, RPCs
are always asynchronous: each RPC returns a `Future` or `Stream` that must be
awaited or listened to in order to get the response from the server, or an error.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local method.
```dart
final point = new Point()
..latitude = 409146138
..longitude = -746188906;
final feature = await stub.getFeature(point);
```
As you can see, we call the method on the stub we got earlier. In our method
parameters we pass a request protocol buffer object (in our case `Point`).
We can also pass an optional `CallOptions` object, which lets us change our RPC's
behaviour if necessary, such as setting a time-out. If the call succeeds, the
returned `Future` completes with the response information from the server; if
there is an error, the `Future` completes with that error.
##### Server-side streaming RPC
Here's where we call the server-side streaming method `ListFeatures`, which
returns a stream of geographical `Feature`s. If you've already read [Creating
the server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides.
```dart
final rect = new Rectangle()...; // initialize a Rectangle
try {
await for (var feature in stub.listFeatures(rect)) {
print(feature);
}
} catch (e) {
print('ERROR: $e');
}
```
As in the simple RPC, we pass the method a request. However, instead of getting
a `Future` back, we get a `Stream`. The client can use the stream to read the
server's responses.
We use `await for` on the returned stream to repeatedly read in the server's
responses to a response protocol buffer object (in this case a `Feature`) until
there are no more messages.
##### Client-side streaming RPC
The client-side streaming method `RecordRoute` is similar to the server-side
method, except that we pass the method a `Stream` and get a `Future` back.
```dart
final random = new Random();
// Generate a number of random points
Stream<Point> generateRoute(int count) async* {
for (int i = 0; i < count; i++) {
final point = featuresDb[random.nextInt(featuresDb.length)].location;
yield point;
}
}
final pointCount = random.nextInt(100) + 2; // Traverse at least two points
final summary = await stub.recordRoute(generateRoute(pointCount));
print('Route summary: $summary');
```
Since the `generateRoute()` method is `async*`, the points will be generated when
gRPC listens to the request stream and sends the point messages to the server. Once
the stream is done (when `generateRoute()` returns), gRPC knows that we've finished
writing and are expecting to receive a response. The returned `Future` will either
complete with the `RouteSummary` message received from the server, or an error.
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`. As in the
case of `RecordRoute`, we pass the method a stream where we will write the request
messages, and like in `ListFeatures`, we get back a stream that we can use to read
the response messages. However, this time we will send values via our method's stream
while the server is also writing messages to *their* message stream.
```dart
Stream<RouteNote> outgoingNotes = ...;
final responses = stub.routeChat(outgoingNotes);
await for (var note in responses) {
print('Got message ${note.message} at ${note.location.latitude}, ${note
.location.longitude}');
}
```
The syntax for reading and writing here is very similar to our client-side and
server-side streaming methods. Although each side will always get the other's
messages in the order they were written, both the client and server can read and
write in any order — the streams operate completely independently.
### Try it out!
Go to the `example/route_guide` directory.
First, make sure dependencies are downloaded:
```sh
$ pub get
```
To run the server, simply:
```sh
$ dart bin/server.dart
```
Likewise, to run the client:
```sh
$ dart bin/client.dart
```
### Reporting issues
Should you encounter an issue, please help us out by
<a href="https://github.com/grpc/grpc-dart/issues/new">filing issues</a>
in our issue tracker.

---
layout: tutorials
title: gRPC Basics - Go
aliases: [/docs/tutorials/basic/go.html]
---
This tutorial provides a basic Go programmer's introduction to
working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the Go gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). Note that the
example in this tutorial uses the proto3 version of the protocol buffers
language: you can find out more in the
[proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and the [Go
generated code
guide](https://developers.google.com/protocol-buffers/docs/reference/go-generated).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc-go/examples/route_guide](https://github.com/grpc/grpc-go/tree/master/examples/route_guide).
To download the example, clone the `grpc-go` repository by running the following
command:
```sh
$ go get google.golang.org/grpc
```
Then change your current directory to `grpc-go/examples/route_guide`:
```sh
$ cd $GOPATH/src/google.golang.org/grpc/examples/route_guide
```
You also should have the relevant tools installed to generate the server and client interface code - if you don't already, follow the setup instructions in [the Go quick start guide](/docs/quickstart/go/).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview). You can see the
complete .proto file in
[`examples/route_guide/routeguide/route_guide.proto`](https://github.com/grpc/grpc-go/blob/master/examples/route_guide/routeguide/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```proto
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```proto
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```proto
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```proto
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```proto
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the `Point` message type:
```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC Go plugin.
This is similar to what we did in the [quickstart guide](/docs/quickstart/go/).
From the `route_guide` example directory, run:
```sh
protoc -I routeguide/ routeguide/route_guide.proto --go_out=plugins=grpc:routeguide
```
Running this command generates the following file in the `routeguide` directory under the `route_guide` example directory:
- `route_guide.pb.go`
This contains:
- All the protocol buffer code to populate, serialize, and retrieve our request
and response message types
- An interface type (or *stub*) for clients to call with the methods defined in
the `RouteGuide` service.
- An interface type for servers to implement, also with the methods defined in
the `RouteGuide` service.
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and dispatch them to
the right service implementation.
You can find our example `RouteGuide` server in
[grpc-go/examples/route_guide/server/server.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/server/server.go).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `routeGuideServer` struct type that implements the generated `RouteGuideServer` interface:
```go
type routeGuideServer struct {
...
}
...
func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
...
}
...
func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
...
}
...
func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
...
}
...
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
...
}
...
```
##### Simple RPC
`routeGuideServer` implements all our service methods. Let's look at the
simplest type first, `GetFeature`, which just gets a `Point` from the client and
returns the corresponding feature information from its database in a `Feature`.
```go
func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
for _, feature := range s.savedFeatures {
if proto.Equal(feature.Location, point) {
return feature, nil
}
}
// No feature was found, return an unnamed feature
	return &pb.Feature{Location: point}, nil
}
```
The method is passed a context object for the RPC and the client's `Point`
protocol buffer request. It returns a `Feature` protocol buffer object with the
response information and an `error`. In the method we populate the `Feature`
with the appropriate information, and then `return` it along with a `nil` error
to tell gRPC that we've finished dealing with the RPC and that the `Feature` can
be returned to the client.
##### Server-side streaming RPC
Now let's look at one of our streaming RPCs. `ListFeatures` is a server-side
streaming RPC, so we need to send back multiple `Feature`s to our client.
```go
func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
for _, feature := range s.savedFeatures {
if inRange(feature.Location, rect) {
if err := stream.Send(feature); err != nil {
return err
}
}
}
return nil
}
```
As you can see, instead of getting simple request and response objects in our
method parameters, this time we get a request object (the `Rectangle` in which
our client wants to find `Feature`s) and a special
`RouteGuide_ListFeaturesServer` object to write our responses.
In the method, we populate as many `Feature` objects as we need to return,
writing them to the `RouteGuide_ListFeaturesServer` using its `Send()` method.
Finally, as in our simple RPC, we return a `nil` error to tell gRPC that we've
finished writing responses. Should any error happen in this call, we return a
non-`nil` error; the gRPC layer will translate it into an appropriate RPC status
to be sent on the wire.
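The `inRange` helper isn't shown above; conceptually it just checks whether a point falls inside the rectangle's bounding box. A minimal standalone sketch, with simplified stand-ins for the generated `Point` and `Rectangle` types (the exact field names and normalization are our assumption):

```go
package main

import "fmt"

// Simplified stand-ins for the generated protobuf types
// (coordinates are E7 integers, as in the Point message).
type Point struct{ Latitude, Longitude int32 }
type Rectangle struct{ Lo, Hi *Point }

// inRange reports whether p lies inside rect's bounding box,
// regardless of which corner is Lo and which is Hi. An assumed
// version of the helper used by ListFeatures above.
func inRange(p *Point, rect *Rectangle) bool {
	left := min32(rect.Lo.Longitude, rect.Hi.Longitude)
	right := max32(rect.Lo.Longitude, rect.Hi.Longitude)
	bottom := min32(rect.Lo.Latitude, rect.Hi.Latitude)
	top := max32(rect.Lo.Latitude, rect.Hi.Latitude)
	return p.Longitude >= left && p.Longitude <= right &&
		p.Latitude >= bottom && p.Latitude <= top
}

func min32(a, b int32) int32 {
	if a < b {
		return a
	}
	return b
}

func max32(a, b int32) int32 {
	if a > b {
		return a
	}
	return b
}

func main() {
	rect := &Rectangle{
		Lo: &Point{Latitude: 400000000, Longitude: -750000000},
		Hi: &Point{Latitude: 420000000, Longitude: -730000000},
	}
	fmt.Println(inRange(&Point{Latitude: 410000000, Longitude: -740000000}, rect))
	fmt.Println(inRange(&Point{Latitude: 500000000, Longitude: -740000000}, rect))
}
```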
##### Client-side streaming RPC
Now let's look at something a little more complicated: the client-side
streaming method `RecordRoute`, where we get a stream of `Point`s from the
client and return a single `RouteSummary` with information about their trip. As
you can see, this time the method doesn't have a request parameter at all.
Instead, it gets a `RouteGuide_RecordRouteServer` stream, which the server can
use to both read *and* write messages - it can receive client messages using
its `Recv()` method and return its single response using its `SendAndClose()`
method.
```go
func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
var pointCount, featureCount, distance int32
var lastPoint *pb.Point
startTime := time.Now()
for {
point, err := stream.Recv()
if err == io.EOF {
endTime := time.Now()
return stream.SendAndClose(&pb.RouteSummary{
PointCount: pointCount,
FeatureCount: featureCount,
Distance: distance,
ElapsedTime: int32(endTime.Sub(startTime).Seconds()),
})
}
if err != nil {
return err
}
pointCount++
for _, feature := range s.savedFeatures {
if proto.Equal(feature.Location, point) {
featureCount++
}
}
if lastPoint != nil {
distance += calcDistance(lastPoint, point)
}
lastPoint = point
}
}
```
In the method body we use the `RouteGuide_RecordRouteServer`'s `Recv()` method to
repeatedly read in our client's requests to a request object (in this case a
`Point`) until there are no more messages: the server needs to check the
error returned from `Recv()` after each call. If this is `nil`, the stream is
still good and it can continue reading; if it's `io.EOF` the message stream has
ended and the server can return its `RouteSummary`. If it has any other value,
we return the error "as is" so that it'll be translated to an RPC status by the
gRPC layer.
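This `Recv()`-until-`io.EOF` loop is a general Go pattern rather than anything gRPC-specific. Here's a minimal standalone sketch with a toy stream type standing in for the generated one (the type and its message contents are invented for illustration):

```go
package main

import (
	"fmt"
	"io"
)

// toyStream mimics the Recv() contract of a generated gRPC stream:
// one message per call, then io.EOF once the stream has ended.
type toyStream struct {
	msgs []string
	next int
}

func (s *toyStream) Recv() (string, error) {
	if s.next >= len(s.msgs) {
		return "", io.EOF
	}
	m := s.msgs[s.next]
	s.next++
	return m, nil
}

func main() {
	stream := &toyStream{msgs: []string{"p1", "p2", "p3"}}
	count := 0
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			break // clean end of stream: time to send the summary
		}
		if err != nil {
			panic(err) // any other error would be returned as-is
		}
		count++
		fmt.Println("got", msg)
	}
	fmt.Println("received", count, "messages")
}
```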
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```go
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
for {
in, err := stream.Recv()
if err == io.EOF {
return nil
}
if err != nil {
return err
}
key := serialize(in.Location)
... // look for notes to be sent to client
for _, note := range s.routeNotes[key] {
if err := stream.Send(note); err != nil {
return err
}
}
}
}
```
This time we get a `RouteGuide_RouteChatServer` stream that, as in our
client-side streaming example, can be used to read and write messages. However,
this time we return values via our method's stream while the client is still
writing messages to *their* message stream.
The syntax for reading and writing here is very similar to our client-streaming
method, except the server uses the stream's `Send()` method rather than
`SendAndClose()` because it's writing multiple responses. Although each side
will always get the other's messages in the order they were written, both the
client and server can read and write in any order — the streams operate
completely independently.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```go
flag.Parse()
lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
if err != nil {
	log.Fatalf("failed to listen: %v", err)
}
grpcServer := grpc.NewServer()
pb.RegisterRouteGuideServer(grpcServer, &routeGuideServer{})
... // determine whether to use TLS
grpcServer.Serve(lis)
```
To build and start a server, we:
1. Specify the port we want to use to listen for client requests using `lis, err
:= net.Listen("tcp", fmt.Sprintf(":%d", *port))`.
2. Create an instance of the gRPC server using `grpc.NewServer()`.
3. Register our service implementation with the gRPC server.
4. Call `Serve()` on the server with our port details to do a blocking wait
until the process is killed or `Stop()` is called.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a Go client for our `RouteGuide`
service. You can see our complete example client code in
[grpc-go/examples/route_guide/client/client.go](https://github.com/grpc/grpc-go/tree/master/examples/route_guide/client/client.go).
#### Creating a stub
To call service methods, we first need to create a gRPC *channel* to communicate
with the server. We create this by passing the server address and port number to
`grpc.Dial()` as follows:
```go
conn, err := grpc.Dial(*serverAddr)
if err != nil {
	...
}
defer conn.Close()
```
You can use `DialOptions` to set auth credentials (such as TLS, GCE
credentials, or JWT credentials) in `grpc.Dial` if the service you're
connecting to requires them; our `RouteGuide` service doesn't, however.
Once the gRPC *channel* is set up, we need a client *stub* to perform RPCs. We
get this using the `NewRouteGuideClient` method provided in the `pb` package we
generated from our .proto.
```go
client := pb.NewRouteGuideClient(conn)
```
#### Calling service methods
Now let's look at how we call our service methods. Note that in gRPC-Go, RPCs
operate in a blocking/synchronous mode, which means that the RPC call waits for
the server to respond, and will either return a response or an error.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local method.
```go
feature, err := client.GetFeature(context.Background(), &pb.Point{409146138, -746188906})
if err != nil {
	...
}
```
As you can see, we call the method on the stub we got earlier. In our method
parameters we create and populate a request protocol buffer object (in our case
a `Point`). We also pass a `context.Context` object, which lets us change our
RPC's behaviour if necessary, such as timing out or cancelling an in-flight
RPC. If the call doesn't return an error, we can read the server's response
from the first return value.
```go
log.Println(feature)
```
##### Server-side streaming RPC
Here's where we call the server-side streaming method `ListFeatures`, which
returns a stream of geographical `Feature`s. If you've already read [Creating
the server](#server), some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides.
```go
rect := &pb.Rectangle{ ... } // initialize a pb.Rectangle
stream, err := client.ListFeatures(context.Background(), rect)
if err != nil {
	...
}
for {
	feature, err := stream.Recv()
	if err == io.EOF {
		break
	}
	if err != nil {
		log.Fatalf("%v.ListFeatures(_) = _, %v", client, err)
	}
	log.Println(feature)
}
```
As in the simple RPC, we pass the method a context and a request. However,
instead of getting a response object back, we get back an instance of
`RouteGuide_ListFeaturesClient`. The client can use the
`RouteGuide_ListFeaturesClient` stream to read the server's responses.
We use the `RouteGuide_ListFeaturesClient`'s `Recv()` method to repeatedly read
the server's responses into a response protocol buffer object (in this case a
`Feature`) until there are no more messages: the client needs to check the error
`err` returned from `Recv()` after each call. If it's `nil`, the stream is still
good and reading can continue; if it's `io.EOF`, the message stream has ended;
any other value is an RPC error, which is passed back through `err`.
##### Client-side streaming RPC
The client-side streaming method `RecordRoute` is similar to the server-side
method, except that we only pass the method a context and get a
`RouteGuide_RecordRouteClient` stream back, which we can use to both write *and*
read messages.
```go
// Create a random number of random points
r := rand.New(rand.NewSource(time.Now().UnixNano()))
pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points
var points []*pb.Point
for i := 0; i < pointCount; i++ {
	points = append(points, randomPoint(r))
}
log.Printf("Traversing %d points.", len(points))
stream, err := client.RecordRoute(context.Background())
if err != nil {
	log.Fatalf("%v.RecordRoute(_) = _, %v", client, err)
}
for _, point := range points {
	if err := stream.Send(point); err != nil {
		if err == io.EOF {
			break
		}
		log.Fatalf("%v.Send(%v) = %v", stream, point, err)
	}
}
reply, err := stream.CloseAndRecv()
if err != nil {
	log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil)
}
log.Printf("Route summary: %v", reply)
```
The `RouteGuide_RecordRouteClient` has a `Send()` method that we can use to send
requests to the server. Once we've finished writing our client's requests to the
stream using `Send()`, we need to call `CloseAndRecv()` on the stream to let
gRPC know that we've finished writing and are expecting to receive a response.
We get our RPC status from the `err` returned from `CloseAndRecv()`. If the
status is `nil`, then the first return value from `CloseAndRecv()` will be a
valid server response.
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`. As in the
case of `RecordRoute`, we only pass the method a context object and get back a
stream that we can use to both write and read messages. However, this time we
send values via our method's stream while the server is still writing messages
to *their* message stream.
```go
stream, err := client.RouteChat(context.Background())
waitc := make(chan struct{})
go func() {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			// read done.
			close(waitc)
			return
		}
		if err != nil {
			log.Fatalf("Failed to receive a note : %v", err)
		}
		log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude)
	}
}()
for _, note := range notes {
	if err := stream.Send(note); err != nil {
		log.Fatalf("Failed to send a note: %v", err)
	}
}
stream.CloseSend()
<-waitc
```
The syntax for reading and writing here is very similar to our client-side
streaming method, except we use the stream's `CloseSend()` method once we've
finished our call. Although each side will always get the other's messages in
the order they were written, both the client and server can read and write in
any order — the streams operate completely independently.
### Try it out!
To compile and run the server, assuming you are in the folder
`$GOPATH/src/google.golang.org/grpc/examples/route_guide`, simply:
```sh
$ go run server/server.go
```
Likewise, to run the client:
```sh
$ go run client/client.go
```
---
layout: tutorials
title: gRPC Basics - Java
aliases: [/docs/tutorials/basic/java.html]
---
This tutorial provides a basic Java programmer's introduction to
working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the Java gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the
[proto3](https://github.com/google/protobuf/releases) version of the protocol
buffers language: you can find out more in the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and [Java
generated code
guide](https://developers.google.com/protocol-buffers/docs/reference/java-generated).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc-java/examples/src/main/java/io/grpc/examples](https://github.com/grpc/grpc-java/tree/master/examples/src/main/java/io/grpc/examples).
To download the example, clone the latest release in `grpc-java` repository by
running the following command:
```sh
$ git clone -b {{< param grpc_java_release_tag >}} https://github.com/grpc/grpc-java.git
```
Then change your current directory to `grpc-java/examples`:
```sh
$ cd grpc-java/examples
```
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file in
[`grpc-java/examples/src/main/proto/route_guide.proto`](https://github.com/grpc/grpc-java/blob/master/examples/src/main/proto/route_guide.proto).
As we're generating Java code in this example, we've specified a `java_package`
file option in our .proto:
```proto
option java_package = "io.grpc.examples.routeguide";
```
This specifies the package we want to use for our generated Java classes. If no
explicit `java_package` option is given in the .proto file, then by default the
proto package (specified using the "package" keyword) will be used. However,
proto packages generally do not make good Java packages since proto packages are
not expected to start with reverse domain names. If we generate code in another
language from this .proto, the `java_package` option has no effect.
To define a service, we specify a named `service` in the .proto file:
```proto
service RouteGuide {
...
}
```
Then we define `rpc` methods inside our service definition, specifying their
request and response types. gRPC lets you define four kinds of service methods,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
  and waits for a response to come back, just like a normal function call.

  ```proto
  // Obtains the feature at a given position.
  rpc GetFeature(Point) returns (Feature) {}
  ```

- A *server-side streaming RPC* where the client sends a request to the server
  and gets a stream to read a sequence of messages back. The client reads from
  the returned stream until there are no more messages. As you can see in our
  example, you specify a server-side streaming method by placing the `stream`
  keyword before the *response* type.

  ```proto
  // Obtains the Features available within the given Rectangle. Results are
  // streamed rather than returned at once (e.g. in a response message with a
  // repeated field), as the rectangle may cover a large area and contain a
  // huge number of features.
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  ```

- A *client-side streaming RPC* where the client writes a sequence of messages
  and sends them to the server, again using a provided stream. Once the client
  has finished writing the messages, it waits for the server to read them all
  and return its response. You specify a client-side streaming method by placing
  the `stream` keyword before the *request* type.

  ```proto
  // Accepts a stream of Points on a route being traversed, returning a
  // RouteSummary when traversal is completed.
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  ```

- A *bidirectional streaming RPC* where both sides send a sequence of messages
  using a read-write stream. The two streams operate independently, so clients
  and servers can read and write in whatever order they like: for example, the
  server could wait to receive all the client messages before writing its
  responses, or it could alternately read a message then write a message, or
  some other combination of reads and writes. The order of messages in each
  stream is preserved. You specify this type of method by placing the `stream`
  keyword before both the request and the response.

  ```proto
  // Accepts a stream of RouteNotes sent while a route is being traversed,
  // while receiving other RouteNotes (e.g. from other users).
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
  ```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC Java plugin. You need to use the
[proto3](https://github.com/google/protobuf/releases) compiler (which supports
both proto2 and proto3 syntax) in order to generate gRPC services.
When using Gradle or Maven, the protoc build plugin can generate the necessary
code as part of the build. You can refer to the <a
href="https://github.com/grpc/grpc-java/blob/master/README.md">README</a> for
how to generate code from your own .proto files.
The following classes are generated from our service definition:
- `Feature.java`, `Point.java`, `Rectangle.java`, and others which contain all
  the protocol buffer code to populate, serialize, and retrieve our request and
  response message types.
- `RouteGuideGrpc.java` which contains (along with some other useful code):
  - a base class for `RouteGuide` servers to implement,
    `RouteGuideGrpc.RouteGuideImplBase`, with all the methods defined in the
    `RouteGuide` service.
  - *stub* classes that clients can use to talk to a `RouteGuide` server.
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Overriding the service base class generated from our service definition: doing
the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
service responses.
You can find our example `RouteGuide` server in
[grpc-java/examples/src/main/java/io/grpc/examples/routeguide/RouteGuideServer.java](https://github.com/grpc/grpc-java/blob/master/examples/src/main/java/io/grpc/examples/routeguide/RouteGuideServer.java).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `RouteGuideService` class that extends the
generated `RouteGuideGrpc.RouteGuideImplBase` abstract class:
```java
private static class RouteGuideService extends RouteGuideGrpc.RouteGuideImplBase {
  ...
}
```
##### Simple RPC
`RouteGuideService` implements all our service methods. Let's
look at the simplest type first, `GetFeature`, which just gets a `Point` from
the client and returns the corresponding feature information from its database
in a `Feature`.
```java
@Override
public void getFeature(Point request, StreamObserver<Feature> responseObserver) {
  responseObserver.onNext(checkFeature(request));
  responseObserver.onCompleted();
}

...

private Feature checkFeature(Point location) {
  for (Feature feature : features) {
    if (feature.getLocation().getLatitude() == location.getLatitude()
        && feature.getLocation().getLongitude() == location.getLongitude()) {
      return feature;
    }
  }
  // No feature was found, return an unnamed feature.
  return Feature.newBuilder().setName("").setLocation(location).build();
}
```
`getFeature()` takes two parameters:
- `Point`: the request
- `StreamObserver<Feature>`: a response observer, which is a special interface
for the server to call with its response.
To return our response to the client and complete the call:
1. We construct and populate a `Feature` response object to return to the
client, as specified in our service definition. In this example, we do this
in a separate private `checkFeature()` method.
2. We use the response observer's `onNext()` method to return the `Feature`.
3. We use the response observer's `onCompleted()` method to specify that we've
finished dealing with the RPC.
##### Server-side streaming RPC
Next let's look at one of our streaming RPCs. `ListFeatures` is a server-side
streaming RPC, so we need to send back multiple `Feature`s to our client.
```java
private final Collection<Feature> features;

...

@Override
public void listFeatures(Rectangle request, StreamObserver<Feature> responseObserver) {
  int left = min(request.getLo().getLongitude(), request.getHi().getLongitude());
  int right = max(request.getLo().getLongitude(), request.getHi().getLongitude());
  int top = max(request.getLo().getLatitude(), request.getHi().getLatitude());
  int bottom = min(request.getLo().getLatitude(), request.getHi().getLatitude());
  for (Feature feature : features) {
    if (!RouteGuideUtil.exists(feature)) {
      continue;
    }
    int lat = feature.getLocation().getLatitude();
    int lon = feature.getLocation().getLongitude();
    if (lon >= left && lon <= right && lat >= bottom && lat <= top) {
      responseObserver.onNext(feature);
    }
  }
  responseObserver.onCompleted();
}
```
Like the simple RPC, this method gets a request object (the `Rectangle` in which
our client wants to find `Feature`s) and a `StreamObserver` response observer.
This time, we get as many `Feature` objects as we need to return to the client
(in this case, we select them from the service's feature collection based on
whether they're inside our request `Rectangle`), and write them each in turn to
the response observer using its `onNext()` method. Finally, as in our simple
RPC, we use the response observer's `onCompleted()` method to tell gRPC that
we've finished writing responses.
##### Client-side streaming RPC
Now let's look at something a little more complicated: the client-side streaming
method `RecordRoute`, where we get a stream of `Point`s from the client and
return a single `RouteSummary` with information about their trip.
```java
@Override
public StreamObserver<Point> recordRoute(final StreamObserver<RouteSummary> responseObserver) {
  return new StreamObserver<Point>() {
    int pointCount;
    int featureCount;
    int distance;
    Point previous;
    long startTime = System.nanoTime();

    @Override
    public void onNext(Point point) {
      pointCount++;
      if (RouteGuideUtil.exists(checkFeature(point))) {
        featureCount++;
      }
      // For each point after the first, add the incremental distance from the previous point
      // to the total distance value.
      if (previous != null) {
        distance += calcDistance(previous, point);
      }
      previous = point;
    }

    @Override
    public void onError(Throwable t) {
      logger.log(Level.WARNING, "Encountered error in recordRoute", t);
    }

    @Override
    public void onCompleted() {
      long seconds = NANOSECONDS.toSeconds(System.nanoTime() - startTime);
      responseObserver.onNext(RouteSummary.newBuilder().setPointCount(pointCount)
          .setFeatureCount(featureCount).setDistance(distance)
          .setElapsedTime((int) seconds).build());
      responseObserver.onCompleted();
    }
  };
}
```
As you can see, like the previous method types our method gets a
`StreamObserver` response observer parameter, but this time it returns a
`StreamObserver` for the client to write its `Point`s.
In the method body we instantiate an anonymous `StreamObserver` to return, in
which we:
- Override the `onNext()` method to get features and other information each time
the client writes a `Point` to the message stream.
- Override the `onCompleted()` method (called when the *client* has finished
writing messages) to populate and build our `RouteSummary`. We then call our
method's own response observer's `onNext()` with our `RouteSummary`, and then
call its `onCompleted()` method to finish the call from the server side.
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```java
@Override
public StreamObserver<RouteNote> routeChat(final StreamObserver<RouteNote> responseObserver) {
  return new StreamObserver<RouteNote>() {
    @Override
    public void onNext(RouteNote note) {
      List<RouteNote> notes = getOrCreateNotes(note.getLocation());
      // Respond with all previous notes at this location.
      for (RouteNote prevNote : notes.toArray(new RouteNote[0])) {
        responseObserver.onNext(prevNote);
      }
      // Now add the new note to the list
      notes.add(note);
    }

    @Override
    public void onError(Throwable t) {
      logger.log(Level.WARNING, "Encountered error in routeChat", t);
    }

    @Override
    public void onCompleted() {
      responseObserver.onCompleted();
    }
  };
}
```
As with our client-side streaming example, we both get and return a
`StreamObserver` response observer, except this time we return values via our
method's response observer while the client is still writing messages to *their*
message stream. The syntax for reading and writing here is exactly the same as
for our client-streaming and server-streaming methods. Although each side will
always get the other's messages in the order they were written, both the client
and server can read and write in any order — the streams operate completely
independently.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```java
public RouteGuideServer(int port, URL featureFile) throws IOException {
  this(ServerBuilder.forPort(port), port, RouteGuideUtil.parseFeatures(featureFile));
}

/** Create a RouteGuide server using serverBuilder as a base and features as data. */
public RouteGuideServer(ServerBuilder<?> serverBuilder, int port, Collection<Feature> features) {
  this.port = port;
  server = serverBuilder.addService(new RouteGuideService(features))
      .build();
}

...

public void start() throws IOException {
  server.start();
  logger.info("Server started, listening on " + port);
  ...
}
```
As you can see, we build and start our server using a `ServerBuilder`.
To do this, we:
1. Specify the address and port we want to use to listen for client requests
using the builder's `forPort()` method.
1. Create an instance of our service implementation class `RouteGuideService`
and pass it to the builder's `addService()` method.
1. Call `build()` and `start()` on the builder to create and start an RPC server
for our service.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a Java client for our `RouteGuide`
service. You can see our complete example client code in
[grpc-java/examples/src/main/java/io/grpc/examples/routeguide/RouteGuideClient.java](https://github.com/grpc/grpc-java/blob/master/examples/src/main/java/io/grpc/examples/routeguide/RouteGuideClient.java).
#### Creating a stub
To call service methods, we first need to create a *stub*, or rather, two stubs:
- a *blocking/synchronous* stub: this means that the RPC call waits for the
server to respond, and will either return a response or raise an exception.
- a *non-blocking/asynchronous* stub that makes non-blocking calls to the
server, where the response is returned asynchronously. You can make certain
types of streaming call only using the asynchronous stub.
First we need to create a gRPC *channel* for our stub, specifying the server
address and port we want to connect to:
```java
public RouteGuideClient(String host, int port) {
  this(ManagedChannelBuilder.forAddress(host, port).usePlaintext());
}

/** Construct client for accessing RouteGuide server using the existing channel. */
public RouteGuideClient(ManagedChannelBuilder<?> channelBuilder) {
  channel = channelBuilder.build();
  blockingStub = RouteGuideGrpc.newBlockingStub(channel);
  asyncStub = RouteGuideGrpc.newStub(channel);
}
```
We use a `ManagedChannelBuilder` to create the channel.
Now we can use the channel to create our stubs using the `newStub` and
`newBlockingStub` methods provided in the `RouteGuideGrpc` class we generated
from our .proto.
```java
blockingStub = RouteGuideGrpc.newBlockingStub(channel);
asyncStub = RouteGuideGrpc.newStub(channel);
```
#### Calling service methods
Now let's look at how we call our service methods.
##### Simple RPC
Calling the simple RPC `GetFeature` on the blocking stub is as straightforward
as calling a local method.
```java
Point request = Point.newBuilder().setLatitude(lat).setLongitude(lon).build();
Feature feature;
try {
  feature = blockingStub.getFeature(request);
} catch (StatusRuntimeException e) {
  logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
  return;
}
```
We create and populate a request protocol buffer object (in our case `Point`),
pass it to the `getFeature()` method on our blocking stub, and get back a
`Feature`.
If an error occurs, it is encoded as a `Status`, which we can obtain from the
`StatusRuntimeException`.
##### Server-side streaming RPC
Next, let's look at a server-side streaming call to `ListFeatures`, which
returns a stream of geographical `Feature`s:
```java
Rectangle request =
    Rectangle.newBuilder()
        .setLo(Point.newBuilder().setLatitude(lowLat).setLongitude(lowLon).build())
        .setHi(Point.newBuilder().setLatitude(hiLat).setLongitude(hiLon).build()).build();
Iterator<Feature> features;
try {
  features = blockingStub.listFeatures(request);
} catch (StatusRuntimeException e) {
  logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
  return;
}
```
As you can see, it's very similar to the simple RPC we just looked at, except
instead of returning a single `Feature`, the method returns an `Iterator` that
the client can use to read all the returned `Feature`s.
##### Client-side streaming RPC
Now for something a little more complicated: the client-side streaming method
`RecordRoute`, where we send a stream of `Point`s to the server and get back a
single `RouteSummary`. For this method we need to use the asynchronous stub. If
you've already read [Creating the server](#server), some of this may look very
familiar - asynchronous streaming RPCs are implemented in a similar way on both
sides.
```java
public void recordRoute(List<Feature> features, int numPoints) throws InterruptedException {
  info("*** RecordRoute");
  final CountDownLatch finishLatch = new CountDownLatch(1);
  StreamObserver<RouteSummary> responseObserver = new StreamObserver<RouteSummary>() {
    @Override
    public void onNext(RouteSummary summary) {
      info("Finished trip with {0} points. Passed {1} features. "
          + "Travelled {2} meters. It took {3} seconds.", summary.getPointCount(),
          summary.getFeatureCount(), summary.getDistance(), summary.getElapsedTime());
    }

    @Override
    public void onError(Throwable t) {
      Status status = Status.fromThrowable(t);
      logger.log(Level.WARNING, "RecordRoute Failed: {0}", status);
      finishLatch.countDown();
    }

    @Override
    public void onCompleted() {
      info("Finished RecordRoute");
      finishLatch.countDown();
    }
  };

  StreamObserver<Point> requestObserver = asyncStub.recordRoute(responseObserver);
  try {
    // Send numPoints points randomly selected from the features list.
    Random rand = new Random();
    for (int i = 0; i < numPoints; ++i) {
      int index = rand.nextInt(features.size());
      Point point = features.get(index).getLocation();
      info("Visiting point {0}, {1}", RouteGuideUtil.getLatitude(point),
          RouteGuideUtil.getLongitude(point));
      requestObserver.onNext(point);
      // Sleep for a bit before sending the next one.
      Thread.sleep(rand.nextInt(1000) + 500);
      if (finishLatch.getCount() == 0) {
        // RPC completed or errored before we finished sending.
        // Sending further requests won't error, but they will just be thrown away.
        return;
      }
    }
  } catch (RuntimeException e) {
    // Cancel RPC
    requestObserver.onError(e);
    throw e;
  }
  // Mark the end of requests
  requestObserver.onCompleted();
  // Receiving happens asynchronously
  finishLatch.await(1, TimeUnit.MINUTES);
}
```
As you can see, to call this method we need to create a `StreamObserver`, which
implements a special interface for the server to call with its `RouteSummary`
response. In our `StreamObserver` we:
- Override the `onNext()` method to print out the returned information when the
server writes a `RouteSummary` to the message stream.
- Override the `onCompleted()` method (called when the *server* has completed
the call on its side) to reduce a `CountDownLatch` that we can check to see if
the server has finished writing.
We then pass the `StreamObserver` to the asynchronous stub's `recordRoute()`
method and get back our own `StreamObserver` request observer to write our
`Point`s to send to the server. Once we've finished writing points, we use the
request observer's `onCompleted()` method to tell gRPC that we've finished
writing on the client side. Once we're done, we check our `CountDownLatch` to
check that the server has completed on its side.
##### Bidirectional streaming RPC
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```java
public void routeChat() throws Exception {
  info("*** RouteChat");
  final CountDownLatch finishLatch = new CountDownLatch(1);
  StreamObserver<RouteNote> requestObserver =
      asyncStub.routeChat(new StreamObserver<RouteNote>() {
        @Override
        public void onNext(RouteNote note) {
          info("Got message \"{0}\" at {1}, {2}", note.getMessage(), note.getLocation()
              .getLatitude(), note.getLocation().getLongitude());
        }

        @Override
        public void onError(Throwable t) {
          Status status = Status.fromThrowable(t);
          logger.log(Level.WARNING, "RouteChat Failed: {0}", status);
          finishLatch.countDown();
        }

        @Override
        public void onCompleted() {
          info("Finished RouteChat");
          finishLatch.countDown();
        }
      });

  try {
    RouteNote[] requests =
        {newNote("First message", 0, 0), newNote("Second message", 0, 1),
            newNote("Third message", 1, 0), newNote("Fourth message", 1, 1)};
    for (RouteNote request : requests) {
      info("Sending message \"{0}\" at {1}, {2}", request.getMessage(), request.getLocation()
          .getLatitude(), request.getLocation().getLongitude());
      requestObserver.onNext(request);
    }
  } catch (RuntimeException e) {
    // Cancel RPC
    requestObserver.onError(e);
    throw e;
  }
  // Mark the end of requests
  requestObserver.onCompleted();
  // Receiving happens asynchronously
  finishLatch.await(1, TimeUnit.MINUTES);
}
```
As with our client-side streaming example, we both get and return a
`StreamObserver`, except this time we send values via our method's request
observer while the server is still writing messages to *their* message stream.
The syntax for reading and writing here is exactly the same as for our
client-streaming method. Although each side will always get the other's
messages in the order they were written, both the client and server can read and
write in any order — the streams operate completely independently.
### Try it out!
Follow the instructions in the example directory
[README](https://github.com/grpc/grpc-java/blob/master/examples/README.md) to
build and run the client and server.
---
layout: tutorials
title: gRPC Basics - Node.js
aliases: [/docs/tutorials/basic/node.html]
---
This tutorial provides a basic Node.js programmer's introduction
to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Use the Node.js gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the
[proto3](https://github.com/google/protobuf/releases) version of the protocol
buffers language. You can find out more in the
[proto3 language guide](https://developers.google.com/protocol-buffers/docs/proto3).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/node/dynamic&#95;codegen/route&#95;guide](https://github.com/grpc/grpc/tree/
{{< param grpc_release_tag >}}/examples/node/dynamic_codegen/route_guide).
As you'll see if you look at the repository, there's also a very similar-looking
example in
[grpc/grpc/examples/node/static&#95;codegen/route&#95;guide](https://github.com/grpc/grpc/tree/
{{< param grpc_release_tag >}}/examples/node/static_codegen/route_guide).
We have two versions of our route guide example because there are two ways to
generate the code needed to work with protocol buffers in Node.js - one approach
uses `Protobuf.js` to dynamically generate the code at runtime, the other uses
code statically generated using the protocol buffer compiler `protoc`. The
examples behave identically, and either server can be used with either client.
As suggested by the directory name, we'll be using the version with dynamically
generated code in this document, but feel free to look at the static code
example too.
To download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
```
Then change your current directory to `examples/node`:
```sh
$ cd examples/node
```
You also should have the relevant tools installed to generate the server and
client interface code - if you don't already, follow the setup instructions in
[the Node.js quick start guide](/docs/quickstart/node/).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
  ...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}
```
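To make the E7 representation concrete, here's a tiny conversion sketch (the `toE7`/`fromE7` helper names are our own illustration, not part of the example code):

```js
// E7 in plain numbers: degrees are scaled by 10^7 and rounded, so a
// coordinate fits in an int32 field. Helpers are illustrative only.
var COORD_FACTOR = 1e7;

function toE7(degrees) {
  return Math.round(degrees * COORD_FACTOR);
}

function fromE7(e7) {
  return e7 / COORD_FACTOR;
}

console.log(toE7(40.9146138));    // 409146138
console.log(fromE7(-746188906));  // -74.6188906
```

The client snippets later in this tutorial divide by the same `COORD_FACTOR` when printing coordinates.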
### Loading service descriptors from proto files
The Node.js library dynamically generates service descriptors and client stub
definitions from `.proto` files loaded at runtime.
To load a `.proto` file, simply `require` the gRPC proto loader library and use its
`loadSync()` method, then pass the output to the gRPC library's `loadPackageDefinition` method:
```js
var PROTO_PATH = __dirname + '/../../../protos/route_guide.proto';
var grpc = require('grpc');
var protoLoader = require('@grpc/proto-loader');
// Suggested options for similarity to existing grpc.load behavior
var packageDefinition = protoLoader.loadSync(
    PROTO_PATH,
    {keepCase: true,
     longs: String,
     enums: String,
     defaults: true,
     oneofs: true
    });
var protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
// The protoDescriptor object has the full package hierarchy
var routeguide = protoDescriptor.routeguide;
```
Once you've done this, the stub constructor is in the `routeguide` namespace
(`protoDescriptor.routeguide.RouteGuide`) and the service descriptor (which is
used to create a server) is a property of the stub
(`protoDescriptor.routeguide.RouteGuide.service`).
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
service responses.
You can find our example `RouteGuide` server in
[examples/node/dynamic&#95;codegen/route&#95;guide/route&#95;guide&#95;server.js](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/node/dynamic_codegen/route_guide/route_guide_server.js).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `Server` constructor generated from the
`RouteGuide.service` descriptor object:
```js
var Server = new grpc.Server();
```
In this case we're implementing the *asynchronous* version of `RouteGuide`,
which provides our default gRPC server behavior.
The functions in `route_guide_server.js` implement all our service methods.
Let's look at the simplest type first, `getFeature`, which just gets a `Point`
from the client and returns the corresponding feature information from its
database in a `Feature`.
```js
function checkFeature(point) {
  var feature;
  // Check if there is already a feature object for the given point
  for (var i = 0; i < feature_list.length; i++) {
    feature = feature_list[i];
    if (feature.location.latitude === point.latitude &&
        feature.location.longitude === point.longitude) {
      return feature;
    }
  }
  var name = '';
  feature = {
    name: name,
    location: point
  };
  return feature;
}

function getFeature(call, callback) {
  callback(null, checkFeature(call.request));
}
```
The method is passed a call object for the RPC, which has the `Point` parameter
as a property, and a callback to which we can pass our returned `Feature`. In
the method body we populate a `Feature` corresponding to the given point and
pass it to the callback, with a null first parameter to indicate that there is
no error.
Now let's look at something a bit more complicated - a streaming RPC.
`listFeatures` is a server-side streaming RPC, so we need to send back multiple
`Feature`s to our client.
```js
function listFeatures(call) {
  var lo = call.request.lo;
  var hi = call.request.hi;
  var left = _.min([lo.longitude, hi.longitude]);
  var right = _.max([lo.longitude, hi.longitude]);
  var top = _.max([lo.latitude, hi.latitude]);
  var bottom = _.min([lo.latitude, hi.latitude]);
  // For each feature, check if it is in the given bounding box
  _.each(feature_list, function(feature) {
    if (feature.name === '') {
      return;
    }
    if (feature.location.longitude >= left &&
        feature.location.longitude <= right &&
        feature.location.latitude >= bottom &&
        feature.location.latitude <= top) {
      call.write(feature);
    }
  });
  call.end();
}
```
As you can see, instead of getting the call object and callback in our method
parameters, this time we get a `call` object that implements the `Writable`
interface. In the method, we create as many `Feature` objects as we need to
return, writing them to the `call` using its `write()` method. Finally, we call
`call.end()` to indicate that we have sent all messages.
If you look at the client-side streaming method `RecordRoute` you'll see it's
quite similar to the unary call, except this time the `call` parameter
implements the `Reader` interface. The `call`'s `'data'` event fires every time
there is new data, and the `'end'` event fires when all data has been read. As
in the unary case, we respond by calling the callback:
```js
call.on('data', function(point) {
  // Process user data
});
call.on('end', function() {
  callback(null, result);
});
```
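The tutorial doesn't show the full `recordRoute` handler, but one plausible shape looks like the sketch below, written under the assumption that `checkFeature` and `feature_list` from the earlier snippets are in scope. The haversine-based `getDistance` helper is our own illustration, not necessarily the repository's exact code:

```js
var COORD_FACTOR = 1e7;

// Great-circle distance between two E7 points, in meters (haversine
// formula). Illustrative only; the repository's version may differ.
function getDistance(start, end) {
  function toRadians(num) { return num * Math.PI / 180; }
  var R = 6371000;  // Earth radius in meters
  var lat1 = toRadians(start.latitude / COORD_FACTOR);
  var lat2 = toRadians(end.latitude / COORD_FACTOR);
  var dLat = lat2 - lat1;
  var dLon = toRadians((end.longitude - start.longitude) / COORD_FACTOR);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1) * Math.cos(lat2) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function recordRoute(call, callback) {
  var pointCount = 0;
  var featureCount = 0;
  var distance = 0;
  var previous = null;
  var startTime = Date.now();
  call.on('data', function(point) {
    pointCount += 1;
    // checkFeature comes from the getFeature snippet above
    if (checkFeature(point).name !== '') {
      featureCount += 1;
    }
    if (previous !== null) {
      distance += getDistance(previous, point);
    }
    previous = point;
  });
  call.on('end', function() {
    callback(null, {
      point_count: pointCount,
      feature_count: featureCount,
      distance: Math.round(distance),
      elapsed_time: Math.round((Date.now() - startTime) / 1000)
    });
  });
}
```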
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.
```js
function routeChat(call) {
  call.on('data', function(note) {
    var key = pointKey(note.location);
    /* For each note sent, respond with all previous notes that correspond to
     * the same point */
    if (route_notes.hasOwnProperty(key)) {
      _.each(route_notes[key], function(note) {
        call.write(note);
      });
    } else {
      route_notes[key] = [];
    }
    // Then add the new note to the list
    route_notes[key].push(JSON.parse(JSON.stringify(note)));
  });
  call.on('end', function() {
    call.end();
  });
}
```
This time we get a `call` implementing `Duplex` that can be used to read *and*
write messages. The syntax for reading and writing here is exactly the same as
for our client-streaming and server-streaming methods. Although each side will
always get the other's messages in the order they were written, both the client
and server can read and write in any order — the streams operate completely
independently.
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```js
function getServer() {
  var server = new grpc.Server();
  server.addProtoService(routeguide.RouteGuide.service, {
    getFeature: getFeature,
    listFeatures: listFeatures,
    recordRoute: recordRoute,
    routeChat: routeChat
  });
  return server;
}
var routeServer = getServer();
routeServer.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure());
routeServer.start();
```
As you can see, we build and start our server with the following steps:
1. Create an instance of `grpc.Server`.
1. Register our service method implementations by passing the
   `RouteGuide.service` descriptor and an implementation map to the instance's
   `addProtoService()` method.
1. Specify the address and port we want to use to listen for client requests
   using the instance's `bind()` method.
1. Call `start()` on the instance to start the RPC server.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a Node.js client for our `RouteGuide`
service. You can see our complete example client code in
[examples/node/dynamic&#95;codegen/route&#95;guide/route&#95;guide&#95;client.js](https://github.com/grpc/grpc/blob/
{{< param grpc_release_tag >}}/examples/node/dynamic_codegen/route_guide/route_guide_client.js).
#### Creating a stub
To call service methods, we first need to create a *stub*. To do this, we just
need to call the RouteGuide stub constructor, specifying the server address and
port.
```js
new routeguide.RouteGuide('localhost:50051', grpc.credentials.createInsecure());
```
#### Calling service methods
Now let's look at how we call our service methods. Note that all of these
methods are asynchronous: they use either events or callbacks to retrieve
results.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local asynchronous method.
```js
var point = {latitude: 409146138, longitude: -746188906};
stub.getFeature(point, function(err, feature) {
  if (err) {
    // process error
  } else {
    // process feature
  }
});
```
As you can see, we create and populate a request object. Finally, we call the
method on the stub, passing it the request and callback. If there is no error,
then we can read the response information from the server from our response
object.
```js
console.log('Found feature called "' + feature.name + '" at ' +
feature.location.latitude/COORD_FACTOR + ', ' +
feature.location.longitude/COORD_FACTOR);
```
##### Streaming RPCs
Now let's look at our streaming methods. If you've already read [Creating the
server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides. Here's where we call the server-side
streaming method `ListFeatures`, which returns a stream of geographical
`Feature`s:
```js
var call = client.listFeatures(rectangle);
call.on('data', function(feature) {
  console.log('Found feature called "' + feature.name + '" at ' +
      feature.location.latitude/COORD_FACTOR + ', ' +
      feature.location.longitude/COORD_FACTOR);
});
call.on('end', function() {
  // The server has finished sending
});
call.on('error', function(e) {
  // An error has occurred and the stream has been closed.
});
call.on('status', function(status) {
  // process status
});
```
Instead of passing the method a request and callback, we pass it a request and
get a `Readable` stream object back. The client can use the `Readable`'s
`'data'` event to read the server's responses. This event fires with each
`Feature` message object until there are no more messages. Errors in the `'data'`
callback will not cause the stream to be closed. The `'error'` event
indicates that an error has occurred and the stream has been closed. The
`'end'` event indicates that the server has finished sending and no errors
occurred. Only one of `'error'` or `'end'` will be emitted. Finally, the
`'status'` event fires when the server sends the status.
The client-side streaming method `RecordRoute` is similar, except there we pass
the method a callback and get back a `Writable`.
```js
var call = client.recordRoute(function(error, stats) {
  if (error) {
    callback(error);
  }
  console.log('Finished trip with', stats.point_count, 'points');
  console.log('Passed', stats.feature_count, 'features');
  console.log('Travelled', stats.distance, 'meters');
  console.log('It took', stats.elapsed_time, 'seconds');
});
function pointSender(lat, lng) {
  return function(callback) {
    console.log('Visiting point ' + lat/COORD_FACTOR + ', ' +
        lng/COORD_FACTOR);
    call.write({
      latitude: lat,
      longitude: lng
    });
    _.delay(callback, _.random(500, 1500));
  };
}
var point_senders = [];
for (var i = 0; i < num_points; i++) {
  var rand_point = feature_list[_.random(0, feature_list.length - 1)];
  point_senders[i] = pointSender(rand_point.location.latitude,
                                 rand_point.location.longitude);
}
async.series(point_senders, function() {
  call.end();
});
```
Once we've finished writing our client's requests to the stream using `write()`,
we need to call `end()` on the stream to let gRPC know that we've finished
writing. If the status is `OK`, the `stats` object will be populated with the
server's response.
Finally, let's look at our bidirectional streaming RPC `routeChat()`. In this
case, we call the method with no arguments and get back a `Duplex` stream
object, which we can use to both write and read messages.
```js
var call = client.routeChat();
```
The syntax for reading and writing here is exactly the same as for our
client-streaming and server-streaming methods. Although each side will always
get the other's messages in the order they were written, both the client and
server can read and write in any order — the streams operate completely
independently.
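Putting the pieces together, driving the call might look like the following sketch (the `newNote` helper and `runRouteChat` wrapper are our own illustrations, not part of the gRPC API; `client` is the stub constructed earlier):

```js
// Hypothetical helper to build a RouteNote message object.
function newNote(message, latitude, longitude) {
  return {message: message,
          location: {latitude: latitude, longitude: longitude}};
}

// Sketch of a bidirectional exchange: write notes while listening for
// the server's notes on the same call.
function runRouteChat(client) {
  var call = client.routeChat();
  call.on('data', function(note) {
    console.log('Got message "' + note.message + '" at ' +
        note.location.latitude + ', ' + note.location.longitude);
  });
  call.on('end', function() {
    // The server has finished sending
  });
  [newNote('First message', 0, 0),
   newNote('Second message', 0, 1)].forEach(function(note) {
    call.write(note);
  });
  call.end();  // Done writing; the server's responses may still arrive
}
```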
### Try it out!
Build client and server:
```sh
$ npm install
```
Run the server, which will listen on port 50051:
```sh
$ node ./dynamic_codegen/route_guide/route_guide_server.js --db_path=./dynamic_codegen/route_guide/route_guide_db.json
```
Run the client (in a different terminal):
```sh
$ node ./dynamic_codegen/route_guide/route_guide_client.js --db_path=./dynamic_codegen/route_guide/route_guide_db.json
```
---
layout: tutorials
title: gRPC Basics - Objective-C
aliases: [/docs/tutorials/basic/objective-c.html]
---
This tutorial provides a basic Objective-C programmer's
introduction to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate client code using the protocol buffer compiler.
- Use the Objective-C gRPC API to write a simple client for your service.
It assumes a passing familiarity with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the proto3 version of the protocol
buffers language: you can find out more in
the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and the
[Objective-C generated code
guide](https://developers.google.com/protocol-buffers/docs/reference/objective-c-generated).
<div id="toc"></div>
<a name="why-grpc"></a>
### Why use gRPC?
With gRPC you can define your service once in a .proto file and implement
clients and servers in any of gRPC's supported languages, which in turn can be
run in environments ranging from servers inside Google to your own tablet - all
the complexity of communication between different languages and environments is
handled for you by gRPC. You also get all the advantages of working with
protocol buffers, including efficient serialization, a simple IDL, and easy
interface updating.
gRPC and proto3 are specially suited for mobile clients: gRPC is implemented on
top of HTTP/2, which results in network bandwidth savings over using HTTP/1.1.
Serialization and parsing of the proto binary format is more efficient than the
equivalent JSON, resulting in CPU and battery savings. And proto3 uses a runtime
that has been optimized over the years at Google to keep code size to a minimum.
The latter is important in Objective-C, because the ability of the compiler to
strip unused code is limited by the dynamic nature of the language.
<a name="setup"></a>
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/objective-c/route_guide](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/objective-c/route_guide).
To download the example, clone the `grpc` repository by running the following
commands:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
```
Then change your current directory to `examples/objective-c/route_guide`:
```sh
$ cd examples/objective-c/route_guide
```
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
You also should have [CocoaPods](https://cocoapods.org/#install) installed, as
well as the relevant tools to generate the client library code (and a server in
another language, for testing). You can obtain the latter by following [these
setup instructions](https://github.com/grpc/homebrew-grpc).
<a name="try"></a>
### Try it out!
To try the sample app, we need a gRPC server running locally. Let's compile and
run, for example, the C++ server in this repository:
```sh
$ pushd ../../cpp/route_guide
$ make
$ ./route_guide_server &
$ popd
```
Now have CocoaPods generate and install the client library for our .proto files:
```sh
$ pod install
```
(This might have to compile OpenSSL, which takes around 15 minutes if CocoaPods
doesn't already have it cached on your computer.)
Finally, open the Xcode workspace created by CocoaPods, and run the app. You can
check the calling code in `ViewControllers.m` and see the results in Xcode's log
console.
The next sections guide you step-by-step through how this proto service is
defined, how to generate a client library from it, and how to create an app that
uses that library.
<a name="proto"></a>
### Defining the service
First let's look at how the service we're using is defined. A gRPC *service* and
its method *request* and *response* types are defined using [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file for our example in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
  ...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. Protocol buffers let you define four kinds of
service method, all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server and receives a
response later, just like a normal remote procedure call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *response-streaming RPC* where the client sends a request to the server and
gets back a stream of response messages. You specify a response-streaming
method by placing the `stream` keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *request-streaming RPC* where the client sends a sequence of messages to the
server. Once the client has finished writing the messages, it waits for the
server to read them all and return its response. You specify a
request-streaming method by placing the `stream` keyword before the *request*
type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
to the other. The two streams operate independently, so clients and servers
can read and write in whatever order they like: for example, the server could
wait to receive all the client messages before writing its responses, or it
could alternately read a message then write a message, or some other
combination of reads and writes. The order of messages in each stream is
preserved. You specify this type of method by placing the `stream` keyword
before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}
```
You can specify a prefix to be used for your generated classes by adding the
`objc_class_prefix` option at the top of the file. For example:
```protobuf
option objc_class_prefix = "RTG";
```
<a name="protoc"></a>
### Generating client code
Next we need to generate the gRPC client interfaces from our .proto service
definition. We do this using the protocol buffer compiler (`protoc`) with a
special gRPC Objective-C plugin.
For simplicity, we've provided a [Podspec
file](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/objective-c/route_guide/RouteGuide.podspec)
that runs `protoc` for you with the appropriate plugin, input, and output, and
describes how to compile the generated files. You just need to run in this
directory (`examples/objective-c/route_guide`):
```sh
$ pod install
```
which, before installing the generated library in the Xcode project of this sample, runs:
```sh
$ protoc -I ../../protos --objc_out=Pods/RouteGuide --objcgrpc_out=Pods/RouteGuide ../../protos/route_guide.proto
```
Running this command generates the following files under `Pods/RouteGuide/`:
- `RouteGuide.pbobjc.h`, the header which declares your generated message
classes.
- `RouteGuide.pbobjc.m`, which contains the implementation of your message
classes.
- `RouteGuide.pbrpc.h`, the header which declares your generated service
classes.
- `RouteGuide.pbrpc.m`, which contains the implementation of your service
classes.
These contain:
- All the protocol buffer code to populate, serialize, and retrieve our request
and response message types.
- A class called `RTGRouteGuide` that lets clients call the methods defined in
the `RouteGuide` service.
You can also use the provided Podspec file to generate client code from any
other proto service definition; just replace the name (matching the file name),
version, and other metadata.
<a name="client"></a>
### Creating the client application
In this section, we'll look at creating an Objective-C client for our
`RouteGuide` service. You can see our complete example client code in
[examples/objective-c/route_guide/ViewControllers.m](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/objective-c/route_guide/ViewControllers.m).
(Note: In your apps, for maintainability and readability reasons, you shouldn't
put all of your view controllers in a single file; it's done here only to
simplify the learning process).
#### Constructing a service object
To call service methods, we first need to create a service object, an instance
of the generated `RTGRouteGuide` class. The designated initializer of the class
expects a `NSString *` with the server address and port we want to connect to:
```objective-c
#import <GRPCClient/GRPCCall+Tests.h>
#import <RouteGuide/RouteGuide.pbrpc.h>
static NSString * const kHostAddress = @"localhost:50051";
...
[GRPCCall useInsecureConnectionsForHost:kHostAddress];
RTGRouteGuide *service = [[RTGRouteGuide alloc] initWithHost:kHostAddress];
```
Notice that before constructing our service object we've told the gRPC library
to use insecure connections for that host:port pair. This is because the server
we will be using to test our client doesn't use
[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security). This is fine
because it will be running locally on our development machine. The most common
case, though, is connecting with a gRPC server on the internet, running gRPC
over TLS. For that case, the `useInsecureConnectionsForHost:` call isn't needed,
and the port defaults to 443 if absent.
#### Calling service methods
Now let's look at how we call our service methods. As you will see, all these
methods are asynchronous, so you can call them from the main thread of your app
without worrying about freezing your UI or the OS killing your app.
##### Simple RPC
Calling the simple RPC `GetFeature` is as straightforward as calling any other
asynchronous method on Cocoa.
```objective-c
RTGPoint *point = [RTGPoint message];
point.latitude = 40E7;
point.longitude = -74E7;
[service getFeatureWithRequest:point handler:^(RTGFeature *response, NSError *error) {
  if (response) {
    // Successful response received
  } else {
    // RPC error
  }
}];
```
As you can see, we create and populate a request protocol buffer object (in our
case `RTGPoint`). Then, we call the method on the service object, passing it the
request, and a block to handle the response (or any RPC error). If the RPC
finishes successfully, the handler block is called with a `nil` error argument,
and we can read the response information from the server from the response
argument. If, instead, some RPC error happens, the handler block is called with
a `nil` response argument, and we can read the details of the problem from the
error argument.
```objective-c
NSLog(@"Found feature called %@ at %@.", response.name, response.location);
```
##### Streaming RPCs
Now let's look at our streaming methods. Here's where we call the
response-streaming method `ListFeatures`, which results in our client app
receiving a stream of geographical `RTGFeature`s:
```objective-c
[service listFeaturesWithRequest:rectangle
                    eventHandler:^(BOOL done, RTGFeature *response, NSError *error) {
  if (response) {
    NSLog(@"Found feature at %@ called %@.", response.location, response.name);
  } else if (error) {
    NSLog(@"RPC error: %@", error);
  }
}];
```
Notice how the signature of the handler block now includes a `BOOL done`
parameter. The handler block can be called any number of times; only on the last
call is the `done` argument value set to `YES`. If an error occurs, the RPC
finishes and the handler is called with the arguments `(YES, nil, error)`.
The request-streaming method `RecordRoute` expects a stream of `RTGPoint`s from
the client. This stream is passed to the method as an object that conforms to the
`GRXWriter` protocol. The simplest way to create one is to initialize it from an
`NSArray` object:
```objective-c
#import <gRPC/GRXWriter+Immediate.h>
...
RTGPoint *point1 = [RTGPoint message];
point1.latitude = 40E7;
point1.longitude = -74E7;
RTGPoint *point2 = [RTGPoint message];
point2.latitude = 40E7;
point2.longitude = -74E7;
GRXWriter *locationsWriter = [GRXWriter writerWithContainer:@[point1, point2]];
[service recordRouteWithRequestsWriter:locationsWriter handler:^(RTGRouteSummary *response, NSError *error) {
if (response) {
NSLog(@"Finished trip with %i points", response.pointCount);
NSLog(@"Passed %i features", response.featureCount);
NSLog(@"Travelled %i meters", response.distance);
NSLog(@"It took %i seconds", response.elapsedTime);
} else {
NSLog(@"RPC error: %@", error);
}
}];
```
The `GRXWriter` protocol is generic enough to allow for asynchronous streams, streams of future values, or even infinite streams.
Finally, let's look at our bidirectional streaming RPC `RouteChat()`. The way to
call a bidirectional streaming RPC is just a combination of how to call
request-streaming RPCs and response-streaming RPCs.
```objective-c
[service routeChatWithRequestsWriter:notesWriter handler:^(BOOL done, RTGRouteNote *note, NSError *error) {
if (note) {
NSLog(@"Got message %@ at %@", note.message, note.location);
} else if (error) {
NSLog(@"RPC error: %@", error);
}
if (done) {
NSLog(@"Chat ended.");
}
}];
```
The semantics for the handler block and the `GRXWriter` argument here are
exactly the same as for our request-streaming and response-streaming methods.
Although both client and server will always get the other's messages in the
order they were written, the two streams operate completely independently.
---
layout: tutorials
title: gRPC Basics - PHP
aliases: [/docs/tutorials/basic/php.html]
---
This tutorial provides a basic PHP programmer's introduction to
working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate client code using the protocol buffer compiler.
- Use the PHP gRPC API to write a simple client for your service.
It assumes a passing familiarity with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the proto2 version of the protocol
buffers language.
Also note that currently you can only create clients in PHP for gRPC services -
you can find out how to create gRPC servers in our other tutorials, e.g.
[Node.js](/docs/tutorials/basic/node/).
<div id="toc"></div>
<a name="why-grpc"></a>
### Why use gRPC?
With gRPC you can define your service once in a .proto file and implement
clients and servers in any of gRPC's supported languages, which in turn can be
run in environments ranging from servers inside Google to your own tablet - all
the complexity of communication between different languages and environments is
handled for you by gRPC. You also get all the advantages of working with
protocol buffers, including efficient serialization, a simple IDL, and easy
interface updating.
<a name="setup"></a>
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/php/route_guide](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/php/route_guide).
To download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
```
You need the `grpc_php_plugin` to generate PHP client stub classes from the proto files. You can build it from source:
```sh
$ cd grpc && git submodule update --init && make grpc_php_plugin
```
Then change your current directory to `examples/php/route_guide` and generate proto files:
```sh
$ cd examples/php/route_guide
$ ./route_guide_proto_gen.sh
```
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
You also should have the relevant tools installed to generate the client
interface code (and a server in another language, for testing). You can obtain
the latter by following [these setup
instructions](/docs/tutorials/basic/node/).
<a name="try"></a>
### Try it out!
To try the sample app, we need a gRPC server running locally. Let's compile and
run, for example, the Node.js server in this repository:
```sh
$ cd ../../node
$ npm install
$ cd dynamic_codegen/route_guide
$ nodejs ./route_guide_server.js --db_path=route_guide_db.json
```
Run the PHP client (in a different terminal):
```sh
$ ./run_route_guide_client.sh
```
The next sections guide you step-by-step through how this proto service is
defined, how to generate a client library from it, and how to create a client
stub that uses that library.
<a name="proto"></a>
### Defining the service
First let's look at how the service we're using is defined. A gRPC *service* and
its method *request* and *response* types are defined using [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file for our example in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. Protocol buffers let you define four kinds of
service method, all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server and receives a
response later, just like a normal remote procedure call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *response-streaming RPC* where the client sends a request to the server and
gets back a stream of response messages. You specify a response-streaming
method by placing the `stream` keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *request-streaming RPC* where the client sends a sequence of messages to the
server. Once the client has finished writing the messages, it waits for the
server to read them all and return its response. You specify a
request-streaming method by placing the `stream` keyword before the *request*
type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
to the other. The two streams operate independently, so clients and servers
can read and write in whatever order they like: for example, the server could
wait to receive all the client messages before writing its responses, or it
could alternately read a message then write a message, or some other
combination of reads and writes. The order of messages in each stream is
preserved. You specify this type of method by placing the `stream` keyword
before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
<a name="protoc"></a>
### Generating client code
The PHP client stub classes can be generated from the proto files by the gRPC
PHP protoc plugin. To compile the plugin:
```sh
$ make grpc_php_plugin
```
To generate the client stub implementation .php file:
```sh
$ cd grpc
$ protoc --proto_path=examples/protos \
--php_out=examples/php/route_guide \
--grpc_out=examples/php/route_guide \
--plugin=protoc-gen-grpc=bins/opt/grpc_php_plugin \
./examples/protos/route_guide.proto
```
Alternatively, if you built the `grpc_php_plugin` from source, you can run the
helper script under the `grpc/examples/php/route_guide` directory:
```sh
$ ./route_guide_proto_gen.sh
```
A number of files will be generated in the `examples/php/route_guide` directory.
You do not need to modify those files.
To load these generated files, add this section to your `composer.json` file under
the `examples/php` directory:
```json
"autoload": {
"psr-4": {
"": "route_guide/"
}
}
```
These generated files contain:
- All the protocol buffer code to populate, serialize, and retrieve our request
and response message types.
- A class called `Routeguide\RouteGuideClient` that lets clients call the methods
defined in the `RouteGuide` service.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a PHP client for our `RouteGuide`
service. You can see our complete example client code in
[examples/php/route_guide/route_guide_client.php](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/php/route_guide/route_guide_client.php).
#### Constructing a client object
To call service methods, we first need to create a client object, an instance of
the generated `RouteGuideClient` class. The constructor of the class expects the
server address and port we want to connect to:
```php
$client = new Routeguide\RouteGuideClient('localhost:50051', [
'credentials' => Grpc\ChannelCredentials::createInsecure(),
]);
```
#### Calling service methods
Now let's look at how we call our service methods.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local asynchronous method.
```php
$point = new Routeguide\Point();
$point->setLatitude(409146138);
$point->setLongitude(-746188906);
list($feature, $status) = $client->GetFeature($point)->wait();
```
As you can see, we create and populate a request object, in this case a
`Routeguide\Point`. Then we call the method on the stub, passing it the request
object. If there is no error, we can read the response information from the
server from our response object, a `Routeguide\Feature`.
```php
print sprintf("Found %s \n at %f, %f\n", $feature->getName(),
$feature->getLocation()->getLatitude() / COORD_FACTOR,
$feature->getLocation()->getLongitude() / COORD_FACTOR);
```
##### Streaming RPCs
Now let's look at our streaming methods. Here's where we call the server-side
streaming method `ListFeatures`, which returns a stream of geographical
`Feature`s:
```php
$lo_point = new Routeguide\Point();
$hi_point = new Routeguide\Point();
$lo_point->setLatitude(400000000);
$lo_point->setLongitude(-750000000);
$hi_point->setLatitude(420000000);
$hi_point->setLongitude(-730000000);
$rectangle = new Routeguide\Rectangle();
$rectangle->setLo($lo_point);
$rectangle->setHi($hi_point);
$call = $client->ListFeatures($rectangle);
// an iterator over the server streaming responses
$features = $call->responses();
foreach ($features as $feature) {
// process each feature
} // the loop ends when the server indicates there are no more responses to send
```
The `$call->responses()` method returns an iterator. Each time the server sends
a response, a `$feature` object is yielded in the `foreach` loop, until the
server indicates that no more responses will be sent.
The client-side streaming method `RecordRoute` is similar, except that we call
`$call->write($point)` for each point we want to write from the client side and
get back a `Routeguide\RouteSummary`.
```php
$call = $client->RecordRoute();
for ($i = 0; $i < $num_points; $i++) {
$point = new Routeguide\Point();
$point->setLatitude($lat);
$point->setLongitude($long);
$call->write($point);
}
list($route_summary, $status) = $call->wait();
```
Finally, let's look at our bidirectional streaming RPC `RouteChat()`. In this
case, calling the method returns a `BidiStreamingCall` stream object, which we
can use to both write and read messages.
```php
$call = $client->RouteChat();
```
To write messages from the client:
```php
foreach ($notes as $n) {
$route_note = new Routeguide\RouteNote();
$call->write($route_note);
}
$call->writesDone();
```
To read messages from the server:
```php
while ($route_note_reply = $call->read()) {
// process $route_note_reply
}
```
Each side will always get the other's messages in the order they were written.
Both the client and server can read and write in any order: the streams operate
completely independently.
---
layout: tutorials
title: gRPC Basics - Python
aliases: [/docs/tutorials/basic/python.html]
---
This tutorial provides a basic Python programmer's introduction
to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the Python gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/guides/#overview) and are familiar
with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
find out more in the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3) and [Python
generated code
guide](https://developers.google.com/protocol-buffers/docs/reference/python-generated).
<div id="toc"></div>
### Why use gRPC?
This example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC you can define your service once in a .proto file and implement
clients and servers in any of gRPC's supported languages, which in turn can be
run in environments ranging from servers inside Google to your own tablet -
all the complexity of communication between different languages and environments
is handled for you by gRPC. You also get all the advantages of working with
protocol buffers, including efficient serialization, a simple IDL, and easy
interface updating.
### Example code and setup
The example code for this tutorial is in
[grpc/grpc/examples/python/route_guide](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/python/route_guide).
To download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
```
Then change your current directory to `examples/python/route_guide` in the repository:
```sh
$ cd grpc/examples/python/route_guide
```
You also should have the relevant tools installed to generate the server and
client interface code - if you don't already, follow the setup instructions in
[the Python quick start guide](/docs/quickstart/python).
### Defining the service
Your first step (as you'll know from the [Overview](/docs/guides/#overview)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
// (Method definitions not shown)
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *response-streaming RPC* where the client sends a request to the server and
gets a stream to read a sequence of messages back. The client reads from the
returned stream until there are no more messages. As you can see in the
example, you specify a response-streaming method by placing the `stream`
keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *request-streaming RPC* where the client writes a sequence of messages and
sends them to the server, again using a provided stream. Once the client has
finished writing the messages, it waits for the server to read them all and
return its response. You specify a request-streaming method by placing the
`stream` keyword before the *request* type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectionally-streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Your .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
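The E7 encoding described in the comment is easy to work with in any language. As a quick illustration, here is a small Python sketch (the helper names and `COORD_FACTOR` constant are ours, not part of the generated code) that converts between degrees and E7 integers:

```python
COORD_FACTOR = 10 ** 7  # E7: degrees multiplied by 10^7, stored as int32

def to_e7(degrees: float) -> int:
    """Encode a coordinate in degrees as an E7 integer."""
    return round(degrees * COORD_FACTOR)

def from_e7(value: int) -> float:
    """Decode an E7 integer back to degrees."""
    return value / COORD_FACTOR

# The sample point used later in this tutorial, in both representations.
lat = to_e7(40.9146138)   # 409146138
lon = to_e7(-74.6188906)  # -746188906
```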
### Generating client and server code
Next you need to generate the gRPC client and server interfaces from your .proto
service definition.
First, install the grpcio-tools package:
```sh
$ pip install grpcio-tools
```
Use the following command to generate the Python code:
```sh
$ python -m grpc_tools.protoc -I../../protos --python_out=. --grpc_python_out=. ../../protos/route_guide.proto
```
Note that as we've already provided a version of the generated code in the
example directory, running this command regenerates the appropriate files
rather than creating new ones. The generated code files are called
`route_guide_pb2.py` and `route_guide_pb2_grpc.py` and contain:
- classes for the messages defined in route_guide.proto
- classes for the service defined in route_guide.proto
- `RouteGuideStub`, which can be used by clients to invoke RouteGuide RPCs
- `RouteGuideServicer`, which defines the interface for implementations
of the RouteGuide service
- a function for the service defined in route_guide.proto
- `add_RouteGuideServicer_to_server`, which adds a RouteGuideServicer to
a `grpc.Server`
Note: The `2` in pb2 indicates that the generated code is following Protocol Buffers Python API version 2. Version 1 is obsolete. It has no relation to the Protocol Buffers Language version, which is the one indicated by `syntax = "proto3"` or `syntax = "proto2"` in a .proto file.
<a name="server"></a>
### Creating the server
First let's look at how you create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
Creating and running a `RouteGuide` server breaks down into two work items:
- Implementing the servicer interface generated from our service definition with
functions that perform the actual "work" of the service.
- Running a gRPC server to listen for requests from clients and transmit
responses.
You can find the example `RouteGuide` server in
[examples/python/route_guide/route_guide_server.py](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/python/route_guide/route_guide_server.py).
#### Implementing RouteGuide
`route_guide_server.py` has a `RouteGuideServicer` class that subclasses the
generated class `route_guide_pb2_grpc.RouteGuideServicer`:
```python
# RouteGuideServicer provides an implementation of the methods of the RouteGuide service.
class RouteGuideServicer(route_guide_pb2_grpc.RouteGuideServicer):
```
`RouteGuideServicer` implements all the `RouteGuide` service methods.
##### Simple RPC
Let's look at the simplest type first, `GetFeature`, which just gets a `Point`
from the client and returns the corresponding feature information from its
database in a `Feature`.
```python
def GetFeature(self, request, context):
feature = get_feature(self.db, request)
if feature is None:
return route_guide_pb2.Feature(name="", location=request)
else:
return feature
```
The method is passed a `route_guide_pb2.Point` request for the RPC, and a
`grpc.ServicerContext` object that provides RPC-specific information such as
timeout limits. It returns a `route_guide_pb2.Feature` response.
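The implementation above relies on a `get_feature` helper that looks the point up in the server's database. A minimal sketch of that lookup (with hypothetical stand-in types; the real code uses the generated `route_guide_pb2` message classes):

```python
from collections import namedtuple

# Stand-in message types for illustration only.
Point = namedtuple("Point", ["latitude", "longitude"])
Feature = namedtuple("Feature", ["name", "location"])

def get_feature(db, point):
    """Return the feature at the given point, or None if none is known there."""
    for feature in db:
        if feature.location == point:
            return feature
    return None

db = [Feature("Example Feature", Point(409146138, -746188906))]
found = get_feature(db, Point(409146138, -746188906))
missing = get_feature(db, Point(0, 0))
```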
##### Response-streaming RPC
Now let's look at the next method. `ListFeatures` is a response-streaming RPC
that sends multiple `Feature`s to the client.
```python
def ListFeatures(self, request, context):
left = min(request.lo.longitude, request.hi.longitude)
right = max(request.lo.longitude, request.hi.longitude)
top = max(request.lo.latitude, request.hi.latitude)
bottom = min(request.lo.latitude, request.hi.latitude)
for feature in self.db:
if (feature.location.longitude >= left and
feature.location.longitude <= right and
feature.location.latitude >= bottom and
feature.location.latitude <= top):
yield feature
```
Here the request message is a `route_guide_pb2.Rectangle` within which the
client wants to find `Feature`s. Instead of returning a single response the
method yields zero or more responses.
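Because the servicer method is an ordinary Python generator, the bounding-box logic can be exercised directly with plain objects, no gRPC channel required. A sketch with hypothetical stand-in types:

```python
from collections import namedtuple

# Stand-ins for the generated route_guide_pb2 messages, same field names.
Point = namedtuple("Point", ["latitude", "longitude"])
Rectangle = namedtuple("Rectangle", ["lo", "hi"])
Feature = namedtuple("Feature", ["name", "location"])

def list_features(db, request):
    """Yield every feature whose location falls inside the request rectangle."""
    left = min(request.lo.longitude, request.hi.longitude)
    right = max(request.lo.longitude, request.hi.longitude)
    top = max(request.lo.latitude, request.hi.latitude)
    bottom = min(request.lo.latitude, request.hi.latitude)
    for feature in db:
        if (left <= feature.location.longitude <= right and
                bottom <= feature.location.latitude <= top):
            yield feature

db = [Feature("A", Point(5, 5)), Feature("B", Point(50, 50))]
inside = list(list_features(db, Rectangle(Point(0, 0), Point(10, 10))))
```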
##### Request-streaming RPC
The request-streaming method `RecordRoute` uses an
[iterator](https://docs.python.org/2/library/stdtypes.html#iterator-types) of
request values and returns a single response value.
```python
def RecordRoute(self, request_iterator, context):
point_count = 0
feature_count = 0
distance = 0.0
prev_point = None
start_time = time.time()
for point in request_iterator:
point_count += 1
if get_feature(self.db, point):
feature_count += 1
if prev_point:
distance += get_distance(prev_point, point)
prev_point = point
elapsed_time = time.time() - start_time
return route_guide_pb2.RouteSummary(point_count=point_count,
feature_count=feature_count,
distance=int(distance),
elapsed_time=int(elapsed_time))
```
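The `get_distance` helper used by `RecordRoute` isn't shown above. A plausible implementation (a sketch; the example's actual formula may differ) computes the haversine great-circle distance between two E7-encoded points:

```python
import math
from collections import namedtuple

Point = namedtuple("Point", ["latitude", "longitude"])  # E7-encoded stand-in

COORD_FACTOR = 10 ** 7     # E7 encoding: degrees * 10^7
EARTH_RADIUS_M = 6371000   # mean Earth radius in meters

def get_distance(start, end):
    """Approximate great-circle (haversine) distance in meters."""
    lat1 = math.radians(start.latitude / COORD_FACTOR)
    lat2 = math.radians(end.latitude / COORD_FACTOR)
    dlat = lat2 - lat1
    dlon = math.radians((end.longitude - start.longitude) / COORD_FACTOR)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return EARTH_RADIUS_M * 2 * math.asin(math.sqrt(a))
```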
##### Bidirectional streaming RPC
Lastly let's look at the bidirectionally-streaming method `RouteChat`.
```python
def RouteChat(self, request_iterator, context):
prev_notes = []
for new_note in request_iterator:
for prev_note in prev_notes:
if prev_note.location == new_note.location:
yield prev_note
prev_notes.append(new_note)
```
This method's semantics are a combination of those of the request-streaming
method and the response-streaming method. It is passed an iterator of request
values and is itself an iterator of response values.
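Since the method consumes and produces plain iterators, its echo-previous-notes logic can also be exercised without a server. A sketch with a hypothetical stand-in note type:

```python
from collections import namedtuple

Note = namedtuple("Note", ["message", "location"])  # stand-in for RouteNote

def route_chat(request_iterator):
    """Yield each previously seen note that shares a location with the new note."""
    prev_notes = []
    for new_note in request_iterator:
        for prev_note in prev_notes:
            if prev_note.location == new_note.location:
                yield prev_note
        prev_notes.append(new_note)

notes = [Note("first", 1), Note("second", 2), Note("third", 1)]
replies = [n.message for n in route_chat(iter(notes))]
```

Only "first" is replayed, because only the third note revisits location 1.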
#### Starting the server
Once you have implemented all the `RouteGuide` methods, the next step is to
start up a gRPC server so that clients can actually use your service:
```python
def serve():
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
route_guide_pb2_grpc.add_RouteGuideServicer_to_server(
RouteGuideServicer(), server)
server.add_insecure_port('[::]:50051')
server.start()
```
Because `start()` does not block, you may need to sleep-loop if there is nothing
else for your code to do while serving.
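A common pattern for this (a sketch; the helper name is ours, and newer grpcio releases also offer a blocking wait method on the server object) is a sleep loop in the main thread that shuts the server down cleanly on Ctrl-C:

```python
import time

_ONE_DAY_IN_SECONDS = 60 * 60 * 24

def block_until_interrupted(server):
    """Keep the main thread alive; stop the server cleanly on KeyboardInterrupt."""
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)  # stop immediately, with no grace period
```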
<a name="client"></a>
### Creating the client
You can see the complete example client code in
[examples/python/route_guide/route_guide_client.py](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/python/route_guide/route_guide_client.py).
#### Creating a stub
To call service methods, we first need to create a *stub*.
We instantiate the `RouteGuideStub` class of the `route_guide_pb2_grpc`
module, generated from our .proto.
```python
channel = grpc.insecure_channel('localhost:50051')
stub = route_guide_pb2_grpc.RouteGuideStub(channel)
```
#### Calling service methods
For RPC methods that return a single response ("response-unary" methods), gRPC
Python supports both synchronous (blocking) and asynchronous (non-blocking)
control flow semantics. For response-streaming RPC methods, calls immediately
return an iterator of response values. Calls to that iterator's `next()` method
block until the response to be yielded from the iterator becomes available.
##### Simple RPC
A synchronous call to the simple RPC `GetFeature` is nearly as straightforward
as calling a local method. The RPC call waits for the server to respond, and
will either return a response or raise an exception:
```python
feature = stub.GetFeature(point)
```
An asynchronous call to `GetFeature` is similar, but is more like calling a
local method asynchronously in a thread pool:
```python
feature_future = stub.GetFeature.future(point)
feature = feature_future.result()
```
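The future returned by `.future()` behaves much like the standard-library futures you may already know. As a purely local analogy (the lookup function here is a hypothetical stand-in, not the gRPC API), the same submit-then-`result()` pattern looks like this with `concurrent.futures`:

```python
from concurrent import futures

def get_feature_locally(point):
    # Stand-in for the remote lookup; purely illustrative.
    return {"location": point, "name": "example feature"}

with futures.ThreadPoolExecutor(max_workers=1) as pool:
    feature_future = pool.submit(get_feature_locally, (409146138, -746188906))
    feature = feature_future.result()  # blocks until the result is ready
```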
##### Response-streaming RPC
Calling the response-streaming `ListFeatures` is similar to working with
sequence types:
```python
for feature in stub.ListFeatures(rectangle):
```
##### Request-streaming RPC
Calling the request-streaming `RecordRoute` is similar to passing an iterator
to a local method. Like the simple RPC above that also returns a single
response, it can be called synchronously or asynchronously:
```python
route_summary = stub.RecordRoute(point_iterator)
```
```python
route_summary_future = stub.RecordRoute.future(point_iterator)
route_summary = route_summary_future.result()
```
##### Bidirectional streaming RPC
Calling the bidirectionally-streaming `RouteChat` has (as is the case on the
service-side) a combination of the request-streaming and response-streaming
semantics:
```python
for received_route_note in stub.RouteChat(sent_route_note_iterator):
```
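The request iterator passed to `RouteChat` is typically a generator. A sketch of one (a hypothetical helper using plain tuples instead of `RouteNote` messages):

```python
import random

def generate_route_notes(messages, num_locations=10):
    """Yield (message, location) pairs, choosing a random location per note."""
    for message in messages:
        yield (message, random.randrange(num_locations))

sent = list(generate_route_notes(["hi", "there", "bye"], num_locations=3))
```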
### Try it out!
Run the server, which will listen on port 50051:
```sh
$ python route_guide_server.py
```
Run the client (in a different terminal):
```sh
$ python route_guide_client.py
```
---
layout: tutorials
title: gRPC Basics - Ruby
aliases: [/docs/tutorials/basic/ruby.html]
---
This tutorial provides a basic Ruby programmer's introduction to working with gRPC.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate server and client code using the protocol buffer compiler.
- Use the Ruby gRPC API to write a simple client and server for your service.
It assumes that you have read the [Overview](/docs/) and are familiar
with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). Note
that the example in this tutorial uses the proto3 version of the protocol
buffers language: you can find out more in
the [proto3 language
guide](https://developers.google.com/protocol-buffers/docs/proto3).
<div id="toc"></div>
### Why use gRPC?
Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.
With gRPC we can define our service once in a .proto file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.
### Example code and setup
The example code for our tutorial is in
[grpc/grpc/examples/ruby/route_guide](https://github.com/grpc/grpc/tree/{{< param grpc_release_tag >}}/examples/ruby/route_guide).
To download the example, clone the `grpc` repository by running the following
command:
```sh
$ git clone -b {{< param grpc_release_tag >}} https://github.com/grpc/grpc
$ cd grpc
```
Then change your current directory to `examples/ruby/route_guide`:
```sh
$ cd examples/ruby/route_guide
```
You also should have the relevant tools installed to generate the server and
client interface code - if you don't already, follow the setup instructions in
[the Ruby quick start guide](/docs/quickstart/ruby/).
### Defining the service
Our first step (as you'll know from the [Overview](/docs/)) is to
define the gRPC *service* and the method *request* and *response* types using
[protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). You can
see the complete .proto file in
[`examples/protos/route_guide.proto`](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/protos/route_guide.proto).
To define a service, you specify a named `service` in your .proto file:
```protobuf
service RouteGuide {
...
}
```
Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:
- A *simple RPC* where the client sends a request to the server using the stub
and waits for a response to come back, just like a normal function call.
```protobuf
// Obtains the feature at a given position.
rpc GetFeature(Point) returns (Feature) {}
```
- A *server-side streaming RPC* where the client sends a request to the server
and gets a stream to read a sequence of messages back. The client reads from
the returned stream until there are no more messages. As you can see in our
example, you specify a server-side streaming method by placing the `stream`
keyword before the *response* type.
```protobuf
// Obtains the Features available within the given Rectangle. Results are
// streamed rather than returned at once (e.g. in a response message with a
// repeated field), as the rectangle may cover a large area and contain a
// huge number of features.
rpc ListFeatures(Rectangle) returns (stream Feature) {}
```
- A *client-side streaming RPC* where the client writes a sequence of messages
and sends them to the server, again using a provided stream. Once the client
has finished writing the messages, it waits for the server to read them all
and return its response. You specify a client-side streaming method by placing
the `stream` keyword before the *request* type.
```protobuf
// Accepts a stream of Points on a route being traversed, returning a
// RouteSummary when traversal is completed.
rpc RecordRoute(stream Point) returns (RouteSummary) {}
```
- A *bidirectional streaming RPC* where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes. The order of messages in each
stream is preserved. You specify this type of method by placing the `stream`
keyword before both the request and the response.
```protobuf
// Accepts a stream of RouteNotes sent while a route is being traversed,
// while receiving other RouteNotes (e.g. from other users).
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```
Our .proto file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:
```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
int32 latitude = 1;
int32 longitude = 2;
}
```
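As a quick sanity check on the E7 encoding described in the comment above, here's a small plain-Ruby sketch (the helper names `to_e7` and `from_e7` are ours, not part of the example):

```ruby
# E7 representation: degrees multiplied by 10**7 and rounded to the
# nearest integer, as used by the Point message above.
def to_e7(degrees)
  (degrees * 10**7).round
end

def from_e7(e7)
  e7 / 10.0**7
end

to_e7(40.9146138)      # => 409146138
from_e7(-746_188_906)  # ≈ -74.6188906
```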
### Generating client and server code
Next we need to generate the gRPC client and server interfaces from our .proto
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC Ruby plugin.
If you want to run this yourself, make sure you've installed protoc and followed
the gRPC Ruby plugin [installation
instructions](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/src/ruby/README.md) first.
Once that's done, the following command can be used to generate the Ruby code:
```sh
$ grpc_tools_ruby_protoc -I ../../protos --ruby_out=../lib --grpc_out=../lib ../../protos/route_guide.proto
```
Running this command regenerates the following files in the lib directory:
- `lib/route_guide.pb` defines a module `Examples::RouteGuide`
  - This contains all the protocol buffer code to populate, serialize, and
    retrieve our request and response message types
- `lib/route_guide_services.pb` extends `Examples::RouteGuide` with stub and
  service classes
  - a class `Service` for use as a base class when defining RouteGuide service
    implementations
  - a class `Stub` that can be used to access remote RouteGuide instances
<a name="server"></a>
### Creating the server
First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).
There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
service responses.
You can find our example `RouteGuide` server in
[examples/ruby/route_guide/route_guide_server.rb](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/ruby/route_guide/route_guide_server.rb).
Let's take a closer look at how it works.
#### Implementing RouteGuide
As you can see, our server has a `ServerImpl` class that extends the generated
`RouteGuide::Service`:
```ruby
# ServerImpl provides an implementation of the RouteGuide service.
class ServerImpl < RouteGuide::Service
```
`ServerImpl` implements all our service methods. Let's look at the simplest type
first, `GetFeature`, which just gets a `Point` from the client and returns the
corresponding feature information from its database in a `Feature`.
```ruby
def get_feature(point, _call)
name = @feature_db[{
'longitude' => point.longitude,
'latitude' => point.latitude }] || ''
Feature.new(location: point, name: name)
end
```
The method is passed the client's `Point` protocol buffer request and a `_call`
object for the RPC, and returns a `Feature` protocol buffer. In the method we
create the `Feature` with the appropriate information, and then `return` it.
Now let's look at something a bit more complicated - a streaming RPC.
`ListFeatures` is a server-side streaming RPC, so we need to send back multiple
`Feature`s to our client.
```ruby
# in ServerImpl
def list_features(rectangle, _call)
RectangleEnum.new(@feature_db, rectangle).each
end
```
As you can see, here the request object is a `Rectangle` in which our client
wants to find `Feature`s, but instead of returning a simple response we need to
return an [Enumerator](https://ruby-doc.org//core-2.2.0/Enumerator.html) that
yields the responses. In the method, we use a helper class `RectangleEnum` to
act as an Enumerator implementation.
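`RectangleEnum` itself is defined in the example source, but the underlying pattern is plain Ruby: wrap the database scan in an `Enumerator` so that matching features are yielded lazily. A simplified sketch, using hypothetical hash-based features in place of the protobuf types:

```ruby
# A simplified stand-in for RectangleEnum: lazily yields every named
# feature whose location falls inside the given bounding box.
def features_in_rect(feature_db, lo_lat, hi_lat, lo_lng, hi_lng)
  Enumerator.new do |yielder|
    feature_db.each do |location, name|
      next if name.empty?
      next unless location['latitude'].between?(lo_lat, hi_lat)
      next unless location['longitude'].between?(lo_lng, hi_lng)
      yielder << { location: location, name: name }
    end
  end
end

db = {
  { 'latitude' => 409_146_138, 'longitude' => -746_188_906 } => 'Berkshire Valley',
  { 'latitude' => 0, 'longitude' => 0 } => ''
}
features_in_rect(db, 400_000_000, 420_000_000, -750_000_000, -740_000_000)
  .each { |f| p f[:name] }  # prints "Berkshire Valley"
```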
Similarly, the client-side streaming method `record_route` uses an
[Enumerable](https://ruby-doc.org//core-2.2.0/Enumerable.html), but here it's
obtained from the call object, which we've ignored in the earlier examples.
`call.each_remote_read` yields each message sent by the client in turn.
```ruby
call.each_remote_read do |point|
...
end
```
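The server-side shape of a client-streaming handler — consume every message from an enumerable, then return a single response — can be sketched in plain Ruby without the gRPC runtime. The `Point` and `Summary` structs below are stand-ins for the generated message classes:

```ruby
# Stand-ins for the generated protobuf message classes.
Point   = Struct.new(:latitude, :longitude)
Summary = Struct.new(:point_count)

# Mirrors the record_route pattern: read every point from the stream
# (any Enumerable, like call.each_remote_read), then return one summary.
def record_route_sketch(points)
  count = 0
  points.each { |_point| count += 1 }
  Summary.new(count)
end

stream = [Point.new(1, 2), Point.new(3, 4)].each  # an Enumerator
record_route_sketch(stream).point_count  # => 2
```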
Finally, let's look at our bidirectional streaming RPC `route_chat`.
```ruby
def route_chat(notes)
  q = EnumeratorQueue.new(self)
  # Read the incoming notes on a separate thread so that responses can be
  # queued and yielded concurrently.
  t = Thread.new do
    begin
      notes.each do |n|
        ...
      end
    end
  end
  q.each_item
end
```
Here the method receives an
[Enumerable](https://ruby-doc.org//core-2.2.0/Enumerable.html), but also returns
an [Enumerator](https://ruby-doc.org//core-2.2.0/Enumerator.html) that yields the
responses. The implementation demonstrates how to set these up so that the
requests and responses can be handled concurrently. Although each side will
always get the other's messages in the order they were written, both the client
and server can read and write in any order — the streams operate completely
independently.
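The `EnumeratorQueue` helper that makes this work is part of the example source, but the core mechanism is just a thread-safe `Queue` drained by an `Enumerator`. A minimal plain-Ruby sketch of the same idea (class and method names here are ours):

```ruby
# A minimal EnumeratorQueue-style helper: producers push onto a
# thread-safe Queue; each_item exposes it as an Enumerator that
# stops when a sentinel object is seen.
class SimpleEnumQueue
  DONE = Object.new

  def initialize
    @queue = Queue.new
  end

  def push(item)
    @queue.push(item)
  end

  def finish
    @queue.push(DONE)
  end

  def each_item
    Enumerator.new do |yielder|
      loop do
        item = @queue.pop
        break if item.equal?(DONE)
        yielder << item
      end
    end
  end
end

q = SimpleEnumQueue.new
producer = Thread.new do
  3.times { |i| q.push("note #{i}") }  # stands in for reading client notes
  q.finish
end
replies = q.each_item.to_a  # consumes until the sentinel
producer.join
replies  # => ["note 0", "note 1", "note 2"]
```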
#### Starting the server
Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:
```ruby
addr = "0.0.0.0:8080"
s = GRPC::RpcServer.new
s.add_http2_port(addr, :this_port_is_insecure)
logger.info("... running insecurely on #{addr}")
s.handle(ServerImpl.new(feature_db))
s.run_till_terminated
```
As you can see, we build and start our server using a `GRPC::RpcServer`. To do
this, we:
1. Create an instance of our service implementation class `ServerImpl`.
1. Specify the address and port we want to use to listen for client requests
using the builder's `add_http2_port` method.
1. Register our service implementation with the `GRPC::RpcServer`.
1. Call `run_till_terminated` on the `GRPC::RpcServer` to create and start an
   RPC server for our service.
<a name="client"></a>
### Creating the client
In this section, we'll look at creating a Ruby client for our `RouteGuide`
service. You can see our complete example client code in
[examples/ruby/route_guide/route_guide_client.rb](https://github.com/grpc/grpc/blob/{{< param grpc_release_tag >}}/examples/ruby/route_guide/route_guide_client.rb).
#### Creating a stub
To call service methods, we first need to create a *stub*.
We use the `Stub` class of the `RouteGuide` module generated from our .proto.
```ruby
stub = RouteGuide::Stub.new('localhost:50051')
```
#### Calling service methods
Now let's look at how we call our service methods. Note that gRPC Ruby only
provides *blocking/synchronous* versions of each method: this means that the
RPC call waits for the server to respond, and will either return a response or
raise an exception.
##### Simple RPC
Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local method.
```ruby
GET_FEATURE_POINTS = [
Point.new(latitude: 409_146_138, longitude: -746_188_906),
Point.new(latitude: 0, longitude: 0)
]
...
GET_FEATURE_POINTS.each do |pt|
resp = stub.get_feature(pt)
...
p "- found '#{resp.name}' at #{pt.inspect}"
end
```
As you can see, we create and populate a request protocol buffer object (in our
case a `Point`) and call the method on the stub, passing it the request. If the
call succeeds, the stub returns the response protocol buffer object (here a
`Feature`), from which we can read the information sent back by the server.
##### Streaming RPCs
Now let's look at our streaming methods. If you've already read [Creating the
server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides. Here's where we call the server-side
streaming method `list_features`, which returns an `Enumerable` of `Feature`s:
```ruby
resps = stub.list_features(LIST_FEATURES_RECT)
resps.each do |r|
p "- found '#{r.name}' at #{r.location.inspect}"
end
```
The client-side streaming method `record_route` is similar, except there we pass
the server an `Enumerable`.
```ruby
...
reqs = RandomRoute.new(features, points_on_route)
resp = stub.record_route(reqs.each, deadline)
...
```
Finally, let's look at our bidirectional streaming RPC `route_chat`. In this
case, we pass `Enumerable` to the method and get back an `Enumerable`.
```ruby
resps = stub.route_chat(ROUTE_CHAT_NOTES)
resps.each { |r| p "received #{r.inspect}" }
```
Although it's not shown well by this example, each enumerable is independent of
the other - both the client and server can read and write in any order — the
streams operate completely independently.
### Try it out!
Install the example's dependencies:
```sh
$ # from examples/ruby
$ gem install bundler && bundle install
```
Run the server, which will listen on port 50051:
```sh
$ # from examples/ruby
$ bundle exec route_guide/route_guide_server.rb ../python/route_guide/route_guide_db.json
$ # (note that the route_guide_db.json file is actually language-agnostic; it's just
$ # located in the python folder).
```
Run the client (in a different terminal):
```sh
$ # from examples/ruby
$ bundle exec route_guide/route_guide_client.rb ../python/route_guide/route_guide_db.json
```
---
layout: tutorials
title: gRPC Basics - Web
aliases: [/docs/tutorials/basic/web.html]
---
This tutorial provides a basic introduction on how to use
gRPC-Web from browsers.
By walking through this example you'll learn how to:
- Define a service in a .proto file.
- Generate client code using the protocol buffer compiler.
- Use the gRPC-Web API to write a simple client for your service.
It assumes a passing familiarity with [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview).
<div id="toc"></div>
<a name="why-grpc"></a>
### Why use gRPC and gRPC-Web?
With gRPC you can define your service once in a .proto file and implement
clients and servers in any of gRPC's supported languages, which in turn can be
run in environments ranging from servers inside Google to your own tablet - all
the complexity of communication between different languages and environments is
handled for you by gRPC. You also get all the advantages of working with
protocol buffers, including efficient serialization, a simple IDL, and easy
interface updating. gRPC-Web lets you access gRPC services built in this manner
from browsers using an idiomatic API.
<a name="setup"></a>
### Define the Service
The first step when creating a gRPC service is to define the service methods
and their request and response message types using protocol buffers. In this
example, we define our `EchoService` in a file called
[`echo.proto`](https://github.com/grpc/grpc-web/blob/0.4.0/net/grpc/gateway/examples/echo/echo.proto).
For more information about protocol buffers and proto3 syntax, please see the
[protobuf documentation][].
```protobuf
message EchoRequest {
string message = 1;
}
message EchoResponse {
string message = 1;
}
service EchoService {
rpc Echo(EchoRequest) returns (EchoResponse);
}
```
### Implement gRPC Backend Server
Next, we implement our EchoService interface using Node in the backend gRPC
`EchoServer`. This will handle requests from clients. See the file
[`node-server/server.js`](https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/examples/echo/node-server/server.js)
for details.
You can implement the server in any language supported by gRPC. Please see
the [main page][] for more details.
```js
function doEcho(call, callback) {
callback(null, {message: call.request.message});
}
```
### Configure the Envoy Proxy
In this example, we will use the [Envoy](https://www.envoyproxy.io/)
proxy to forward the gRPC browser request to the backend server. You can see
the complete config file in
[envoy.yaml](https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/examples/echo/envoy.yaml)
To forward the gRPC requests to the backend server, we need a block like
this:
```yaml
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 8080 }
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route: { cluster: echo_service }
http_filters:
- name: envoy.grpc_web
- name: envoy.router
clusters:
- name: echo_service
connect_timeout: 0.25s
type: logical_dns
http2_protocol_options: {}
lb_policy: round_robin
hosts: [{ socket_address: { address: node-server, port_value: 9090 }}]
```
You may also need to add some CORS setup to make sure the browser can request
cross-origin content.
In this simple example, the browser makes gRPC requests to port `:8080`. Envoy
forwards the request to the backend gRPC server listening on port `:9090`.
### Generate Protobuf Messages and Service Client Stub
To generate the protobuf message classes from our `echo.proto`, run the
following command:
```sh
$ protoc -I=$DIR echo.proto \
--js_out=import_style=commonjs:$OUT_DIR
```
The `import_style` option passed to the `--js_out` flag makes sure the
generated files will have CommonJS style `require()` statements.
To generate the gRPC-Web service client stub, first you need the gRPC-Web
protoc plugin. To compile the plugin `protoc-gen-grpc-web`, you need to run
this from the repo's root directory:
```sh
$ cd grpc-web
$ sudo make install-plugin
```
To generate the service client stub file, run this command:
```sh
$ protoc -I=$DIR echo.proto \
--grpc-web_out=import_style=commonjs,mode=grpcwebtext:$OUT_DIR
```
In the `--grpc-web_out` param above:
- `mode` can be `grpcwebtext` (default) or `grpcweb`
- `import_style` can be `closure` (default) or `commonjs`
Our command generates the client stub to the file `echo_grpc_web_pb.js` by
default.
### Write JS Client Code
Now you are ready to write some JS client code. Put this in a `client.js` file.
```js
const {EchoRequest, EchoResponse} = require('./echo_pb.js');
const {EchoServiceClient} = require('./echo_grpc_web_pb.js');
var echoService = new EchoServiceClient('http://localhost:8080');
var request = new EchoRequest();
request.setMessage('Hello World!');
echoService.echo(request, {}, function(err, response) {
// ...
});
```
You will also need a `package.json` file:
```json
{
"name": "grpc-web-commonjs-example",
"dependencies": {
"google-protobuf": "^3.6.1",
"grpc-web": "^0.4.0"
},
"devDependencies": {
"browserify": "^16.2.2",
"webpack": "^4.16.5",
"webpack-cli": "^3.1.0"
}
}
```
### Compile the JS Library
Finally, putting all these together, we can compile all the relevant JS files
into one single JS library that can be used in the browser.
```sh
$ npm install
$ npx webpack client.js
```
Now embed `dist/main.js` into your project and see it in action!
[protobuf documentation]:https://developers.google.com/protocol-buffers/
[main page]:/docs/
content/faq/index.html
---
title: FAQ
---
<div class="container" style="padding-right:0px;">
<div class="row">
<h6>Here are some frequently asked questions. Hope you find your answer in here :-)</h6>
<h2>Frequently Asked Questions</h2>
<ul class="list-unstyled">
<li class="submenu">
<h6 class="arrow-r">What is gRPC?</h6>
<div class="SECTION" style="width:100% !important">
<p>gRPC is a modern, open source remote procedure call (RPC) framework that can run anywhere. It enables client and server applications to communicate transparently, and makes it easier to build connected systems.</p>
<p> Read the longer <a href="/blog/principles">Motivation &amp; Design Principles</a> post for background on why we created gRPC.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">What does gRPC stand for?</h6>
<div class="submenu-content">
<p><b>g</b>RPC <b>R</b>emote <b>P</b>rocedure <b>C</b>alls, of course!</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Why would I want to use gRPC?</h6>
<div class="submenu-content">
<p>
The main usage scenarios:
<ul>
<li>Low latency, highly scalable, distributed systems.</li>
<li>Developing mobile clients which are communicating to a cloud server.</li>
<li>Designing a new protocol that needs to be accurate, efficient and language independent.</li>
<li>Layered design to enable extension, e.g. authentication, load balancing, logging, and monitoring.</li>
</ul>
</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Who's using this and why?</h6>
<div class="submenu-content">
<p> gRPC is a CNCF project.</p>
<p>Google has been using a lot of the underlying technologies and concepts in gRPC for a long time. The current implementation is being used in several of Google's cloud products and Google's externally facing APIs. It is also being used by <a href="https://corner.squareup.com/2015/02/grpc.html">Square</a>, <a href="https://github.com/Netflix/ribbon">Netflix</a>, <a href="https://blog.gopheracademy.com/advent-2015/etcd-distributed-key-value-store-with-grpc-http2/">CoreOS</a>, <a href="https://blog.docker.com/2015/12/containerd-daemon-to-control-runc/">Docker</a>, <a href="https://github.com/cockroachdb/cockroach">Cockroachdb</a>, <a href="https://github.com/CiscoDevNet/grpc-getting-started">Cisco</a>, <a href="https://github.com/Juniper/open-nti">Juniper Networks</a> and many other organizations and individuals.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">What programming languages are supported?</h6>
<div class="submenu-content">
<p>C++, Java (incl. support for Android), Objective-C (for iOS), Python, Ruby, Go, C#, Node.js
are in GA and follow semantic versioning.</p>
<p>Dart support is in beta.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">How do I get started using gRPC?</h6>
<div class="submenu-content">
<p>You can start with installation of gRPC by following instructions from <a href="/docs/quickstart">here</a>. Or head over to the <a href="https://github.com/grpc">gRPC GitHub org page</a>, pick the runtime or language you are interested in and follow the README instructions.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">What is the license?</h6>
<div class="submenu-content">
<p>All implementations are licensed under <a href="https://github.com/grpc/grpc/blob/master/LICENSE">Apache 2.0</a>.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">How can I contribute?</h6>
<div class="submenu-content">
<p><a href="/contribute/">Contributors</a> are highly welcome and the repositories are hosted on GitHub. We look forward to community feedback, additions and bugs. Both individual contributors and corporate contributors need to sign our CLA. If you have ideas for a project around gRPC, please read guidelines and submit <a href="https://github.com/grpc/grpc-contrib/blob/master/CONTRIBUTING.md">here</a>. We have a growing list of projects under <a href="https://github.com/grpc-ecosystem">gRPC Ecosystem</a></p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Where is the documentation?</h6>
<div class="submenu-content">
<p>Check out the <a href="/docs/">documentation</a> right here on grpc.io.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">What is the roadmap?</h6>
<div class="submenu-content">
<p>The gRPC project has an RFC process, through which new features are designed and approved for implementation. They are tracked in <a target="_blank" href="https://github.com/grpc/proposal"> this repository</a>. </p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">How long are gRPC releases supported for?</h6>
<div class="submenu-content">
<p>The gRPC project does not do LTS releases. Given the project's rolling release model, we support the current, latest release and the release prior to that. Support here means bug fixes and security fixes.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">What is the latest gRPC Version?</h6>
<div class="submenu-content">
<p>The latest release tag is {{< param grpc_release_tag >}}</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">When do gRPC releases happen?</h6>
<div class="submenu-content">
<p>The gRPC project works in a model where the tip of the master branch is stable at all times. The project (across the various runtimes) targets to ship checkpoint releases every 6 weeks on a best effort basis. See the release schedule <a href="https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md">here</a>.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Can I use it in the browser?</h6>
<div class="submenu-content">
<p>The <a href="https://github.com/grpc/grpc-web" target="_blank">gRPC-Web</a> project is Generally Available.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Can I use gRPC with my favorite data format (JSON, Protobuf, Thrift, XML) ?</h6>
<div class="submenu-content">
<p>Yes. gRPC is designed to be extensible to support multiple content types. The initial release contains support for Protobuf and with external support for other content types such as FlatBuffers and Thrift, at varying levels of maturity.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">How does gRPC help in mobile application development?</h6>
<div class="submenu-content">
<p>gRPC and Protobuf provide an easy way to precisely define a service and auto generate reliable client libraries for iOS, Android and the servers providing the back end. The clients can take advantage of advanced streaming and connection features which help save bandwidth, do more over fewer TCP connections and save CPU usage and battery life.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Why is gRPC better than any binary blob over HTTP/2?</h6>
<div class="submenu-content">
<p>This is largely what gRPC is on the wire. However gRPC is also a set of libraries that will provide higher-level features consistently across platforms that common HTTP libraries typically do not. Examples of such features include:
<ul>
<li>interaction with flow-control at the application layer</li>
<li>cascading call-cancellation</li>
<li>load balancing &amp; failover</li>
</ul>
</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">Why is gRPC better/worse than REST?</h6>
<div class="submenu-content">
<p>gRPC largely follows HTTP semantics over HTTP/2 but we explicitly allow for full-duplex streaming. We diverge from typical REST conventions as we use static paths for performance reasons during call dispatch as parsing call parameters from paths, query parameters and payload body adds latency and complexity. We have also formalized a set of errors that we believe are more directly applicable to API use cases than the HTTP status codes.</p>
</div>
</li>
<br>
<li class="submenu">
<h6 class="arrow-r">How do you pronounce gRPC?</h6>
<div class="submenu-content">
<p>Jee-Arr-Pee-See.</p>
</div>
</li>
</ul>
</div>
</div>

data/redirects.yaml
- go
- java
- python
- dotnet
- swift
- proto
- web
- dart
- community

deploy.sh
#!/bin/bash
echo -e "\033[0;32mDeploying updates to GitHub...\033[0m"
# Build the project.
hugo # if using a theme, replace with `hugo -t <YOURTHEME>`
# Go To Public folder
cd public
# Add changes to git.
git add .
# Commit changes.
msg="rebuilding site $(date)"
if [ $# -eq 1 ]
then msg="$1"
fi
git commit -m "$msg"
# Push source and build repos.
git push origin master
# Come Back up to the Project Root
cd ..
{{ $dropdown := site.Menus.dropdown }}
{{ $currentUrl := .RelPermalink }}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<link href="https://fonts.googleapis.com/css?family=Open+Sans:300,400" rel="stylesheet">
<link rel="stylesheet" type="text/css" href="/css/style.css">
<title>{{ block "title" . }}{{ .Site.Title }}{{ end }}</title>
<!-- Favicons -->
<link rel="apple-touch-icon" href="/favicons/apple-touch-icon.png" sizes="180x180">
<link rel="icon" type="image/png" href="/favicons/android-chrome-192x192.png" sizes="192x192" >
<link rel="icon" type="image/png" href="/favicons/favicon-32x32.png" sizes="32x32">
<link rel="icon" type="image/png" href="/favicons/favicon-16x16.png" sizes="16x16">
<link rel="manifest" href="/favicons/manifest.json">
<link rel="mask-icon" href="/favicons/safari-pinned-tab.svg" color="#2DA6B0">
<meta name="msapplication-TileColor" content="#ffffff">
<meta name="msapplication-TileImage" content="/favicons/mstile-150x150.png">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-60127042-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-60127042-1');
</script>
</head>
<body>
<div id="landing-content">
<div class="row">
<div class="topbanner{{ if not .IsHome }}sub{{ end }}">
<nav class="navbar navbar-expand-md navbar-dark topnav">
<a class="navbar-brand" href="{{ .Site.BaseURL }}">
<img src="{{ .Site.BaseURL }}img/grpc-logo.png" width="114" height="50" alt="gRPC logo">
</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="topnav collapse navbar-collapse" id="navbarSupportedContent" style="float:right !important">
<ul class="navbar-nav ml-auto">
<li class="nav-item {{ if in .RelPermalink "about" }}active{{ end }}">
<a class="nav-link" href="{{ .Site.BaseURL }}about/">About</a>
</li>
<li class="nav-item dropdown {{ if in .RelPermalink "docs" }}active{{ end }}">
<a class="nav-link dropdown-toggle" href="{{ .Site.BaseURL }}docs/" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Docs
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
{{ range $dropdown }}
{{ $isCurrentPage := eq $currentUrl .URL }}
<a class="dropdown-item" href="{{ .URL }}">
{{ .Name }}
</a>
{{ end }}
</div>
</li>
<li class="nav-item {{ if in .RelPermalink "blog" }}active{{ end }}">
<a class="nav-link" href="/blog">
Blog
</a>
</li>
<li class="nav-item {{ if in .RelPermalink "community" }}active{{ end }}">
<a class="nav-link" href="/community">Community</a>
</li>
<li class="nav-item">
<a class="nav-link" href="https://packages.grpc.io/">
Packages
</a>
</li>
<li class="nav-item {{ if in .RelPermalink "faq" }}active{{ end }}">
<a class="nav-link" href="{{ .Site.BaseURL }}faq/">FAQ</a>
</li>
</ul>
</div>
</nav>
{{ block "main" . }}
{{ end }}
{{ block "footer" . }}
{{ end }}
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js" integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy" crossorigin="anonymous"></script>
</body>
</html>
{{ define "main" }}
<main>
<article>
<header>
<h1>{{.Title}}</h1>
</header>
<!-- "{{.Content}}" pulls from the markdown content of the corresponding _index.md -->
{{.Content}}
</article>
<ul>
<!-- Ranges through content/post/*.md -->
{{ range .Pages }}
{{ $date := dateFormat "2006-01-02" .Date }}
<li>
<a href="{{ .Permalink }}">
{{ $date }} | {{ .Title }}
</a>
</li>
{{ end }}
</ul>
</main>
{{ end }}
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">{{ .Title }}</div>
</div>
</div>
</div>
<div class="section2" style="text-align:left;margin-bottom:5%">
{{ .Content }}
</div>
{{ end }}
layouts/blog/list.html
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">Blog</div>
</div>
</div>
</div>
<div class="blogcols">
<div class="blogcol1" style="margin-top:4%">
{{ range .Paginator.Pages }}
<h3 style="margin-top:0px;"><a href="{{.Permalink}}">{{ .Title }}</a></h3>
<h5>{{ .Date.Format "January 02, 2006" }}</h5>
<p>
{{ .Summary }}
</p>
<a href="{{.Permalink}}">Read More</a>
<br><br><br>
{{ end }}
{{ template "_internal/pagination.html" . }}
</div>
<div class="blogcol2">
<h5 style="font-size:12pt;margin-bottom:20px">All blog posts</h5>
{{ range .Pages }}
<a href="{{.Permalink}}">{{.Title}}</a><br><br>
{{ end }}
</div>
</div>
{{ end }}
layouts/blog/single.html
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">Blog</div>
</div>
</div>
</div>
<div class="singleblog">
<h1>{{ .Title }}</h1>
<h5>Posted on {{ .Date.Format "Monday, January 02, 2006" }}
{{ cond ( isset $.Params "author" ) "by" "" }}
{{ if isset $.Params "author-link" }}
<a href="{{ $.Param "author-link" }}">{{ $.Param "author" }}</a>
{{ else }}
{{ $.Param "author" }}
{{ end }}
</h5>
<p>
{{ .Content }}
</p>
</div>
{{ end }}
layouts/docs/guides.html
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
{{ $guides := site.Menus.guides }}
{{ $currentUrl := .RelPermalink }}
<div class="headertext">Documentation</div>
</div>
</div>
</div>
{{ partial "docs-topnav.html" . }}
<div class="quickstartcols">
<div class="quickstartcol1">
<h8>Guides</h8>
{{ range $guides }}
{{ $isCurrentPage := eq $currentUrl .URL }}
<a href="{{ .URL }}"{{ if $isCurrentPage }} class="active"{{ end }}>
{{ .Name }}
</a>
{{ end }}
<h8 style="margin-top:25%">Related Guides</h8>
<a href="https://developers.google.com/protocol-buffers/docs/overview">Protocol Buffers</a>
</div>
<div class="quickstartcol2" style="margin-top:4%">
<h3 style="margin-top:0px;">
{{ .Title }}
</h3>
{{ .Content }}
</div>
</div>
</div>
{{ end }}
layouts/docs/list.html
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">Documentation</div>
</div>
</div>
</div>
{{ partial "docs-topnav.html" . }}
<div class="refsection">
{{ .Content }}
</div>
{{ end }}
@@ -0,0 +1,40 @@
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">Documentation</div>
</div>
</div>
</div>
{{ partial "docs-topnav.html" . }}
<div class="quickstartcols">
<div class="quickstartcol1">
<h8>Quick Start</h8>
<a href="{{ .Site.BaseURL }}docs/quickstart/cpp/" {{ if in .RelPermalink "cpp" }} class="active"{{ end }}>C++</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/csharp/" {{ if in .RelPermalink "csharp" }} class="active"{{ end }}>C#</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/dart/" {{ if in .RelPermalink "dart" }} class="active"{{ end }}>Dart</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/go/" {{ if in .RelPermalink "go" }} class="active"{{ end }}>Go</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/java/" {{ if in .RelPermalink "java" }} class="active"{{ end }}>Java</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/android/" {{ if in .RelPermalink "android" }} class="active"{{ end }}>Android Java</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/node/" {{ if in .RelPermalink "node" }} class="active"{{ end }}>Node.js</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/objective-c/" {{ if in .RelPermalink "objective-c" }} class="active"{{ end }}>Objective-C</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/php/" {{ if in .RelPermalink "php" }} class="active"{{ end }}>PHP</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/python/" {{ if in .RelPermalink "python" }} class="active"{{ end }}>Python</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/ruby/" {{ if in .RelPermalink "ruby" }} class="active"{{ end }}>Ruby</a>
<a href="{{ .Site.BaseURL }}docs/quickstart/web/" {{ if in .RelPermalink "web" }} class="active"{{ end }}>Web</a>
</div>
<div class="quickstartcol2" style="margin-top:4%">
<h3 style="margin-top:0px;">{{ .Title }}</h3>
{{ .Content }}
</div>
</div>
</div>
{{ end }}
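The twelve hand-maintained quick-start links above all follow one pattern, so they could instead be generated from a single slice. A sketch of that alternative (equivalent in intent but untested against this theme; the list is truncated to three languages for brevity):

```go-html-template
{{ $langs := slice
  (dict "slug" "cpp" "name" "C++")
  (dict "slug" "go" "name" "Go")
  (dict "slug" "python" "name" "Python")
}}
{{ range $langs }}
<a href="{{ $.Site.BaseURL }}docs/quickstart/{{ .slug }}/"{{ if in $.RelPermalink .slug }} class="active"{{ end }}>{{ .name }}</a>
{{ end }}
```

One caveat either way: `in .RelPermalink "<slug>"` is a substring test, so any pair of slugs where one contains the other (say, a hypothetical `java`/`javascript` pair) would mark both links active; an exact-path comparison would be the safer check in that case.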
16
layouts/docs/single.html Normal file
@@ -0,0 +1,16 @@
{{ define "title" }}
{{ .Title }} &ndash; {{ .Site.Title }}
{{ end }}
{{ define "main" }}
<div class="headertext">Documentation</div>
</div>
</div>
</div>
{{ partial "docs-topnav.html" . }}
<div class="refsection">
{{ .Content }}
</div>
{{ end }}
Some files were not shown because too many files have changed in this diff.