Add vendor

Justin Santa Barbara 2017-04-08 13:22:52 -04:00
parent c8b18be9dd
commit 82c443189b
55 changed files with 8581 additions and 0 deletions

30
vendor/github.com/weaveworks/mesh/.gitignore generated vendored Normal file

@@ -0,0 +1,30 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
# Bad smells
Makefile
Dockerfile
examples/increment-only-counter/increment-only-counter

201
vendor/github.com/weaveworks/mesh/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

79
vendor/github.com/weaveworks/mesh/README.md generated vendored Normal file

@@ -0,0 +1,79 @@
# mesh [![GoDoc](https://godoc.org/github.com/weaveworks/mesh?status.svg)](https://godoc.org/github.com/weaveworks/mesh) [![Circle CI](https://circleci.com/gh/weaveworks/mesh.svg?style=svg)](https://circleci.com/gh/weaveworks/mesh)
Mesh is a tool for building distributed applications.
Mesh implements a [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol)
that provides membership, unicast, and broadcast functionality
with [eventually-consistent semantics](https://en.wikipedia.org/wiki/Eventual_consistency).
In CAP terms, it is AP: highly-available and partition-tolerant.
Mesh works in a wide variety of network setups, including through NAT and firewalls, and across clouds and datacenters.
It works in situations where there is only partial connectivity,
i.e. data is transparently routed across multiple hops when there is no direct connection between peers.
It copes with partitions and partial network failure.
It can be easily bootstrapped, typically only requiring knowledge of a single existing peer in the mesh to join.
It has built-in shared-secret authentication and encryption.
It scales to on the order of 100 peers, and has no dependencies.
## Using
Mesh is currently distributed as a Go package.
See [the API documentation](https://godoc.org/github.com/weaveworks/mesh).
We plan to offer Mesh as a standalone service + an easy-to-use API.
We will support multiple deployment scenarios, including
as a standalone binary,
as a container,
as an ambassador or [sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) component to an existing container,
and as an infrastructure service in popular platforms.
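For now, here is a minimal sketch of bootstrapping a peer with the Go package, mirroring the metcdsrv example vendored later in this commit (the peer name, nickname, and addresses are placeholders):
```go
package main

import (
	"log"
	"net"
	"os"

	"github.com/weaveworks/mesh"
)

func main() {
	logger := log.New(os.Stderr, "mesh> ", log.LstdFlags)
	name, err := mesh.PeerNameFromString("01:00:00:00:00:01") // MAC-style peer name
	if err != nil {
		logger.Fatal(err)
	}
	router := mesh.NewRouter(mesh.Config{
		Host:               "0.0.0.0",
		Port:               mesh.Port,
		ProtocolMinVersion: mesh.ProtocolMinVersion,
		Password:           nil, // optional shared secret
		ConnLimit:          64,
		PeerDiscovery:      true,
		TrustedSubnets:     []*net.IPNet{},
	}, name, "nickname", mesh.NullOverlay{}, logger)

	router.Start()
	defer router.Stop()

	// Join the mesh via any single known peer; the rest are discovered.
	router.ConnectionMaker.InitiateConnections([]string{"10.0.0.1:6783"}, true)

	select {} // block forever; a real program would wait on a signal
}
```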
## Developing
Mesh builds with the standard Go tooling. You will need to put the
repository in Go's expected directory structure; i.e.,
`$GOPATH/src/github.com/weaveworks/mesh`.
### Building
If necessary, you may fetch the latest version of all of the dependencies into your GOPATH via
`go get -d -u -t ./...`
Build the code with the usual
`go install ./...`
### Testing
Assuming you've fetched dependencies as above,
`go test ./...`
### Dependencies
Mesh is a library, designed to be imported into a binary package.
Vendoring is currently the best way for binary package authors to ensure reliable, reproducible builds.
Therefore, we strongly recommend our users use vendoring for all of their dependencies, including Mesh.
To avoid compatibility and availability issues, Mesh doesn't vendor its own dependencies, and doesn't recommend use of third-party import proxies.
There are several tools to make vendoring easier, including
[gb](https://getgb.io),
[gvt](https://github.com/filosottile/gvt),
[glide](https://github.com/Masterminds/glide), and
[govendor](https://github.com/kardianos/govendor).
### Workflow
Mesh follows a typical PR workflow.
All contributions should be made as pull requests that satisfy the guidelines, below.
### Guidelines
- All code must abide by [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Names should abide by [What's in a name](https://talks.golang.org/2014/names.slide#1)
- Code must build on both Linux and Darwin, via plain `go build`
- Code should have appropriate test coverage, invoked via plain `go test`
In addition, several mechanical checks are enforced.
See [the lint script](/lint) for details.

31
vendor/github.com/weaveworks/mesh/_metcd/README.md generated vendored Normal file

@@ -0,0 +1,31 @@
# metcd
metcd implements the [etcd](https://github.com/coreos/etcd)
[V3 API](https://github.com/coreos/etcd/blob/master/Documentation/rfc/v3api.md)
on top of Weave Mesh.
**Note** that this package no longer compiles due to changes in etcd upstream.
The code remains for historical purposes.
## Caveats
- We only partially implement the etcd V3 API. See [etcd_store.go](https://github.com/weaveworks/mesh/blob/master/metcd/etcd_store.go) for details.
- Snapshotting and compaction are not yet implemented.
## Usage
```go
ln, err := net.Listen("tcp", ":8080")
if err != nil {
panic(err)
}
minPeerCount := 3
logger := log.New(os.Stderr, "", log.LstdFlags)
server := metcd.NewDefaultServer(minPeerCount, logger)
server.Serve(ln)
```
To have finer-grained control over the mesh, use [metcd.NewServer](http://godoc.org/github.com/weaveworks/mesh/metcd#NewServer).
See [metcdsrv](https://github.com/weaveworks/mesh/tree/master/metcd/metcdsrv/main.go) for a complete example.
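Since the server implements selected etcd V3 gRPC methods, the stock etcd v3 Go client should, in principle, work against it — the sanity-check script later in this commit exercises it the same way with `ETCDCTL_API=3 etcdctl`. A client-side sketch, assuming the server above is listening on :8080 (endpoint and keys are placeholders; per the caveat above, treat this as illustrative only):
```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:8080"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if _, err := cli.Put(ctx, "foo", "bar"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "foo")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		log.Printf("%s = %s", kv.Key, kv.Value)
	}
}
```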

327
vendor/github.com/weaveworks/mesh/_metcd/ctrl.go generated vendored Normal file

@@ -0,0 +1,327 @@
package metcd
import (
"errors"
"fmt"
"io/ioutil"
"log"
"net"
"time"
"github.com/coreos/etcd/raft"
"github.com/coreos/etcd/raft/raftpb"
"golang.org/x/net/context"
"github.com/weaveworks/mesh"
"github.com/weaveworks/mesh/meshconn"
)
// +-------------+ +-----------------+ +-------------------------+ +-------+
// | mesh.Router | | packetTransport | | ctrl | | state |
// | | | | | +-------------------+ | | |
// | | | +----------+ | | | raft.Node | | | |
// | | | | meshconn | | | | | | | |
// | |======| ReadFrom|-----incomingc------->|Step Propose|<-----| API|<---
// | | | | WriteTo|<--------outgoingc----| | | | |
// | | | +----------+ | | | | | | |
// | | +-----------------+ | | | | | |
// | | | | | | +-------+
// | | +------------+ +--------------+ | | | | ^ ^
// | |===| membership |->| configurator |---->|ProposeConfChange | | | |
// +-------------+ +------------+ +--------------+ | | | | | |
// ^ | +-------------------+ | | |
// | | | | | | |
// | +-------|---------|-------+ | |
// H E R E | entryc snapshotc | |
// B E | | | | |
// D R A G O N S | | '-------------' |
// | v |
// | ConfChange +---------+ Normal |
// '-------------| demuxer |----------------------'
// +---------+
type ctrl struct {
self raft.Peer
minPeerCount int
incomingc <-chan raftpb.Message // from the transport
outgoingc chan<- raftpb.Message // to the transport
unreachablec <-chan uint64 // from the transport
confchangec <-chan raftpb.ConfChange // from the mesh
snapshotc chan<- raftpb.Snapshot // to the state machine
entryc chan<- raftpb.Entry // to the demuxer
proposalc <-chan []byte // from the state machine
stopc chan struct{} // from stop()
removedc chan<- struct{} // to calling context
terminatedc chan struct{}
storage *raft.MemoryStorage
node raft.Node
logger mesh.Logger
}
func newCtrl(
self net.Addr,
others []net.Addr, // to join existing cluster, pass nil or empty others
minPeerCount int,
incomingc <-chan raftpb.Message,
outgoingc chan<- raftpb.Message,
unreachablec <-chan uint64,
confchangec <-chan raftpb.ConfChange,
snapshotc chan<- raftpb.Snapshot,
entryc chan<- raftpb.Entry,
proposalc <-chan []byte,
removedc chan<- struct{},
logger mesh.Logger,
) *ctrl {
storage := raft.NewMemoryStorage()
raftLogger := &raft.DefaultLogger{Logger: log.New(ioutil.Discard, "", 0)}
raftLogger.EnableDebug()
nodeConfig := &raft.Config{
ID: makeRaftPeer(self).ID,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
Applied: 0, // starting fresh
MaxSizePerMsg: 4096, // TODO(pb): looks like bytes; confirm that
MaxInflightMsgs: 256, // TODO(pb): copied from docs; confirm that
CheckQuorum: true, // leader steps down if quorum is not active for an electionTimeout
Logger: raftLogger,
}
startPeers := makeRaftPeers(others)
if len(startPeers) == 0 {
startPeers = nil // special case: join existing
}
node := raft.StartNode(nodeConfig, startPeers)
c := &ctrl{
self: makeRaftPeer(self),
minPeerCount: minPeerCount,
incomingc: incomingc,
outgoingc: outgoingc,
unreachablec: unreachablec,
confchangec: confchangec,
snapshotc: snapshotc,
entryc: entryc,
proposalc: proposalc,
stopc: make(chan struct{}),
removedc: removedc,
terminatedc: make(chan struct{}),
storage: storage,
node: node,
logger: logger,
}
go c.driveRaft() // analogous to raftexample serveChannels
return c
}
// It is a programmer error to call stop more than once.
func (c *ctrl) stop() {
close(c.stopc)
<-c.terminatedc
}
func (c *ctrl) driveRaft() {
defer c.logger.Printf("ctrl: driveRaft loop exit")
defer close(c.terminatedc)
defer c.node.Stop()
// We own driveProposals. We may terminate when the user invokes stop, or when
// the Raft Node shuts down, which is generally when it receives a ConfChange
// that removes it from the cluster. In either case, we kill driveProposals,
// and wait for it to exit before returning.
cancel := make(chan struct{})
done := make(chan struct{})
go func() {
c.driveProposals(cancel)
close(done)
}()
defer func() { <-done }() // order is important here
defer close(cancel) //
// Now that we are holding a raft.Node we have a few responsibilities.
// https://godoc.org/github.com/coreos/etcd/raft
ticker := time.NewTicker(100 * time.Millisecond) // TODO(pb): taken from raftexample; need to validate
defer ticker.Stop()
for {
select {
case <-ticker.C:
c.node.Tick()
case r := <-c.node.Ready():
if err := c.handleReady(r); err != nil {
c.logger.Printf("ctrl: handle ready: %v (aborting)", err)
close(c.removedc)
return
}
case msg := <-c.incomingc:
c.node.Step(context.TODO(), msg)
case id := <-c.unreachablec:
c.node.ReportUnreachable(id)
case <-c.stopc:
c.logger.Printf("ctrl: got stop signal")
return
}
}
}
func (c *ctrl) driveProposals(cancel <-chan struct{}) {
defer c.logger.Printf("ctrl: driveProposals loop exit")
// driveProposals is a separate goroutine from driveRaft, to mirror
// contrib/raftexample. To be honest, it's not clear to me why that should be
// required; it seems like we should be able to drive these channels in the
// same for/select loop as the others. But we have strange errors (likely
// deadlocks) if we structure it that way.
for c.proposalc != nil && c.confchangec != nil {
select {
case data, ok := <-c.proposalc:
if !ok {
c.logger.Printf("ctrl: got nil proposal; shutting down proposals")
c.proposalc = nil
continue
}
c.node.Propose(context.TODO(), data)
case cc, ok := <-c.confchangec:
if !ok {
c.logger.Printf("ctrl: got nil conf change; shutting down conf changes")
c.confchangec = nil
continue
}
c.logger.Printf("ctrl: ProposeConfChange %s %x", cc.Type, cc.NodeID)
c.node.ProposeConfChange(context.TODO(), cc)
case <-cancel:
return
}
}
}
func (c *ctrl) handleReady(r raft.Ready) error {
// These steps may be performed in parallel, except as noted in step 2.
//
// 1. Write HardState, Entries, and Snapshot to persistent storage if they are
// not empty. Note that when writing an Entry with Index i, any
// previously-persisted entries with Index >= i must be discarded.
if err := c.readySave(r.Snapshot, r.HardState, r.Entries); err != nil {
return fmt.Errorf("save: %v", err)
}
// 2. Send all Messages to the nodes named in the To field. It is important
// that no messages be sent until after the latest HardState has been persisted
// to disk, and all Entries written by any previous Ready batch (Messages may
// be sent while entries from the same batch are being persisted). If any
// Message has type MsgSnap, call Node.ReportSnapshot() after it has been sent
// (these messages may be large). Note: Marshalling messages is not
// thread-safe; it is important that you make sure that no new entries are
// persisted while marshalling. The easiest way to achieve this is to serialise
// the messages directly inside your main raft loop.
c.readySend(r.Messages)
// 3. Apply Snapshot (if any) and CommittedEntries to the state machine. If any
// committed Entry has Type EntryConfChange, call Node.ApplyConfChange() to
// apply it to the node. The configuration change may be cancelled at this
// point by setting the NodeID field to zero before calling ApplyConfChange
// (but ApplyConfChange must be called one way or the other, and the decision
// to cancel must be based solely on the state machine and not external
// information such as the observed health of the node).
if err := c.readyApply(r.Snapshot, r.CommittedEntries); err != nil {
return fmt.Errorf("apply: %v", err)
}
// 4. Call Node.Advance() to signal readiness for the next batch of updates.
// This may be done at any time after step 1, although all updates must be
// processed in the order they were returned by Ready.
c.readyAdvance()
return nil
}
func (c *ctrl) readySave(snapshot raftpb.Snapshot, hardState raftpb.HardState, entries []raftpb.Entry) error {
// For the moment, none of these steps persist to disk. That violates some Raft
// invariants. But we are ephemeral, and will always boot empty, willingly
// paying the snapshot cost. I trust that the etcd Raft implementation
// permits this.
if !raft.IsEmptySnap(snapshot) {
if err := c.storage.ApplySnapshot(snapshot); err != nil {
return fmt.Errorf("apply snapshot: %v", err)
}
}
if !raft.IsEmptyHardState(hardState) {
if err := c.storage.SetHardState(hardState); err != nil {
return fmt.Errorf("set hard state: %v", err)
}
}
if err := c.storage.Append(entries); err != nil {
return fmt.Errorf("append: %v", err)
}
return nil
}
func (c *ctrl) readySend(msgs []raftpb.Message) {
for _, msg := range msgs {
// If this fails, the transport will tell us asynchronously via unreachablec.
c.outgoingc <- msg
if msg.Type == raftpb.MsgSnap {
// Assume snapshot sends always succeed.
// TODO(pb): do we need error reporting?
c.node.ReportSnapshot(msg.To, raft.SnapshotFinish)
}
}
}
func (c *ctrl) readyApply(snapshot raftpb.Snapshot, committedEntries []raftpb.Entry) error {
c.snapshotc <- snapshot
for _, committedEntry := range committedEntries {
c.entryc <- committedEntry
if committedEntry.Type == raftpb.EntryConfChange {
// See raftexample raftNode.publishEntries
var cc raftpb.ConfChange
if err := cc.Unmarshal(committedEntry.Data); err != nil {
return fmt.Errorf("unmarshal ConfChange: %v", err)
}
c.node.ApplyConfChange(cc)
if cc.Type == raftpb.ConfChangeRemoveNode && cc.NodeID == c.self.ID {
return errors.New("got ConfChange that removed me from the cluster; terminating")
}
}
}
return nil
}
func (c *ctrl) readyAdvance() {
c.node.Advance()
}
// makeRaftPeer converts a net.Addr into a raft.Peer.
// All peers must perform the Addr-to-Peer mapping in the same way.
//
// The etcd Raft implementation tracks the committed entry for each node ID,
// and panics if it discovers a node has lost previously committed entries.
// In effect, it assumes commitment implies durability. But our storage is
// explicitly non-durable. So, whenever a node restarts, we need to give it
// a brand new ID. That is the peer UID.
func makeRaftPeer(addr net.Addr) raft.Peer {
return raft.Peer{
ID: uint64(addr.(meshconn.MeshAddr).PeerUID),
Context: nil, // TODO(pb): ??
}
}
func makeRaftPeers(addrs []net.Addr) []raft.Peer {
peers := make([]raft.Peer, len(addrs))
for i, addr := range addrs {
peers[i] = makeRaftPeer(addr)
}
return peers
}

57
vendor/github.com/weaveworks/mesh/_metcd/ctrl_test.go generated vendored Normal file

@@ -0,0 +1,57 @@
package metcd
import (
"log"
"net"
"os"
"testing"
"time"
"github.com/coreos/etcd/raft/raftpb"
"github.com/weaveworks/mesh"
"github.com/weaveworks/mesh/meshconn"
)
func TestCtrlTerminates(t *testing.T) {
var (
peerName, _ = mesh.PeerNameFromString("01:23:45:67:89:01")
self = meshconn.MeshAddr{PeerName: peerName, PeerUID: 123}
others = []net.Addr{}
minPeerCount = 5
incomingc = make(chan raftpb.Message)
outgoingc = make(chan raftpb.Message, 10000)
unreachablec = make(chan uint64)
confchangec = make(chan raftpb.ConfChange)
snapshotc = make(chan raftpb.Snapshot, 10000)
entryc = make(chan raftpb.Entry)
proposalc = make(chan []byte)
removedc = make(chan struct{})
logger = log.New(os.Stderr, "", log.LstdFlags)
)
c := newCtrl(
self,
others,
minPeerCount,
incomingc,
outgoingc,
unreachablec,
confchangec,
snapshotc,
entryc,
proposalc,
removedc,
logger,
)
stopped := make(chan struct{})
go func() {
c.stop()
close(stopped)
}()
select {
case <-stopped:
t.Log("ctrl terminated")
case <-time.After(5 * time.Second):
t.Fatal("ctrl didn't terminate")
}
}

813
vendor/github.com/weaveworks/mesh/_metcd/etcd_store.go generated vendored Normal file

@@ -0,0 +1,813 @@
package metcd
import (
"bytes"
"errors"
"fmt"
"io/ioutil"
"os"
"sort"
"github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/lease"
"github.com/coreos/etcd/mvcc"
"github.com/coreos/etcd/mvcc/backend"
"github.com/coreos/etcd/mvcc/mvccpb"
"github.com/coreos/etcd/raft/raftpb"
"github.com/gogo/protobuf/proto"
"github.com/weaveworks/mesh"
"golang.org/x/net/context"
)
// Transport-agnostic reimplementation of coreos/etcd/etcdserver. The original
// is unsuitable because it is tightly coupled to persistent storage, an HTTP
// transport, etc. Implements selected etcd V3 API (gRPC) methods.
type etcdStore struct {
proposalc chan<- []byte
snapshotc <-chan raftpb.Snapshot
entryc <-chan raftpb.Entry
confentryc chan<- raftpb.Entry
actionc chan func()
quitc chan struct{}
terminatedc chan struct{}
logger mesh.Logger
dbPath string // please os.RemoveAll on exit
kv mvcc.KV
lessor lease.Lessor
index *consistentIndex // see comment on type
idgen <-chan uint64
pending map[uint64]responseChans
}
var _ Server = &etcdStore{}
func newEtcdStore(
proposalc chan<- []byte,
snapshotc <-chan raftpb.Snapshot,
entryc <-chan raftpb.Entry,
confentryc chan<- raftpb.Entry,
logger mesh.Logger,
) *etcdStore {
// It would be much better if we could have a proper in-memory backend. Alas:
// backend.Backend is tightly coupled to bolt.DB, and both are tightly coupled
// to os.Open &c. So we'd need to fork both Bolt and backend. A task for
// another day.
f, err := ioutil.TempFile(os.TempDir(), "mesh_etcd_backend_")
if err != nil {
panic(err)
}
dbPath := f.Name()
f.Close()
logger.Printf("etcd store: using %s", dbPath)
b := backend.NewDefaultBackend(dbPath)
lessor := lease.NewLessor(b)
index := &consistentIndex{0}
kv := mvcc.New(b, lessor, index)
s := &etcdStore{
proposalc: proposalc,
snapshotc: snapshotc,
entryc: entryc,
confentryc: confentryc,
actionc: make(chan func()),
quitc: make(chan struct{}),
terminatedc: make(chan struct{}),
logger: logger,
dbPath: dbPath,
kv: kv,
lessor: lessor,
index: index,
idgen: makeIDGen(),
pending: map[uint64]responseChans{},
}
go s.loop()
return s
}
// Range implements gRPC KVServer.
// Range gets the keys in the range from the store.
func (s *etcdStore) Range(ctx context.Context, req *etcdserverpb.RangeRequest) (*etcdserverpb.RangeResponse, error) {
ireq := etcdserverpb.InternalRaftRequest{ID: <-s.idgen, Range: req}
msgc, errc, err := s.proposeInternalRaftRequest(ireq)
if err != nil {
return nil, err
}
select {
case <-ctx.Done():
s.cancelInternalRaftRequest(ireq)
return nil, ctx.Err()
case msg := <-msgc:
return msg.(*etcdserverpb.RangeResponse), nil
case err := <-errc:
return nil, err
case <-s.quitc:
return nil, errStopped
}
}
// Put implements gRPC KVServer.
// Put puts the given key into the store.
// A put request increases the revision of the store,
// and generates one event in the event history.
func (s *etcdStore) Put(ctx context.Context, req *etcdserverpb.PutRequest) (*etcdserverpb.PutResponse, error) {
ireq := etcdserverpb.InternalRaftRequest{ID: <-s.idgen, Put: req}
msgc, errc, err := s.proposeInternalRaftRequest(ireq)
if err != nil {
return nil, err
}
select {
case <-ctx.Done():
s.cancelInternalRaftRequest(ireq)
return nil, ctx.Err()
case msg := <-msgc:
return msg.(*etcdserverpb.PutResponse), nil
case err := <-errc:
return nil, err
case <-s.quitc:
return nil, errStopped
}
}
// Delete implements gRPC KVServer.
// Delete deletes the given range from the store.
// A delete request increases the revision of the store,
// and generates one event in the event history.
func (s *etcdStore) DeleteRange(ctx context.Context, req *etcdserverpb.DeleteRangeRequest) (*etcdserverpb.DeleteRangeResponse, error) {
ireq := etcdserverpb.InternalRaftRequest{ID: <-s.idgen, DeleteRange: req}
msgc, errc, err := s.proposeInternalRaftRequest(ireq)
if err != nil {
return nil, err
}
select {
case <-ctx.Done():
s.cancelInternalRaftRequest(ireq)
return nil, ctx.Err()
case msg := <-msgc:
return msg.(*etcdserverpb.DeleteRangeResponse), nil
case err := <-errc:
return nil, err
case <-s.quitc:
return nil, errStopped
}
}
// Txn implements gRPC KVServer.
// Txn processes all the requests in one transaction.
// A txn request increases the revision of the store,
// and generates events with the same revision in the event history.
// It is not allowed to modify the same key several times within one txn.
func (s *etcdStore) Txn(ctx context.Context, req *etcdserverpb.TxnRequest) (*etcdserverpb.TxnResponse, error) {
ireq := etcdserverpb.InternalRaftRequest{ID: <-s.idgen, Txn: req}
msgc, errc, err := s.proposeInternalRaftRequest(ireq)
if err != nil {
return nil, err
}
select {
case <-ctx.Done():
s.cancelInternalRaftRequest(ireq)
return nil, ctx.Err()
case msg := <-msgc:
return msg.(*etcdserverpb.TxnResponse), nil
case err := <-errc:
return nil, err
case <-s.quitc:
return nil, errStopped
}
}
// Compact implements gRPC KVServer.
// Compact compacts the event history in s. User should compact the
// event history periodically, or it will grow infinitely.
func (s *etcdStore) Compact(ctx context.Context, req *etcdserverpb.CompactionRequest) (*etcdserverpb.CompactionResponse, error) {
// We don't have snapshotting yet, so compact just puts us in a bad state.
// TODO(pb): fix this when we implement snapshotting.
return nil, errors.New("not implemented")
}
// The "consistent index" is the index number of the most recent committed
// entry. This logical value is duplicated and tracked in multiple places
// throughout the etcd server and storage code.
//
// For our part, we are expected to store one instance of this number, setting
// it whenever we receive a committed entry via entryc, and making it available
// for queries.
//
// The etcd storage backend is given a reference to this instance in the form of
// a ConsistentIndexGetter interface. In addition, it tracks its own view of the
// consistent index in a special bucket+key. See package etcd/mvcc, type
// consistentWatchableStore, method consistentIndex.
//
// Whenever a user makes an e.g. Put request, these values are compared. If
// there is some inconsistency, the transaction is marked as "skip" and becomes
// a no-op. This happens transparently to the user. See package etcd/mvcc,
// type consistentWatchableStore, method TxnBegin.
//
// tl;dr: (ಠ_ಠ)
type consistentIndex struct{ i uint64 }
func (i *consistentIndex) ConsistentIndex() uint64 { return i.i }
func (i *consistentIndex) set(index uint64) { i.i = index }
func makeIDGen() <-chan uint64 {
c := make(chan uint64)
go func() {
var i uint64 = 1
for {
c <- i
i++
}
}()
return c
}
const (
maxRequestBytes = 8192
)
var (
errStopped = errors.New("etcd store was stopped")
errTooBig = errors.New("request too large to send")
errCanceled = errors.New("request canceled")
)
type responseChans struct {
msgc chan<- proto.Message
errc chan<- error
}
func (s *etcdStore) loop() {
defer close(s.terminatedc)
defer s.removeDB()
for {
select {
case snapshot := <-s.snapshotc:
if err := s.applySnapshot(snapshot); err != nil {
s.logger.Printf("etcd store: apply snapshot: %v", err)
}
case entry := <-s.entryc:
if err := s.applyCommittedEntry(entry); err != nil {
s.logger.Printf("etcd store: apply committed entry: %v", err)
}
case f := <-s.actionc:
f()
case <-s.quitc:
return
}
}
}
func (s *etcdStore) stop() {
close(s.quitc)
<-s.terminatedc
}
func (s *etcdStore) applySnapshot(snapshot raftpb.Snapshot) error {
if len(snapshot.Data) == 0 {
//s.logger.Printf("etcd store: apply snapshot with empty snapshot; skipping")
return nil
}
s.logger.Printf("etcd store: applying snapshot: size %d", len(snapshot.Data))
s.logger.Printf("etcd store: applying snapshot: metadata %s", snapshot.Metadata.String())
s.logger.Printf("etcd store: applying snapshot: TODO") // TODO(pb)
return nil
}
func (s *etcdStore) applyCommittedEntry(entry raftpb.Entry) error {
// Set the consistent index regardless of the outcome. Because we need to do
// this for all committed entries, we need to receive all committed entries,
// and must therefore take responsibility to demux the conf changes to the
// configurator via confentryc.
//
// This requirement is unique to the etcd store. But for symmetry, we assign
// the same responsibility to the simple store.
s.index.set(entry.Index)
switch entry.Type {
case raftpb.EntryNormal:
break
case raftpb.EntryConfChange:
s.logger.Printf("etcd store: forwarding ConfChange entry")
s.confentryc <- entry
return nil
default:
s.logger.Printf("etcd store: got unknown entry type %s", entry.Type)
return fmt.Errorf("unknown entry type %d", entry.Type)
}
// entry.Size can be nonzero when len(entry.Data) == 0
if len(entry.Data) <= 0 {
s.logger.Printf("etcd store: got empty committed entry (term %d, index %d, type %s); skipping", entry.Term, entry.Index, entry.Type)
return nil
}
var req etcdserverpb.InternalRaftRequest
if err := req.Unmarshal(entry.Data); err != nil {
s.logger.Printf("etcd store: unmarshaling entry data: %v", err)
return err
}
msg, err := s.applyInternalRaftRequest(req)
if err != nil {
s.logger.Printf("etcd store: applying internal Raft request %d: %v", req.ID, err)
s.cancelPending(req.ID, err)
return err
}
s.signalPending(req.ID, msg)
return nil
}
// From public API method to proposalc.
func (s *etcdStore) proposeInternalRaftRequest(req etcdserverpb.InternalRaftRequest) (<-chan proto.Message, <-chan error, error) {
data, err := req.Marshal()
if err != nil {
return nil, nil, err
}
if len(data) > maxRequestBytes {
return nil, nil, errTooBig
}
msgc, errc, err := s.registerPending(req.ID)
if err != nil {
return nil, nil, err
}
s.proposalc <- data
return msgc, errc, nil
}
func (s *etcdStore) cancelInternalRaftRequest(req etcdserverpb.InternalRaftRequest) {
s.cancelPending(req.ID, errCanceled)
}
// From committed entryc, back to public API method.
// etcdserver/v3demo_server.go applyV3Result
func (s *etcdStore) applyInternalRaftRequest(req etcdserverpb.InternalRaftRequest) (proto.Message, error) {
switch {
case req.Range != nil:
return applyRange(noTxn, s.kv, req.Range)
case req.Put != nil:
return applyPut(noTxn, s.kv, s.lessor, req.Put)
case req.DeleteRange != nil:
return applyDeleteRange(noTxn, s.kv, req.DeleteRange)
case req.Txn != nil:
return applyTransaction(s.kv, s.lessor, req.Txn)
case req.Compaction != nil:
return applyCompaction(s.kv, req.Compaction)
case req.LeaseGrant != nil:
return applyLeaseGrant(s.lessor, req.LeaseGrant)
case req.LeaseRevoke != nil:
return applyLeaseRevoke(s.lessor, req.LeaseRevoke)
default:
return nil, fmt.Errorf("internal Raft request type not implemented")
}
}
func (s *etcdStore) registerPending(id uint64) (<-chan proto.Message, <-chan error, error) {
if _, ok := s.pending[id]; ok {
return nil, nil, fmt.Errorf("pending ID %d already registered", id)
}
msgc := make(chan proto.Message)
errc := make(chan error)
s.pending[id] = responseChans{msgc, errc}
return msgc, errc, nil
}
func (s *etcdStore) signalPending(id uint64, msg proto.Message) {
rc, ok := s.pending[id]
if !ok {
// InternalRaftRequests are replicated via Raft, so all peers will
// invoke this method for every committed message. But only the peer
// that serviced the original API request will have an entry in
// pending. So, this is a normal "failure" mode.
return
}
rc.msgc <- msg
delete(s.pending, id)
}
func (s *etcdStore) cancelPending(id uint64, err error) {
rc, ok := s.pending[id]
if !ok {
s.logger.Printf("etcd store: cancel pending ID %d, but nothing was pending; strange", id)
return
}
rc.errc <- err
delete(s.pending, id)
}
func (s *etcdStore) removeDB() {
s.logger.Printf("etcd store: removing tmp DB %s", s.dbPath)
if err := os.RemoveAll(s.dbPath); err != nil {
s.logger.Printf("etcd store: removing tmp DB %s: %v", s.dbPath, err)
}
}
// Sentinel value to indicate the operation is not part of a transaction.
const noTxn = -1
// isGteRange determines if the range end is a >= range. This works around grpc
// sending empty byte strings as nil; >= is encoded in the range end as '\0'.
func isGteRange(rangeEnd []byte) bool {
return len(rangeEnd) == 1 && rangeEnd[0] == 0
}
func applyRange(txnID int64, kv mvcc.KV, r *etcdserverpb.RangeRequest) (*etcdserverpb.RangeResponse, error) {
resp := &etcdserverpb.RangeResponse{}
resp.Header = &etcdserverpb.ResponseHeader{}
var (
kvs []mvccpb.KeyValue
rev int64
err error
)
if isGteRange(r.RangeEnd) {
r.RangeEnd = []byte{}
}
limit := r.Limit
if r.SortOrder != etcdserverpb.RangeRequest_NONE {
// fetch everything; sort and truncate afterwards
limit = 0
}
if limit > 0 {
// fetch one extra for 'more' flag
limit = limit + 1
}
if txnID != noTxn {
kvs, rev, err = kv.TxnRange(txnID, r.Key, r.RangeEnd, limit, r.Revision)
if err != nil {
return nil, err
}
} else {
kvs, rev, err = kv.Range(r.Key, r.RangeEnd, limit, r.Revision)
if err != nil {
return nil, err
}
}
if r.SortOrder != etcdserverpb.RangeRequest_NONE {
var sorter sort.Interface
switch {
case r.SortTarget == etcdserverpb.RangeRequest_KEY:
sorter = &kvSortByKey{&kvSort{kvs}}
case r.SortTarget == etcdserverpb.RangeRequest_VERSION:
sorter = &kvSortByVersion{&kvSort{kvs}}
case r.SortTarget == etcdserverpb.RangeRequest_CREATE:
sorter = &kvSortByCreate{&kvSort{kvs}}
case r.SortTarget == etcdserverpb.RangeRequest_MOD:
sorter = &kvSortByMod{&kvSort{kvs}}
case r.SortTarget == etcdserverpb.RangeRequest_VALUE:
sorter = &kvSortByValue{&kvSort{kvs}}
}
switch {
case r.SortOrder == etcdserverpb.RangeRequest_ASCEND:
sort.Sort(sorter)
case r.SortOrder == etcdserverpb.RangeRequest_DESCEND:
sort.Sort(sort.Reverse(sorter))
}
}
if r.Limit > 0 && len(kvs) > int(r.Limit) {
kvs = kvs[:r.Limit]
resp.More = true
}
resp.Header.Revision = rev
for i := range kvs {
resp.Kvs = append(resp.Kvs, &kvs[i])
}
return resp, nil
}
type kvSort struct{ kvs []mvccpb.KeyValue }
func (s *kvSort) Swap(i, j int) {
t := s.kvs[i]
s.kvs[i] = s.kvs[j]
s.kvs[j] = t
}
func (s *kvSort) Len() int { return len(s.kvs) }
type kvSortByKey struct{ *kvSort }
func (s *kvSortByKey) Less(i, j int) bool {
return bytes.Compare(s.kvs[i].Key, s.kvs[j].Key) < 0
}
type kvSortByVersion struct{ *kvSort }
func (s *kvSortByVersion) Less(i, j int) bool {
return (s.kvs[i].Version - s.kvs[j].Version) < 0
}
type kvSortByCreate struct{ *kvSort }
func (s *kvSortByCreate) Less(i, j int) bool {
return (s.kvs[i].CreateRevision - s.kvs[j].CreateRevision) < 0
}
type kvSortByMod struct{ *kvSort }
func (s *kvSortByMod) Less(i, j int) bool {
return (s.kvs[i].ModRevision - s.kvs[j].ModRevision) < 0
}
type kvSortByValue struct{ *kvSort }
func (s *kvSortByValue) Less(i, j int) bool {
return bytes.Compare(s.kvs[i].Value, s.kvs[j].Value) < 0
}
func applyPut(txnID int64, kv mvcc.KV, lessor lease.Lessor, req *etcdserverpb.PutRequest) (*etcdserverpb.PutResponse, error) {
resp := &etcdserverpb.PutResponse{}
resp.Header = &etcdserverpb.ResponseHeader{}
var (
rev int64
err error
)
if txnID != noTxn {
rev, err = kv.TxnPut(txnID, req.Key, req.Value, lease.LeaseID(req.Lease))
if err != nil {
return nil, err
}
} else {
leaseID := lease.LeaseID(req.Lease)
if leaseID != lease.NoLease {
if l := lessor.Lookup(leaseID); l == nil {
return nil, lease.ErrLeaseNotFound
}
}
rev = kv.Put(req.Key, req.Value, leaseID)
}
resp.Header.Revision = rev
return resp, nil
}
func applyDeleteRange(txnID int64, kv mvcc.KV, req *etcdserverpb.DeleteRangeRequest) (*etcdserverpb.DeleteRangeResponse, error) {
resp := &etcdserverpb.DeleteRangeResponse{}
resp.Header = &etcdserverpb.ResponseHeader{}
var (
n int64
rev int64
err error
)
if isGteRange(req.RangeEnd) {
req.RangeEnd = []byte{}
}
if txnID != noTxn {
n, rev, err = kv.TxnDeleteRange(txnID, req.Key, req.RangeEnd)
if err != nil {
return nil, err
}
} else {
n, rev = kv.DeleteRange(req.Key, req.RangeEnd)
}
resp.Deleted = n
resp.Header.Revision = rev
return resp, nil
}
func applyTransaction(kv mvcc.KV, lessor lease.Lessor, req *etcdserverpb.TxnRequest) (*etcdserverpb.TxnResponse, error) {
var revision int64
ok := true
for _, c := range req.Compare {
if revision, ok = applyCompare(kv, c); !ok {
break
}
}
var reqs []*etcdserverpb.RequestUnion
if ok {
reqs = req.Success
} else {
reqs = req.Failure
}
if err := checkRequestLeases(lessor, reqs); err != nil {
return nil, err
}
if err := checkRequestRange(kv, reqs); err != nil {
return nil, err
}
// When executing the operations of txn, we need to hold the txn lock.
// So the reader will not see any intermediate results.
txnID := kv.TxnBegin()
defer func() {
err := kv.TxnEnd(txnID)
if err != nil {
panic(fmt.Sprint("unexpected error when closing txn", txnID))
}
}()
resps := make([]*etcdserverpb.ResponseUnion, len(reqs))
for i := range reqs {
resps[i] = applyUnion(txnID, kv, reqs[i])
}
if len(resps) != 0 {
revision++
}
txnResp := &etcdserverpb.TxnResponse{}
txnResp.Header = &etcdserverpb.ResponseHeader{}
txnResp.Header.Revision = revision
txnResp.Responses = resps
txnResp.Succeeded = ok
return txnResp, nil
}
func checkRequestLeases(le lease.Lessor, reqs []*etcdserverpb.RequestUnion) error {
for _, requ := range reqs {
tv, ok := requ.Request.(*etcdserverpb.RequestUnion_RequestPut)
if !ok {
continue
}
preq := tv.RequestPut
if preq == nil || lease.LeaseID(preq.Lease) == lease.NoLease {
continue
}
if l := le.Lookup(lease.LeaseID(preq.Lease)); l == nil {
return lease.ErrLeaseNotFound
}
}
return nil
}
func checkRequestRange(kv mvcc.KV, reqs []*etcdserverpb.RequestUnion) error {
for _, requ := range reqs {
tv, ok := requ.Request.(*etcdserverpb.RequestUnion_RequestRange)
if !ok {
continue
}
greq := tv.RequestRange
if greq == nil || greq.Revision == 0 {
continue
}
if greq.Revision > kv.Rev() {
return mvcc.ErrFutureRev
}
if greq.Revision < kv.FirstRev() {
return mvcc.ErrCompacted
}
}
return nil
}
func applyUnion(txnID int64, kv mvcc.KV, union *etcdserverpb.RequestUnion) *etcdserverpb.ResponseUnion {
switch tv := union.Request.(type) {
case *etcdserverpb.RequestUnion_RequestRange:
if tv.RequestRange != nil {
resp, err := applyRange(txnID, kv, tv.RequestRange)
if err != nil {
panic("unexpected error during txn")
}
return &etcdserverpb.ResponseUnion{Response: &etcdserverpb.ResponseUnion_ResponseRange{ResponseRange: resp}}
}
case *etcdserverpb.RequestUnion_RequestPut:
if tv.RequestPut != nil {
resp, err := applyPut(txnID, kv, nil, tv.RequestPut)
if err != nil {
panic("unexpected error during txn")
}
return &etcdserverpb.ResponseUnion{Response: &etcdserverpb.ResponseUnion_ResponsePut{ResponsePut: resp}}
}
case *etcdserverpb.RequestUnion_RequestDeleteRange:
if tv.RequestDeleteRange != nil {
resp, err := applyDeleteRange(txnID, kv, tv.RequestDeleteRange)
if err != nil {
panic("unexpected error during txn")
}
return &etcdserverpb.ResponseUnion{Response: &etcdserverpb.ResponseUnion_ResponseDeleteRange{ResponseDeleteRange: resp}}
}
default:
// empty union
return nil
}
return nil
}
// applyCompare applies the compare request.
// It returns the revision at which the comparison happens. If the comparison
// succeeds, it returns true. Otherwise it returns false.
func applyCompare(kv mvcc.KV, c *etcdserverpb.Compare) (int64, bool) {
ckvs, rev, err := kv.Range(c.Key, nil, 1, 0)
if err != nil {
if err == mvcc.ErrTxnIDMismatch {
panic("unexpected txn ID mismatch error")
}
return rev, false
}
var ckv mvccpb.KeyValue
if len(ckvs) != 0 {
ckv = ckvs[0]
} else {
// Use the zero value of ckv normally. However...
if c.Target == etcdserverpb.Compare_VALUE {
// Always fail if we're comparing a value on a key that doesn't exist.
// We can treat non-existence as the empty set explicitly, such that
// even a key with a value of length 0 bytes is still a real key
// that was written that way
return rev, false
}
}
// -1 is less, 0 is equal, 1 is greater
var result int
switch c.Target {
case etcdserverpb.Compare_VALUE:
tv, _ := c.TargetUnion.(*etcdserverpb.Compare_Value)
if tv != nil {
result = bytes.Compare(ckv.Value, tv.Value)
}
case etcdserverpb.Compare_CREATE:
tv, _ := c.TargetUnion.(*etcdserverpb.Compare_CreateRevision)
if tv != nil {
result = compareInt64(ckv.CreateRevision, tv.CreateRevision)
}
case etcdserverpb.Compare_MOD:
tv, _ := c.TargetUnion.(*etcdserverpb.Compare_ModRevision)
if tv != nil {
result = compareInt64(ckv.ModRevision, tv.ModRevision)
}
case etcdserverpb.Compare_VERSION:
tv, _ := c.TargetUnion.(*etcdserverpb.Compare_Version)
if tv != nil {
result = compareInt64(ckv.Version, tv.Version)
}
}
switch c.Result {
case etcdserverpb.Compare_EQUAL:
if result != 0 {
return rev, false
}
case etcdserverpb.Compare_GREATER:
if result != 1 {
return rev, false
}
case etcdserverpb.Compare_LESS:
if result != -1 {
return rev, false
}
}
return rev, true
}
func compareInt64(a, b int64) int {
switch {
case a < b:
return -1
case a > b:
return 1
default:
return 0
}
}
func applyCompaction(kv mvcc.KV, req *etcdserverpb.CompactionRequest) (*etcdserverpb.CompactionResponse, error) {
resp := &etcdserverpb.CompactionResponse{}
resp.Header = &etcdserverpb.ResponseHeader{}
_, err := kv.Compact(req.Revision)
if err != nil {
return nil, err
}
// Get the current revision; which key we get is not important.
_, resp.Header.Revision, _ = kv.Range([]byte("compaction"), nil, 1, 0)
return resp, err
}
func applyLeaseGrant(lessor lease.Lessor, req *etcdserverpb.LeaseGrantRequest) (*etcdserverpb.LeaseGrantResponse, error) {
l, err := lessor.Grant(lease.LeaseID(req.ID), req.TTL)
resp := &etcdserverpb.LeaseGrantResponse{}
if err == nil {
resp.ID = int64(l.ID)
resp.TTL = l.TTL
}
return resp, err
}
func applyLeaseRevoke(lessor lease.Lessor, req *etcdserverpb.LeaseRevokeRequest) (*etcdserverpb.LeaseRevokeResponse, error) {
err := lessor.Revoke(lease.LeaseID(req.ID))
return &etcdserverpb.LeaseRevokeResponse{}, err
}


@@ -0,0 +1,20 @@
package metcd
// PrefixRangeEnd allows Get, Delete, and Watch requests to operate on all keys
// with a matching prefix. Pass the prefix to this function, and use the result
// as the RangeEnd value.
func PrefixRangeEnd(prefix []byte) []byte {
// https://github.com/coreos/etcd/blob/17e32b6/clientv3/op.go#L187
end := make([]byte, len(prefix))
copy(end, prefix)
for i := len(end) - 1; i >= 0; i-- {
if end[i] < 0xff {
end[i] = end[i] + 1
end = end[:i+1]
return end
}
}
// next prefix does not exist (e.g., 0xffff);
// default to WithFromKey policy
return []byte{0}
}
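A small illustration of the contract above: the end key is the prefix with its last incrementable byte bumped, falling back to the WithFromKey sentinel when no next prefix exists. This sketch re-implements the function locally so it runs standalone:
```go
package main

import "fmt"

// prefixRangeEnd mirrors metcd.PrefixRangeEnd, for illustration only.
func prefixRangeEnd(prefix []byte) []byte {
	end := make([]byte, len(prefix))
	copy(end, prefix)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] < 0xff {
			end[i]++
			return end[:i+1]
		}
	}
	return []byte{0} // no next prefix (e.g. 0xffff): WithFromKey fallback
}

func main() {
	// A Range with Key="foo" and RangeEnd="fop" covers exactly the
	// keys prefixed by "foo".
	fmt.Printf("%q\n", prefixRangeEnd([]byte("foo")))      // "fop"
	fmt.Printf("%q\n", prefixRangeEnd([]byte{0xff, 0xff})) // "\x00"
}
```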

228
vendor/github.com/weaveworks/mesh/_metcd/membership.go generated vendored Normal file

@@ -0,0 +1,228 @@
package metcd
import (
"time"
"github.com/coreos/etcd/raft/raftpb"
"github.com/weaveworks/mesh"
)
// membership regularly polls the mesh.Router for peers in the mesh.
// New peer UIDs are sent on addc. Removed peer UIDs are sent on remc.
// If the membership set gets smaller than minCount, membership will
// close shrunkc and stop, and the caller should terminate.
type membership struct {
router *mesh.Router
minCount int
addc chan<- uint64 // to configurator
remc chan<- uint64 // to configurator
shrunkc chan<- struct{} // to calling context
quitc chan struct{}
logger mesh.Logger
}
func newMembership(router *mesh.Router, initial uint64set, minCount int, addc, remc chan<- uint64, shrunkc chan<- struct{}, logger mesh.Logger) *membership {
m := &membership{
router: router,
minCount: minCount,
addc: addc,
remc: remc,
shrunkc: shrunkc,
quitc: make(chan struct{}),
logger: logger,
}
go m.loop(initial)
return m
}
func (m *membership) stop() {
close(m.quitc)
}
func (m *membership) loop(members uint64set) {
defer m.logger.Printf("membership: loop exit")
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
var add, rem uint64set
for {
select {
case <-ticker.C:
add, rem, members = diff(members, membershipSet(m.router))
if len(members) < m.minCount {
m.logger.Printf("membership: member count (%d) shrunk beneath minimum (%d)", len(members), m.minCount)
close(m.shrunkc)
return
}
for id := range add {
m.addc <- id
}
for id := range rem {
m.remc <- id
}
case <-m.quitc:
return
}
}
}
func membershipSet(router *mesh.Router) uint64set {
descriptions := router.Peers.Descriptions()
members := make(uint64set, len(descriptions))
for _, description := range descriptions {
members.add(uint64(description.UID))
}
return members
}
func diff(prev, curr uint64set) (add, rem, next uint64set) {
add, rem, next = uint64set{}, uint64set{}, uint64set{}
for i := range prev {
prev.del(i)
if curr.has(i) { // was in previous, still in current
curr.del(i) // prevent it from being interpreted as new
next.add(i) // promoted to next
} else { // was in previous, no longer in current
rem.add(i) // marked as removed
}
}
for i := range curr {
curr.del(i)
add.add(i)
next.add(i)
}
return add, rem, next
}
// configurator sits between the mesh membership subsystem and the raft.Node.
// When the mesh tells us that a peer is removed, the configurator adds that
// peer ID to a pending-remove set. Every tick, the configurator sends a
// ConfChange Remove proposal to the raft.Node for each peer in the
// pending-remove set. And when the configurator receives a committed ConfChange
// Remove entry for the peer, it removes the peer from the pending-remove set.
//
// We do the same thing for the add flow, for symmetry.
//
// Why is this necessary? Well, due to what looks like a bug in the raft.Node,
// ConfChange Remove proposals can get lost when the target node disappears. It
// is especially acute when the killed node is the leader. The current (or new)
// leader ends up spamming Heartbeats to the terminated node forever. So,
// lacking any obvious way to track the state of individual proposals, I've
// elected to continuously re-propose ConfChanges until they are confirmed i.e.
// committed.
type configurator struct {
addc <-chan uint64 // from membership
remc <-chan uint64 // from membership
confchangec chan<- raftpb.ConfChange // to raft.Node
entryc <-chan raftpb.Entry // from raft.Node
quitc chan struct{}
logger mesh.Logger
}
func newConfigurator(addc, remc <-chan uint64, confchangec chan<- raftpb.ConfChange, entryc <-chan raftpb.Entry, logger mesh.Logger) *configurator {
c := &configurator{
addc: addc,
remc: remc,
confchangec: confchangec,
entryc: entryc,
quitc: make(chan struct{}),
logger: logger,
}
go c.loop()
return c
}
func (c *configurator) stop() {
close(c.quitc)
}
func (c *configurator) loop() {
defer c.logger.Printf("configurator: loop exit")
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
var (
pendingAdd = uint64set{}
pendingRem = uint64set{}
)
for {
select {
case id := <-c.addc:
if pendingAdd.has(id) {
c.logger.Printf("configurator: recv add %x, was pending add already", id)
} else {
c.logger.Printf("configurator: recv add %x, now pending add", id)
pendingAdd.add(id)
// We *must* wait before emitting a ConfChange.
// https://github.com/coreos/etcd/issues/4759
}
case id := <-c.remc:
if pendingRem.has(id) {
c.logger.Printf("configurator: recv rem %x, was pending rem already", id)
} else {
c.logger.Printf("configurator: recv rem %x, now pending rem", id)
pendingRem.add(id)
// We *must* wait before emitting a ConfChange.
// https://github.com/coreos/etcd/issues/4759
}
case <-ticker.C:
for id := range pendingAdd {
c.logger.Printf("configurator: send ConfChangeAddNode %x", id)
c.confchangec <- raftpb.ConfChange{
Type: raftpb.ConfChangeAddNode,
NodeID: id,
}
}
for id := range pendingRem {
c.logger.Printf("configurator: send ConfChangeRemoveNode %x", id)
c.confchangec <- raftpb.ConfChange{
Type: raftpb.ConfChangeRemoveNode,
NodeID: id,
}
}
case entry := <-c.entryc:
if entry.Type != raftpb.EntryConfChange {
c.logger.Printf("configurator: ignoring %s", entry.Type)
continue
}
var cc raftpb.ConfChange
if err := cc.Unmarshal(entry.Data); err != nil {
c.logger.Printf("configurator: got invalid ConfChange (%v); ignoring", err)
continue
}
switch cc.Type {
case raftpb.ConfChangeAddNode:
if _, ok := pendingAdd[cc.NodeID]; ok {
c.logger.Printf("configurator: recv %s %x: was pending add, deleting", cc.Type, cc.NodeID)
delete(pendingAdd, cc.NodeID)
} else {
c.logger.Printf("configurator: recv %s %x: not pending add, ignoring", cc.Type, cc.NodeID)
}
case raftpb.ConfChangeRemoveNode:
if _, ok := pendingRem[cc.NodeID]; ok {
c.logger.Printf("configurator: recv %s %x: was pending rem, deleting", cc.Type, cc.NodeID)
delete(pendingRem, cc.NodeID)
} else {
c.logger.Printf("configurator: recv %s %x: not pending rem, ignoring", cc.Type, cc.NodeID)
}
}
case <-c.quitc:
return
}
}
}
type uint64set map[uint64]struct{}
func (s uint64set) add(i uint64) { s[i] = struct{}{} }
func (s uint64set) has(i uint64) bool { _, ok := s[i]; return ok }
func (s uint64set) del(i uint64) { delete(s, i) }
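A quick sketch of diff's contract (a hypothetical test, not part of the vendored file). Note that diff drains both of its inputs as a side effect:
```go
package metcd

import "testing"

func TestDiffSemantics(t *testing.T) {
	prev := uint64set{1: {}, 2: {}}
	curr := uint64set{2: {}, 3: {}}
	add, rem, next := diff(prev, curr)
	// 3 is new, 1 disappeared, and {2, 3} is the surviving membership.
	if !add.has(3) || !rem.has(1) || !next.has(2) || !next.has(3) {
		t.Fatalf("unexpected result: add=%v rem=%v next=%v", add, rem, next)
	}
	// diff is destructive: prev and curr are consumed.
	if len(prev) != 0 || len(curr) != 0 {
		t.Fatal("expected diff to consume its inputs")
	}
}
```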


@@ -0,0 +1,160 @@
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"net"
"os"
"os/signal"
"sort"
"strconv"
"strings"
"syscall"
"time"
"github.com/weaveworks/mesh"
"github.com/weaveworks/mesh/meshconn"
"github.com/weaveworks/mesh/metcd"
)
func main() {
peers := &stringset{}
var (
apiListen = flag.String("api", ":8080", "API listen address")
meshListen = flag.String("mesh", net.JoinHostPort("0.0.0.0", strconv.Itoa(mesh.Port)), "mesh listen address")
hwaddr = flag.String("hwaddr", mustHardwareAddr(), "MAC address, i.e. mesh peer name")
nickname = flag.String("nickname", mustHostname(), "peer nickname")
password = flag.String("password", "", "password (optional)")
channel = flag.String("channel", "default", "gossip channel name")
quicktest = flag.Int("quicktest", 0, "set to integer 1-9 to enable quick test setup of node")
n = flag.Int("n", 3, "number of peers expected (lower bound)")
)
flag.Var(peers, "peer", "initial peer (may be repeated)")
flag.Parse()
if *quicktest >= 1 && *quicktest <= 9 {
*hwaddr = fmt.Sprintf("00:00:00:00:00:0%d", *quicktest)
*meshListen = fmt.Sprintf("0.0.0.0:600%d", *quicktest)
*apiListen = fmt.Sprintf("0.0.0.0:800%d", *quicktest)
*nickname = fmt.Sprintf("%d", *quicktest)
for i := 1; i <= 9; i++ {
peers.Set(fmt.Sprintf("127.0.0.1:600%d", i))
}
}
logger := log.New(os.Stderr, *nickname+"> ", log.LstdFlags)
host, portStr, err := net.SplitHostPort(*meshListen)
if err != nil {
logger.Fatalf("mesh address: %s: %v", *meshListen, err)
}
port, err := strconv.Atoi(portStr)
if err != nil {
logger.Fatalf("mesh address: %s: %v", *meshListen, err)
}
name, err := mesh.PeerNameFromString(*hwaddr)
if err != nil {
logger.Fatalf("%s: %v", *hwaddr, err)
}
ln, err := net.Listen("tcp", *apiListen)
if err != nil {
logger.Fatal(err)
}
logger.Printf("hello!")
defer logger.Printf("goodbye!")
// Create, but do not start, a router.
meshLogger := log.New(ioutil.Discard, "", 0) // no log from mesh please
router := mesh.NewRouter(mesh.Config{
Host: host,
Port: port,
ProtocolMinVersion: mesh.ProtocolMinVersion,
Password: []byte(*password),
ConnLimit: 64,
PeerDiscovery: true,
TrustedSubnets: []*net.IPNet{},
}, name, *nickname, mesh.NullOverlay{}, meshLogger)
// Create a meshconn.Peer.
peer := meshconn.NewPeer(name, router.Ourself.UID, logger)
gossip := router.NewGossip(*channel, peer)
peer.Register(gossip)
// Start the router and join the mesh.
func() {
logger.Printf("mesh router starting (%s)", *meshListen)
router.Start()
}()
defer func() {
logger.Printf("mesh router stopping")
router.Stop()
}()
router.ConnectionMaker.InitiateConnections(peers.slice(), true)
terminatec := make(chan struct{})
terminatedc := make(chan error)
go func() {
c := make(chan os.Signal, 1) // buffered, so signal.Notify's send can't be dropped
signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
sig := <-c // receive interrupt
close(terminatec) // terminate metcd.Server
<-terminatedc // wait for shutdown
terminatedc <- fmt.Errorf("%s", sig) // forward signal
}()
go func() {
metcdServer := metcd.NewServer(router, peer, *n, terminatec, terminatedc, logger)
grpcServer := metcd.GRPCServer(metcdServer)
defer grpcServer.Stop()
logger.Printf("gRPC listening at %s", *apiListen)
terminatedc <- grpcServer.Serve(ln)
}()
logger.Print(<-terminatedc)
time.Sleep(time.Second) // TODO(pb): there must be a better way
}
type stringset map[string]struct{}
func (ss stringset) Set(value string) error {
ss[value] = struct{}{}
return nil
}
func (ss stringset) String() string {
return strings.Join(ss.slice(), ",")
}
func (ss stringset) slice() []string {
slice := make([]string, 0, len(ss))
for k := range ss {
slice = append(slice, k)
}
sort.Strings(slice)
return slice
}
func mustHardwareAddr() string {
ifaces, err := net.Interfaces()
if err != nil {
panic(err)
}
for _, iface := range ifaces {
if s := iface.HardwareAddr.String(); s != "" {
return s
}
}
panic("no valid network interfaces")
}
func mustHostname() string {
hostname, err := os.Hostname()
if err != nil {
panic(err)
}
return hostname
}


@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
# Kill child processes at exit
trap "pkill -P $$" SIGINT SIGTERM EXIT
go install github.com/weaveworks/mesh/metcd/metcdsrv
metcdsrv -quicktest=1 &
metcdsrv -quicktest=2 &
metcdsrv -quicktest=3 &
read x


@ -0,0 +1,47 @@
#!/usr/bin/env bash
# This is just a sanity check for metcdsrv.
set -o errexit
set -o nounset
set -o pipefail
# Kill child processes at exit
trap "pkill -P $$" SIGINT SIGTERM EXIT
echo Installing metcdsrv
go install github.com/weaveworks/mesh/metcd/metcdsrv
echo Booting cluster
# Remove output redirection to debug
metcdsrv -quicktest=1 >/dev/null 2>&1 &
metcdsrv -quicktest=2 >/dev/null 2>&1 &
metcdsrv -quicktest=3 >/dev/null 2>&1 &
echo Waiting for cluster to settle
# Wait for the cluster to settle
sleep 5
echo Installing etcdctl
go install github.com/coreos/etcd/cmd/etcdctl
function etcdctl { env ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:8001,127.0.0.1:8002,127.0.0.1:8003 "$@"; }
echo Testing first put
etcdctl put foo bar
have=$(etcdctl get foo | tail -n1)
want="bar"
if [[ $want != $have ]]
then
echo foo: want $want, have $have
exit 1
fi
echo Testing second put
etcdctl put foo baz
have=$(etcdctl get foo | tail -n1)
want="baz"
if [[ $want != $have ]]
then
echo foo: want $want, have $have
exit 1
fi


@ -0,0 +1,102 @@
package metcd
import (
"net"
"github.com/coreos/etcd/raft/raftpb"
"github.com/weaveworks/mesh"
"github.com/weaveworks/mesh/meshconn"
)
// packetTransport takes ownership of the net.PacketConn.
// Incoming messages are unmarshaled from the conn and sent to incomingc.
// Outgoing messages are received from outgoingc and marshaled to the conn.
type packetTransport struct {
conn net.PacketConn
translate peerTranslator
incomingc chan<- raftpb.Message // to controller
outgoingc <-chan raftpb.Message // from controller
unreachablec chan<- uint64 // to controller
logger mesh.Logger
}
func newPacketTransport(
conn net.PacketConn,
translate peerTranslator,
incomingc chan<- raftpb.Message,
outgoingc <-chan raftpb.Message,
unreachablec chan<- uint64,
logger mesh.Logger,
) *packetTransport {
t := &packetTransport{
conn: conn,
translate: translate,
incomingc: incomingc,
outgoingc: outgoingc,
unreachablec: unreachablec,
logger: logger,
}
go t.recvLoop()
go t.sendLoop()
return t
}
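// Illustrative only (not part of the vendored source): serverManager wires
// the transport between a meshconn.Peer (which acts as the net.PacketConn)
// and the Raft controller, roughly:
//
//	incomingc := make(chan raftpb.Message)
//	outgoingc := make(chan raftpb.Message)
//	unreachablec := make(chan uint64, 10000)
//	t := newPacketTransport(peer, translateVia(router), incomingc, outgoingc, unreachablec, logger)
//	defer t.stop()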
type peerTranslator func(uid mesh.PeerUID) (mesh.PeerName, error)
func (t *packetTransport) stop() {
t.conn.Close()
}
func (t *packetTransport) recvLoop() {
defer t.logger.Printf("packet transport: recv loop exit")
const maxRecvLen = 8192
b := make([]byte, maxRecvLen)
for {
n, remote, err := t.conn.ReadFrom(b)
if err != nil {
t.logger.Printf("packet transport: recv: %v (aborting)", err)
return
} else if n >= cap(b) {
t.logger.Printf("packet transport: recv from %s: short read, %d >= %d (continuing)", remote, n, cap(b))
continue
}
var msg raftpb.Message
if err := msg.Unmarshal(b[:n]); err != nil {
t.logger.Printf("packet transport: recv from %s (sz %d): %v (%s) (continuing)", remote, n, err, b[:n])
continue
}
//t.logger.Printf("packet transport: recv from %s (sz %d/%d) OK", remote, n, msg.Size())
t.incomingc <- msg
}
}
func (t *packetTransport) sendLoop() {
defer t.logger.Printf("packet transport: send loop exit")
for msg := range t.outgoingc {
b, err := msg.Marshal()
if err != nil {
t.logger.Printf("packet transport: send to Raft ID %x: %v (continuing)", msg.To, err)
continue
}
peerName, err := t.translate(mesh.PeerUID(msg.To))
if err != nil {
select {
case t.unreachablec <- msg.To:
t.logger.Printf("packet transport: send to Raft ID %x: %v (unreachable; continuing) (%s)", msg.To, err, msg.Type)
default:
t.logger.Printf("packet transport: send to Raft ID %x: %v (unreachable, report dropped; continuing) (%s)", msg.To, err, msg.Type)
}
continue
}
dst := meshconn.MeshAddr{PeerName: peerName}
if n, err := t.conn.WriteTo(b, dst); err != nil {
t.logger.Printf("packet transport: send to Mesh peer %s: %v (continuing)", dst, err)
continue
} else if n < len(b) {
t.logger.Printf("packet transport: send to Mesh peer %s: short write, %d < %d (continuing)", dst, n, len(b))
continue
}
//t.logger.Printf("packet transport: send to %s (sz %d/%d) OK", dst, msg.Size(), len(b))
}
}

231
vendor/github.com/weaveworks/mesh/_metcd/server.go generated vendored Normal file

@ -0,0 +1,231 @@
package metcd
import (
"fmt"
"net"
"os"
"time"
"github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/raft/raftpb"
"google.golang.org/grpc"
"github.com/weaveworks/mesh"
"github.com/weaveworks/mesh/meshconn"
)
// Server collects the etcd V3 server interfaces that we implement.
type Server interface {
//etcdserverpb.AuthServer
//etcdserverpb.ClusterServer
etcdserverpb.KVServer
//etcdserverpb.LeaseServer
//etcdserverpb.MaintenanceServer
//etcdserverpb.WatchServer
}
// GRPCServer converts a metcd.Server to a *grpc.Server.
func GRPCServer(s Server, options ...grpc.ServerOption) *grpc.Server {
srv := grpc.NewServer(options...)
//etcdserverpb.RegisterAuthServer(srv, s)
//etcdserverpb.RegisterClusterServer(srv, s)
etcdserverpb.RegisterKVServer(srv, s)
//etcdserverpb.RegisterLeaseServer(srv, s)
//etcdserverpb.RegisterMaintenanceServer(srv, s)
//etcdserverpb.RegisterWatchServer(srv, s)
return srv
}
// NewServer returns a Server that (partially) implements the etcd V3 API.
// It uses the passed mesh components to act as the Raft transport.
// For the moment, it blocks until the mesh has minPeerCount peers.
// (This responsibility should rather be given to the caller.)
// The server can be terminated by certain conditions in the cluster.
// If that happens, terminatedc is signaled, and the server is invalid.
func NewServer(
router *mesh.Router,
peer *meshconn.Peer,
minPeerCount int,
terminatec <-chan struct{},
terminatedc chan<- error,
logger mesh.Logger,
) Server {
c := make(chan Server)
go serverManager(router, peer, minPeerCount, terminatec, terminatedc, logger, c)
return <-c
}
// NewDefaultServer is like NewServer, but we take care of creating a
// mesh.Router and meshconn.Peer for you, with sane defaults. If you need more
// fine-grained control, create the components yourself and use NewServer.
func NewDefaultServer(
minPeerCount int,
terminatec <-chan struct{},
terminatedc chan<- error,
logger mesh.Logger,
) Server {
var (
peerName = mustPeerName()
nickName = mustHostname()
host = "0.0.0.0"
port = 6379
password = ""
channel = "metcd"
)
router := mesh.NewRouter(mesh.Config{
Host: host,
Port: port,
ProtocolMinVersion: mesh.ProtocolMinVersion,
Password: []byte(password),
ConnLimit: 64,
PeerDiscovery: true,
TrustedSubnets: []*net.IPNet{},
}, peerName, nickName, mesh.NullOverlay{}, logger)
// Create a meshconn.Peer and connect it to a channel.
peer := meshconn.NewPeer(router.Ourself.Peer.Name, router.Ourself.UID, logger)
gossip := router.NewGossip(channel, peer)
peer.Register(gossip)
// Start the router and join the mesh.
// Note that we don't ever stop the router.
// This may or may not be a problem.
// TODO(pb): determine if this is a super huge problem
router.Start()
return NewServer(router, peer, minPeerCount, terminatec, terminatedc, logger)
}
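// Illustrative only (not part of the vendored source): a minimal caller,
// mirroring what metcdsrv does with NewServer, looks like
//
//	terminatec := make(chan struct{})
//	terminatedc := make(chan error)
//	srv := NewDefaultServer(3, terminatec, terminatedc, logger) // expect >= 3 peers
//	grpcServer := GRPCServer(srv)
//	go func() { terminatedc <- grpcServer.Serve(ln) }() // ln is a net.Listener you provide
//	log.Print(<-terminatedc)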
func serverManager(
router *mesh.Router,
peer *meshconn.Peer,
minPeerCount int,
terminatec <-chan struct{},
terminatedc chan<- error,
logger mesh.Logger,
out chan<- Server,
) {
// Identify mesh peers to either create or join a cluster.
// This algorithm is presently completely insufficient.
// It suffers from timing failures, and doesn't understand channels.
// TODO(pb): use gossip to agree on better starting conditions
var (
self = meshconn.MeshAddr{PeerName: router.Ourself.Peer.Name, PeerUID: router.Ourself.UID}
others = []net.Addr{}
)
for {
others = others[:0]
for _, desc := range router.Peers.Descriptions() {
others = append(others, meshconn.MeshAddr{PeerName: desc.Name, PeerUID: desc.UID})
}
if len(others) == minPeerCount {
logger.Printf("detected %d peers; creating", len(others))
break
} else if len(others) > minPeerCount {
logger.Printf("detected %d peers; joining", len(others))
others = others[:0] // empty others slice means join
break
}
logger.Printf("detected %d peers; waiting...", len(others))
time.Sleep(time.Second)
}
var (
incomingc = make(chan raftpb.Message) // from meshconn to ctrl
outgoingc = make(chan raftpb.Message) // from ctrl to meshconn
unreachablec = make(chan uint64, 10000) // from meshconn to ctrl
confchangec = make(chan raftpb.ConfChange) // from meshconn to ctrl
snapshotc = make(chan raftpb.Snapshot) // from ctrl to state machine
entryc = make(chan raftpb.Entry) // from ctrl to state
confentryc = make(chan raftpb.Entry) // from state to configurator
proposalc = make(chan []byte) // from state machine to ctrl
removedc = make(chan struct{}) // from ctrl to us
shrunkc = make(chan struct{}) // from membership to us
)
// Create the thing that watches the cluster membership via the router. It
// signals conf changes, and closes shrunkc when the cluster is too small.
var (
addc = make(chan uint64)
remc = make(chan uint64)
)
m := newMembership(router, membershipSet(router), minPeerCount, addc, remc, shrunkc, logger)
defer m.stop()
// Create the thing that converts mesh membership changes to Raft ConfChange
// proposals.
c := newConfigurator(addc, remc, confchangec, confentryc, logger)
defer c.stop()
// Create a packet transport, wrapping the meshconn.Peer.
transport := newPacketTransport(peer, translateVia(router), incomingc, outgoingc, unreachablec, logger)
defer transport.stop()
// Create the API server. store.stop must go on the defer stack before
// ctrl.stop so that the ctrl stops first. Otherwise, ctrl can deadlock
// processing the last tick.
store := newEtcdStore(proposalc, snapshotc, entryc, confentryc, logger)
defer store.stop()
// Create the controller, which drives the Raft node internally.
ctrl := newCtrl(self, others, minPeerCount, incomingc, outgoingc, unreachablec, confchangec, snapshotc, entryc, proposalc, removedc, logger)
defer ctrl.stop()
// Return the store to the client.
out <- store
errc := make(chan error)
go func() {
<-terminatec
errc <- fmt.Errorf("metcd server terminated by user request")
}()
go func() {
<-removedc
errc <- fmt.Errorf("the Raft peer was removed from the cluster")
}()
go func() {
<-shrunkc
errc <- fmt.Errorf("the Raft cluster got too small")
}()
terminatedc <- <-errc
}
func translateVia(router *mesh.Router) peerTranslator {
return func(uid mesh.PeerUID) (mesh.PeerName, error) {
for _, d := range router.Peers.Descriptions() {
if d.UID == uid {
return d.Name, nil
}
}
return 0, fmt.Errorf("peer UID %x not known", uid)
}
}
func mustPeerName() mesh.PeerName {
peerName, err := mesh.PeerNameFromString(mustHardwareAddr())
if err != nil {
panic(err)
}
return peerName
}
func mustHardwareAddr() string {
ifaces, err := net.Interfaces()
if err != nil {
panic(err)
}
for _, iface := range ifaces {
if s := iface.HardwareAddr.String(); s != "" {
return s
}
}
panic("no valid network interfaces")
}
func mustHostname() string {
hostname, err := os.Hostname()
if err != nil {
panic(err)
}
return hostname
}

4
vendor/github.com/weaveworks/mesh/circle.yml generated vendored Normal file

@ -0,0 +1,4 @@
test:
pre:
- ./lint

500
vendor/github.com/weaveworks/mesh/connection.go generated vendored Normal file

@ -0,0 +1,500 @@
package mesh
import (
"fmt"
"net"
"strconv"
"time"
)
// Connection describes a link between peers.
// It may be in any state, not necessarily established.
type Connection interface {
Remote() *Peer
getLocal() *Peer
remoteTCPAddress() string
isOutbound() bool
isEstablished() bool
}
type ourConnection interface {
Connection
breakTie(ourConnection) connectionTieBreak
shutdown(error)
logf(format string, args ...interface{})
}
// A local representation of the remote side of a connection.
// Limited capabilities compared to LocalConnection.
type remoteConnection struct {
local *Peer
remote *Peer
remoteTCPAddr string
outbound bool
established bool
}
func newRemoteConnection(from, to *Peer, tcpAddr string, outbound bool, established bool) *remoteConnection {
return &remoteConnection{
local: from,
remote: to,
remoteTCPAddr: tcpAddr,
outbound: outbound,
established: established,
}
}
func (conn *remoteConnection) Remote() *Peer { return conn.remote }
func (conn *remoteConnection) getLocal() *Peer { return conn.local }
func (conn *remoteConnection) remoteTCPAddress() string { return conn.remoteTCPAddr }
func (conn *remoteConnection) isOutbound() bool { return conn.outbound }
func (conn *remoteConnection) isEstablished() bool { return conn.established }
// LocalConnection is the local (our) side of a connection.
// It implements ProtocolSender, and manages per-channel GossipSenders.
type LocalConnection struct {
OverlayConn OverlayConnection
remoteConnection
tcpConn *net.TCPConn
trustRemote bool // is remote on a trusted subnet?
trustedByRemote bool // does remote trust us?
version byte
tcpSender tcpSender
sessionKey *[32]byte
heartbeatTCP *time.Ticker
router *Router
uid uint64
actionChan chan<- connectionAction
errorChan chan<- error
finished <-chan struct{} // closed to signal that actorLoop has finished
senders *gossipSenders
logger Logger
}
// If the connection is successful, it will end up in the local peer's
// connections map.
func startLocalConnection(connRemote *remoteConnection, tcpConn *net.TCPConn, router *Router, acceptNewPeer bool, logger Logger) {
if connRemote.local != router.Ourself.Peer {
panic("attempt to create local connection from a peer which is not ourself")
}
actionChan := make(chan connectionAction, ChannelSize)
errorChan := make(chan error, 1)
finished := make(chan struct{})
conn := &LocalConnection{
remoteConnection: *connRemote, // NB, we're taking a copy of connRemote here.
router: router,
tcpConn: tcpConn,
trustRemote: router.trusts(connRemote),
uid: randUint64(),
actionChan: actionChan,
errorChan: errorChan,
finished: finished,
logger: logger,
}
conn.senders = newGossipSenders(conn, finished)
go conn.run(actionChan, errorChan, finished, acceptNewPeer)
}
func (conn *LocalConnection) logf(format string, args ...interface{}) {
format = "->[" + conn.remoteTCPAddr + "|" + conn.remote.String() + "]: " + format
conn.logger.Printf(format, args...)
}
func (conn *LocalConnection) breakTie(dupConn ourConnection) connectionTieBreak {
dupConnLocal := dupConn.(*LocalConnection)
// conn.uid is used as the tie breaker here, in the knowledge that
// both sides will make the same decision.
if conn.uid < dupConnLocal.uid {
return tieBreakWon
} else if dupConnLocal.uid < conn.uid {
return tieBreakLost
}
return tieBreakTied
}
// Established returns true if the connection is established.
// TODO(pb): data race?
func (conn *LocalConnection) isEstablished() bool {
return conn.established
}
// SendProtocolMsg implements ProtocolSender.
func (conn *LocalConnection) SendProtocolMsg(m protocolMsg) error {
if err := conn.sendProtocolMsg(m); err != nil {
conn.shutdown(err)
return err
}
return nil
}
func (conn *LocalConnection) gossipSenders() *gossipSenders {
return conn.senders
}
// ACTOR methods
// NB: The conn.* fields are only written by the connection actor
// process, which is the caller of the ConnectionAction funs. Hence we
// do not need locks for reading, and only need write locks for fields
// read by other processes.
// Non-blocking.
func (conn *LocalConnection) shutdown(err error) {
// err should always be a real error, even if only io.EOF
if err == nil {
panic("nil error")
}
select {
case conn.errorChan <- err:
default:
}
}
// Send an actor request to the actorLoop, but don't block if actorLoop has
// exited. See http://blog.golang.org/pipelines for pattern.
func (conn *LocalConnection) sendAction(action connectionAction) {
select {
case conn.actionChan <- action:
case <-conn.finished:
}
}
// ACTOR server
func (conn *LocalConnection) run(actionChan <-chan connectionAction, errorChan <-chan error, finished chan<- struct{}, acceptNewPeer bool) {
var err error // important to use this var and not create another one with 'err :='
defer func() { conn.teardown(err) }()
defer close(finished)
if err = conn.tcpConn.SetLinger(0); err != nil {
return
}
intro, err := protocolIntroParams{
MinVersion: conn.router.ProtocolMinVersion,
MaxVersion: ProtocolMaxVersion,
Features: conn.makeFeatures(),
Conn: conn.tcpConn,
Password: conn.router.Password,
Outbound: conn.outbound,
}.doIntro()
if err != nil {
return
}
conn.sessionKey = intro.SessionKey
conn.tcpSender = intro.Sender
conn.version = intro.Version
remote, err := conn.parseFeatures(intro.Features)
if err != nil {
return
}
if err = conn.registerRemote(remote, acceptNewPeer); err != nil {
return
}
isRestartedPeer := conn.Remote().UID != remote.UID
conn.logf("connection ready; using protocol version %v", conn.version)
// only use negotiated session key for untrusted connections
var sessionKey *[32]byte
if conn.untrusted() {
sessionKey = conn.sessionKey
}
params := OverlayConnectionParams{
RemotePeer: conn.remote,
LocalAddr: conn.tcpConn.LocalAddr().(*net.TCPAddr),
RemoteAddr: conn.tcpConn.RemoteAddr().(*net.TCPAddr),
Outbound: conn.outbound,
ConnUID: conn.uid,
SessionKey: sessionKey,
SendControlMessage: conn.sendOverlayControlMessage,
Features: intro.Features,
}
if conn.OverlayConn, err = conn.router.Overlay.PrepareConnection(params); err != nil {
return
}
// As soon as we do AddConnection, the new connection becomes
// visible to the packet routing logic. So AddConnection must
// come after PrepareConnection
if err = conn.router.Ourself.doAddConnection(conn, isRestartedPeer); err != nil {
return
}
conn.router.ConnectionMaker.connectionCreated(conn)
// OverlayConnection confirmation comes after AddConnection,
// because only after that completes do we know the connection is
// valid: in particular that it is not a duplicate connection to
// the same peer. Overlay communication on a duplicate connection
// can cause problems such as tripping up overlay crypto at the
// other end due to data being decoded by the other connection. It
// is also generally wasteful to engage in any interaction with
// the remote on a connection that turns out to be invalid.
conn.OverlayConn.Confirm()
// receiveTCP must also follow AddConnection. In the absence
// of any indirect connectivity to the remote peer, the first
// we hear about it (and any peers reachable from it) is
// through topology gossip it sends us on the connection. We
// must ensure that the connection has been added to Ourself
// prior to processing any such gossip, otherwise we risk
// immediately gc'ing part of that newly received portion of
// the topology (though not the remote peer itself, since that
// will have a positive ref count), leaving behind dangling
// references to peers. Hence we must invoke AddConnection,
// which is *synchronous*, first.
conn.heartbeatTCP = time.NewTicker(tcpHeartbeat)
go conn.receiveTCP(intro.Receiver)
// AddConnection must precede actorLoop. More precisely, it
// must precede shutdown, since that invokes DeleteConnection
// and is invoked on termination of this entire
// function. Essentially this boils down to a prohibition on
// running AddConnection in a separate goroutine, at least not
// without some synchronisation. Which in turn requires the
// launching of the receiveTCP goroutine to precede actorLoop.
err = conn.actorLoop(actionChan, errorChan)
}
func (conn *LocalConnection) makeFeatures() map[string]string {
features := map[string]string{
"PeerNameFlavour": PeerNameFlavour,
"Name": conn.local.Name.String(),
"NickName": conn.local.NickName,
"ShortID": fmt.Sprint(conn.local.ShortID),
"UID": fmt.Sprint(conn.local.UID),
"ConnID": fmt.Sprint(conn.uid),
"Trusted": fmt.Sprint(conn.trustRemote),
}
conn.router.Overlay.AddFeaturesTo(features)
return features
}
func (conn *LocalConnection) parseFeatures(features map[string]string) (*Peer, error) {
if err := mustHave(features, []string{"PeerNameFlavour", "Name", "NickName", "UID", "ConnID"}); err != nil {
return nil, err
}
remotePeerNameFlavour := features["PeerNameFlavour"]
if remotePeerNameFlavour != PeerNameFlavour {
return nil, fmt.Errorf("Peer name flavour mismatch (ours: '%s', theirs: '%s')", PeerNameFlavour, remotePeerNameFlavour)
}
name, err := PeerNameFromString(features["Name"])
if err != nil {
return nil, err
}
nickName := features["NickName"]
var shortID uint64
var hasShortID bool
if shortIDStr, ok := features["ShortID"]; ok {
hasShortID = true
shortID, err = strconv.ParseUint(shortIDStr, 10, peerShortIDBits)
if err != nil {
return nil, err
}
}
var trusted bool
if trustedStr, ok := features["Trusted"]; ok {
trusted, err = strconv.ParseBool(trustedStr)
if err != nil {
return nil, err
}
}
conn.trustedByRemote = trusted
uid, err := parsePeerUID(features["UID"])
if err != nil {
return nil, err
}
remoteConnID, err := strconv.ParseUint(features["ConnID"], 10, 64)
if err != nil {
return nil, err
}
conn.uid ^= remoteConnID
peer := newPeer(name, nickName, uid, 0, PeerShortID(shortID))
peer.HasShortID = hasShortID
return peer, nil
}
func (conn *LocalConnection) registerRemote(remote *Peer, acceptNewPeer bool) error {
if acceptNewPeer {
conn.remote = conn.router.Peers.fetchWithDefault(remote)
} else {
conn.remote = conn.router.Peers.fetchAndAddRef(remote.Name)
if conn.remote == nil {
return fmt.Errorf("Found unknown remote name: %s at %s", remote.Name, conn.remoteTCPAddr)
}
}
if remote.Name == conn.local.Name && remote.UID != conn.local.UID {
return &peerNameCollisionError{conn.local, remote}
}
if conn.remote == conn.local {
return errConnectToSelf
}
return nil
}
func (conn *LocalConnection) actorLoop(actionChan <-chan connectionAction, errorChan <-chan error) (err error) {
fwdErrorChan := conn.OverlayConn.ErrorChannel()
fwdEstablishedChan := conn.OverlayConn.EstablishedChannel()
for err == nil {
select {
case err = <-errorChan:
case err = <-fwdErrorChan:
default:
select {
case action := <-actionChan:
err = action()
case <-conn.heartbeatTCP.C:
err = conn.sendSimpleProtocolMsg(ProtocolHeartbeat)
case <-fwdEstablishedChan:
conn.established = true
fwdEstablishedChan = nil
conn.router.Ourself.doConnectionEstablished(conn)
case err = <-errorChan:
case err = <-fwdErrorChan:
}
}
}
return
}
func (conn *LocalConnection) teardown(err error) {
if conn.remote == nil {
conn.logger.Printf("->[%s] connection shutting down due to error during handshake: %v", conn.remoteTCPAddr, err)
} else {
conn.logf("connection shutting down due to error: %v", err)
}
if conn.tcpConn != nil {
if closeErr := conn.tcpConn.Close(); closeErr != nil {
conn.logger.Printf("warning: %v", closeErr)
}
}
if conn.remote != nil {
conn.router.Peers.dereference(conn.remote)
conn.router.Ourself.doDeleteConnection(conn)
}
if conn.heartbeatTCP != nil {
conn.heartbeatTCP.Stop()
}
if conn.OverlayConn != nil {
conn.OverlayConn.Stop()
}
conn.router.ConnectionMaker.connectionTerminated(conn, err)
}
func (conn *LocalConnection) sendOverlayControlMessage(tag byte, msg []byte) error {
return conn.sendProtocolMsg(protocolMsg{protocolTag(tag), msg})
}
// Helpers
func (conn *LocalConnection) sendSimpleProtocolMsg(tag protocolTag) error {
return conn.sendProtocolMsg(protocolMsg{tag: tag})
}
func (conn *LocalConnection) sendProtocolMsg(m protocolMsg) error {
return conn.tcpSender.Send(append([]byte{byte(m.tag)}, m.msg...))
}
func (conn *LocalConnection) receiveTCP(receiver tcpReceiver) {
var err error
for {
if err = conn.extendReadDeadline(); err != nil {
break
}
var msg []byte
if msg, err = receiver.Receive(); err != nil {
break
}
if len(msg) < 1 {
conn.logf("ignoring blank msg")
continue
}
if err = conn.handleProtocolMsg(protocolTag(msg[0]), msg[1:]); err != nil {
break
}
}
conn.shutdown(err)
}
func (conn *LocalConnection) handleProtocolMsg(tag protocolTag, payload []byte) error {
switch tag {
case ProtocolHeartbeat:
case ProtocolReserved1, ProtocolReserved2, ProtocolReserved3, ProtocolOverlayControlMsg:
conn.OverlayConn.ControlMessage(byte(tag), payload)
case ProtocolGossipUnicast, ProtocolGossipBroadcast, ProtocolGossip:
return conn.router.handleGossip(tag, payload)
default:
conn.logf("ignoring unknown protocol tag: %v", tag)
}
return nil
}
func (conn *LocalConnection) extendReadDeadline() error {
return conn.tcpConn.SetReadDeadline(time.Now().Add(tcpHeartbeat * 2))
}
// Untrusted returns true if either we don't trust our remote, or are not
// trusted by our remote.
func (conn *LocalConnection) untrusted() bool {
return !conn.trustRemote || !conn.trustedByRemote
}
type connectionTieBreak int
const (
tieBreakWon connectionTieBreak = iota
tieBreakLost
tieBreakTied
)
var errConnectToSelf = fmt.Errorf("cannot connect to ourself")
type peerNameCollisionError struct {
local, remote *Peer
}
func (err *peerNameCollisionError) Error() string {
return fmt.Sprintf("local %q and remote %q peer names collision", err.local, err.remote)
}
// The actor closure used by LocalConnection. If an action returns an error,
// it will terminate the actor loop, which terminates the connection in turn.
type connectionAction func() error
func mustHave(features map[string]string, keys []string) error {
for _, key := range keys {
if _, ok := features[key]; !ok {
return fmt.Errorf("field %s is missing", key)
}
}
return nil
}

399
vendor/github.com/weaveworks/mesh/connection_maker.go generated vendored Normal file

@ -0,0 +1,399 @@
package mesh
import (
"fmt"
"math/rand"
"net"
"time"
"unicode"
)
const (
initialInterval = 2 * time.Second
maxInterval = 6 * time.Minute
resetAfter = 1 * time.Minute
)
type peerAddrs map[string]*net.TCPAddr
// ConnectionMaker initiates and manages connections to peers.
type connectionMaker struct {
ourself *localPeer
peers *Peers
localAddr string
port int
discovery bool
targets map[string]*target
connections map[Connection]struct{}
directPeers peerAddrs
terminationCount int
actionChan chan<- connectionMakerAction
logger Logger
}
// TargetState describes the connection state of a remote target.
type targetState int
const (
targetWaiting targetState = iota
targetAttempting
targetConnected
targetSuspended
)
// Information about an address where we may find a peer.
type target struct {
state targetState
lastError error // reason for disconnection last time
tryAfter time.Time // next time to try this address
tryInterval time.Duration // retry delay on next failure
}
// The actor closure used by ConnectionMaker. If an action returns true, the
// ConnectionMaker will check the state of its targets, and reconnect to
// relevant candidates.
type connectionMakerAction func() bool
// newConnectionMaker returns a usable ConnectionMaker, seeded with
// peers, making outbound connections from localAddr, and listening on
// port. If discovery is true, ConnectionMaker will attempt to
// initiate new connections with peers it's not directly connected to.
func newConnectionMaker(ourself *localPeer, peers *Peers, localAddr string, port int, discovery bool, logger Logger) *connectionMaker {
actionChan := make(chan connectionMakerAction, ChannelSize)
cm := &connectionMaker{
ourself: ourself,
peers: peers,
localAddr: localAddr,
port: port,
discovery: discovery,
directPeers: peerAddrs{},
targets: make(map[string]*target),
connections: make(map[Connection]struct{}),
actionChan: actionChan,
logger: logger,
}
go cm.queryLoop(actionChan)
return cm
}
// InitiateConnections creates new connections to the provided peers,
// specified in host:port format. If replace is true, any existing direct
// peers are forgotten.
//
// TODO(pb): Weave Net invokes router.ConnectionMaker.InitiateConnections;
// it may be better to provide that on Router directly.
func (cm *connectionMaker) InitiateConnections(peers []string, replace bool) []error {
errors := []error{}
addrs := peerAddrs{}
for _, peer := range peers {
host, port, err := net.SplitHostPort(peer)
if err != nil {
host = peer
port = "0" // we use that as an indication that "no port was supplied"
}
if host == "" || !isAlnum(port) {
errors = append(errors, fmt.Errorf("invalid peer name %q, should be host[:port]", peer))
} else if addr, err := net.ResolveTCPAddr("tcp4", fmt.Sprintf("%s:%s", host, port)); err != nil {
errors = append(errors, err)
} else {
addrs[peer] = addr
}
}
cm.actionChan <- func() bool {
if replace {
cm.directPeers = peerAddrs{}
}
for peer, addr := range addrs {
cm.directPeers[peer] = addr
// curtail any existing reconnect interval
if target, found := cm.targets[cm.completeAddr(*addr)]; found {
target.nextTryNow()
}
}
return true
}
return errors
}
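// Illustrative only (not part of the vendored source): callers typically seed
// the maker with addresses from the command line, e.g.
//
//	errs := router.ConnectionMaker.InitiateConnections([]string{"10.0.0.1:6783", "10.0.0.2"}, true)
//	for _, err := range errs {
//		log.Print(err)
//	}
//
// A bare host is allowed: port 0 is used internally to mean "no port was
// supplied", and completeAddr later fills in the listener port.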
func isAlnum(s string) bool {
for _, c := range s {
if !unicode.In(c, unicode.Letter, unicode.Digit) {
return false
}
}
return true
}
// ForgetConnections removes direct connections to the provided peers,
// specified in host:port format.
//
// TODO(pb): Weave Net invokes router.ConnectionMaker.ForgetConnections;
// it may be better to provide that on Router directly.
func (cm *connectionMaker) ForgetConnections(peers []string) {
cm.actionChan <- func() bool {
for _, peer := range peers {
delete(cm.directPeers, peer)
}
return true
}
}
// Targets takes a snapshot of the targets (direct peers),
// either just the ones we are still trying, or all of them.
// Note these are the same things that InitiateConnections and ForgetConnections talk about,
// but a method to retrieve 'Connections' would obviously return the current connections.
func (cm *connectionMaker) Targets(activeOnly bool) []string {
resultChan := make(chan []string)
cm.actionChan <- func() bool {
var slice []string
for peer, addr := range cm.directPeers {
if activeOnly {
if target, ok := cm.targets[cm.completeAddr(*addr)]; ok && target.tryAfter.IsZero() {
continue
}
}
slice = append(slice, peer)
}
resultChan <- slice
return false
}
return <-resultChan
}
// connectionAborted marks the target identified by address as broken, and
// puts it in the TargetWaiting state.
func (cm *connectionMaker) connectionAborted(address string, err error) {
cm.actionChan <- func() bool {
target := cm.targets[address]
target.state = targetWaiting
target.lastError = err
target.nextTryLater()
return true
}
}
// connectionCreated registers the passed connection, and marks the target
// identified by conn.RemoteTCPAddr() as established, and puts it in the
// TargetConnected state.
func (cm *connectionMaker) connectionCreated(conn Connection) {
cm.actionChan <- func() bool {
cm.connections[conn] = struct{}{}
if conn.isOutbound() {
target := cm.targets[conn.remoteTCPAddress()]
target.state = targetConnected
}
return false
}
}
// connectionTerminated unregisters the passed connection, and marks the
// target identified by conn.RemoteTCPAddr() as Waiting.
func (cm *connectionMaker) connectionTerminated(conn Connection, err error) {
cm.actionChan <- func() bool {
if err != errConnectToSelf {
cm.terminationCount++
}
delete(cm.connections, conn)
if conn.isOutbound() {
target := cm.targets[conn.remoteTCPAddress()]
target.state = targetWaiting
target.lastError = err
_, peerNameCollision := err.(*peerNameCollisionError)
switch {
case peerNameCollision || err == errConnectToSelf:
target.nextTryNever()
case time.Now().After(target.tryAfter.Add(resetAfter)):
target.nextTryNow()
default:
target.nextTryLater()
}
}
return true
}
}
// refresh sends a no-op action into the ConnectionMaker, purely so that the
// ConnectionMaker will check the state of its targets and reconnect to
// relevant candidates.
func (cm *connectionMaker) refresh() {
cm.actionChan <- func() bool { return true }
}
func (cm *connectionMaker) queryLoop(actionChan <-chan connectionMakerAction) {
timer := time.NewTimer(maxDuration)
run := func() { timer.Reset(cm.checkStateAndAttemptConnections()) }
for {
select {
case action := <-actionChan:
if action() {
run()
}
case <-timer.C:
run()
}
}
}
func (cm *connectionMaker) completeAddr(addr net.TCPAddr) string {
if addr.Port == 0 {
addr.Port = cm.port
}
return addr.String()
}
func (cm *connectionMaker) checkStateAndAttemptConnections() time.Duration {
var (
validTarget = make(map[string]struct{})
directTarget = make(map[string]struct{})
)
ourConnectedPeers, ourConnectedTargets, ourInboundIPs := cm.ourConnections()
addTarget := func(address string) {
if _, connected := ourConnectedTargets[address]; connected {
return
}
validTarget[address] = struct{}{}
if _, found := cm.targets[address]; found {
return
}
tgt := &target{state: targetWaiting}
tgt.nextTryNow()
cm.targets[address] = tgt
}
// Add direct targets that are not connected
for _, addr := range cm.directPeers {
attempt := true
if addr.Port == 0 {
// If a peer was specified w/o a port, then we do not
// attempt to connect to it if we have any inbound
// connections from that IP.
if _, connected := ourInboundIPs[addr.IP.String()]; connected {
attempt = false
}
}
address := cm.completeAddr(*addr)
directTarget[address] = struct{}{}
if attempt {
addTarget(address)
}
}
// Add targets for peers that someone else is connected to, but we
// aren't
if cm.discovery {
cm.addPeerTargets(ourConnectedPeers, addTarget)
}
return cm.connectToTargets(validTarget, directTarget)
}
func (cm *connectionMaker) ourConnections() (peerNameSet, map[string]struct{}, map[string]struct{}) {
var (
ourConnectedPeers = make(peerNameSet)
ourConnectedTargets = make(map[string]struct{})
ourInboundIPs = make(map[string]struct{})
)
for conn := range cm.connections {
address := conn.remoteTCPAddress()
ourConnectedPeers[conn.Remote().Name] = struct{}{}
ourConnectedTargets[address] = struct{}{}
if conn.isOutbound() {
continue
}
if ip, _, err := net.SplitHostPort(address); err == nil { // should always succeed
ourInboundIPs[ip] = struct{}{}
}
}
return ourConnectedPeers, ourConnectedTargets, ourInboundIPs
}
func (cm *connectionMaker) addPeerTargets(ourConnectedPeers peerNameSet, addTarget func(string)) {
cm.peers.forEach(func(peer *Peer) {
if peer == cm.ourself.Peer {
return
}
// Modifying peer.connections requires a write lock on Peers,
// and since we are holding a read lock (due to the ForEach),
// access without locking the peer is safe.
for otherPeer, conn := range peer.connections {
if otherPeer == cm.ourself.Name {
continue
}
if _, connected := ourConnectedPeers[otherPeer]; connected {
continue
}
address := conn.remoteTCPAddress()
if conn.isOutbound() {
addTarget(address)
} else if ip, _, err := net.SplitHostPort(address); err == nil {
// There is no point connecting to the (likely
// ephemeral) remote port of an inbound connection
// that some peer has. Let's try to connect on the
// weave port instead.
addTarget(fmt.Sprintf("%s:%d", ip, cm.port))
}
}
})
}
func (cm *connectionMaker) connectToTargets(validTarget map[string]struct{}, directTarget map[string]struct{}) time.Duration {
now := time.Now() // make sure we catch items just added
after := maxDuration
for address, target := range cm.targets {
if target.state != targetWaiting && target.state != targetSuspended {
continue
}
if _, valid := validTarget[address]; !valid {
// Not valid: suspend reconnects if direct peer,
// otherwise forget this target entirely
if _, direct := directTarget[address]; direct {
target.state = targetSuspended
} else {
delete(cm.targets, address)
}
continue
}
if target.tryAfter.IsZero() {
continue
}
target.state = targetWaiting
switch duration := target.tryAfter.Sub(now); {
case duration <= 0:
target.state = targetAttempting
_, isCmdLineTarget := directTarget[address]
go cm.attemptConnection(address, isCmdLineTarget)
case duration < after:
after = duration
}
}
return after
}
func (cm *connectionMaker) attemptConnection(address string, acceptNewPeer bool) {
cm.logger.Printf("->[%s] attempting connection", address)
if err := cm.ourself.createConnection(cm.localAddr, address, acceptNewPeer, cm.logger); err != nil {
cm.logger.Printf("->[%s] error during connection attempt: %v", address, err)
cm.connectionAborted(address, err)
}
}
func (t *target) nextTryNever() {
t.tryAfter = time.Time{}
t.tryInterval = maxInterval
}
func (t *target) nextTryNow() {
t.tryAfter = time.Now()
t.tryInterval = initialInterval
}
// The delay at the nth retry is a random value in the range
// [i-i/2,i+i/2], where i = InitialInterval * 1.5^(n-1).
func (t *target) nextTryLater() {
t.tryAfter = time.Now().Add(t.tryInterval/2 + time.Duration(rand.Int63n(int64(t.tryInterval))))
t.tryInterval = t.tryInterval * 3 / 2
if t.tryInterval > maxInterval {
t.tryInterval = maxInterval
}
}
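// Illustrative only (not part of the vendored source): each delay is drawn
// uniformly from [i/2, 3i/2), where i starts at initialInterval (2s) and grows
// by 1.5x per failure, capped at maxInterval (6m). So the expected schedule is
// roughly 2s, 3s, 4.5s, 6.75s, 10.1s, ... until it flattens out at 6m.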


@ -0,0 +1,51 @@
# Increment-only counter
This example implements an in-memory increment-only counter.
It is a state-based CRDT: the only write operation is `incr()`.
## Demo
Start several peers on the same host.
Tell the second and subsequent peers to connect to the first one.
```
$ ./increment-only-counter -hwaddr 00:00:00:00:00:01 -nickname a -mesh :6001 -http :8001 &
$ ./increment-only-counter -hwaddr 00:00:00:00:00:02 -nickname b -mesh :6002 -http :8002 -peer 127.0.0.1:6001 &
$ ./increment-only-counter -hwaddr 00:00:00:00:00:03 -nickname c -mesh :6003 -http :8003 -peer 127.0.0.1:6001 &
```
Get the current value using the HTTP API of any peer.
```
$ curl -Ss -XGET "http://localhost:8002/"
get => 0
```
Increment the value:
```
$ curl -Ss -XPOST "http://localhost:8003/"
incr => 1
```
Get the current value from another peer:
```
$ curl -Ss -XGET "http://localhost:8001/"
get => 1
```
Increment again:
```
$ curl -Ss -XPOST "http://localhost:8002/"
incr => 2
```
And get the current value from a different peer:
```
$ curl -Ss -XGET "http://localhost:8003/"
get => 2
```
## Implementation
- [The state object](/examples/increment-only-counter/state.go) implements `GossipData`.
- [The peer object](/examples/increment-only-counter/peer.go) implements `Gossiper`.
- [The func main](/examples/increment-only-counter/main.go) wires the components together.
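
The wiring between these pieces is small. A condensed sketch (flag parsing and error handling elided; see [the func main](/examples/increment-only-counter/main.go) for the real thing):
```
peer := newPeer(name, logger)              // Gossiper with empty state
gossip := router.NewGossip(*channel, peer) // join the gossip channel
peer.register(gossip)                      // hand the peer its outbound sender
router.Start()
defer router.Stop()
```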


@ -0,0 +1,144 @@
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"os/signal"
"sort"
"strconv"
"strings"
"syscall"
"github.com/weaveworks/mesh"
)
func main() {
peers := &stringset{}
var (
httpListen = flag.String("http", ":8080", "HTTP listen address")
meshListen = flag.String("mesh", net.JoinHostPort("0.0.0.0", strconv.Itoa(mesh.Port)), "mesh listen address")
hwaddr = flag.String("hwaddr", mustHardwareAddr(), "MAC address, i.e. mesh peer ID")
nickname = flag.String("nickname", mustHostname(), "peer nickname")
password = flag.String("password", "", "password (optional)")
channel = flag.String("channel", "default", "gossip channel name")
)
flag.Var(peers, "peer", "initial peer (may be repeated)")
flag.Parse()
logger := log.New(os.Stderr, *nickname+"> ", log.LstdFlags)
host, portStr, err := net.SplitHostPort(*meshListen)
if err != nil {
logger.Fatalf("mesh address: %s: %v", *meshListen, err)
}
port, err := strconv.Atoi(portStr)
if err != nil {
logger.Fatalf("mesh address: %s: %v", *meshListen, err)
}
name, err := mesh.PeerNameFromString(*hwaddr)
if err != nil {
logger.Fatalf("%s: %v", *hwaddr, err)
}
router := mesh.NewRouter(mesh.Config{
Host: host,
Port: port,
ProtocolMinVersion: mesh.ProtocolMinVersion,
Password: []byte(*password),
ConnLimit: 64,
PeerDiscovery: true,
TrustedSubnets: []*net.IPNet{},
}, name, *nickname, mesh.NullOverlay{}, log.New(ioutil.Discard, "", 0))
peer := newPeer(name, logger)
gossip := router.NewGossip(*channel, peer)
peer.register(gossip)
func() {
logger.Printf("mesh router starting (%s)", *meshListen)
router.Start()
}()
defer func() {
logger.Printf("mesh router stopping")
router.Stop()
}()
router.ConnectionMaker.InitiateConnections(peers.slice(), true)
errs := make(chan error)
go func() {
c := make(chan os.Signal, 1) // buffered, so signal.Notify's send can't be dropped
signal.Notify(c, syscall.SIGINT)
errs <- fmt.Errorf("%s", <-c)
}()
go func() {
logger.Printf("HTTP server starting (%s)", *httpListen)
http.HandleFunc("/", handle(peer))
errs <- http.ListenAndServe(*httpListen, nil)
}()
logger.Print(<-errs)
}
type counter interface {
get() int
incr() int
}
func handle(c counter) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case "GET":
fmt.Fprintf(w, "get => %d\n", c.get())
case "POST":
fmt.Fprintf(w, "incr => %d\n", c.incr())
}
}
}
type stringset map[string]struct{}
func (ss stringset) Set(value string) error {
ss[value] = struct{}{}
return nil
}
func (ss stringset) String() string {
return strings.Join(ss.slice(), ",")
}
func (ss stringset) slice() []string {
slice := make([]string, 0, len(ss))
for k := range ss {
slice = append(slice, k)
}
sort.Strings(slice)
return slice
}
func mustHardwareAddr() string {
ifaces, err := net.Interfaces()
if err != nil {
panic(err)
}
for _, iface := range ifaces {
if s := iface.HardwareAddr.String(); s != "" {
return s
}
}
panic("no valid network interfaces")
}
func mustHostname() string {
hostname, err := os.Hostname()
if err != nil {
panic(err)
}
return hostname
}


@ -0,0 +1,136 @@
package main
import (
"log"
"bytes"
"encoding/gob"
"github.com/weaveworks/mesh"
)
// Peer encapsulates state and implements mesh.Gossiper.
// It should be passed to mesh.Router.NewGossip,
// and the resulting Gossip registered in turn,
// before calling mesh.Router.Start.
type peer struct {
st *state
send mesh.Gossip
actions chan<- func()
quit chan struct{}
logger *log.Logger
}
// peer implements mesh.Gossiper.
var _ mesh.Gossiper = &peer{}
// Construct a peer with empty state.
// Be sure to register a channel, later,
// so we can make outbound communication.
func newPeer(self mesh.PeerName, logger *log.Logger) *peer {
actions := make(chan func())
p := &peer{
st: newState(self),
send: nil, // must .register() later
actions: actions,
quit: make(chan struct{}),
logger: logger,
}
go p.loop(actions)
return p
}
func (p *peer) loop(actions <-chan func()) {
for {
select {
case f := <-actions:
f()
case <-p.quit:
return
}
}
}
// register the result of a mesh.Router.NewGossip.
func (p *peer) register(send mesh.Gossip) {
p.actions <- func() { p.send = send }
}
// Return the current value of the counter.
func (p *peer) get() int {
return p.st.get()
}
// Increment the counter by one.
func (p *peer) incr() (result int) {
c := make(chan struct{})
p.actions <- func() {
defer close(c)
st := p.st.incr()
if p.send != nil {
p.send.GossipBroadcast(st)
} else {
p.logger.Printf("no sender configured; not broadcasting update right now")
}
result = st.get()
}
<-c
return result
}
func (p *peer) stop() {
close(p.quit)
}
// Return a copy of our complete state.
func (p *peer) Gossip() (complete mesh.GossipData) {
complete = p.st.copy()
p.logger.Printf("Gossip => complete %v", complete.(*state).set)
return complete
}
// Merge the gossiped data represented by buf into our state.
// Return the state information that was modified.
func (p *peer) OnGossip(buf []byte) (delta mesh.GossipData, err error) {
var set map[mesh.PeerName]int
if err := gob.NewDecoder(bytes.NewReader(buf)).Decode(&set); err != nil {
return nil, err
}
delta = p.st.mergeDelta(set)
if delta == nil {
p.logger.Printf("OnGossip %v => delta %v", set, delta)
} else {
p.logger.Printf("OnGossip %v => delta %v", set, delta.(*state).set)
}
return delta, nil
}
// Merge the gossiped data represented by buf into our state.
// Return the state information that was modified.
func (p *peer) OnGossipBroadcast(src mesh.PeerName, buf []byte) (received mesh.GossipData, err error) {
var set map[mesh.PeerName]int
if err := gob.NewDecoder(bytes.NewReader(buf)).Decode(&set); err != nil {
return nil, err
}
received = p.st.mergeReceived(set)
if received == nil {
p.logger.Printf("OnGossipBroadcast %s %v => delta %v", src, set, received)
} else {
p.logger.Printf("OnGossipBroadcast %s %v => delta %v", src, set, received.(*state).set)
}
return received, nil
}
// Merge the gossiped data represented by buf into our state.
func (p *peer) OnGossipUnicast(src mesh.PeerName, buf []byte) error {
var set map[mesh.PeerName]int
if err := gob.NewDecoder(bytes.NewReader(buf)).Decode(&set); err != nil {
return err
}
complete := p.st.mergeComplete(set)
p.logger.Printf("OnGossipUnicast %s %v => complete %v", src, set, complete)
return nil
}


@ -0,0 +1,134 @@
package main
import (
"bytes"
"encoding/gob"
"io/ioutil"
"log"
"reflect"
"testing"
"github.com/weaveworks/mesh"
)
func TestPeerOnGossip(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
msg map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1},
map[mesh.PeerName]int{123: 0, 456: 2},
map[mesh.PeerName]int{456: 2},
},
{
map[mesh.PeerName]int{123: 9},
map[mesh.PeerName]int{123: 8},
nil,
},
} {
p := newPeer(mesh.PeerName(999), log.New(ioutil.Discard, "", 0))
p.st.mergeComplete(testcase.initial)
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(testcase.msg); err != nil {
t.Fatal(err)
}
delta, err := p.OnGossip(buf.Bytes())
if err != nil {
t.Errorf("%v OnGossip %v: %v", testcase.initial, testcase.msg, err)
continue
}
if want := testcase.want; want == nil {
if delta != nil {
t.Errorf("%v OnGossip %v: want nil, have non-nil", testcase.initial, testcase.msg)
}
} else {
if have := delta.(*state).set; !reflect.DeepEqual(want, have) {
t.Errorf("%v OnGossip %v: want %v, have %v", testcase.initial, testcase.msg, want, have)
}
}
}
}
func TestPeerOnGossipBroadcast(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
msg map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1},
map[mesh.PeerName]int{123: 0, 456: 2},
map[mesh.PeerName]int{456: 2},
},
{
map[mesh.PeerName]int{123: 9},
map[mesh.PeerName]int{123: 8},
map[mesh.PeerName]int{}, // OnGossipBroadcast returns received, which should never be nil
},
} {
p := newPeer(999, log.New(ioutil.Discard, "", 0))
p.st.mergeComplete(testcase.initial)
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(testcase.msg); err != nil {
t.Fatal(err)
}
delta, err := p.OnGossipBroadcast(mesh.UnknownPeerName, buf.Bytes())
if err != nil {
t.Errorf("%v OnGossipBroadcast %v: %v", testcase.initial, testcase.msg, err)
continue
}
if want, have := testcase.want, delta.(*state).set; !reflect.DeepEqual(want, have) {
t.Errorf("%v OnGossipBroadcast %v: want %v, have %v", testcase.initial, testcase.msg, want, have)
}
}
}
func TestPeerOnGossipUnicast(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
msg map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1},
map[mesh.PeerName]int{123: 0, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 9},
map[mesh.PeerName]int{123: 8},
map[mesh.PeerName]int{123: 9},
},
} {
p := newPeer(999, log.New(ioutil.Discard, "", 0))
p.st.mergeComplete(testcase.initial)
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(testcase.msg); err != nil {
t.Fatal(err)
}
if err := p.OnGossipUnicast(mesh.UnknownPeerName, buf.Bytes()); err != nil {
t.Errorf("%v OnGossipBroadcast %v: %v", testcase.initial, testcase.msg, err)
continue
}
if want, have := testcase.want, p.st.set; !reflect.DeepEqual(want, have) {
t.Errorf("%v OnGossipBroadcast %v: want %v, have %v", testcase.initial, testcase.msg, want, have)
}
}
}


@ -0,0 +1,133 @@
package main
import (
"bytes"
"sync"
"encoding/gob"
"github.com/weaveworks/mesh"
)
// state is an implementation of a G-counter.
type state struct {
mtx sync.RWMutex
set map[mesh.PeerName]int
self mesh.PeerName
}
// state implements GossipData.
var _ mesh.GossipData = &state{}
// Construct an empty state object, ready to receive updates.
// This is suitable to use at program start.
// Other peers will populate us with data.
func newState(self mesh.PeerName) *state {
return &state{
set: map[mesh.PeerName]int{},
self: self,
}
}
func (st *state) get() (result int) {
st.mtx.RLock()
defer st.mtx.RUnlock()
for _, v := range st.set {
result += v
}
return result
}
func (st *state) incr() (complete *state) {
st.mtx.Lock()
defer st.mtx.Unlock()
st.set[st.self]++
return &state{
set: st.set,
}
}
func (st *state) copy() *state {
st.mtx.RLock()
defer st.mtx.RUnlock()
return &state{
set: st.set,
}
}
// Encode serializes our complete state to a slice of byte-slices.
// In this simple example, we use a single gob-encoded
// buffer: see https://golang.org/pkg/encoding/gob/
func (st *state) Encode() [][]byte {
st.mtx.RLock()
defer st.mtx.RUnlock()
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(st.set); err != nil {
panic(err)
}
return [][]byte{buf.Bytes()}
}
// Merge merges the other GossipData into this one,
// and returns our resulting, complete state.
func (st *state) Merge(other mesh.GossipData) (complete mesh.GossipData) {
return st.mergeComplete(other.(*state).copy().set)
}
// Merge the set into our state, abiding increment-only semantics.
// Return a non-nil mesh.GossipData representation of the received set.
func (st *state) mergeReceived(set map[mesh.PeerName]int) (received mesh.GossipData) {
st.mtx.Lock()
defer st.mtx.Unlock()
for peer, v := range set {
if v <= st.set[peer] {
delete(set, peer) // optimization: make the forwarded data smaller
continue
}
st.set[peer] = v
}
return &state{
set: set, // all remaining elements were novel to us
}
}
// Merge the set into our state, abiding increment-only semantics.
// Return any key/values that have been mutated, or nil if nothing changed.
func (st *state) mergeDelta(set map[mesh.PeerName]int) (delta mesh.GossipData) {
st.mtx.Lock()
defer st.mtx.Unlock()
for peer, v := range set {
if v <= st.set[peer] {
delete(set, peer) // requirement: it's not part of a delta
continue
}
st.set[peer] = v
}
if len(set) == 0 {
return nil // per OnGossip requirements
}
return &state{
set: set, // all remaining elements were novel to us
}
}
// Merge the set into our state, abiding increment-only semantics.
// Return our resulting, complete state.
func (st *state) mergeComplete(set map[mesh.PeerName]int) (complete mesh.GossipData) {
st.mtx.Lock()
defer st.mtx.Unlock()
for peer, v := range set {
if v > st.set[peer] {
st.set[peer] = v
}
}
return &state{
set: st.set, // n.b. can't .copy() here: copy() takes RLock while we hold Lock
}
}
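// Illustrative only (not part of the vendored source): a worked example of
// the G-counter semantics. Each peer only ever raises its own slot, merges
// take the per-peer max, and get() reports the sum, so merging is
// commutative, associative, and idempotent:
//
//	A: {A: 2, B: 1}   B: {A: 1, B: 3}
//	merge => {A: 2, B: 3}
//	get() == 5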


@ -0,0 +1,118 @@
package main
import (
"reflect"
"testing"
"github.com/weaveworks/mesh"
)
func TestStateMergeReceived(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
merge map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{789: 3},
map[mesh.PeerName]int{789: 3},
},
{
map[mesh.PeerName]int{456: 3},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1}, // we drop keys that don't semantically merge
},
} {
initial, merge := testcase.initial, testcase.merge // mergeReceived modifies arguments
delta := newState(999).mergeComplete(initial).(*state).mergeReceived(merge)
if want, have := testcase.want, delta.(*state).set; !reflect.DeepEqual(want, have) {
t.Errorf("%v mergeReceived %v: want %v, have %v", testcase.initial, testcase.merge, want, have)
}
}
}
func TestStateMergeDelta(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
merge map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
nil,
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{789: 3},
map[mesh.PeerName]int{789: 3},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{456: 3},
map[mesh.PeerName]int{456: 3},
},
} {
initial, merge := testcase.initial, testcase.merge // mergeDelta modifies arguments
delta := newState(999).mergeComplete(initial).(*state).mergeDelta(merge)
if want := testcase.want; want == nil {
if delta != nil {
t.Errorf("%v mergeDelta %v: want nil, have non-nil", testcase.initial, testcase.merge)
}
} else {
if have := delta.(*state).set; !reflect.DeepEqual(want, have) {
t.Errorf("%v mergeDelta %v: want %v, have %v", testcase.initial, testcase.merge, want, have)
}
}
}
}
func TestStateMergeComplete(t *testing.T) {
for _, testcase := range []struct {
initial map[mesh.PeerName]int
merge map[mesh.PeerName]int
want map[mesh.PeerName]int
}{
{
map[mesh.PeerName]int{},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 1, 456: 2},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{789: 3},
map[mesh.PeerName]int{123: 1, 456: 2, 789: 3},
},
{
map[mesh.PeerName]int{123: 1, 456: 2},
map[mesh.PeerName]int{123: 0, 456: 3},
map[mesh.PeerName]int{123: 1, 456: 3},
},
} {
st := newState(999).mergeComplete(testcase.initial).(*state).mergeComplete(testcase.merge).(*state)
if want, have := testcase.want, st.set; !reflect.DeepEqual(want, have) {
t.Errorf("%v mergeComplete %v: want %v, have %v", testcase.initial, testcase.merge, want, have)
}
}
}

269
vendor/github.com/weaveworks/mesh/gossip.go generated vendored Normal file

@ -0,0 +1,269 @@
package mesh
import "sync"
// Gossip is the sending interface.
//
// TODO(pb): rename to e.g. Sender
type Gossip interface {
// GossipUnicast emits a single message to a peer in the mesh.
//
// TODO(pb): rename to Unicast?
//
// Unicast takes []byte instead of GossipData because "to date there has
// been no compelling reason [in practice] to do merging on unicast."
// But there may be some motivation to have unicast Mergeable; see
// https://github.com/weaveworks/weave/issues/1764
//
// TODO(pb): for uniformity of interface, rather take GossipData?
GossipUnicast(dst PeerName, msg []byte) error
// GossipBroadcast emits a message to all peers in the mesh.
//
// TODO(pb): rename to Broadcast?
GossipBroadcast(update GossipData)
}
// Gossiper is the receiving interface.
//
// TODO(pb): rename to e.g. Receiver
type Gossiper interface {
// OnGossipUnicast merges received data into state.
//
// TODO(pb): rename to e.g. OnUnicast
OnGossipUnicast(src PeerName, msg []byte) error
// OnGossipBroadcast merges received data into state and returns a
// representation of the received data (typically a delta) for further
// propagation.
//
// TODO(pb): rename to e.g. OnBroadcast
OnGossipBroadcast(src PeerName, update []byte) (received GossipData, err error)
// Gossip returns the state of everything we know; gets called periodically.
Gossip() (complete GossipData)
// OnGossip merges received data into state and returns "everything new
// I've just learnt", or nil if nothing in the received data was new.
OnGossip(msg []byte) (delta GossipData, err error)
}
// GossipData is a merge-able dataset.
// Think: log-structured data.
type GossipData interface {
// Encode encodes the data into multiple byte-slices.
Encode() [][]byte
// Merge combines another GossipData into this one and returns the result.
//
// TODO(pb): does it need to leave the original unmodified?
Merge(GossipData) GossipData
}
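// Illustrative sketch (an assumption, not part of the mesh API): a
// minimal GossipData over a set of strings. Encode emits one byte
// slice per element; Merge unions both sets into a fresh value,
// leaving the operands unmodified (the safe reading of the TODO above).
type stringSet map[string]struct{}

func (s stringSet) Encode() [][]byte {
	bufs := make([][]byte, 0, len(s))
	for k := range s {
		bufs = append(bufs, []byte(k))
	}
	return bufs
}

func (s stringSet) Merge(other GossipData) GossipData {
	merged := make(stringSet, len(s))
	for k := range s {
		merged[k] = struct{}{}
	}
	for k := range other.(stringSet) {
		merged[k] = struct{}{}
	}
	return merged
}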
// GossipSender accumulates GossipData that needs to be sent to one
// destination, and sends it when possible. GossipSender is one-to-one with a
// channel.
type gossipSender struct {
sync.Mutex
makeMsg func(msg []byte) protocolMsg
makeBroadcastMsg func(srcName PeerName, msg []byte) protocolMsg
sender protocolSender
gossip GossipData
broadcasts map[PeerName]GossipData
more chan<- struct{}
flush chan<- chan<- bool // for testing
}
// NewGossipSender constructs a usable GossipSender.
func newGossipSender(
makeMsg func(msg []byte) protocolMsg,
makeBroadcastMsg func(srcName PeerName, msg []byte) protocolMsg,
sender protocolSender,
stop <-chan struct{},
) *gossipSender {
more := make(chan struct{}, 1)
flush := make(chan chan<- bool)
s := &gossipSender{
makeMsg: makeMsg,
makeBroadcastMsg: makeBroadcastMsg,
sender: sender,
broadcasts: make(map[PeerName]GossipData),
more: more,
flush: flush,
}
go s.run(stop, more, flush)
return s
}
func (s *gossipSender) run(stop <-chan struct{}, more <-chan struct{}, flush <-chan chan<- bool) {
sent := false
for {
select {
case <-stop:
return
case <-more:
sentSomething, err := s.deliver(stop)
if err != nil {
return
}
sent = sent || sentSomething
case ch := <-flush: // for testing
// send anything pending, then reply back whether we sent
// anything since previous flush
select {
case <-more:
sentSomething, err := s.deliver(stop)
if err != nil {
return
}
sent = sent || sentSomething
default:
}
ch <- sent
sent = false
}
}
}
func (s *gossipSender) deliver(stop <-chan struct{}) (bool, error) {
sent := false
// We must not hold our lock when sending, since that would block
// the callers of Send/Broadcast while we are stuck waiting for
// network congestion to clear. So we pick and send one piece of
// data at a time, only holding the lock during the picking.
for {
select {
case <-stop:
return sent, nil
default:
}
data, makeProtocolMsg := s.pick()
if data == nil {
return sent, nil
}
for _, msg := range data.Encode() {
if err := s.sender.SendProtocolMsg(makeProtocolMsg(msg)); err != nil {
return sent, err
}
}
sent = true
}
}
func (s *gossipSender) pick() (data GossipData, makeProtocolMsg func(msg []byte) protocolMsg) {
s.Lock()
defer s.Unlock()
switch {
case s.gossip != nil: // usually more important than broadcasts
data = s.gossip
makeProtocolMsg = s.makeMsg
s.gossip = nil
case len(s.broadcasts) > 0:
for srcName, d := range s.broadcasts {
data = d
makeProtocolMsg = func(msg []byte) protocolMsg { return s.makeBroadcastMsg(srcName, msg) }
delete(s.broadcasts, srcName)
break
}
}
return
}
// Send accumulates the GossipData and will send it eventually.
// Send and Broadcast accumulate into different buckets.
func (s *gossipSender) Send(data GossipData) {
s.Lock()
defer s.Unlock()
if s.empty() {
defer s.prod()
}
if s.gossip == nil {
s.gossip = data
} else {
s.gossip = s.gossip.Merge(data)
}
}
// Broadcast accumulates the GossipData under the given srcName and will send
// it eventually. Send and Broadcast accumulate into different buckets.
func (s *gossipSender) Broadcast(srcName PeerName, data GossipData) {
s.Lock()
defer s.Unlock()
if s.empty() {
defer s.prod()
}
d, found := s.broadcasts[srcName]
if !found {
s.broadcasts[srcName] = data
} else {
s.broadcasts[srcName] = d.Merge(data)
}
}
func (s *gossipSender) empty() bool { return s.gossip == nil && len(s.broadcasts) == 0 }
func (s *gossipSender) prod() {
select {
case s.more <- struct{}{}:
default:
}
}
// Flush sends all pending data, and returns true if anything was sent since
// the previous flush. For testing.
func (s *gossipSender) Flush() bool {
ch := make(chan bool)
s.flush <- ch
return <-ch
}
// gossipSenders wraps a ProtocolSender (e.g. a LocalConnection) and yields
// per-channel GossipSenders.
// TODO(pb): may be able to remove this and use makeGossipSender directly
type gossipSenders struct {
sync.Mutex
sender protocolSender
stop <-chan struct{}
senders map[string]*gossipSender
}
// NewGossipSenders returns a usable GossipSenders leveraging the ProtocolSender.
// TODO(pb): is stop chan the best way to do that?
func newGossipSenders(sender protocolSender, stop <-chan struct{}) *gossipSenders {
return &gossipSenders{
sender: sender,
stop: stop,
senders: make(map[string]*gossipSender),
}
}
// Sender yields the GossipSender for the named channel.
// It will use the factory function if no sender yet exists.
func (gs *gossipSenders) Sender(channelName string, makeGossipSender func(sender protocolSender, stop <-chan struct{}) *gossipSender) *gossipSender {
gs.Lock()
defer gs.Unlock()
s, found := gs.senders[channelName]
if !found {
s = makeGossipSender(gs.sender, gs.stop)
gs.senders[channelName] = s
}
return s
}
// Flush flushes all managed senders. Used for testing.
func (gs *gossipSenders) Flush() bool {
sent := false
gs.Lock()
defer gs.Unlock()
for _, sender := range gs.senders {
sent = sender.Flush() || sent
}
return sent
}
// GossipChannels is an index of channel name to gossip channel.
type gossipChannels map[string]*gossipChannel
type gossipConnection interface {
gossipSenders() *gossipSenders
}

152
vendor/github.com/weaveworks/mesh/gossip_channel.go generated vendored Normal file
View File

@ -0,0 +1,152 @@
package mesh
import (
"bytes"
"encoding/gob"
"fmt"
)
// gossipChannel is a logical communication channel within a physical mesh.
type gossipChannel struct {
name string
ourself *localPeer
routes *routes
gossiper Gossiper
logger Logger
}
// newGossipChannel returns a named, usable channel.
// It delegates receiving duties to the passed Gossiper.
func newGossipChannel(channelName string, ourself *localPeer, r *routes, g Gossiper, logger Logger) *gossipChannel {
return &gossipChannel{
name: channelName,
ourself: ourself,
routes: r,
gossiper: g,
logger: logger,
}
}
func (c *gossipChannel) deliverUnicast(srcName PeerName, origPayload []byte, dec *gob.Decoder) error {
var destName PeerName
if err := dec.Decode(&destName); err != nil {
return err
}
if c.ourself.Name == destName {
var payload []byte
if err := dec.Decode(&payload); err != nil {
return err
}
return c.gossiper.OnGossipUnicast(srcName, payload)
}
if err := c.relayUnicast(destName, origPayload); err != nil {
c.logf("%v", err)
}
return nil
}
func (c *gossipChannel) deliverBroadcast(srcName PeerName, _ []byte, dec *gob.Decoder) error {
var payload []byte
if err := dec.Decode(&payload); err != nil {
return err
}
data, err := c.gossiper.OnGossipBroadcast(srcName, payload)
if err != nil || data == nil {
return err
}
c.relayBroadcast(srcName, data)
return nil
}
func (c *gossipChannel) deliver(srcName PeerName, _ []byte, dec *gob.Decoder) error {
var payload []byte
if err := dec.Decode(&payload); err != nil {
return err
}
update, err := c.gossiper.OnGossip(payload)
if err != nil || update == nil {
return err
}
c.relay(srcName, update)
return nil
}
// GossipUnicast implements Gossip, relaying msg to dst, which must be a
// member of the channel.
func (c *gossipChannel) GossipUnicast(dstPeerName PeerName, msg []byte) error {
return c.relayUnicast(dstPeerName, gobEncode(c.name, c.ourself.Name, dstPeerName, msg))
}
// GossipBroadcast implements Gossip, relaying update to all members of the
// channel.
func (c *gossipChannel) GossipBroadcast(update GossipData) {
c.relayBroadcast(c.ourself.Name, update)
}
// Send relays data into the channel topology via random neighbours.
func (c *gossipChannel) Send(data GossipData) {
c.relay(c.ourself.Name, data)
}
// SendDown relays data into the channel topology via conn.
func (c *gossipChannel) SendDown(conn Connection, data GossipData) {
c.senderFor(conn).Send(data)
}
func (c *gossipChannel) relayUnicast(dstPeerName PeerName, buf []byte) (err error) {
if relayPeerName, found := c.routes.UnicastAll(dstPeerName); !found {
err = fmt.Errorf("unknown relay destination: %s", dstPeerName)
} else if conn, found := c.ourself.ConnectionTo(relayPeerName); !found {
err = fmt.Errorf("unable to find connection to relay peer %s", relayPeerName)
} else {
err = conn.(protocolSender).SendProtocolMsg(protocolMsg{ProtocolGossipUnicast, buf})
}
return err
}
func (c *gossipChannel) relayBroadcast(srcName PeerName, update GossipData) {
c.routes.ensureRecalculated()
for _, conn := range c.ourself.ConnectionsTo(c.routes.BroadcastAll(srcName)) {
c.senderFor(conn).Broadcast(srcName, update)
}
}
func (c *gossipChannel) relay(srcName PeerName, data GossipData) {
c.routes.ensureRecalculated()
for _, conn := range c.ourself.ConnectionsTo(c.routes.randomNeighbours(srcName)) {
c.senderFor(conn).Send(data)
}
}
func (c *gossipChannel) senderFor(conn Connection) *gossipSender {
return conn.(gossipConnection).gossipSenders().Sender(c.name, c.makeGossipSender)
}
func (c *gossipChannel) makeGossipSender(sender protocolSender, stop <-chan struct{}) *gossipSender {
return newGossipSender(c.makeMsg, c.makeBroadcastMsg, sender, stop)
}
func (c *gossipChannel) makeMsg(msg []byte) protocolMsg {
return protocolMsg{ProtocolGossip, gobEncode(c.name, c.ourself.Name, msg)}
}
func (c *gossipChannel) makeBroadcastMsg(srcName PeerName, msg []byte) protocolMsg {
return protocolMsg{ProtocolGossipBroadcast, gobEncode(c.name, srcName, msg)}
}
func (c *gossipChannel) logf(format string, args ...interface{}) {
format = "[gossip " + c.name + "]: " + format
c.logger.Printf(format, args...)
}
// GobEncode gob-encodes each item and returns the resulting byte slice.
func gobEncode(items ...interface{}) []byte {
buf := new(bytes.Buffer)
enc := gob.NewEncoder(buf)
for _, i := range items {
if err := enc.Encode(i); err != nil {
panic(err)
}
}
return buf.Bytes()
}
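// Hedged sketch of the decode side (illustrative, not part of this
// file): the receiving router decodes the channel name and the source
// peer name first, then hands the remaining gob stream to the
// channel's deliver* methods above, which decode the payload.
func gobDecodeHeader(buf []byte) (channelName string, srcName PeerName, dec *gob.Decoder, err error) {
	dec = gob.NewDecoder(bytes.NewReader(buf))
	if err = dec.Decode(&channelName); err != nil {
		return
	}
	err = dec.Decode(&srcName)
	return
}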

256
vendor/github.com/weaveworks/mesh/gossip_test.go generated vendored Normal file
View File

@ -0,0 +1,256 @@
package mesh
import (
"fmt"
"io/ioutil"
"log"
"sync"
"testing"
"github.com/stretchr/testify/require"
)
// TODO test gossip unicast; atm we only test topology gossip and
// surrogates, neither of which employ unicast.
type mockGossipConnection struct {
remoteConnection
dest *Router
senders *gossipSenders
start chan struct{}
}
var _ gossipConnection = &mockGossipConnection{}
func newTestRouter(name string) *Router {
peerName, _ := PeerNameFromString(name)
router := NewRouter(Config{}, peerName, "nick", nil, log.New(ioutil.Discard, "", 0))
router.Start()
return router
}
func (conn *mockGossipConnection) breakTie(dupConn ourConnection) connectionTieBreak {
return tieBreakTied
}
func (conn *mockGossipConnection) shutdown(err error) {
}
func (conn *mockGossipConnection) logf(format string, args ...interface{}) {
format = "->[" + conn.remoteTCPAddr + "|" + conn.remote.String() + "]: " + format
if len(format) == 0 || format[len(format)-1] != '\n' {
format += "\n"
}
fmt.Printf(format, args...)
}
func (conn *mockGossipConnection) SendProtocolMsg(pm protocolMsg) error {
<-conn.start
return conn.dest.handleGossip(pm.tag, pm.msg)
}
func (conn *mockGossipConnection) gossipSenders() *gossipSenders {
return conn.senders
}
func (conn *mockGossipConnection) Start() {
close(conn.start)
}
func sendPendingGossip(routers ...*Router) {
// Loop until all routers report they didn't send anything
for sentSomething := true; sentSomething; {
sentSomething = false
for _, router := range routers {
sentSomething = router.sendPendingGossip() || sentSomething
}
}
}
func addTestGossipConnection(r1, r2 *Router) {
c1 := r1.newTestGossipConnection(r2)
c2 := r2.newTestGossipConnection(r1)
c1.Start()
c2.Start()
}
func (router *Router) newTestGossipConnection(r *Router) *mockGossipConnection {
to := r.Ourself.Peer
toPeer := newPeer(to.Name, to.NickName, to.UID, 0, to.ShortID)
toPeer = router.Peers.fetchWithDefault(toPeer) // Has side-effect of incrementing refcount
conn := &mockGossipConnection{
remoteConnection: *newRemoteConnection(router.Ourself.Peer, toPeer, "", false, true),
dest: r,
start: make(chan struct{}),
}
conn.senders = newGossipSenders(conn, make(chan struct{}))
router.Ourself.handleAddConnection(conn, false)
router.Ourself.handleConnectionEstablished(conn)
return conn
}
func (router *Router) DeleteTestGossipConnection(r *Router) {
toName := r.Ourself.Peer.Name
conn, _ := router.Ourself.ConnectionTo(toName)
router.Peers.dereference(conn.Remote())
router.Ourself.handleDeleteConnection(conn.(ourConnection))
}
// Create a Peer representing the receiver router, with connections to
// the routers supplied as arguments, carrying across all UID and
// version information.
func (router *Router) tp(routers ...*Router) *Peer {
peer := newPeerFrom(router.Ourself.Peer)
connections := make(map[PeerName]Connection)
for _, r := range routers {
p := newPeerFrom(r.Ourself.Peer)
connections[r.Ourself.Peer.Name] = newMockConnection(peer, p)
}
peer.Version = router.Ourself.Peer.Version
peer.connections = connections
return peer
}
// Check that the topology of router matches the peers and all of their connections
func checkTopology(t *testing.T, router *Router, wantedPeers ...*Peer) {
router.Peers.RLock()
checkTopologyPeers(t, true, router.Peers.allPeers(), wantedPeers...)
router.Peers.RUnlock()
}
func flushAndCheckTopology(t *testing.T, routers []*Router, wantedPeers ...*Peer) {
sendPendingGossip(routers...)
for _, r := range routers {
checkTopology(t, r, wantedPeers...)
}
}
func TestGossipTopology(t *testing.T) {
// Create some peers that will talk to each other
r1 := newTestRouter("01:00:00:01:00:00")
r2 := newTestRouter("02:00:00:02:00:00")
r3 := newTestRouter("03:00:00:03:00:00")
routers := []*Router{r1, r2, r3}
// Check state when they have no connections
checkTopology(t, r1, r1.tp())
checkTopology(t, r2, r2.tp())
// Now try adding some connections
addTestGossipConnection(r1, r2)
sendPendingGossip(r1, r2)
checkTopology(t, r1, r1.tp(r2), r2.tp(r1))
checkTopology(t, r2, r1.tp(r2), r2.tp(r1))
addTestGossipConnection(r2, r3)
flushAndCheckTopology(t, routers, r1.tp(r2), r2.tp(r1, r3), r3.tp(r2))
addTestGossipConnection(r3, r1)
flushAndCheckTopology(t, routers, r1.tp(r2, r3), r2.tp(r1, r3), r3.tp(r1, r2))
// Drop the connection from 2 to 3
r2.DeleteTestGossipConnection(r3)
flushAndCheckTopology(t, routers, r1.tp(r2, r3), r2.tp(r1), r3.tp(r1, r2))
// Drop the connection from 1 to 3
r1.DeleteTestGossipConnection(r3)
sendPendingGossip(r1, r2, r3)
checkTopology(t, r1, r1.tp(r2), r2.tp(r1))
checkTopology(t, r2, r1.tp(r2), r2.tp(r1))
// r3 still thinks r1 has a connection to it
checkTopology(t, r3, r1.tp(r2, r3), r2.tp(r1), r3.tp(r1, r2))
}
func TestGossipSurrogate(t *testing.T) {
// create the topology r1 <-> r2 <-> r3
r1 := newTestRouter("01:00:00:01:00:00")
r2 := newTestRouter("02:00:00:02:00:00")
r3 := newTestRouter("03:00:00:03:00:00")
routers := []*Router{r1, r2, r3}
addTestGossipConnection(r1, r2)
addTestGossipConnection(r3, r2)
flushAndCheckTopology(t, routers, r1.tp(r2), r2.tp(r1, r3), r3.tp(r2))
// create a gossiper at either end, but not the middle
g1 := newTestGossiper()
g3 := newTestGossiper()
s1 := r1.NewGossip("Test", g1)
s3 := r3.NewGossip("Test", g3)
// broadcast a message from each end, check it reaches the other
broadcast(s1, 1)
broadcast(s3, 2)
sendPendingGossip(r1, r2, r3)
g1.checkHas(t, 2)
g3.checkHas(t, 1)
// check that each end gets their message back through periodic
// gossip
r1.sendAllGossip()
r3.sendAllGossip()
sendPendingGossip(r1, r2, r3)
g1.checkHas(t, 1, 2)
g3.checkHas(t, 1, 2)
}
type testGossiper struct {
sync.RWMutex
state map[byte]struct{}
}
func newTestGossiper() *testGossiper {
return &testGossiper{state: make(map[byte]struct{})}
}
func (g *testGossiper) OnGossipUnicast(sender PeerName, msg []byte) error {
return nil
}
func (g *testGossiper) OnGossipBroadcast(_ PeerName, update []byte) (GossipData, error) {
g.Lock()
defer g.Unlock()
for _, v := range update {
g.state[v] = struct{}{}
}
return newSurrogateGossipData(update), nil
}
func (g *testGossiper) Gossip() GossipData {
g.RLock()
defer g.RUnlock()
state := make([]byte, 0, len(g.state)) // zero length: append below must not follow len(g.state) zero bytes
for v := range g.state {
state = append(state, v)
}
return newSurrogateGossipData(state)
}
func (g *testGossiper) OnGossip(update []byte) (GossipData, error) {
g.Lock()
defer g.Unlock()
var delta []byte
for _, v := range update {
if _, found := g.state[v]; !found {
delta = append(delta, v)
g.state[v] = struct{}{}
}
}
if len(delta) == 0 {
return nil, nil
}
return newSurrogateGossipData(delta), nil
}
func (g *testGossiper) checkHas(t *testing.T, vs ...byte) {
g.RLock()
defer g.RUnlock()
for _, v := range vs {
if _, found := g.state[v]; !found {
require.FailNow(t, fmt.Sprintf("%d is missing", v))
}
}
}
func broadcast(s Gossip, v byte) {
s.GossipBroadcast(newSurrogateGossipData([]byte{v}))
}

22
vendor/github.com/weaveworks/mesh/lint generated vendored Executable file
View File

@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
if ! command -v gometalinter >/dev/null
then
go get github.com/alecthomas/gometalinter
gometalinter --install --vendor
fi
gometalinter \
--exclude='error return value not checked.*(Close|Log|Print).*\(errcheck\)$' \
--exclude='.*_test\.go:.*error return value not checked.*\(errcheck\)$' \
--exclude='duplicate of.*_test.go.*\(dupl\)$' \
--disable=aligncheck \
--disable=gotype \
--disable=gas \
--cyclo-over=20 \
--tests \
--deadline=10s

296
vendor/github.com/weaveworks/mesh/local_peer.go generated vendored Normal file
View File

@ -0,0 +1,296 @@
package mesh
import (
"encoding/gob"
"fmt"
"net"
"sync"
"time"
)
// localPeer is the only "active" peer in the mesh. It extends Peer with
// additional behaviors, mostly to retrieve and manage connection state.
type localPeer struct {
sync.RWMutex
*Peer
router *Router
actionChan chan<- localPeerAction
}
// The actor closure used by localPeer.
type localPeerAction func()
// newLocalPeer returns a usable LocalPeer.
func newLocalPeer(name PeerName, nickName string, router *Router) *localPeer {
actionChan := make(chan localPeerAction, ChannelSize)
peer := &localPeer{
Peer: newPeer(name, nickName, randomPeerUID(), 0, randomPeerShortID()),
router: router,
actionChan: actionChan,
}
go peer.actorLoop(actionChan)
return peer
}
// Connections returns all the connections that the local peer is aware of.
func (peer *localPeer) getConnections() connectionSet {
connections := make(connectionSet)
peer.RLock()
defer peer.RUnlock()
for _, conn := range peer.connections {
connections[conn] = struct{}{}
}
return connections
}
// ConnectionTo returns the connection to the named peer, if any.
//
// TODO(pb): Weave Net invokes router.Ourself.ConnectionTo;
// it may be better to provide that on Router directly.
func (peer *localPeer) ConnectionTo(name PeerName) (Connection, bool) {
peer.RLock()
defer peer.RUnlock()
conn, found := peer.connections[name]
return conn, found // yes, you really can't inline that. FFS.
}
// ConnectionsTo returns all known connections to the named peers.
//
// TODO(pb): Weave Net invokes router.Ourself.ConnectionsTo;
// it may be better to provide that on Router directly.
func (peer *localPeer) ConnectionsTo(names []PeerName) []Connection {
if len(names) == 0 {
return nil
}
conns := make([]Connection, 0, len(names))
peer.RLock()
defer peer.RUnlock()
for _, name := range names {
conn, found := peer.connections[name]
// Again, !found could just be due to a race.
if found {
conns = append(conns, conn)
}
}
return conns
}
// createConnection creates a new connection, originating from
// localAddr, to peerAddr. If acceptNewPeer is false, peerAddr must
// already be a member of the mesh.
func (peer *localPeer) createConnection(localAddr string, peerAddr string, acceptNewPeer bool, logger Logger) error {
if err := peer.checkConnectionLimit(); err != nil {
return err
}
localTCPAddr, err := net.ResolveTCPAddr("tcp4", localAddr)
if err != nil {
return err
}
remoteTCPAddr, err := net.ResolveTCPAddr("tcp4", peerAddr)
if err != nil {
return err
}
tcpConn, err := net.DialTCP("tcp4", localTCPAddr, remoteTCPAddr)
if err != nil {
return err
}
connRemote := newRemoteConnection(peer.Peer, nil, peerAddr, true, false)
startLocalConnection(connRemote, tcpConn, peer.router, acceptNewPeer, logger)
return nil
}
// ACTOR client API
// Synchronous.
func (peer *localPeer) doAddConnection(conn ourConnection, isRestartedPeer bool) error {
resultChan := make(chan error)
peer.actionChan <- func() {
resultChan <- peer.handleAddConnection(conn, isRestartedPeer)
}
return <-resultChan
}
// Asynchronous.
func (peer *localPeer) doConnectionEstablished(conn ourConnection) {
peer.actionChan <- func() {
peer.handleConnectionEstablished(conn)
}
}
// Synchronous.
func (peer *localPeer) doDeleteConnection(conn ourConnection) {
resultChan := make(chan interface{})
peer.actionChan <- func() {
peer.handleDeleteConnection(conn)
resultChan <- nil
}
<-resultChan
}
func (peer *localPeer) encode(enc *gob.Encoder) {
peer.RLock()
defer peer.RUnlock()
peer.Peer.encode(enc)
}
// ACTOR server
func (peer *localPeer) actorLoop(actionChan <-chan localPeerAction) {
gossipTimer := time.Tick(gossipInterval)
for {
select {
case action := <-actionChan:
action()
case <-gossipTimer:
peer.router.sendAllGossip()
}
}
}
func (peer *localPeer) handleAddConnection(conn ourConnection, isRestartedPeer bool) error {
if peer.Peer != conn.getLocal() {
panic("Attempt made to add connection to peer where peer is not the source of connection")
}
if conn.Remote() == nil {
panic("Attempt made to add connection to peer with unknown remote peer")
}
toName := conn.Remote().Name
dupErr := fmt.Errorf("Multiple connections to %s added to %s", conn.Remote(), peer.String())
// deliberately non-symmetrical
if dupConn, found := peer.connections[toName]; found {
if dupConn == conn {
return nil
}
dupOurConn := dupConn.(ourConnection)
switch conn.breakTie(dupOurConn) {
case tieBreakWon:
dupOurConn.shutdown(dupErr)
peer.handleDeleteConnection(dupOurConn)
case tieBreakLost:
return dupErr
case tieBreakTied:
// oh good grief. Sod it, just kill both of them.
dupOurConn.shutdown(dupErr)
peer.handleDeleteConnection(dupOurConn)
return dupErr
}
}
if err := peer.checkConnectionLimit(); err != nil {
return err
}
_, isConnectedPeer := peer.router.Routes.Unicast(toName)
peer.addConnection(conn)
switch {
case isRestartedPeer:
conn.logf("connection added (restarted peer)")
peer.router.sendAllGossipDown(conn)
case isConnectedPeer:
conn.logf("connection added")
default:
conn.logf("connection added (new peer)")
peer.router.sendAllGossipDown(conn)
}
peer.router.Routes.recalculate()
peer.broadcastPeerUpdate(conn.Remote())
return nil
}
func (peer *localPeer) handleConnectionEstablished(conn ourConnection) {
if peer.Peer != conn.getLocal() {
panic("Peer informed of active connection where peer is not the source of connection")
}
if dupConn, found := peer.connections[conn.Remote().Name]; !found || conn != dupConn {
conn.shutdown(fmt.Errorf("Cannot set unknown connection active"))
return
}
peer.connectionEstablished(conn)
conn.logf("connection fully established")
peer.router.Routes.recalculate()
peer.broadcastPeerUpdate()
}
func (peer *localPeer) handleDeleteConnection(conn ourConnection) {
if peer.Peer != conn.getLocal() {
panic("Attempt made to delete connection from peer where peer is not the source of connection")
}
if conn.Remote() == nil {
panic("Attempt made to delete connection to peer with unknown remote peer")
}
toName := conn.Remote().Name
if connFound, found := peer.connections[toName]; !found || connFound != conn {
return
}
peer.deleteConnection(conn)
conn.logf("connection deleted")
// Must do garbage collection first to ensure we don't send out an
// update with unreachable peers (can cause looping)
peer.router.Peers.GarbageCollect()
peer.router.Routes.recalculate()
peer.broadcastPeerUpdate()
}
// helpers
func (peer *localPeer) broadcastPeerUpdate(peers ...*Peer) {
// Some tests run without a router. This should be fixed so
// that the relevant part of Router can be easily run in the
// context of a test, but that will involve significant
// reworking of tests.
if peer.router != nil {
peer.router.broadcastTopologyUpdate(append(peers, peer.Peer))
}
}
func (peer *localPeer) checkConnectionLimit() error {
limit := peer.router.ConnLimit
if limit != 0 && peer.connectionCount() >= limit {
return fmt.Errorf("Connection limit reached (%v)", limit)
}
return nil
}
func (peer *localPeer) addConnection(conn Connection) {
peer.Lock()
defer peer.Unlock()
peer.connections[conn.Remote().Name] = conn
peer.Version++
}
func (peer *localPeer) deleteConnection(conn Connection) {
peer.Lock()
defer peer.Unlock()
delete(peer.connections, conn.Remote().Name)
peer.Version++
}
func (peer *localPeer) connectionEstablished(conn Connection) {
peer.Lock()
defer peer.Unlock()
peer.Version++
}
func (peer *localPeer) connectionCount() int {
peer.RLock()
defer peer.RUnlock()
return len(peer.connections)
}
func (peer *localPeer) setShortID(shortID PeerShortID) {
peer.Lock()
defer peer.Unlock()
peer.ShortID = shortID
peer.Version++
}
func (peer *localPeer) setVersionBeyond(version uint64) bool {
peer.Lock()
defer peer.Unlock()
if version >= peer.Version {
peer.Version = version + 1
return true
}
return false
}

6
vendor/github.com/weaveworks/mesh/logger.go generated vendored Normal file
View File

@ -0,0 +1,6 @@
package mesh
// Logger is a simple interface used by mesh to do logging.
type Logger interface {
Printf(format string, args ...interface{})
}

15
vendor/github.com/weaveworks/mesh/meshconn/README.md generated vendored Normal file
View File

@ -0,0 +1,15 @@
# meshconn
meshconn implements [net.PacketConn](https://golang.org/pkg/net/#PacketConn) on top of mesh.
Think of it as UDP with benefits:
- NAT and bastion host (DMZ) traversal,
- broadcast/multicast in networks where this is normally not possible, e.g. EC2,
- an up-to-date, queryable memberlist.
meshconn supports [net.Addr](https://golang.org/pkg/net/#Addr) of the form `weavemesh://<PeerName>`.
By default, `<PeerName>` is a hardware address of the form `01:02:03:FD:FE:FF`.
Other forms of PeerName e.g. hashes are supported.
meshconn itself is largely stateless and has best-effort delivery semantics.
As a future experiment, it could easily be amended to have basic resiliency guarantees.
Also, at the moment, PacketConn read and write deadlines are not supported.
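A rough usage sketch (hedged: it assumes the vendored API in this commit, and names such as `logger` are placeholders, not prescribed by meshconn):

```go
package main

import (
	"log"
	"os"

	"github.com/weaveworks/mesh"
	"github.com/weaveworks/mesh/meshconn"
)

func main() {
	logger := log.New(os.Stderr, "", log.LstdFlags)
	name, err := mesh.PeerNameFromString("01:02:03:FD:FE:FF")
	if err != nil {
		logger.Fatal(err)
	}

	router := mesh.NewRouter(mesh.Config{}, name, "nick", nil, logger)
	router.Start()

	p := meshconn.NewPeer(name, mesh.PeerUID(1), logger)
	gossip := router.NewGossip("meshconn", p) // hook the peer into a gossip channel
	p.Register(gossip)                        // enables WriteTo
	defer p.Close()

	buf := make([]byte, 1<<16)
	n, remote, err := p.ReadFrom(buf) // blocks until a packet arrives
	if err != nil {
		logger.Fatal(err)
	}
	logger.Printf("%d bytes from %s", n, remote)
}
```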

22
vendor/github.com/weaveworks/mesh/meshconn/addr.go generated vendored Normal file
View File

@ -0,0 +1,22 @@
package meshconn
import (
"fmt"
"net"
"github.com/weaveworks/mesh"
)
// MeshAddr implements net.Addr for mesh peers.
type MeshAddr struct {
mesh.PeerName // stable across invocations
mesh.PeerUID // new with each invocation
}
var _ net.Addr = MeshAddr{}
// Network returns weavemesh.
func (a MeshAddr) Network() string { return "weavemesh" }
// String returns weavemesh://<PeerName>.
func (a MeshAddr) String() string { return fmt.Sprintf("%s://%s", a.Network(), a.PeerName.String()) }

182
vendor/github.com/weaveworks/mesh/meshconn/peer.go generated vendored Normal file
View File

@ -0,0 +1,182 @@
package meshconn
import (
"errors"
"net"
"time"
"github.com/weaveworks/mesh"
)
var (
// ErrShortRead is returned by ReadFrom when the
// passed buffer is too small for the packet.
ErrShortRead = errors.New("short read")
// ErrPeerClosed is returned by ReadFrom and WriteTo
// when the peer is closed during the operation.
ErrPeerClosed = errors.New("peer closed")
// ErrGossipNotRegistered is returned by WriteTo when attempting
// to write before a mesh.Gossip has been registered in the peer.
ErrGossipNotRegistered = errors.New("gossip not registered")
// ErrNotMeshAddr is returned by WriteTo when attempting
// to write to a non-mesh address.
ErrNotMeshAddr = errors.New("not a mesh addr")
// ErrNotSupported is returned by methods that are not supported.
ErrNotSupported = errors.New("not supported")
)
// Peer implements mesh.Gossiper and net.PacketConn.
type Peer struct {
name mesh.PeerName
uid mesh.PeerUID
gossip mesh.Gossip
recv chan pkt
actions chan func()
quit chan struct{}
logger mesh.Logger
}
// NewPeer returns a Peer, which can be used as a net.PacketConn.
// Clients must Register a mesh.Gossip before calling ReadFrom or WriteTo.
// Clients should aggressively consume from ReadFrom.
func NewPeer(name mesh.PeerName, uid mesh.PeerUID, logger mesh.Logger) *Peer {
p := &Peer{
name: name,
uid: uid,
gossip: nil, // initially no gossip
recv: make(chan pkt),
actions: make(chan func()),
quit: make(chan struct{}),
logger: logger,
}
go p.loop()
return p
}
func (p *Peer) loop() {
for {
select {
case f := <-p.actions:
f()
case <-p.quit:
return
}
}
}
// Register injects the mesh.Gossip and enables full-duplex communication.
// Clients should consume from ReadFrom without blocking.
func (p *Peer) Register(gossip mesh.Gossip) {
p.actions <- func() { p.gossip = gossip }
}
// ReadFrom implements net.PacketConn.
// Clients should consume from ReadFrom without blocking.
func (p *Peer) ReadFrom(b []byte) (n int, remote net.Addr, err error) {
c := make(chan struct{})
p.actions <- func() {
go func() { // so as not to block loop
defer close(c)
select {
case pkt := <-p.recv:
n = copy(b, pkt.Buf)
remote = MeshAddr{PeerName: pkt.SrcName, PeerUID: pkt.SrcUID}
if n < len(pkt.Buf) {
err = ErrShortRead
}
case <-p.quit:
err = ErrPeerClosed
}
}()
}
<-c
return n, remote, err
}
// WriteTo implements net.PacketConn.
func (p *Peer) WriteTo(b []byte, dst net.Addr) (n int, err error) {
c := make(chan struct{})
p.actions <- func() {
defer close(c)
if p.gossip == nil {
err = ErrGossipNotRegistered
return
}
meshAddr, ok := dst.(MeshAddr)
if !ok {
err = ErrNotMeshAddr
return
}
pkt := pkt{SrcName: p.name, SrcUID: p.uid, Buf: b}
if meshAddr.PeerName == p.name {
p.recv <- pkt
return
}
// TODO(pb): detect and support broadcast
buf := pkt.encode()
n = len(buf)
err = p.gossip.GossipUnicast(meshAddr.PeerName, buf)
}
<-c
return n, err
}
// Close implements net.PacketConn.
func (p *Peer) Close() error {
close(p.quit)
return nil
}
// LocalAddr implements net.PacketConn.
func (p *Peer) LocalAddr() net.Addr {
return MeshAddr{PeerName: p.name, PeerUID: p.uid}
}
// SetDeadline implements net.PacketConn.
// SetDeadline is not supported.
func (p *Peer) SetDeadline(time.Time) error {
return ErrNotSupported
}
// SetReadDeadline implements net.PacketConn.
// SetReadDeadline is not supported.
func (p *Peer) SetReadDeadline(time.Time) error {
return ErrNotSupported
}
// SetWriteDeadline implements net.PacketConn.
// SetWriteDeadline is not supported.
func (p *Peer) SetWriteDeadline(time.Time) error {
return ErrNotSupported
}
// Gossip implements mesh.Gossiper.
func (p *Peer) Gossip() (complete mesh.GossipData) {
return pktSlice{} // we're stateless
}
// OnGossip implements mesh.Gossiper.
// The buf is a single pkt.
func (p *Peer) OnGossip(buf []byte) (delta mesh.GossipData, err error) {
return pktSlice{makePkt(buf)}, nil
}
// OnGossipBroadcast implements mesh.Gossiper.
// The buf is a single pkt.
func (p *Peer) OnGossipBroadcast(_ mesh.PeerName, buf []byte) (received mesh.GossipData, err error) {
pkt := makePkt(buf)
p.recv <- pkt // to ReadFrom
return pktSlice{pkt}, nil
}
// OnGossipUnicast implements mesh.Gossiper.
// The buf is a single pkt.
func (p *Peer) OnGossipUnicast(_ mesh.PeerName, buf []byte) error {
pkt := makePkt(buf)
p.recv <- pkt // to ReadFrom
return nil
}

51
vendor/github.com/weaveworks/mesh/meshconn/pkt.go generated vendored Normal file
View File

@ -0,0 +1,51 @@
package meshconn
import (
"bytes"
"encoding/gob"
"github.com/weaveworks/mesh"
)
type pkt struct {
SrcName mesh.PeerName
SrcUID mesh.PeerUID
Buf []byte
}
func makePkt(buf []byte) pkt {
var p pkt
if err := gob.NewDecoder(bytes.NewBuffer(buf)).Decode(&p); err != nil {
panic(err)
}
return p
}
func (p pkt) encode() []byte {
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(p); err != nil {
panic(err)
}
return buf.Bytes()
}
// pktSlice implements mesh.GossipData.
type pktSlice []pkt
var _ mesh.GossipData = &pktSlice{}
func (s pktSlice) Encode() [][]byte {
bufs := make([][]byte, len(s))
for i, pkt := range s {
bufs[i] = pkt.encode()
}
return bufs
}
func (s pktSlice) Merge(other mesh.GossipData) mesh.GossipData {
o := other.(pktSlice)
merged := make(pktSlice, 0, len(s)+len(o))
merged = append(merged, s...)
merged = append(merged, o...)
return merged
}

115
vendor/github.com/weaveworks/mesh/mocks_test.go generated vendored Normal file
View File

@ -0,0 +1,115 @@
// No mocks are tested by this file.
//
// It supplies some mock implementations to other unit tests, and is
// named "...test.go" so it is only compiled under `go test`.
package mesh
import (
"fmt"
"testing"
"github.com/stretchr/testify/require"
)
// Add to peers a connection from peers.ourself to p
func (peers *Peers) AddTestConnection(p *Peer) {
summary := p.peerSummary
summary.Version = 0
toPeer := newPeerFromSummary(summary)
toPeer = peers.fetchWithDefault(toPeer) // Has side-effect of incrementing refcount
conn := newMockConnection(peers.ourself.Peer, toPeer)
peers.ourself.addConnection(conn)
peers.ourself.connectionEstablished(conn)
}
// Add to peers a connection from p1 to p2
func (peers *Peers) AddTestRemoteConnection(p1, p2 *Peer) {
fromPeer := newPeerFrom(p1)
fromPeer = peers.fetchWithDefault(fromPeer)
toPeer := newPeerFrom(p2)
toPeer = peers.fetchWithDefault(toPeer)
peers.ourself.addConnection(newRemoteConnection(fromPeer, toPeer, "", false, false))
}
func (peers *Peers) DeleteTestConnection(p *Peer) {
toName := p.Name
toPeer := peers.Fetch(toName)
peers.dereference(toPeer)
conn, _ := peers.ourself.ConnectionTo(toName)
peers.ourself.deleteConnection(conn)
}
// mockConnection used in testing is very similar to a
// RemoteConnection, without the RemoteTCPAddr(). We are making it a
// separate type in order to distinguish what is created by the test
// from what is created by the real code.
func newMockConnection(from, to *Peer) Connection {
type mockConnection struct{ *remoteConnection }
return &mockConnection{newRemoteConnection(from, to, "", false, false)}
}
func checkEqualConns(t *testing.T, ourName PeerName, got, wanted map[PeerName]Connection) {
checkConns := make(peerNameSet)
for _, conn := range wanted {
checkConns[conn.Remote().Name] = struct{}{}
}
for _, conn := range got {
remoteName := conn.Remote().Name
if _, found := checkConns[remoteName]; found {
delete(checkConns, remoteName)
} else {
require.FailNow(t, fmt.Sprintf("Unexpected connection from %s to %s", ourName, remoteName))
}
}
if len(checkConns) > 0 {
require.FailNow(t, fmt.Sprintf("Expected connections not found: from %s to %v", ourName, checkConns))
}
}
// Get all the peers from a Peers in a slice
func (peers *Peers) allPeers() []*Peer {
var res []*Peer
for _, peer := range peers.byName {
res = append(res, peer)
}
return res
}
func (peers *Peers) allPeersExcept(excludeName PeerName) []*Peer {
res := peers.allPeers()
for i, peer := range res {
if peer.Name == excludeName {
return append(res[:i], res[i+1:]...)
}
}
return res
}
// Check that the peers slice matches the wanted peers
func checkPeerArray(t *testing.T, peers []*Peer, wantedPeers ...*Peer) {
checkTopologyPeers(t, false, peers, wantedPeers...)
}
// Check that the peers slice matches the wanted peers and optionally
// all of their connections
func checkTopologyPeers(t *testing.T, checkConns bool, peers []*Peer, wantedPeers ...*Peer) {
check := make(map[PeerName]*Peer)
for _, peer := range wantedPeers {
check[peer.Name] = peer
}
for _, peer := range peers {
name := peer.Name
if wantedPeer, found := check[name]; found {
if checkConns {
checkEqualConns(t, name, peer.connections, wantedPeer.connections)
}
delete(check, name)
} else {
require.FailNow(t, fmt.Sprintf("Unexpected peer: %s", name))
}
}
if len(check) > 0 {
require.FailNow(t, fmt.Sprintf("Expected peers not found: %v", check))
}
}

123
vendor/github.com/weaveworks/mesh/overlay.go generated vendored Normal file
View File

@ -0,0 +1,123 @@
package mesh
import (
"net"
)
// Overlay yields OverlayConnections.
type Overlay interface {
// Enhance a features map with overlay-related features.
AddFeaturesTo(map[string]string)
// Prepare an overlay connection. The connection should remain
// passive until it has been Confirm()ed.
PrepareConnection(OverlayConnectionParams) (OverlayConnection, error)
// Obtain diagnostic information specific to the overlay.
Diagnostics() interface{}
// Stop the overlay.
Stop()
}
// OverlayConnectionParams are used to set up overlay connections.
type OverlayConnectionParams struct {
RemotePeer *Peer
// The local address of the corresponding TCP connection. Used to
// derive the local IP address for sending. May differ for
// different overlay connections.
LocalAddr *net.TCPAddr
// The remote address of the corresponding TCP connection. Used to
// determine the address to send to, but only if the TCP
// connection is outbound. Otherwise the Overlay needs to discover
// it (e.g. from incoming datagrams).
RemoteAddr *net.TCPAddr
// Is the corresponding TCP connection outbound?
Outbound bool
// Unique identifier for this connection
ConnUID uint64
// Session key, if connection is encrypted; nil otherwise.
//
// NB: overlay connections must take care not to use nonces which
// may collide with those of the main connection. These nonces are
// 192 bits, with the topmost bit unspecified, the next bit set
// to 1, followed by 126 zero bits, and a message sequence number
// in the lowest 64 bits.
SessionKey *[32]byte
// Function to send a control message to the counterpart
// overlay connection.
SendControlMessage func(tag byte, msg []byte) error
// Features passed at connection initiation
Features map[string]string
}
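// Hedged sketch of the nonce layout described for SessionKey above: 192
// bits (24 bytes), top bit unspecified (zero here), next bit set, 126
// zero bits, and a 64-bit message sequence number in the low bits. The
// byte order of the sequence number is an assumption for illustration;
// consult the overlay implementation for the authoritative layout.
func exampleOverlayNonce(seq uint64) [24]byte {
	var nonce [24]byte
	nonce[0] = 1 << 6 // second-highest bit set to 1
	for i := 23; i >= 16; i-- {
		nonce[i] = byte(seq) // low 64 bits carry the sequence number
		seq >>= 8
	}
	return nonce
}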
// OverlayConnection describes all of the machinery to manage overlay
// connectivity to a particular peer.
type OverlayConnection interface {
// Confirm that the connection is really wanted, and so the
// Overlay should begin heartbeats etc. to verify the operation of
// the overlay connection.
Confirm()
// EstablishedChannel returns a channel that will be closed when the
// overlay connection is established, i.e. its operation has been
// confirmed.
EstablishedChannel() <-chan struct{}
// ErrorChannel returns a channel that forwards errors from the overlay
// connection. The overlay connection is not expected to be operational
// after the first error, so the channel only needs to buffer a single
// error.
ErrorChannel() <-chan error
// Stop terminates the connection.
Stop()
// ControlMessage handles a message from the remote peer. 'tag' exists for
// compatibility, and should always be ProtocolOverlayControlMessage for
// non-sleeve overlays.
ControlMessage(tag byte, msg []byte)
// Attrs returns the user-facing overlay name plus any other
// data that users may wish to check or monitor
Attrs() map[string]interface{}
}
// NullOverlay implements Overlay and OverlayConnection with no-ops.
type NullOverlay struct{}
// AddFeaturesTo implements Overlay.
func (NullOverlay) AddFeaturesTo(map[string]string) {}
// PrepareConnection implements Overlay.
func (NullOverlay) PrepareConnection(OverlayConnectionParams) (OverlayConnection, error) {
return NullOverlay{}, nil
}
// Diagnostics implements Overlay.
func (NullOverlay) Diagnostics() interface{} { return nil }
// Confirm implements OverlayConnection.
func (NullOverlay) Confirm() {}
// EstablishedChannel implements OverlayConnection.
func (NullOverlay) EstablishedChannel() <-chan struct{} { return nil }
// ErrorChannel implements OverlayConnection.
func (NullOverlay) ErrorChannel() <-chan error { return nil }
// Stop implements OverlayConnection.
func (NullOverlay) Stop() {}
// ControlMessage implements OverlayConnection.
func (NullOverlay) ControlMessage(byte, []byte) {}
// Attrs implements OverlayConnection.
func (NullOverlay) Attrs() map[string]interface{} { return nil }

200
vendor/github.com/weaveworks/mesh/peer.go generated vendored Normal file
View File

@ -0,0 +1,200 @@
package mesh
import (
"crypto/rand"
"encoding/binary"
"fmt"
"sort"
"strconv"
)
// Peer is a local representation of a peer, including connections to other
// peers. By itself, it is a remote peer.
type Peer struct {
Name PeerName
peerSummary
localRefCount uint64 // maintained by Peers
connections map[PeerName]Connection
}
type peerSummary struct {
NameByte []byte
NickName string
UID PeerUID
Version uint64
ShortID PeerShortID
HasShortID bool
}
// PeerDescription collects information about peers that is useful to clients.
type PeerDescription struct {
Name PeerName
NickName string
UID PeerUID
Self bool
NumConnections int
}
type connectionSet map[Connection]struct{}
func newPeerFromSummary(summary peerSummary) *Peer {
return &Peer{
Name: PeerNameFromBin(summary.NameByte),
peerSummary: summary,
connections: make(map[PeerName]Connection),
}
}
func newPeer(name PeerName, nickName string, uid PeerUID, version uint64, shortID PeerShortID) *Peer {
return newPeerFromSummary(peerSummary{
NameByte: name.bytes(),
NickName: nickName,
UID: uid,
Version: version,
ShortID: shortID,
HasShortID: true,
})
}
func newPeerPlaceholder(name PeerName) *Peer {
return newPeerFromSummary(peerSummary{NameByte: name.bytes()})
}
// String returns the peer name and nickname.
func (peer *Peer) String() string {
return fmt.Sprint(peer.Name, "(", peer.NickName, ")")
}
// Routes calculates the routing table from this peer to all peers reachable
// from it, returning a "next hop" map of PeerNameX -> PeerNameY, which says
// "in order to send a message to X, the peer should send the message to its
// neighbour Y".
//
// Because currently we do not have weightings on the connections between
// peers, there is no need to use a minimum spanning tree algorithm. Instead
// we employ the simpler and cheaper breadth-first widening. The computation
// is deterministic, which ensures that when it is performed on the same data
// by different peers, they get the same result. This is important since
// otherwise we risk message loss or routing cycles.
//
// When the 'establishedAndSymmetric' flag is set, only connections that are
// marked as 'established' and are symmetric (i.e. where both sides indicate
// they have a connection to the other) are considered.
//
// When a non-nil stopAt peer is supplied, the widening stops when it reaches
// that peer. The boolean return indicates whether that has happened.
//
// NB: This function should generally be invoked while holding a read lock on
// Peers and LocalPeer.
func (peer *Peer) routes(stopAt *Peer, establishedAndSymmetric bool) (bool, map[PeerName]PeerName) {
routes := make(unicastRoutes)
routes[peer.Name] = UnknownPeerName
nextWorklist := []*Peer{peer}
for len(nextWorklist) > 0 {
worklist := nextWorklist
sort.Sort(listOfPeers(worklist))
nextWorklist = []*Peer{}
for _, curPeer := range worklist {
if curPeer == stopAt {
return true, routes
}
curPeer.forEachConnectedPeer(establishedAndSymmetric, routes,
func(remotePeer *Peer) {
nextWorklist = append(nextWorklist, remotePeer)
remoteName := remotePeer.Name
// We now know how to get to remoteName: the same
// way we get to curPeer. Except, if curPeer is
// the starting peer in which case we know we can
// reach remoteName directly.
if curPeer == peer {
routes[remoteName] = remoteName
} else {
routes[remoteName] = routes[curPeer.Name]
}
})
}
}
return false, routes
}
// Apply f to all peers reachable by peer. If establishedAndSymmetric is true,
// only peers with established bidirectional connections will be selected. The
// exclude map is treated as a set of remote peers to blacklist.
func (peer *Peer) forEachConnectedPeer(establishedAndSymmetric bool, exclude map[PeerName]PeerName, f func(*Peer)) {
for remoteName, conn := range peer.connections {
if establishedAndSymmetric && !conn.isEstablished() {
continue
}
if _, found := exclude[remoteName]; found {
continue
}
remotePeer := conn.Remote()
if remoteConn, found := remotePeer.connections[peer.Name]; !establishedAndSymmetric || (found && remoteConn.isEstablished()) {
f(remotePeer)
}
}
}
// PeerUID uniquely identifies a peer in a mesh.
type PeerUID uint64
// ParsePeerUID parses a decimal peer UID from a string.
func parsePeerUID(s string) (PeerUID, error) {
uid, err := strconv.ParseUint(s, 10, 64)
return PeerUID(uid), err
}
func randomPeerUID() PeerUID {
for {
uid := randUint64()
if uid != 0 { // uid 0 is reserved for peer placeholder
return PeerUID(uid)
}
}
}
// PeerShortID exists for the sake of fast datapath. Short IDs are 12 bits,
// randomly assigned, but we detect and recover from collisions. This
// does limit us to 4096 peers, but that should be sufficient for a
// while.
type PeerShortID uint16
const peerShortIDBits = 12
func randomPeerShortID() PeerShortID {
return PeerShortID(randUint16() & (1<<peerShortIDBits - 1))
}
func randBytes(n int) []byte {
buf := make([]byte, n)
if _, err := rand.Read(buf); err != nil {
panic(err)
}
return buf
}
func randUint64() (r uint64) {
return binary.LittleEndian.Uint64(randBytes(8))
}
func randUint16() (r uint16) {
return binary.LittleEndian.Uint16(randBytes(2))
}
// ListOfPeers implements sort.Interface on a slice of Peers.
type listOfPeers []*Peer
// Len implements sort.Interface.
func (lop listOfPeers) Len() int {
return len(lop)
}
// Swap implements sort.Interface.
func (lop listOfPeers) Swap(i, j int) {
lop[i], lop[j] = lop[j], lop[i]
}
// Less implements sort.Interface.
func (lop listOfPeers) Less(i, j int) bool {
return lop[i].Name < lop[j].Name
}

58
vendor/github.com/weaveworks/mesh/peer_name_hash.go generated vendored Normal file
View File

@ -0,0 +1,58 @@
// +build peer_name_hash
package mesh
// Let peer names be SHA256 hashes of anything, provided they are unique.
import (
"crypto/sha256"
"encoding/hex"
)
// PeerName must be globally unique and usable as a map key.
type PeerName string
const (
// PeerNameFlavour is the type of peer names we use.
PeerNameFlavour = "hash"
// NameSize is the number of bytes in a peer name.
NameSize = sha256.Size >> 1
// UnknownPeerName is used as a sentinel value.
UnknownPeerName = PeerName("")
)
// PeerNameFromUserInput parses PeerName from a user-provided string.
func PeerNameFromUserInput(userInput string) (PeerName, error) {
// fixed-length identity
nameByteAry := sha256.Sum256([]byte(userInput))
return PeerNameFromBin(nameByteAry[:NameSize]), nil
}
// PeerNameFromString parses PeerName from a generic string.
func PeerNameFromString(nameStr string) (PeerName, error) {
if _, err := hex.DecodeString(nameStr); err != nil {
return UnknownPeerName, err
}
return PeerName(nameStr), nil
}
// PeerNameFromBin parses PeerName from a byte slice.
func PeerNameFromBin(nameByte []byte) PeerName {
return PeerName(hex.EncodeToString(nameByte))
}
// bytes encodes PeerName as a byte slice.
func (name PeerName) bytes() []byte {
res, err := hex.DecodeString(string(name))
if err != nil {
panic("unable to decode name to bytes: " + name)
}
return res
}
// String encodes PeerName as a string.
func (name PeerName) String() string {
return string(name)
}

17
vendor/github.com/weaveworks/mesh/peer_name_hash_test.go generated vendored Normal file
View File

@ -0,0 +1,17 @@
// +build peer_name_hash
package mesh_test
import "testing"
func TestHashPeerNameFromUserInput(t *testing.T) {
t.Skip("TODO")
}
func TestHashPeerNameFromString(t *testing.T) {
t.Skip("TODO")
}
func TestHashPeerNameFromBin(t *testing.T) {
t.Skip("TODO")
}

110
vendor/github.com/weaveworks/mesh/peer_name_mac.go generated vendored Normal file
View File

@ -0,0 +1,110 @@
// +build peer_name_mac !peer_name_alternative
package mesh
// The !peer_name_alternative effectively makes this the default,
// i.e. to choose an alternative, run
//
// go build -tags 'peer_name_alternative peer_name_hash'
//
// Let peer names be MACs...
//
// MACs need to be unique across our network, or bad things will
// happen anyway. So they make pretty good candidates for peer
// names. And doing so is pretty efficient both computationally and
// network overhead wise.
//
// Note that we do not mandate *what* MAC should be used as the peer
// name. In particular it doesn't actually have to be the MAC of, say,
// the network interface the peer is sniffing on.
import (
"fmt"
"net"
)
// PeerName is used as a map key. Since net.HardwareAddr isn't suitable for
// that - it's a slice, and slices can't be map keys - we convert that to/from
// uint64.
type PeerName uint64
const (
// PeerNameFlavour is the type of peer names we use.
PeerNameFlavour = "mac"
// NameSize is the number of bytes in a peer name.
NameSize = 6
// UnknownPeerName is used as a sentinel value.
UnknownPeerName = PeerName(0)
)
// PeerNameFromUserInput parses PeerName from a user-provided string.
func PeerNameFromUserInput(userInput string) (PeerName, error) {
return PeerNameFromString(userInput)
}
// PeerNameFromString parses PeerName from a generic string.
func PeerNameFromString(nameStr string) (PeerName, error) {
var a, b, c, d, e, f uint64
match := func(format string, args ...interface{}) bool {
a, b, c, d, e, f = 0, 0, 0, 0, 0, 0
n, err := fmt.Sscanf(nameStr+"\000", format+"\000", args...)
return err == nil && n == len(args)
}
switch {
case match("%2x:%2x:%2x:%2x:%2x:%2x", &a, &b, &c, &d, &e, &f):
case match("::%2x:%2x:%2x:%2x", &c, &d, &e, &f):
case match("%2x::%2x:%2x:%2x", &a, &d, &e, &f):
case match("%2x:%2x::%2x:%2x", &a, &b, &e, &f):
case match("%2x:%2x:%2x::%2x", &a, &b, &c, &f):
case match("%2x:%2x:%2x:%2x::", &a, &b, &c, &d):
case match("::%2x:%2x:%2x", &d, &e, &f):
case match("%2x::%2x:%2x", &a, &e, &f):
case match("%2x:%2x::%2x", &a, &b, &f):
case match("%2x:%2x:%2x::", &a, &b, &c):
case match("::%2x:%2x", &e, &f):
case match("%2x::%2x", &a, &f):
case match("%2x:%2x::", &a, &b):
case match("::%2x", &f):
case match("%2x::", &a):
default:
return UnknownPeerName, fmt.Errorf("invalid peer name format: %q", nameStr)
}
return PeerName(a<<40 | b<<32 | c<<24 | d<<16 | e<<8 | f), nil
}
// PeerNameFromBin parses PeerName from a byte slice.
func PeerNameFromBin(nameByte []byte) PeerName {
return PeerName(macint(net.HardwareAddr(nameByte)))
}
// bytes encodes PeerName as a byte slice.
func (name PeerName) bytes() []byte {
return intmac(uint64(name))
}
// String encodes PeerName as a string.
func (name PeerName) String() string {
return intmac(uint64(name)).String()
}
func macint(mac net.HardwareAddr) (r uint64) {
for _, b := range mac {
r <<= 8
r |= uint64(b)
}
return
}
func intmac(key uint64) (r net.HardwareAddr) {
r = make([]byte, 6)
for i := 5; i >= 0; i-- {
r[i] = byte(key)
key >>= 8
}
return
}

70
vendor/github.com/weaveworks/mesh/peer_name_mac_test.go generated vendored Normal file
View File

@ -0,0 +1,70 @@
// +build peer_name_mac !peer_name_alternative
package mesh_test
import (
"github.com/stretchr/testify/require"
"github.com/weaveworks/mesh"
"testing"
)
func TestMacPeerNameFromUserInput(t *testing.T) {
t.Skip("TODO")
}
func checkSuccess(t *testing.T, nameStr string, expected uint64) {
actual, err := mesh.PeerNameFromString(nameStr)
require.NoError(t, err)
require.Equal(t, mesh.PeerName(expected), actual)
}
func checkFailure(t *testing.T, nameStr string) {
_, err := mesh.PeerNameFromString(nameStr)
require.Error(t, err)
}
func TestMacPeerNameFromString(t *testing.T) {
// Permitted elisions
checkSuccess(t, "12:34:56:78:9A:BC", 0x123456789ABC)
checkSuccess(t, "::56:78:9A:BC", 0x000056789ABC)
checkSuccess(t, "12::78:9A:BC", 0x120000789ABC)
checkSuccess(t, "12:34::9A:BC", 0x123400009ABC)
checkSuccess(t, "12:34:56::BC", 0x1234560000BC)
checkSuccess(t, "12:34:56:78::", 0x123456780000)
checkSuccess(t, "::78:9A:BC", 0x000000789ABC)
checkSuccess(t, "12::9A:BC", 0x120000009ABC)
checkSuccess(t, "12:34::BC", 0x1234000000BC)
checkSuccess(t, "12:34:56::", 0x123456000000)
checkSuccess(t, "::9A:BC", 0x000000009ABC)
checkSuccess(t, "12::BC", 0x1200000000BC)
checkSuccess(t, "12:34::", 0x123400000000)
checkSuccess(t, "::BC", 0x0000000000BC)
checkSuccess(t, "12::", 0x120000000000)
// Case insensitivity
checkSuccess(t, "ab:cD:Ef:AB::", 0xABCDEFAB0000)
// Optional zero padding
checkSuccess(t, "1:2:3:4:5:6", 0x010203040506)
checkSuccess(t, "01:02:03:04:05:06", 0x010203040506)
// Trailing garbage detection
checkFailure(t, "12::garbage")
// Octet length
checkFailure(t, "123::")
// Forbidden elisions
checkFailure(t, "::")
checkFailure(t, "::34:56:78:9A:BC")
checkFailure(t, "12::56:78:9A:BC")
checkFailure(t, "12:34::78:9A:BC")
checkFailure(t, "12:34:56::9A:BC")
checkFailure(t, "12:34:56:78::BC")
checkFailure(t, "12:34:56:78:9A::")
checkFailure(t, "12::78::")
}
func TestMacPeerNameFromBin(t *testing.T) {
t.Skip("TODO")
}

15
vendor/github.com/weaveworks/mesh/peer_test.go generated vendored Normal file
View File

@ -0,0 +1,15 @@
package mesh
import "testing"
func newPeerFrom(peer *Peer) *Peer {
return newPeerFromSummary(peer.peerSummary)
}
func TestPeerRoutes(t *testing.T) {
t.Skip("TODO")
}
func TestPeerForEachConnectedPeer(t *testing.T) {
t.Skip("TODO")
}

560
vendor/github.com/weaveworks/mesh/peers.go generated vendored Normal file
View File

@ -0,0 +1,560 @@
package mesh
import (
"bytes"
"encoding/gob"
"io"
"math/rand"
"sync"
)
// Peers collects all of the known peers in the mesh, including ourself.
type Peers struct {
sync.RWMutex
ourself *localPeer
byName map[PeerName]*Peer
byShortID map[PeerShortID]shortIDPeers
onGC []func(*Peer)
// Called when the mapping from short IDs to peers changes
onInvalidateShortIDs []func()
}
type shortIDPeers struct {
// If we know about a single peer with the short ID, this is
// that peer. If there is a collision, this is the peer with
// the lowest Name.
peer *Peer
// In case of a collision, this holds the other peers.
others []*Peer
}
type peerNameSet map[PeerName]struct{}
type connectionSummary struct {
NameByte []byte
RemoteTCPAddr string
Outbound bool
Established bool
}
// Due to changes to Peers that need to be sent out
// once the Peers is unlocked.
type peersPendingNotifications struct {
// Peers that have been GCed
removed []*Peer
// The mapping from short IDs to peers changed
invalidateShortIDs bool
// The local short ID needs reassigning due to a collision
reassignLocalShortID bool
// The local peer was modified
localPeerModified bool
}
func newPeers(ourself *localPeer) *Peers {
peers := &Peers{
ourself: ourself,
byName: make(map[PeerName]*Peer),
byShortID: make(map[PeerShortID]shortIDPeers),
}
peers.fetchWithDefault(ourself.Peer)
return peers
}
// Descriptions returns descriptions for all known peers.
func (peers *Peers) Descriptions() []PeerDescription {
peers.RLock()
defer peers.RUnlock()
descriptions := make([]PeerDescription, 0, len(peers.byName))
for _, peer := range peers.byName {
descriptions = append(descriptions, PeerDescription{
Name: peer.Name,
NickName: peer.peerSummary.NickName,
UID: peer.UID,
Self: peer.Name == peers.ourself.Name,
NumConnections: len(peer.connections),
})
}
return descriptions
}
// OnGC adds a new function to be set of functions that will be executed on
// all subsequent GC runs, receiving the GC'd peer.
func (peers *Peers) OnGC(callback func(*Peer)) {
peers.Lock()
defer peers.Unlock()
// Although the array underlying peers.onGC might be accessed
// without holding the lock in unlockAndNotify, we don't
// support removing callbacks, so a simple append here is
// safe.
peers.onGC = append(peers.onGC, callback)
}
// OnInvalidateShortIDs adds a new function to a set of functions that will be
// executed on all subsequent GC runs, when the mapping from short IDs to
// peers has changed.
func (peers *Peers) OnInvalidateShortIDs(callback func()) {
peers.Lock()
defer peers.Unlock()
// Safe, as in OnGC
peers.onInvalidateShortIDs = append(peers.onInvalidateShortIDs, callback)
}
func (peers *Peers) unlockAndNotify(pending *peersPendingNotifications) {
broadcastLocalPeer := (pending.reassignLocalShortID && peers.reassignLocalShortID(pending)) || pending.localPeerModified
onGC := peers.onGC
onInvalidateShortIDs := peers.onInvalidateShortIDs
peers.Unlock()
if pending.removed != nil {
for _, callback := range onGC {
for _, peer := range pending.removed {
callback(peer)
}
}
}
if pending.invalidateShortIDs {
for _, callback := range onInvalidateShortIDs {
callback()
}
}
if broadcastLocalPeer {
peers.ourself.broadcastPeerUpdate()
}
}
func (peers *Peers) addByShortID(peer *Peer, pending *peersPendingNotifications) {
if !peer.HasShortID {
return
}
entry, ok := peers.byShortID[peer.ShortID]
if !ok {
entry = shortIDPeers{peer: peer}
} else if entry.peer == nil {
// This short ID is free, but was used in the past.
// Because we are reusing it, it's an invalidation
// event.
entry.peer = peer
pending.invalidateShortIDs = true
} else if peer.Name < entry.peer.Name {
// Short ID collision, this peer becomes the principal
// peer for the short ID, bumping the previous one
// into others.
if entry.peer == peers.ourself.Peer {
// The bumped peer is peers.ourself, so we
// need to look for a new short ID.
pending.reassignLocalShortID = true
}
entry.others = append(entry.others, entry.peer)
entry.peer = peer
pending.invalidateShortIDs = true
} else {
// Short ID collision, this peer is secondary
entry.others = append(entry.others, peer)
}
peers.byShortID[peer.ShortID] = entry
}
func (peers *Peers) deleteByShortID(peer *Peer, pending *peersPendingNotifications) {
if !peer.HasShortID {
return
}
entry := peers.byShortID[peer.ShortID]
var otherIndex int
if peer != entry.peer {
// peer is secondary, find its index in others
otherIndex = -1
for i, other := range entry.others {
if peer == other {
otherIndex = i
break
}
}
if otherIndex < 0 {
return
}
} else if len(entry.others) != 0 {
// need to find the peer with the lowest name to
// become the new principal one
otherIndex = 0
minName := entry.others[0].Name
for i := 1; i < len(entry.others); i++ {
otherName := entry.others[i].Name
if otherName < minName {
minName = otherName
otherIndex = i
}
}
entry.peer = entry.others[otherIndex]
pending.invalidateShortIDs = true
} else {
// This is the last peer with the short ID. We clear
// the entry, don't delete it, in order to detect when
// it gets re-used.
peers.byShortID[peer.ShortID] = shortIDPeers{}
return
}
entry.others[otherIndex] = entry.others[len(entry.others)-1]
entry.others = entry.others[:len(entry.others)-1]
peers.byShortID[peer.ShortID] = entry
}
func (peers *Peers) reassignLocalShortID(pending *peersPendingNotifications) bool {
newShortID, ok := peers.chooseShortID()
if ok {
peers.setLocalShortID(newShortID, pending)
return true
}
// Otherwise we'll try again later, in garbageCollect.
return false
}
func (peers *Peers) setLocalShortID(newShortID PeerShortID, pending *peersPendingNotifications) {
peers.deleteByShortID(peers.ourself.Peer, pending)
peers.ourself.setShortID(newShortID)
peers.addByShortID(peers.ourself.Peer, pending)
}
// Choose an available short ID at random.
func (peers *Peers) chooseShortID() (PeerShortID, bool) {
rng := rand.New(rand.NewSource(int64(randUint64())))
// First, just try picking some short IDs at random, and
// seeing if they are available:
for i := 0; i < 10; i++ {
shortID := PeerShortID(rng.Intn(1 << peerShortIDBits))
if peers.byShortID[shortID].peer == nil {
return shortID, true
}
}
// Looks like most short IDs are used. So count the number of
// unused ones, and pick one at random.
available := int(1 << peerShortIDBits)
for _, entry := range peers.byShortID {
if entry.peer != nil {
available--
}
}
if available == 0 {
// All short IDs are used.
return 0, false
}
n := rng.Intn(available)
var i PeerShortID
for {
if peers.byShortID[i].peer == nil {
if n == 0 {
return i, true
}
n--
}
i++
}
}
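// For illustration: while the short ID space is mostly free, the ten random
// probes above almost always succeed. Only when the space is nearly full do
// we fall through to the full scan, and choosing the n-th free slot out of
// `available` keeps the selection uniform over the remaining free IDs.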
// fetchWithDefault will use reference fields of the passed peer object to
// look up and return an existing, matching peer. If no matching peer is
// found, the passed peer is saved and returned.
func (peers *Peers) fetchWithDefault(peer *Peer) *Peer {
peers.Lock()
var pending peersPendingNotifications
defer peers.unlockAndNotify(&pending)
if existingPeer, found := peers.byName[peer.Name]; found {
existingPeer.localRefCount++
return existingPeer
}
peers.byName[peer.Name] = peer
peers.addByShortID(peer, &pending)
peer.localRefCount++
return peer
}
// Fetch returns a peer matching the passed name, without incrementing its
// refcount. If no matching peer is found, Fetch returns nil.
func (peers *Peers) Fetch(name PeerName) *Peer {
peers.RLock()
defer peers.RUnlock()
return peers.byName[name]
}
// Like Fetch, but also increments the local refcount.
func (peers *Peers) fetchAndAddRef(name PeerName) *Peer {
peers.Lock()
defer peers.Unlock()
peer := peers.byName[name]
if peer != nil {
peer.localRefCount++
}
return peer
}
// FetchByShortID returns a peer matching the passed short ID.
// If no matching peer is found, FetchByShortID returns nil.
func (peers *Peers) FetchByShortID(shortID PeerShortID) *Peer {
peers.RLock()
defer peers.RUnlock()
return peers.byShortID[shortID].peer
}
// dereference decrements the refcount of the given peer.
// TODO(pb): this is an awkward way to use the mutex; consider refactoring
func (peers *Peers) dereference(peer *Peer) {
peers.Lock()
defer peers.Unlock()
peer.localRefCount--
}
func (peers *Peers) forEach(fun func(*Peer)) {
peers.RLock()
defer peers.RUnlock()
for _, peer := range peers.byName {
fun(peer)
}
}
// Merge an incoming update with our own topology.
//
// We add peers hitherto unknown to us, and update peers for which the
// update contains a more recent version than known to us. The return
// value is a) a representation of the received update, and b) an
// "improved" update containing just these new/updated elements.
func (peers *Peers) applyUpdate(update []byte) (peerNameSet, peerNameSet, error) {
peers.Lock()
var pending peersPendingNotifications
defer peers.unlockAndNotify(&pending)
newPeers, decodedUpdate, decodedConns, err := peers.decodeUpdate(update)
if err != nil {
return nil, nil, err
}
// Add new peers
for name, newPeer := range newPeers {
peers.byName[name] = newPeer
peers.addByShortID(newPeer, &pending)
}
// Now apply the updates
newUpdate := peers.applyDecodedUpdate(decodedUpdate, decodedConns, &pending)
peers.garbageCollect(&pending)
for _, peerRemoved := range pending.removed {
delete(newUpdate, peerRemoved.Name)
}
updateNames := make(peerNameSet)
for _, peer := range decodedUpdate {
updateNames[peer.Name] = struct{}{}
}
return updateNames, newUpdate, nil
}
func (peers *Peers) names() peerNameSet {
peers.RLock()
defer peers.RUnlock()
names := make(peerNameSet)
for name := range peers.byName {
names[name] = struct{}{}
}
return names
}
func (peers *Peers) encodePeers(names peerNameSet) []byte {
buf := new(bytes.Buffer)
enc := gob.NewEncoder(buf)
peers.RLock()
defer peers.RUnlock()
for name := range names {
if peer, found := peers.byName[name]; found {
if peer == peers.ourself.Peer {
peers.ourself.encode(enc)
} else {
peer.encode(enc)
}
}
}
return buf.Bytes()
}
// GarbageCollect takes a lock, triggers a GC, and invokes the accumulated GC
// callbacks.
func (peers *Peers) GarbageCollect() {
peers.Lock()
var pending peersPendingNotifications
defer peers.unlockAndNotify(&pending)
peers.garbageCollect(&pending)
}
func (peers *Peers) garbageCollect(pending *peersPendingNotifications) {
peers.ourself.RLock()
_, reached := peers.ourself.routes(nil, false)
peers.ourself.RUnlock()
for name, peer := range peers.byName {
if _, found := reached[peer.Name]; !found && peer.localRefCount == 0 {
delete(peers.byName, name)
peers.deleteByShortID(peer, pending)
pending.removed = append(pending.removed, peer)
}
}
if len(pending.removed) > 0 && peers.byShortID[peers.ourself.ShortID].peer != peers.ourself.Peer {
// The local peer doesn't own its short ID. Garbage
// collection might have freed some up, so try to
// reassign.
pending.reassignLocalShortID = true
}
}
func (peers *Peers) decodeUpdate(update []byte) (newPeers map[PeerName]*Peer, decodedUpdate []*Peer, decodedConns [][]connectionSummary, err error) {
newPeers = make(map[PeerName]*Peer)
decodedUpdate = []*Peer{}
decodedConns = [][]connectionSummary{}
decoder := gob.NewDecoder(bytes.NewReader(update))
for {
summary, connSummaries, decErr := decodePeer(decoder)
if decErr == io.EOF {
break
} else if decErr != nil {
err = decErr
return
}
newPeer := newPeerFromSummary(summary)
decodedUpdate = append(decodedUpdate, newPeer)
decodedConns = append(decodedConns, connSummaries)
if _, found := peers.byName[newPeer.Name]; !found {
newPeers[newPeer.Name] = newPeer
}
}
for _, connSummaries := range decodedConns {
for _, connSummary := range connSummaries {
remoteName := PeerNameFromBin(connSummary.NameByte)
if _, found := newPeers[remoteName]; found {
continue
}
if _, found := peers.byName[remoteName]; found {
continue
}
// Update refers to a peer which we have no knowledge of.
newPeers[remoteName] = newPeerPlaceholder(remoteName)
}
}
return
}
func (peers *Peers) applyDecodedUpdate(decodedUpdate []*Peer, decodedConns [][]connectionSummary, pending *peersPendingNotifications) peerNameSet {
newUpdate := make(peerNameSet)
for idx, newPeer := range decodedUpdate {
connSummaries := decodedConns[idx]
name := newPeer.Name
// guaranteed to find the peer in peers.byName
switch peer := peers.byName[name]; peer {
case peers.ourself.Peer:
if newPeer.UID != peer.UID {
// The update contains information about an old
// incarnation of ourselves. We increase our version
// number beyond that which we received, so our
// information supersedes the old one when it is
// received by other peers.
pending.localPeerModified = peers.ourself.setVersionBeyond(newPeer.Version)
}
case newPeer:
peer.connections = makeConnsMap(peer, connSummaries, peers.byName)
newUpdate[name] = struct{}{}
default: // existing peer
if newPeer.Version < peer.Version ||
(newPeer.Version == peer.Version &&
(newPeer.UID < peer.UID ||
(newPeer.UID == peer.UID &&
(!newPeer.HasShortID || peer.HasShortID)))) {
continue
}
peer.Version = newPeer.Version
peer.UID = newPeer.UID
peer.NickName = newPeer.NickName
peer.connections = makeConnsMap(peer, connSummaries, peers.byName)
if newPeer.ShortID != peer.ShortID || newPeer.HasShortID != peer.HasShortID {
peers.deleteByShortID(peer, pending)
peer.ShortID = newPeer.ShortID
peer.HasShortID = newPeer.HasShortID
peers.addByShortID(peer, pending)
}
newUpdate[name] = struct{}{}
}
}
return newUpdate
}
func (peer *Peer) encode(enc *gob.Encoder) {
if err := enc.Encode(peer.peerSummary); err != nil {
panic(err)
}
connSummaries := []connectionSummary{}
for _, conn := range peer.connections {
connSummaries = append(connSummaries, connectionSummary{
conn.Remote().NameByte,
conn.remoteTCPAddress(),
conn.isOutbound(),
conn.isEstablished(),
})
}
if err := enc.Encode(connSummaries); err != nil {
panic(err)
}
}
func decodePeer(dec *gob.Decoder) (ps peerSummary, connSummaries []connectionSummary, err error) {
if err = dec.Decode(&ps); err != nil {
return
}
if err = dec.Decode(&connSummaries); err != nil {
return
}
return
}
func makeConnsMap(peer *Peer, connSummaries []connectionSummary, byName map[PeerName]*Peer) map[PeerName]Connection {
conns := make(map[PeerName]Connection)
for _, connSummary := range connSummaries {
name := PeerNameFromBin(connSummary.NameByte)
remotePeer := byName[name]
conn := newRemoteConnection(peer, remotePeer, connSummary.RemoteTCPAddr, connSummary.Outbound, connSummary.Established)
conns[name] = conn
}
return conns
}

376
vendor/github.com/weaveworks/mesh/peers_test.go generated vendored Normal file

@ -0,0 +1,376 @@
package mesh
import (
"fmt"
"math/rand"
"testing"
"time"
"github.com/stretchr/testify/require"
)
// TODO we should also test:
//
// - applying an incremental update, including the case where that
// leads to an UnknownPeerError
//
// - the "improved update" calculation
//
// - non-gc of peers that are only referenced locally
func newNode(name PeerName) (*Peer, *Peers) {
peer := newLocalPeer(name, "", nil)
peers := newPeers(peer)
return peer.Peer, peers
}
// Check that applyUpdate copies the whole topology from peers
func checkApplyUpdate(t *testing.T, peers *Peers) {
dummyName, _ := PeerNameFromString("99:00:00:01:00:00")
// We need a new node outside of the network, with a connection
// into it.
_, testBedPeers := newNode(dummyName)
testBedPeers.AddTestConnection(peers.ourself.Peer)
testBedPeers.applyUpdate(peers.encodePeers(peers.names()))
checkTopologyPeers(t, true, testBedPeers.allPeersExcept(dummyName), peers.allPeers()...)
}
func TestPeersEncoding(t *testing.T) {
const numNodes = 20
const numIters = 1000
var peer [numNodes]*Peer
var ps [numNodes]*Peers
for i := 0; i < numNodes; i++ {
name, _ := PeerNameFromString(fmt.Sprintf("%02d:00:00:01:00:00", i))
peer[i], ps[i] = newNode(name)
}
var conns []struct{ from, to int }
for i := 0; i < numIters; i++ {
oper := rand.Intn(2)
switch oper {
case 0:
from, to := rand.Intn(numNodes), rand.Intn(numNodes)
if from != to {
if _, found := peer[from].connections[peer[to].Name]; !found {
ps[from].AddTestConnection(peer[to])
conns = append(conns, struct{ from, to int }{from, to})
checkApplyUpdate(t, ps[from])
}
}
case 1:
if len(conns) > 0 {
n := rand.Intn(len(conns))
c := conns[n]
ps[c.from].DeleteTestConnection(peer[c.to])
ps[c.from].GarbageCollect()
checkApplyUpdate(t, ps[c.from])
conns = append(conns[:n], conns[n+1:]...)
}
}
}
}
func garbageCollect(peers *Peers) []*Peer {
var removed []*Peer
peers.OnGC(func(peer *Peer) { removed = append(removed, peer) })
peers.GarbageCollect()
return removed
}
func TestPeersGarbageCollection(t *testing.T) {
const (
peer1NameString = "01:00:00:01:00:00"
peer2NameString = "02:00:00:02:00:00"
peer3NameString = "03:00:00:03:00:00"
)
var (
peer1Name, _ = PeerNameFromString(peer1NameString)
peer2Name, _ = PeerNameFromString(peer2NameString)
peer3Name, _ = PeerNameFromString(peer3NameString)
)
// Create some peers with some connections to each other
p1, ps1 := newNode(peer1Name)
p2, ps2 := newNode(peer2Name)
p3, ps3 := newNode(peer3Name)
ps1.AddTestConnection(p2)
ps2.AddTestRemoteConnection(p1, p2)
ps2.AddTestConnection(p1)
ps2.AddTestConnection(p3)
ps3.AddTestConnection(p1)
ps1.AddTestConnection(p3)
ps2.AddTestRemoteConnection(p1, p3)
ps2.AddTestRemoteConnection(p3, p1)
// Every peer is referenced, so nothing should be dropped
require.Empty(t, garbageCollect(ps1), "peers removed")
require.Empty(t, garbageCollect(ps2), "peers removed")
require.Empty(t, garbageCollect(ps3), "peers removed")
// Drop the connection from 2 to 3, and 3 isn't garbage-collected
// because 1 has a connection to 3
ps2.DeleteTestConnection(p3)
require.Empty(t, garbageCollect(ps2), "peers removed")
// Drop the connection from 1 to 3, and 3 will get removed by
// garbage-collection
ps1.DeleteTestConnection(p3)
checkPeerArray(t, garbageCollect(ps1), p3)
}
func TestShortIDCollisions(t *testing.T) {
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
_, peers := newNode(PeerName(1 << peerShortIDBits))
// Make enough peers that short id collisions are
// overwhelmingly likely
ps := make([]*Peer, 1<<peerShortIDBits)
for i := 0; i < 1<<peerShortIDBits; i++ {
ps[i] = newPeer(PeerName(i), "", PeerUID(i), 0,
PeerShortID(rng.Intn(1<<peerShortIDBits)))
}
shuffle := func() {
for i := range ps {
j := rng.Intn(i + 1)
ps[i], ps[j] = ps[j], ps[i]
}
}
// Fill peers
shuffle()
var pending peersPendingNotifications
for _, p := range ps {
peers.addByShortID(p, &pending)
}
// Check invariants
counts := make([]int, 1<<peerShortIDBits)
saw := func(p *Peer) {
if p != peers.ourself.Peer {
counts[p.UID]++
}
}
for shortID, entry := range peers.byShortID {
if entry.peer == nil {
// no principal peer for this short id, so
// others must be empty
require.Empty(t, entry.others)
continue
}
require.Equal(t, shortID, entry.peer.ShortID)
saw(entry.peer)
for _, p := range entry.others {
saw(p)
require.Equal(t, shortID, p.ShortID)
// the principal peer should have the lowest name
require.True(t, p.Name > entry.peer.Name)
}
}
// Check that every peer was seen
for _, n := range counts {
require.Equal(t, 1, n)
}
// Delete all the peers
shuffle()
for _, p := range ps {
peers.deleteByShortID(p, &pending)
}
for _, entry := range peers.byShortID {
if entry.peer != peers.ourself.Peer {
require.Nil(t, entry.peer)
}
require.Empty(t, entry.others)
}
}
// Test the easy case of short id reassignment, when few short ids are taken
func TestShortIDReassignmentEasy(t *testing.T) {
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
_, peers := newNode(PeerName(0))
for i := 1; i <= 10; i++ {
peers.fetchWithDefault(newPeer(PeerName(i), "", PeerUID(i), 0,
PeerShortID(rng.Intn(1<<peerShortIDBits))))
}
checkShortIDReassignment(t, peers)
}
// Test the hard case of short id reassignment, when most short ids are taken
func TestShortIDReassignmentHard(t *testing.T) {
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
_, peers := newNode(PeerName(1 << peerShortIDBits))
// Take all short ids
ps := make([]*Peer, 1<<peerShortIDBits)
var pending peersPendingNotifications
for i := 0; i < 1<<peerShortIDBits; i++ {
ps[i] = newPeer(PeerName(i), "", PeerUID(i), 0,
PeerShortID(i))
peers.addByShortID(ps[i], &pending)
}
// As all short ids are taken, an attempted reassignment won't
// do anything
oldShortID := peers.ourself.ShortID
require.False(t, peers.reassignLocalShortID(&pending))
require.Equal(t, oldShortID, peers.ourself.ShortID)
// Free up a few ids
for i := 0; i < 10; i++ {
x := rng.Intn(len(ps))
if ps[x] != nil {
peers.deleteByShortID(ps[x], &pending)
ps[x] = nil
}
}
checkShortIDReassignment(t, peers)
}
func checkShortIDReassignment(t *testing.T, peers *Peers) {
oldShortID := peers.ourself.ShortID
peers.reassignLocalShortID(&peersPendingNotifications{})
require.NotEqual(t, oldShortID, peers.ourself.ShortID)
require.Equal(t, peers.ourself.Peer, peers.byShortID[peers.ourself.ShortID].peer)
}
func TestShortIDInvalidation(t *testing.T) {
_, peers := newNode(PeerName(1 << peerShortIDBits))
// need to use a short id that is not the local peer's
shortID := peers.ourself.ShortID + 1
var pending peersPendingNotifications
requireInvalidateShortIDs := func(expect bool) {
require.Equal(t, expect, pending.invalidateShortIDs)
pending.invalidateShortIDs = false
}
// The use of a fresh short id does not cause invalidation
a := newPeer(PeerName(1), "", PeerUID(1), 0, shortID)
peers.addByShortID(a, &pending)
requireInvalidateShortIDs(false)
// An addition which does not change the mapping
b := newPeer(PeerName(2), "", PeerUID(2), 0, shortID)
peers.addByShortID(b, &pending)
requireInvalidateShortIDs(false)
// An addition which does change the mapping
c := newPeer(PeerName(0), "", PeerUID(0), 0, shortID)
peers.addByShortID(c, &pending)
requireInvalidateShortIDs(true)
// A deletion which does not change the mapping
peers.deleteByShortID(b, &pending)
requireInvalidateShortIDs(false)
// A deletion which does change the mapping
peers.deleteByShortID(c, &pending)
requireInvalidateShortIDs(true)
// Deleting the last peer with a short id does not cause invalidation
peers.deleteByShortID(a, &pending)
requireInvalidateShortIDs(false)
// ... but subsequent reuse of that short id does cause invalidation
peers.addByShortID(a, &pending)
requireInvalidateShortIDs(true)
}
func TestShortIDPropagation(t *testing.T) {
_, peers1 := newNode(PeerName(1))
_, peers2 := newNode(PeerName(2))
peers1.AddTestConnection(peers2.ourself.Peer)
peers1.applyUpdate(peers2.encodePeers(peers2.names()))
peers12 := peers1.Fetch(PeerName(2))
old := peers12.peerSummary
require.True(t,
peers2.reassignLocalShortID(&peersPendingNotifications{}))
peers1.applyUpdate(peers2.encodePeers(peers2.names()))
require.NotEqual(t, old.Version, peers12.Version)
require.NotEqual(t, old.ShortID, peers12.ShortID)
}
func TestShortIDCollision(t *testing.T) {
// Create 3 peers
_, peers1 := newNode(PeerName(1))
_, peers2 := newNode(PeerName(2))
_, peers3 := newNode(PeerName(3))
var pending peersPendingNotifications
peers1.setLocalShortID(1, &pending)
peers2.setLocalShortID(2, &pending)
peers3.setLocalShortID(3, &pending)
peers2.AddTestConnection(peers1.ourself.Peer)
peers3.AddTestConnection(peers2.ourself.Peer)
// Propagate from 1 to 2 to 3
peers2.applyUpdate(peers1.encodePeers(peers1.names()))
peers3.applyUpdate(peers2.encodePeers(peers2.names()))
// Force the short id of peer 1 to collide with peer 2. Peer
// 1 has the lowest name, so it gets to keep the short id
peers1.setLocalShortID(2, &pending)
oldShortID := peers2.ourself.ShortID
_, updated, _ := peers2.applyUpdate(peers1.encodePeers(peers1.names()))
// peer 2 should have noticed the collision and resolved it
require.NotEqual(t, oldShortID, peers2.ourself.ShortID)
// The Peers do not have a Router, so broadcastPeerUpdate does
// nothing in the context of this test. So we fake what it
// would do.
updated[PeerName(2)] = struct{}{}
// the update from peer 2 should include its short id change
peers3.applyUpdate(peers2.encodePeers(updated))
require.Equal(t, peers2.ourself.ShortID,
peers3.Fetch(PeerName(2)).ShortID)
}
// Test the case where all short ids are taken, but then some peers go
// away, so the local peer reassigns
func TestDeferredShortIDReassignment(t *testing.T) {
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
_, us := newNode(PeerName(1 << peerShortIDBits))
// Connect us to other peers occupying all short ids
others := make([]*Peers, 1<<peerShortIDBits)
var pending peersPendingNotifications
for i := range others {
_, others[i] = newNode(PeerName(i))
others[i].setLocalShortID(PeerShortID(i), &pending)
us.AddTestConnection(others[i].ourself.Peer)
}
// Check that, as expected, the local peer does not own its
// short id
require.NotEqual(t, us.ourself.Peer,
us.byShortID[us.ourself.ShortID].peer)
// Disconnect one peer, and we should now be able to claim its
// short id
other := others[rng.Intn(1<<peerShortIDBits)]
us.DeleteTestConnection(other.ourself.Peer)
us.GarbageCollect()
require.Equal(t, us.ourself.Peer, us.byShortID[us.ourself.ShortID].peer)
}

364
vendor/github.com/weaveworks/mesh/protocol.go generated vendored Normal file

@ -0,0 +1,364 @@
package mesh
import (
"bytes"
"encoding/gob"
"encoding/hex"
"fmt"
"io"
"time"
)
const (
// Protocol identifies a sort of major version of the protocol.
Protocol = "weave"
// ProtocolMinVersion establishes the lowest protocol version among peers
// that we're willing to try to communicate with.
ProtocolMinVersion = 1
// ProtocolMaxVersion establishes the highest protocol version among peers
// that we're willing to try to communicate with.
ProtocolMaxVersion = 2
)
var (
protocolBytes = []byte(Protocol)
// How long we wait for the handshake phase of protocol negotiation.
headerTimeout = 10 * time.Second
// See filterV1Features.
protocolV1Features = []string{
"ConnID",
"Name",
"NickName",
"PeerNameFlavour",
"UID",
}
errExpectedCrypto = fmt.Errorf("password specified, but peer requested an unencrypted connection")
errExpectedNoCrypto = fmt.Errorf("no password specified, but peer requested an encrypted connection")
)
type protocolIntroConn interface {
io.ReadWriter
// net.Conn's deadline methods
SetDeadline(t time.Time) error
SetReadDeadline(t time.Time) error
SetWriteDeadline(t time.Time) error
}
// The params necessary to negotiate a protocol intro with a remote peer.
type protocolIntroParams struct {
MinVersion byte
MaxVersion byte
Features map[string]string
Conn protocolIntroConn
Password []byte
Outbound bool
}
// The results from a successful protocol intro.
type protocolIntroResults struct {
Features map[string]string
Receiver tcpReceiver
Sender tcpSender
SessionKey *[32]byte
Version byte
}
// doIntro executes the protocol introduction.
func (params protocolIntroParams) doIntro() (res protocolIntroResults, err error) {
if err = params.Conn.SetDeadline(time.Now().Add(headerTimeout)); err != nil {
return
}
if res.Version, err = params.exchangeProtocolHeader(); err != nil {
return
}
var pubKey, privKey *[32]byte
if params.Password != nil {
if pubKey, privKey, err = generateKeyPair(); err != nil {
return
}
}
if err = params.Conn.SetWriteDeadline(time.Time{}); err != nil {
return
}
if err = params.Conn.SetReadDeadline(time.Now().Add(tcpHeartbeat * 2)); err != nil {
return
}
switch res.Version {
case 1:
err = res.doIntroV1(params, pubKey, privKey)
case 2:
err = res.doIntroV2(params, pubKey, privKey)
default:
panic("unhandled protocol version")
}
return
}
func (params protocolIntroParams) exchangeProtocolHeader() (byte, error) {
// Write in a separate goroutine to avoid the possibility of
// deadlock. The result channel is of size 1 so that the
// goroutine does not linger even if we encounter an error on
// the read side.
sendHeader := append(protocolBytes, params.MinVersion, params.MaxVersion)
writeDone := make(chan error, 1)
go func() {
_, err := params.Conn.Write(sendHeader)
writeDone <- err
}()
header := make([]byte, len(protocolBytes)+2)
if n, err := io.ReadFull(params.Conn, header); err != nil && n == 0 {
return 0, fmt.Errorf("failed to receive remote protocol header: %s", err)
} else if err != nil {
return 0, fmt.Errorf("received incomplete remote protocol header (%d octets instead of %d): %v; error: %s",
n, len(header), header[:n], err)
}
if !bytes.Equal(protocolBytes, header[:len(protocolBytes)]) {
return 0, fmt.Errorf("remote protocol header not recognised: %v", header[:len(protocolBytes)])
}
theirMinVersion := header[len(protocolBytes)]
minVersion := theirMinVersion
if params.MinVersion > minVersion {
minVersion = params.MinVersion
}
theirMaxVersion := header[len(protocolBytes)+1]
maxVersion := theirMaxVersion
if maxVersion > params.MaxVersion {
maxVersion = params.MaxVersion
}
if minVersion > maxVersion {
return 0, fmt.Errorf("remote version range [%d,%d] is incompatible with ours [%d,%d]",
theirMinVersion, theirMaxVersion,
params.MinVersion, params.MaxVersion)
}
if err := <-writeDone; err != nil {
return 0, err
}
return maxVersion, nil
}
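// A worked example of the negotiation above: if we offer versions [1,2] and
// the remote offers [2,2], then minVersion = max(1,2) = 2 and
// maxVersion = min(2,2) = 2, so both sides settle on version 2. If the ranges
// do not overlap (say [3,4] against [1,2]), minVersion exceeds maxVersion and
// the intro fails. The header itself is just
//
//	append([]byte(Protocol), params.MinVersion, params.MaxVersion)
//
// i.e. 7 octets for the current "weave" protocol string.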
// The V1 protocol consists of the protocol identification/version
// header, followed by a stream of gobified values. The first value
// is the encoded features map (never encrypted). The subsequent
// values are the messages on the connection (encrypted for an
// encrypted connection). For an encrypted connection, the public key
// is passed in the "PublicKey" feature as a string of hex digits.
func (res *protocolIntroResults) doIntroV1(params protocolIntroParams, pubKey, privKey *[32]byte) error {
features := filterV1Features(params.Features)
if pubKey != nil {
features["PublicKey"] = hex.EncodeToString(pubKey[:])
}
enc := gob.NewEncoder(params.Conn)
dec := gob.NewDecoder(params.Conn)
// Encode in a separate goroutine to avoid the possibility of
// deadlock. The result channel is of size 1 so that the
// goroutine does not linger even if we encounter an error on
// the read side.
encodeDone := make(chan error, 1)
go func() {
encodeDone <- enc.Encode(features)
}()
if err := dec.Decode(&res.Features); err != nil {
return err
}
if err := <-encodeDone; err != nil {
return err
}
res.Sender = newGobTCPSender(enc)
res.Receiver = newGobTCPReceiver(dec)
if pubKey == nil {
if _, present := res.Features["PublicKey"]; present {
return errExpectedNoCrypto
}
} else {
remotePubKeyStr, ok := res.Features["PublicKey"]
if !ok {
return errExpectedCrypto
}
remotePubKey, err := hex.DecodeString(remotePubKeyStr)
if err != nil {
return err
}
res.setupCrypto(params, remotePubKey, privKey)
}
res.Features = filterV1Features(res.Features)
return nil
}
// In the V1 protocol, the intro fields are sent unencrypted. So we
// restrict them to an established subset of fields that are assumed
// to be safe.
func filterV1Features(intro map[string]string) map[string]string {
safe := make(map[string]string)
for _, k := range protocolV1Features {
if val, ok := intro[k]; ok {
safe[k] = val
}
}
return safe
}
// The V2 protocol consists of the protocol identification/version
// header, followed by:
//
// - A single "encryption flag" byte: 0 for no encryption, 1 for
// encryption.
//
// - When the connection is encrypted, 32 bytes follow containing the
// public key.
//
// - Then a stream of length-prefixed messages, which are encrypted
// for an encrypted connection.
//
// The first message contains the encoded features map (so in contrast
// to V1, it will be encrypted on an encrypted connection).
func (res *protocolIntroResults) doIntroV2(params protocolIntroParams, pubKey, privKey *[32]byte) error {
// Public key exchange
var wbuf []byte
if pubKey == nil {
wbuf = []byte{0}
} else {
wbuf = make([]byte, 1+len(*pubKey))
wbuf[0] = 1
copy(wbuf[1:], (*pubKey)[:])
}
// Write in a separate goroutine to avoid the possibility of
// deadlock. The result channel is of size 1 so that the
// goroutine does not linger even if we encounter an error on
// the read side.
writeDone := make(chan error, 1)
go func() {
_, err := params.Conn.Write(wbuf)
writeDone <- err
}()
rbuf := make([]byte, 1)
if _, err := io.ReadFull(params.Conn, rbuf); err != nil {
return err
}
switch rbuf[0] {
case 0:
if pubKey != nil {
return errExpectedCrypto
}
res.Sender = newLengthPrefixTCPSender(params.Conn)
res.Receiver = newLengthPrefixTCPReceiver(params.Conn)
case 1:
if pubKey == nil {
return errExpectedNoCrypto
}
rbuf = make([]byte, len(pubKey))
if _, err := io.ReadFull(params.Conn, rbuf); err != nil {
return err
}
res.Sender = newLengthPrefixTCPSender(params.Conn)
res.Receiver = newLengthPrefixTCPReceiver(params.Conn)
res.setupCrypto(params, rbuf, privKey)
default:
return fmt.Errorf("Bad encryption flag %d", rbuf[0])
}
if err := <-writeDone; err != nil {
return err
}
// Features exchange
go func() {
buf := new(bytes.Buffer)
if err := gob.NewEncoder(buf).Encode(&params.Features); err != nil {
writeDone <- err
return
}
writeDone <- res.Sender.Send(buf.Bytes())
}()
rbuf, err := res.Receiver.Receive()
if err != nil {
return err
}
if err := gob.NewDecoder(bytes.NewReader(rbuf)).Decode(&res.Features); err != nil {
return err
}
if err := <-writeDone; err != nil {
return err
}
return nil
}
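// For reference, the V2 intro produced above looks like this on the wire,
// after the common protocol header:
//
//	unencrypted: 0x00 | 4-byte big-endian length | gob(features) | ...
//	encrypted:   0x01 | 32-byte public key | 4-byte length | sealed(gob(features)) | ...
//
// with every subsequent message using the same length-prefixed framing.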
func (res *protocolIntroResults) setupCrypto(params protocolIntroParams, remotePubKey []byte, privKey *[32]byte) {
var remotePubKeyArr [32]byte
copy(remotePubKeyArr[:], remotePubKey)
res.SessionKey = formSessionKey(&remotePubKeyArr, privKey, params.Password)
res.Sender = newEncryptedTCPSender(res.Sender, res.SessionKey, params.Outbound)
res.Receiver = newEncryptedTCPReceiver(res.Receiver, res.SessionKey, params.Outbound)
}
// protocolTag identifies the type of msg encoded in a protocolMsg.
type protocolTag byte
const (
// ProtocolHeartbeat identifies a heartbeat msg.
ProtocolHeartbeat = iota
// ProtocolReserved1 is a legacy overlay control message.
ProtocolReserved1
// ProtocolReserved2 is a legacy overlay control message.
ProtocolReserved2
// ProtocolReserved3 is a legacy overlay control message.
ProtocolReserved3
// ProtocolGossip identifies a pure gossip msg.
ProtocolGossip
// ProtocolGossipUnicast identifies a gossip (unicast) msg.
ProtocolGossipUnicast
// ProtocolGossipBroadcast identifies a gossip (broadcast) msg.
ProtocolGossipBroadcast
// ProtocolOverlayControlMsg identifies a control msg.
ProtocolOverlayControlMsg
)
// protocolMsg combines a tag and encoded msg.
type protocolMsg struct {
tag protocolTag
msg []byte
}
type protocolSender interface {
SendProtocolMsg(m protocolMsg) error
}

205
vendor/github.com/weaveworks/mesh/protocol_crypto.go generated vendored Normal file

@ -0,0 +1,205 @@
package mesh
import (
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"encoding/gob"
"fmt"
"io"
"sync"
"golang.org/x/crypto/nacl/box"
"golang.org/x/crypto/nacl/secretbox"
)
// maxTCPMsgSize is the hard limit on sends and receives. Larger messages will
// result in errors. This applies to the lengthPrefixTCP{Sender,Receiver}, i.e.
// V2 of the protocol.
const maxTCPMsgSize = 10 * 1024 * 1024
// generateKeyPair is used during encrypted protocol introduction.
func generateKeyPair() (publicKey, privateKey *[32]byte, err error) {
return box.GenerateKey(rand.Reader)
}
// formSessionKey is used during encrypted protocol introduction.
func formSessionKey(remotePublicKey, localPrivateKey *[32]byte, secretKey []byte) *[32]byte {
var sharedKey [32]byte
box.Precompute(&sharedKey, remotePublicKey, localPrivateKey)
sharedKeySlice := sharedKey[:]
sharedKeySlice = append(sharedKeySlice, secretKey...)
sessionKey := sha256.Sum256(sharedKeySlice)
return &sessionKey
}
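// In other words, the session key is derived as
//
//	sessionKey = SHA-256(box.Precompute(remotePublicKey, localPrivateKey) || secretKey)
//
// so two peers that share the same password (secretKey) and have exchanged
// public keys arrive at the same 32-byte key for use with secretbox.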
// TCP Senders/Receivers
// tcpCryptoState stores session key, nonce, and sequence state.
//
// The lowest 64 bits of the nonce contain the message sequence number. The
// topmost bit indicates the connection polarity at the sender - '1' for
// outbound; the next indicates protocol type - '1' for TCP. The remaining 126
// bits are zero. The polarity is needed so that the two ends of a connection
// do not use the same nonces; the protocol type so that the TCP connection
// nonces are distinct from nonces used by overlay connections, if they share
// the session key. This is a requirement of the NaCl Security Model; see
// http://nacl.cr.yp.to/box.html.
type tcpCryptoState struct {
sessionKey *[32]byte
nonce [24]byte
seqNo uint64
}
// newTCPCryptoState returns a valid tcpCryptoState.
func newTCPCryptoState(sessionKey *[32]byte, outbound bool) *tcpCryptoState {
s := &tcpCryptoState{sessionKey: sessionKey}
if outbound {
s.nonce[0] |= (1 << 7)
}
s.nonce[0] |= (1 << 6)
return s
}
func (s *tcpCryptoState) advance() {
s.seqNo++
binary.BigEndian.PutUint64(s.nonce[16:24], s.seqNo)
}
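// A worked example of the nonce layout: on the outbound side of a TCP
// connection, nonce[0] is 0b11000000 (polarity and protocol-type bits set),
// nonce[1:16] remain zero, and after one call to advance, nonce[16:24] holds
// the big-endian sequence number 1.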
// tcpSender describes anything that can send byte buffers.
// It abstracts over the different protocol version senders.
type tcpSender interface {
Send([]byte) error
}
// gobTCPSender implements tcpSender and is used in the V1 protocol.
type gobTCPSender struct {
encoder *gob.Encoder
}
func newGobTCPSender(encoder *gob.Encoder) *gobTCPSender {
return &gobTCPSender{encoder: encoder}
}
// Send implements tcpSender by gob-encoding the msg.
func (sender *gobTCPSender) Send(msg []byte) error {
return sender.encoder.Encode(msg)
}
// lengthPrefixTCPSender implements tcpSender and is used in the V2 protocol.
type lengthPrefixTCPSender struct {
writer io.Writer
}
func newLengthPrefixTCPSender(writer io.Writer) *lengthPrefixTCPSender {
return &lengthPrefixTCPSender{writer: writer}
}
// Send implements tcpSender by writing the size of the msg as a big-endian
// uint32 before the msg. Messages larger than maxTCPMsgSize are rejected.
func (sender *lengthPrefixTCPSender) Send(msg []byte) error {
l := len(msg)
if l > maxTCPMsgSize {
return fmt.Errorf("outgoing message exceeds maximum size: %d > %d", l, maxTCPMsgSize)
}
// We copy the message so we can send it in a single Write
// operation, thus making this thread-safe without locking.
prefixedMsg := make([]byte, 4+l)
binary.BigEndian.PutUint32(prefixedMsg, uint32(l))
copy(prefixedMsg[4:], msg)
_, err := sender.writer.Write(prefixedMsg)
return err
}
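// For example, Send([]byte("hi")) writes the six octets
//
//	0x00 0x00 0x00 0x02 'h' 'i'
//
// in a single Write call, which is why concurrent senders cannot interleave
// partial frames.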
// encryptedTCPSender implements tcpSender by wrapping an existing tcpSender
// with tcpCryptoState.
type encryptedTCPSender struct {
sync.RWMutex
sender tcpSender
state *tcpCryptoState
}
func newEncryptedTCPSender(sender tcpSender, sessionKey *[32]byte, outbound bool) *encryptedTCPSender {
return &encryptedTCPSender{sender: sender, state: newTCPCryptoState(sessionKey, outbound)}
}
// Send implements tcpSender by sealing the msg with the current nonce and
// sending the sealed form.
func (sender *encryptedTCPSender) Send(msg []byte) error {
sender.Lock()
defer sender.Unlock()
encodedMsg := secretbox.Seal(nil, msg, &sender.state.nonce, sender.state.sessionKey)
sender.state.advance()
return sender.sender.Send(encodedMsg)
}
// tcpReceiver describes anything that can receive byte buffers.
// It abstracts over the different protocol version receivers.
type tcpReceiver interface {
Receive() ([]byte, error)
}
// gobTCPReceiver implements tcpReceiver and is used in the V1 protocol.
type gobTCPReceiver struct {
decoder *gob.Decoder
}
func newGobTCPReceiver(decoder *gob.Decoder) *gobTCPReceiver {
return &gobTCPReceiver{decoder: decoder}
}
// Receive implements tcpReceiver by gob-decoding into a byte slice directly.
func (receiver *gobTCPReceiver) Receive() ([]byte, error) {
var msg []byte
err := receiver.decoder.Decode(&msg)
return msg, err
}
// lengthPrefixTCPReceiver implements tcpReceiver, used in the V2 protocol.
type lengthPrefixTCPReceiver struct {
reader io.Reader
}
func newLengthPrefixTCPReceiver(reader io.Reader) *lengthPrefixTCPReceiver {
return &lengthPrefixTCPReceiver{reader: reader}
}
// Receive implements tcpReceiver by making a length-limited read into a byte buffer.
func (receiver *lengthPrefixTCPReceiver) Receive() ([]byte, error) {
lenPrefix := make([]byte, 4)
if _, err := io.ReadFull(receiver.reader, lenPrefix); err != nil {
return nil, err
}
l := binary.BigEndian.Uint32(lenPrefix)
if l > maxTCPMsgSize {
return nil, fmt.Errorf("incoming message exceeds maximum size: %d > %d", l, maxTCPMsgSize)
}
msg := make([]byte, l)
_, err := io.ReadFull(receiver.reader, msg)
return msg, err
}
// encryptedTCPReceiver implements tcpReceiver by wrapping a tcpReceiver with tcpCryptoState.
type encryptedTCPReceiver struct {
receiver tcpReceiver
state *tcpCryptoState
}
func newEncryptedTCPReceiver(receiver tcpReceiver, sessionKey *[32]byte, outbound bool) *encryptedTCPReceiver {
return &encryptedTCPReceiver{receiver: receiver, state: newTCPCryptoState(sessionKey, !outbound)}
}
// Receive implements tcpReceiver by reading from the wrapped tcpReceiver and
// opening the sealed message, returning the decrypted contents.
func (receiver *encryptedTCPReceiver) Receive() ([]byte, error) {
msg, err := receiver.receiver.Receive()
if err != nil {
return nil, err
}
decodedMsg, success := secretbox.Open(nil, msg, &receiver.state.nonce, receiver.state.sessionKey)
if !success {
return nil, fmt.Errorf("Unable to decrypt TCP msg")
}
receiver.state.advance()
return decodedMsg, nil
}

15
vendor/github.com/weaveworks/mesh/protocol_crypto_test.go generated vendored Normal file

@ -0,0 +1,15 @@
package mesh_test
import "testing"
func TestGobTCPSenderReceiver(t *testing.T) {
t.Skip("TODO")
}
func TestLengthPrefixTCPSenderReceiver(t *testing.T) {
t.Skip("TODO")
}
func TestEncryptedTCPSenderReceiver(t *testing.T) {
t.Skip("TODO")
}

96
vendor/github.com/weaveworks/mesh/protocol_test.go generated vendored Normal file

@ -0,0 +1,96 @@
package mesh
import (
"io"
"testing"
"time"
"github.com/stretchr/testify/require"
)
type testConn struct {
io.Writer
io.Reader
}
func (testConn) SetDeadline(t time.Time) error {
return nil
}
func (testConn) SetReadDeadline(t time.Time) error {
return nil
}
func (testConn) SetWriteDeadline(t time.Time) error {
return nil
}
func connPair() (protocolIntroConn, protocolIntroConn) {
a := testConn{}
b := testConn{}
a.Reader, b.Writer = io.Pipe()
b.Reader, a.Writer = io.Pipe()
return &a, &b
}
func doIntro(t *testing.T, params protocolIntroParams) <-chan protocolIntroResults {
ch := make(chan protocolIntroResults, 1)
go func() {
res, err := params.doIntro()
require.Nil(t, err)
ch <- res
}()
return ch
}
func doProtocolIntro(t *testing.T, aver, bver byte, password []byte) byte {
aconn, bconn := connPair()
aresch := doIntro(t, protocolIntroParams{
MinVersion: ProtocolMinVersion,
MaxVersion: aver,
Features: map[string]string{"Name": "A"},
Conn: aconn,
Outbound: true,
Password: password,
})
bresch := doIntro(t, protocolIntroParams{
MinVersion: ProtocolMinVersion,
MaxVersion: bver,
Features: map[string]string{"Name": "B"},
Conn: bconn,
Outbound: false,
Password: password,
})
ares := <-aresch
bres := <-bresch
// Check that features were conveyed
require.Equal(t, "B", ares.Features["Name"])
require.Equal(t, "A", bres.Features["Name"])
// Check that Senders and Receivers work
go func() {
require.Nil(t, ares.Sender.Send([]byte("Hello from A")))
require.Nil(t, bres.Sender.Send([]byte("Hello from B")))
}()
data, err := bres.Receiver.Receive()
require.Nil(t, err)
require.Equal(t, "Hello from A", string(data))
data, err = ares.Receiver.Receive()
require.Nil(t, err)
require.Equal(t, "Hello from B", string(data))
require.Equal(t, ares.Version, bres.Version)
return ares.Version
}
func TestProtocolIntro(t *testing.T) {
require.Equal(t, 2, int(doProtocolIntro(t, 2, 2, nil)))
require.Equal(t, 2, int(doProtocolIntro(t, 2, 2, []byte("sekr1t"))))
require.Equal(t, 1, int(doProtocolIntro(t, 1, 2, nil)))
require.Equal(t, 1, int(doProtocolIntro(t, 1, 2, []byte("pa55"))))
require.Equal(t, 1, int(doProtocolIntro(t, 2, 1, nil)))
require.Equal(t, 1, int(doProtocolIntro(t, 2, 1, []byte("w0rd"))))
}

309
vendor/github.com/weaveworks/mesh/router.go generated vendored Normal file

@ -0,0 +1,309 @@
package mesh
import (
"bytes"
"encoding/gob"
"fmt"
"math"
"net"
"sync"
"time"
)
var (
// Port is the port used for all mesh communication.
Port = 6783
// ChannelSize is the buffer size used by so-called actor goroutines
// throughout mesh.
ChannelSize = 16
)
const (
tcpHeartbeat = 30 * time.Second
gossipInterval = 30 * time.Second
maxDuration = time.Duration(math.MaxInt64)
acceptMaxTokens = 100
acceptTokenDelay = 100 * time.Millisecond // [2]
)
// Config defines dimensions of configuration for the router.
// TODO(pb): provide usable defaults in NewRouter
type Config struct {
Host string
Port int
ProtocolMinVersion byte
Password []byte
ConnLimit int
PeerDiscovery bool
TrustedSubnets []*net.IPNet
}
// Router manages communication between this peer and the rest of the mesh.
// Router implements Gossiper.
type Router struct {
Config
Overlay Overlay
Ourself *localPeer
Peers *Peers
Routes *routes
ConnectionMaker *connectionMaker
gossipLock sync.RWMutex
gossipChannels gossipChannels
topologyGossip Gossip
acceptLimiter *tokenBucket
logger Logger
}
// NewRouter returns a new router. It must be started.
func NewRouter(config Config, name PeerName, nickName string, overlay Overlay, logger Logger) *Router {
router := &Router{Config: config, gossipChannels: make(gossipChannels)}
if overlay == nil {
overlay = NullOverlay{}
}
router.Overlay = overlay
router.Ourself = newLocalPeer(name, nickName, router)
router.Peers = newPeers(router.Ourself)
router.Peers.OnGC(func(peer *Peer) {
logger.Printf("Removed unreachable peer %s", peer)
})
router.Routes = newRoutes(router.Ourself, router.Peers)
router.ConnectionMaker = newConnectionMaker(router.Ourself, router.Peers, net.JoinHostPort(router.Host, "0"), router.Port, router.PeerDiscovery, logger)
router.logger = logger
router.topologyGossip = router.NewGossip("topology", router)
router.acceptLimiter = newTokenBucket(acceptMaxTokens, acceptTokenDelay)
return router
}
// Start listening for TCP connections. This is separate from NewRouter so
// that gossipers can register before we start forming connections.
func (router *Router) Start() {
router.listenTCP()
}
// Stop shuts down the router.
func (router *Router) Stop() error {
router.Overlay.Stop()
// TODO: perform more graceful shutdown...
return nil
}
func (router *Router) usingPassword() bool {
return router.Password != nil
}
func (router *Router) listenTCP() {
localAddr, err := net.ResolveTCPAddr("tcp4", net.JoinHostPort(router.Host, fmt.Sprint(router.Port)))
if err != nil {
panic(err)
}
ln, err := net.ListenTCP("tcp4", localAddr)
if err != nil {
panic(err)
}
go func() {
defer ln.Close()
for {
tcpConn, err := ln.AcceptTCP()
if err != nil {
router.logger.Printf("%v", err)
continue
}
router.acceptTCP(tcpConn)
router.acceptLimiter.wait()
}
}()
}
func (router *Router) acceptTCP(tcpConn *net.TCPConn) {
remoteAddrStr := tcpConn.RemoteAddr().String()
router.logger.Printf("->[%s] connection accepted", remoteAddrStr)
connRemote := newRemoteConnection(router.Ourself.Peer, nil, remoteAddrStr, false, false)
startLocalConnection(connRemote, tcpConn, router, true, router.logger)
}
// NewGossip returns a usable GossipChannel from the router.
//
// TODO(pb): rename?
func (router *Router) NewGossip(channelName string, g Gossiper) Gossip {
channel := newGossipChannel(channelName, router.Ourself, router.Routes, g, router.logger)
router.gossipLock.Lock()
defer router.gossipLock.Unlock()
if _, found := router.gossipChannels[channelName]; found {
panic(fmt.Sprintf("[gossip] duplicate channel %s", channelName))
}
router.gossipChannels[channelName] = channel
return channel
}
func (router *Router) gossipChannel(channelName string) *gossipChannel {
router.gossipLock.RLock()
channel, found := router.gossipChannels[channelName]
router.gossipLock.RUnlock()
if found {
return channel
}
router.gossipLock.Lock()
defer router.gossipLock.Unlock()
if channel, found = router.gossipChannels[channelName]; found {
return channel
}
channel = newGossipChannel(channelName, router.Ourself, router.Routes, &surrogateGossiper{}, router.logger)
channel.logf("created surrogate channel")
router.gossipChannels[channelName] = channel
return channel
}
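// The above is the classic double-checked locking idiom: an optimistic read
// under RLock, then a re-check under the write lock before creating the
// surrogate channel, so concurrent callers cannot both create one.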
func (router *Router) gossipChannelSet() map[*gossipChannel]struct{} {
channels := make(map[*gossipChannel]struct{})
router.gossipLock.RLock()
defer router.gossipLock.RUnlock()
for _, channel := range router.gossipChannels {
channels[channel] = struct{}{}
}
return channels
}
func (router *Router) handleGossip(tag protocolTag, payload []byte) error {
decoder := gob.NewDecoder(bytes.NewReader(payload))
var channelName string
if err := decoder.Decode(&channelName); err != nil {
return err
}
channel := router.gossipChannel(channelName)
var srcName PeerName
if err := decoder.Decode(&srcName); err != nil {
return err
}
switch tag {
case ProtocolGossipUnicast:
return channel.deliverUnicast(srcName, payload, decoder)
case ProtocolGossipBroadcast:
return channel.deliverBroadcast(srcName, payload, decoder)
case ProtocolGossip:
return channel.deliver(srcName, payload, decoder)
}
return nil
}
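// The payload layout implied above is gob(channelName) | gob(srcName) |
// channel-specific data; the untouched payload is passed through alongside
// the decoder so the channel can forward it verbatim.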
// Relay all pending gossip data for each channel via random neighbours.
func (router *Router) sendAllGossip() {
for channel := range router.gossipChannelSet() {
if gossip := channel.gossiper.Gossip(); gossip != nil {
channel.Send(gossip)
}
}
}
// Relay all pending gossip data for each channel via conn.
func (router *Router) sendAllGossipDown(conn Connection) {
for channel := range router.gossipChannelSet() {
if gossip := channel.gossiper.Gossip(); gossip != nil {
channel.SendDown(conn, gossip)
}
}
}
// for testing
func (router *Router) sendPendingGossip() bool {
sentSomething := false
for conn := range router.Ourself.getConnections() {
sentSomething = conn.(gossipConnection).gossipSenders().Flush() || sentSomething
}
return sentSomething
}
// broadcastTopologyUpdate is invoked whenever there is a change to the mesh
// topology, and broadcasts the new set of peers to the mesh.
func (router *Router) broadcastTopologyUpdate(update []*Peer) {
names := make(peerNameSet)
for _, p := range update {
names[p.Name] = struct{}{}
}
router.topologyGossip.GossipBroadcast(&topologyGossipData{peers: router.Peers, update: names})
}
// OnGossipUnicast implements Gossiper, but always returns an error, as a
// router should only receive gossip broadcasts of topologyGossipData.
func (router *Router) OnGossipUnicast(sender PeerName, msg []byte) error {
return fmt.Errorf("unexpected topology gossip unicast: %v", msg)
}
// OnGossipBroadcast receives broadcasts of topologyGossipData.
// It returns the received update unchanged.
func (router *Router) OnGossipBroadcast(_ PeerName, update []byte) (GossipData, error) {
origUpdate, _, err := router.applyTopologyUpdate(update)
if err != nil || len(origUpdate) == 0 {
return nil, err
}
return &topologyGossipData{peers: router.Peers, update: origUpdate}, nil
}
// Gossip yields the current topology as GossipData.
func (router *Router) Gossip() GossipData {
return &topologyGossipData{peers: router.Peers, update: router.Peers.names()}
}
// OnGossip receives gossip of topologyGossipData.
// It returns an "improved" version of the received update.
// See Peers.applyUpdate.
func (router *Router) OnGossip(update []byte) (GossipData, error) {
_, newUpdate, err := router.applyTopologyUpdate(update)
if err != nil || len(newUpdate) == 0 {
return nil, err
}
return &topologyGossipData{peers: router.Peers, update: newUpdate}, nil
}
func (router *Router) applyTopologyUpdate(update []byte) (peerNameSet, peerNameSet, error) {
origUpdate, newUpdate, err := router.Peers.applyUpdate(update)
if err != nil {
return nil, nil, err
}
if len(newUpdate) > 0 {
router.ConnectionMaker.refresh()
router.Routes.recalculate()
}
return origUpdate, newUpdate, nil
}
func (router *Router) trusts(remote *remoteConnection) bool {
if tcpAddr, err := net.ResolveTCPAddr("tcp4", remote.remoteTCPAddr); err == nil {
for _, trustedSubnet := range router.TrustedSubnets {
if trustedSubnet.Contains(tcpAddr.IP) {
return true
}
}
} else {
// Should not happen as remoteTCPAddr was obtained from TCPConn
router.logger.Printf("Unable to parse remote TCP addr: %s", err)
}
return false
}
// The set of peers in the mesh network.
// Gossiped just like anything else.
type topologyGossipData struct {
peers *Peers
update peerNameSet
}
// Merge implements GossipData.
func (d *topologyGossipData) Merge(other GossipData) GossipData {
names := make(peerNameSet)
for name := range d.update {
names[name] = struct{}{}
}
for name := range other.(*topologyGossipData).update {
names[name] = struct{}{}
}
return &topologyGossipData{peers: d.peers, update: names}
}
// Encode implements GossipData.
func (d *topologyGossipData) Encode() [][]byte {
return [][]byte{d.peers.encodePeers(d.update)}
}

263
vendor/github.com/weaveworks/mesh/routes.go generated vendored Normal file

@ -0,0 +1,263 @@
package mesh
import (
"math"
"sync"
)
type unicastRoutes map[PeerName]PeerName
type broadcastRoutes map[PeerName][]PeerName
// routes aggregates unicast and broadcast routes for our peer.
type routes struct {
sync.RWMutex
ourself *localPeer
peers *Peers
onChange []func()
unicast unicastRoutes
unicastAll unicastRoutes // [1]
broadcast broadcastRoutes
broadcastAll broadcastRoutes // [1]
recalc chan<- *struct{}
wait chan<- chan struct{}
action chan<- func()
// [1] based on *all* connections, not just established &
// symmetric ones
}
// newRoutes returns a usable routes based on the localPeer and existing Peers.
func newRoutes(ourself *localPeer, peers *Peers) *routes {
recalculate := make(chan *struct{}, 1)
wait := make(chan chan struct{})
action := make(chan func())
r := &routes{
ourself: ourself,
peers: peers,
unicast: unicastRoutes{ourself.Name: UnknownPeerName},
unicastAll: unicastRoutes{ourself.Name: UnknownPeerName},
broadcast: broadcastRoutes{ourself.Name: []PeerName{}},
broadcastAll: broadcastRoutes{ourself.Name: []PeerName{}},
recalc: recalculate,
wait: wait,
action: action,
}
go r.run(recalculate, wait, action)
return r
}
// OnChange appends callback to the functions that will be called whenever the
// routes are recalculated.
func (r *routes) OnChange(callback func()) {
r.Lock()
defer r.Unlock()
r.onChange = append(r.onChange, callback)
}
// PeerNames returns the peers that are accounted for in the routes.
func (r *routes) PeerNames() peerNameSet {
return r.peers.names()
}
// Unicast returns the next hop on the unicast route to the named peer,
// based on established and symmetric connections.
func (r *routes) Unicast(name PeerName) (PeerName, bool) {
r.RLock()
defer r.RUnlock()
hop, found := r.unicast[name]
return hop, found
}
// UnicastAll returns the next hop on the unicast route to the named peer,
// based on all connections.
func (r *routes) UnicastAll(name PeerName) (PeerName, bool) {
r.RLock()
defer r.RUnlock()
hop, found := r.unicastAll[name]
return hop, found
}
// Broadcast returns the set of peer names that should be notified
// when we receive a broadcast message originating from the named peer
// based on established and symmetric connections.
func (r *routes) Broadcast(name PeerName) []PeerName {
return r.lookupOrCalculate(name, &r.broadcast, true)
}
// BroadcastAll returns the set of peer names that should be notified
// when we receive a broadcast message originating from the named peer
// based on all connections.
func (r *routes) BroadcastAll(name PeerName) []PeerName {
return r.lookupOrCalculate(name, &r.broadcastAll, false)
}
func (r *routes) lookupOrCalculate(name PeerName, broadcast *broadcastRoutes, establishedAndSymmetric bool) []PeerName {
r.RLock()
hops, found := (*broadcast)[name]
r.RUnlock()
if found {
return hops
}
res := make(chan []PeerName)
r.action <- func() {
r.RLock()
hops, found := (*broadcast)[name]
r.RUnlock()
if found {
res <- hops
return
}
r.peers.RLock()
r.ourself.RLock()
hops = r.calculateBroadcast(name, establishedAndSymmetric)
r.ourself.RUnlock()
r.peers.RUnlock()
res <- hops
r.Lock()
(*broadcast)[name] = hops
r.Unlock()
}
return <-res
}
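// Falling back to the action channel serialises the calculation on the run
// goroutine, so even when many readers miss the cache simultaneously, the
// broadcast routes for a name are computed once and then served from the
// cached entry.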
// randomNeighbours chooses min(log2(n_peers), n_neighbouring_peers)
// neighbours, with a random distribution that is topology-sensitive,
// favouring neighbours at the end of "bottleneck links". We determine the
// latter based on the unicast routing table. If a neighbour appears as the
// value more frequently than others - meaning that we reach a higher
// proportion of peers via that neighbour than other neighbours - then it is
// chosen with a higher probability.
//
// Note that we choose log2(n_peers) *neighbours*, not peers. Consequently, on
// sparsely connected peers this function returns a higher proportion of
// neighbours than elsewhere. In extremis, on peers with fewer than
// log2(n_peers) neighbours, all neighbours are returned.
func (r *routes) randomNeighbours(except PeerName) []PeerName {
destinations := make(peerNameSet)
r.RLock()
defer r.RUnlock()
count := int(math.Log2(float64(len(r.unicastAll))))
// depends on Go's random map iteration order
for _, dst := range r.unicastAll {
if dst != UnknownPeerName && dst != except {
destinations[dst] = struct{}{}
if len(destinations) >= count {
break
}
}
}
res := make([]PeerName, 0, len(destinations))
for dst := range destinations {
res = append(res, dst)
}
return res
}
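// For illustration: with 16 entries in unicastAll (ourself plus 15 reachable
// peers), count = log2(16) = 4, so up to 4 distinct next-hop neighbours are
// returned, implicitly weighted by how many destinations route via each.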
// recalculate requests recalculation of the routing table. This is async, but
// can effectively be made synchronous with a subsequent call to
// ensureRecalculated.
func (r *routes) recalculate() {
// The use of a 1-capacity channel in combination with the
// non-blocking send is an optimisation that results in multiple
// requests being coalesced.
select {
case r.recalc <- nil:
default:
}
}
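// For example, if three goroutines call recalculate while a calculation is
// already running, the first send fills the 1-slot channel and the other two
// fall through the default case; the run loop then performs a single further
// recalculation that covers all three requests.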
// ensureRecalculated waits for any preceding recalculate requests to finish.
func (r *routes) ensureRecalculated() {
done := make(chan struct{})
r.wait <- done
<-done
}
func (r *routes) run(recalculate <-chan *struct{}, wait <-chan chan struct{}, action <-chan func()) {
for {
select {
case <-recalculate:
r.calculate()
case done := <-wait:
select {
case <-recalculate:
r.calculate()
default:
}
close(done)
case f := <-action:
f()
}
}
}
func (r *routes) calculate() {
r.peers.RLock()
r.ourself.RLock()
var (
unicast = r.calculateUnicast(true)
unicastAll = r.calculateUnicast(false)
broadcast = make(broadcastRoutes)
broadcastAll = make(broadcastRoutes)
)
broadcast[r.ourself.Name] = r.calculateBroadcast(r.ourself.Name, true)
broadcastAll[r.ourself.Name] = r.calculateBroadcast(r.ourself.Name, false)
r.ourself.RUnlock()
r.peers.RUnlock()
r.Lock()
r.unicast = unicast
r.unicastAll = unicastAll
r.broadcast = broadcast
r.broadcastAll = broadcastAll
onChange := r.onChange
r.Unlock()
for _, callback := range onChange {
callback()
}
}
// Calculate all the routes for the question: if *we* want to send a
// packet to Peer X, what is the next hop?
//
// When we sniff a packet, we determine the destination peer
// ourselves. Consequently, we can relay the packet via any
// arbitrary peers - the intermediate peers do not have to have
// any knowledge of the MAC address at all. Thus there's no need
// to exchange knowledge of MAC addresses, nor any constraints on
// the routes that we construct.
func (r *routes) calculateUnicast(establishedAndSymmetric bool) unicastRoutes {
_, unicast := r.ourself.routes(nil, establishedAndSymmetric)
return unicast
}
// Calculate the route to answer the question: if we receive a
// broadcast originally from Peer X, which peers should we pass the
// frames on to?
//
// When the topology is stable, and thus all peers perform route
// calculations based on the same data, the algorithm ensures that
// broadcasts reach every peer exactly once.
//
// This is largely due to properties of the Peer.Routes algorithm. In
// particular:
//
// ForAll X,Y,Z in Peers.
// X.Routes(Y) <= X.Routes(Z) \/
// X.Routes(Z) <= X.Routes(Y)
// ForAll X,Y,Z in Peers.
// Y =/= Z /\ X.Routes(Y) <= X.Routes(Z) =>
// X.Routes(Y) u [P | Y.HasSymmetricConnectionTo(P)] <= X.Routes(Z)
// where <= is the subset relationship on keys of the returned map.
func (r *routes) calculateBroadcast(name PeerName, establishedAndSymmetric bool) []PeerName {
hops := []PeerName{}
peer, found := r.peers.byName[name]
if !found {
return hops
}
if found, reached := peer.routes(r.ourself.Peer, establishedAndSymmetric); found {
r.ourself.forEachConnectedPeer(establishedAndSymmetric, reached,
func(remotePeer *Peer) { hops = append(hops, remotePeer.Name) })
}
return hops
}

23
vendor/github.com/weaveworks/mesh/routes_test.go generated vendored Normal file

@ -0,0 +1,23 @@
package mesh_test
import "testing"
func TestRoutesUnicast(t *testing.T) {
t.Skip("TODO")
}
func TestRoutesUnicastAll(t *testing.T) {
t.Skip("TODO")
}
func TestRoutesBroadcast(t *testing.T) {
t.Skip("TODO")
}
func TestRoutesBroadcastAll(t *testing.T) {
t.Skip("TODO")
}
func TestRoutesRecalculate(t *testing.T) {
t.Skip("TODO")
}

223
vendor/github.com/weaveworks/mesh/status.go generated vendored Normal file

@ -0,0 +1,223 @@
package mesh
import (
"fmt"
"net"
)
// Status is our current state as a peer, as taken from a router.
// This is designed to be used as diagnostic information.
type Status struct {
Protocol string
ProtocolMinVersion int
ProtocolMaxVersion int
Encryption bool
PeerDiscovery bool
Name string
NickName string
Port int
Peers []PeerStatus
UnicastRoutes []unicastRouteStatus
BroadcastRoutes []broadcastRouteStatus
Connections []LocalConnectionStatus
TerminationCount int
Targets []string
OverlayDiagnostics interface{}
TrustedSubnets []string
}
// NewStatus returns a Status object, taken as a snapshot from the router.
func NewStatus(router *Router) *Status {
return &Status{
Protocol: Protocol,
ProtocolMinVersion: ProtocolMinVersion,
ProtocolMaxVersion: ProtocolMaxVersion,
Encryption: router.usingPassword(),
PeerDiscovery: router.PeerDiscovery,
Name: router.Ourself.Name.String(),
NickName: router.Ourself.NickName,
Port: router.Port,
Peers: makePeerStatusSlice(router.Peers),
UnicastRoutes: makeUnicastRouteStatusSlice(router.Routes),
BroadcastRoutes: makeBroadcastRouteStatusSlice(router.Routes),
Connections: makeLocalConnectionStatusSlice(router.ConnectionMaker),
TerminationCount: router.ConnectionMaker.terminationCount,
Targets: router.ConnectionMaker.Targets(false),
OverlayDiagnostics: router.Overlay.Diagnostics(),
TrustedSubnets: makeTrustedSubnetsSlice(router.TrustedSubnets),
}
}
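// A caller-side usage sketch (assuming a running *Router named router; JSON
// is just one convenient way to render the snapshot):
//
//	status := mesh.NewStatus(router)
//	out, err := json.MarshalIndent(status, "", "  ")
//	if err == nil {
//		fmt.Println(string(out))
//	}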
// PeerStatus is the current state of a peer in the mesh.
type PeerStatus struct {
Name string
NickName string
UID PeerUID
ShortID PeerShortID
Version uint64
Connections []connectionStatus
}
// makePeerStatusSlice takes a snapshot of the state of peers.
func makePeerStatusSlice(peers *Peers) []PeerStatus {
var slice []PeerStatus
peers.forEach(func(peer *Peer) {
var connections []connectionStatus
if peer == peers.ourself.Peer {
for conn := range peers.ourself.getConnections() {
connections = append(connections, makeConnectionStatus(conn))
}
} else {
// Modifying peer.connections requires a write lock on
// Peers, and since we are holding a read lock (due to the
// forEach), access without locking the peer is safe.
for _, conn := range peer.connections {
connections = append(connections, makeConnectionStatus(conn))
}
}
slice = append(slice, PeerStatus{
peer.Name.String(),
peer.NickName,
peer.UID,
peer.ShortID,
peer.Version,
connections,
})
})
return slice
}
type connectionStatus struct {
Name string
NickName string
Address string
Outbound bool
Established bool
}
func makeConnectionStatus(c Connection) connectionStatus {
return connectionStatus{
Name: c.Remote().Name.String(),
NickName: c.Remote().NickName,
Address: c.remoteTCPAddress(),
Outbound: c.isOutbound(),
Established: c.isEstablished(),
}
}
// unicastRouteStatus is the current state of an established unicast route.
type unicastRouteStatus struct {
Dest, Via string
}
// makeUnicastRouteStatusSlice takes a snapshot of the unicast routes in routes.
func makeUnicastRouteStatusSlice(r *routes) []unicastRouteStatus {
r.RLock()
defer r.RUnlock()
var slice []unicastRouteStatus
for dest, via := range r.unicast {
slice = append(slice, unicastRouteStatus{dest.String(), via.String()})
}
return slice
}
// broadcastRouteStatus is the current state of an established broadcast route.
type broadcastRouteStatus struct {
Source string
Via []string
}
// makeBroadcastRouteStatusSlice takes a snapshot of the broadcast routes in routes.
func makeBroadcastRouteStatusSlice(r *routes) []broadcastRouteStatus {
r.RLock()
defer r.RUnlock()
var slice []broadcastRouteStatus
for source, via := range r.broadcast {
var hops []string
for _, hop := range via {
hops = append(hops, hop.String())
}
slice = append(slice, broadcastRouteStatus{source.String(), hops})
}
return slice
}
// LocalConnectionStatus is the current state of a physical connection to a peer.
type LocalConnectionStatus struct {
Address string
Outbound bool
State string
Info string
Attrs map[string]interface{}
}
// makeLocalConnectionStatusSlice takes a snapshot of the active local
// connections in the ConnectionMaker.
func makeLocalConnectionStatusSlice(cm *connectionMaker) []LocalConnectionStatus {
resultChan := make(chan []LocalConnectionStatus)
cm.actionChan <- func() bool {
var slice []LocalConnectionStatus
for conn := range cm.connections {
state := "pending"
if conn.isEstablished() {
state = "established"
}
lc, _ := conn.(*LocalConnection) // connections tracked here are *LocalConnection in practice
attrs := lc.OverlayConn.Attrs()
name, ok := attrs["name"]
if !ok {
name = "none"
}
info := fmt.Sprintf("%-6v %v", name, conn.Remote())
if lc.router.usingPassword() {
if lc.untrusted() {
info = fmt.Sprintf("%-11v %v", "encrypted", info)
} else {
info = fmt.Sprintf("%-11v %v", "unencrypted", info)
}
}
slice = append(slice, LocalConnectionStatus{conn.remoteTCPAddress(), conn.isOutbound(), state, info, attrs})
}
for address, target := range cm.targets {
add := func(state, info string) {
slice = append(slice, LocalConnectionStatus{address, true, state, info, nil})
}
switch target.state {
case targetWaiting:
until := "never"
if !target.tryAfter.IsZero() {
until = target.tryAfter.String()
}
if target.lastError == nil { // shouldn't happen
add("waiting", "until: "+until)
} else {
add("failed", target.lastError.Error()+", retry: "+until)
}
case targetAttempting:
if target.lastError == nil {
add("connecting", "")
} else {
add("retrying", target.lastError.Error())
}
case targetConnected:
case targetSuspended:
}
}
resultChan <- slice
return false
}
return <-resultChan
}
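// The snapshot above executes as a closure on the connection maker's actor
// goroutine, which is why cm.connections and cm.targets are read without any
// locking. A self-contained sketch of the same request/response pattern
// (hypothetical names throughout):
//
//	result := make(chan int)
//	actionChan <- func() bool {
//		result <- len(sharedState) // safe: runs on the owning goroutine
//		return false               // assumption: false signals no state change
//	}
//	n := <-result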
// makeTrustedSubnetsSlice makes a human-readable copy of the trustedSubnets.
func makeTrustedSubnetsSlice(trustedSubnets []*net.IPNet) []string {
trustedSubnetStrs := []string{}
for _, trustedSubnet := range trustedSubnets {
trustedSubnetStrs = append(trustedSubnetStrs, trustedSubnet.String())
}
return trustedSubnetStrs
}

94
vendor/github.com/weaveworks/mesh/surrogate_gossiper.go generated vendored Normal file

@@ -0,0 +1,94 @@
package mesh
import (
"bytes"
"hash/fnv"
"sync"
"time"
)
// surrogateGossiper ignores unicasts and relays broadcasts and gossips.
type surrogateGossiper struct {
sync.Mutex
prevUpdates []prevUpdate
}
type prevUpdate struct {
update []byte
hash uint64
t time.Time
}
var _ Gossiper = &surrogateGossiper{}
// now is a hook so tests can mock time.
var now = func() time.Time { return time.Now() }
// OnGossipUnicast implements Gossiper.
func (*surrogateGossiper) OnGossipUnicast(sender PeerName, msg []byte) error {
return nil
}
// OnGossipBroadcast implements Gossiper.
func (*surrogateGossiper) OnGossipBroadcast(_ PeerName, update []byte) (GossipData, error) {
return newSurrogateGossipData(update), nil
}
// Gossip implements Gossiper.
func (*surrogateGossiper) Gossip() GossipData {
return nil
}
// OnGossip implements Gossiper. It should return "everything new I've just
// learnt". surrogateGossiper can't interpret message contents, but it can
// still eliminate simple duplicates.
func (s *surrogateGossiper) OnGossip(update []byte) (GossipData, error) {
hash := fnv.New64a()
_, _ = hash.Write(update)
updateHash := hash.Sum64()
s.Lock()
defer s.Unlock()
for _, p := range s.prevUpdates {
if updateHash == p.hash && bytes.Equal(update, p.update) {
return nil, nil
}
}
// Delete anything that's older than the gossip interval, so we don't grow forever
// (this time limit is arbitrary; surrogateGossiper should pass on new gossip immediately
// so there should be no reason for a duplicate to show up after a long time)
updateTime := now()
deleteBefore := updateTime.Add(-gossipInterval)
keepFrom := len(s.prevUpdates)
for i, p := range s.prevUpdates {
if p.t.After(deleteBefore) {
keepFrom = i
break
}
}
s.prevUpdates = append(s.prevUpdates[keepFrom:], prevUpdate{update, updateHash, updateTime})
return newSurrogateGossipData(update), nil
}
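// A behavior sketch of the dedup above (it mirrors the tests in
// surrogate_gossiper_test.go): the first sighting of an update is passed on,
// and an identical update seen again within gossipInterval is suppressed.
//
//	s := &surrogateGossiper{}
//	d1, _ := s.OnGossip([]byte("x")) // d1 encodes [][]byte{[]byte("x")}
//	d2, _ := s.OnGossip([]byte("x")) // d2 == nil: duplicate dropped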
// surrogateGossipData is a simple in-memory GossipData.
type surrogateGossipData struct {
messages [][]byte
}
var _ GossipData = &surrogateGossipData{}
func newSurrogateGossipData(msg []byte) *surrogateGossipData {
return &surrogateGossipData{messages: [][]byte{msg}}
}
// Encode implements GossipData.
func (d *surrogateGossipData) Encode() [][]byte {
return d.messages
}
// Merge implements GossipData.
func (d *surrogateGossipData) Merge(other GossipData) GossipData {
o := other.(*surrogateGossipData)
messages := make([][]byte, 0, len(d.messages)+len(o.messages))
messages = append(messages, d.messages...)
messages = append(messages, o.messages...)
return &surrogateGossipData{messages: messages}
}
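// Merge concatenates message lists, so repeated merges accumulate every
// message seen. A quick sketch:
//
//	a := newSurrogateGossipData([]byte("a"))
//	b := newSurrogateGossipData([]byte("b"))
//	m := a.Merge(b) // m.Encode() == [][]byte{[]byte("a"), []byte("b")}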

57
vendor/github.com/weaveworks/mesh/surrogate_gossiper_test.go generated vendored Normal file

@@ -0,0 +1,57 @@
package mesh
import "testing"
import "time"
import "github.com/stretchr/testify/require"
func TestSurrogateGossiperUnicast(t *testing.T) {
t.Skip("TODO")
}
func TestSurrogateGossiperBroadcast(t *testing.T) {
t.Skip("TODO")
}
func TestSurrogateGossiperGossip(t *testing.T) {
t.Skip("TODO")
}
func checkOnGossip(t *testing.T, s Gossiper, input, expected []byte) {
r, err := s.OnGossip(input)
require.NoError(t, err)
if r == nil {
if expected == nil {
return
}
require.Fail(t, "gossip result should not be nil")
}
require.Equal(t, [][]byte{expected}, r.Encode())
}
func TestSurrogateGossiperOnGossip(t *testing.T) {
myTime := time.Now()
now = func() time.Time { return myTime }
s := &surrogateGossiper{}
msg := [][]byte{[]byte("test 1"), []byte("test 2"), []byte("test 3"), []byte("test 4")}
checkOnGossip(t, s, msg[0], msg[0])
checkOnGossip(t, s, msg[1], msg[1])
checkOnGossip(t, s, msg[0], nil)
checkOnGossip(t, s, msg[1], nil)
myTime = myTime.Add(gossipInterval / 2) // Should not trigger cleardown
checkOnGossip(t, s, msg[2], msg[2]) // Only clears out old ones on new entry
checkOnGossip(t, s, msg[0], nil)
checkOnGossip(t, s, msg[1], nil)
myTime = myTime.Add(gossipInterval)
checkOnGossip(t, s, msg[0], nil)
checkOnGossip(t, s, msg[3], msg[3]) // Only clears out old ones on new entry
checkOnGossip(t, s, msg[0], msg[0])
checkOnGossip(t, s, msg[0], nil)
}
func TestSurrogateGossipDataEncode(t *testing.T) {
t.Skip("TODO")
}
func TestSurrogateGossipDataMerge(t *testing.T) {
t.Skip("TODO")
}

48
vendor/github.com/weaveworks/mesh/token_bucket.go generated vendored Normal file

@@ -0,0 +1,48 @@
package mesh
import (
"time"
)
// tokenBucket acts as a rate limiter.
// It is not safe for concurrent use by multiple goroutines.
type tokenBucket struct {
capacity int64 // Maximum capacity of bucket
tokenInterval time.Duration // Token replenishment rate
refillDuration time.Duration // Time to refill from empty
earliestUnspentToken time.Time
}
// newTokenBucket returns a bucket containing capacity tokens, refilled at a
// rate of one token per tokenInterval.
func newTokenBucket(capacity int64, tokenInterval time.Duration) *tokenBucket {
tb := tokenBucket{
capacity: capacity,
tokenInterval: tokenInterval,
refillDuration: tokenInterval * time.Duration(capacity)}
tb.earliestUnspentToken = tb.capacityToken()
return &tb
}
// wait blocks until a token is available. It is not safe for concurrent use
// by multiple goroutines.
func (tb *tokenBucket) wait() {
// If earliest unspent token is in the future, sleep until then
time.Sleep(tb.earliestUnspentToken.Sub(time.Now()))
// Alternatively, enforce bucket capacity if necessary
capacityToken := tb.capacityToken()
if tb.earliestUnspentToken.Before(capacityToken) {
tb.earliestUnspentToken = capacityToken
}
// 'Remove' a token from the bucket
tb.earliestUnspentToken = tb.earliestUnspentToken.Add(tb.tokenInterval)
}
// capacityToken determines the historic token timestamp representing a full bucket.
func (tb *tokenBucket) capacityToken() time.Time {
return time.Now().Add(-tb.refillDuration).Truncate(tb.tokenInterval)
}
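// A usage sketch (assumption: limit some operation to a sustained 10 per
// second, with bursts of up to 100 absorbed by the bucket's capacity):
//
//	tb := newTokenBucket(100, 100*time.Millisecond)
//	for _, job := range jobs {
//		tb.wait() // blocks only once the burst capacity is spent
//		handle(job)
//	}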

7
vendor/github.com/weaveworks/mesh/token_bucket_test.go generated vendored Normal file

@@ -0,0 +1,7 @@
package mesh_test
import "testing"
func TestTokenBucket(t *testing.T) {
t.Skip("TODO")
}