Compare commits


113 Commits
v1.4.1 ... main

Author SHA1 Message Date
08fly ec6c44733d
Fix: insert registry when existAckOffset is zero (#671)
Co-authored-by: tianyafei <tianyafei@corp.netease.com>
2024-04-08 14:38:42 +08:00
kytool 31bec67423
fix: retry interceptor goroutine blocking (#669) 2024-03-15 15:19:40 +08:00
Guorui Yu d9c57b9202
feat: add STS Token support for the SLS sink (#663)
Using an STS (Security Token Service [1]) token reduces the risk of leaking a
long-term access key (AccessKey). The token automatically expires after its
expiration date, so it needs to be refreshed periodically.

There are two ways for a user to retrieve an STS token:
- AssumeRole/AssumeRoleWithSAML/AssumeRoleWithOIDC [2]
- Access the ECS metadata server [3] on an ECS instance,
so we create a new credentialprovider that executes commands, allowing end
users to extend the STS refresh process without modifying Loggie itself.

Signed-off-by: ruogui.ygr <ruogui.ygr@alibaba-inc.com>
---
[1] https://www.alibabacloud.com/help/en/ram/product-overview/what-is-sts
[2] https://www.alibabacloud.com/help/en/ram/developer-reference/api-sts-2015-04-01-dir-role-assuming/
[3] https://www.alibabacloud.com/help/en/ecs/user-guide/obtain-a-temporary-authorization-token

Co-authored-by: ruogui.ygr <ruogui.ygr@alibaba-inc.com>
2024-03-15 15:18:45 +08:00
YenchangChan 04da98fff9
Fix: NFS volumes issue in docker runtime (#658) 2024-03-15 15:17:59 +08:00
YenchangChan f27f8d336a
perf: scan files first when the source starts (#657) 2024-03-15 15:17:12 +08:00
ethfoo b6f758bafc
Fix: skip checking other runtime types (#666) 2024-02-23 09:57:34 +08:00
ethfoo fef96ad0d2
Fix: render topic with strict in franzKafka sink (#651) 2023-11-27 15:17:55 +08:00
guangwu e21f3d0df1
chore: remove unnecessary use of fmt.Sprintf (#647)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-11-24 09:48:33 +08:00
zhanglei b584302836
fix: globalWatcher is nil when source is enabled (#648) 2023-11-22 09:55:24 +08:00
美味的糯米 ba7e228e20
upgrade: k8s.io/cri (#646)
Signed-off-by: Wang Xinwen <wxw0504@outlook.com>
2023-11-10 16:38:26 +08:00
zhanglei 7d03ad88dd
feat:Add load filtering to clusterLogConfig and exclude namespaces (#643)
* feat:Add load filtering to clusterLogConfig and exclude namespaces
2023-11-09 19:31:58 +08:00
Jeff Li 76fe3f4801
feat: add random port option to global http (#644) 2023-11-06 19:34:01 +08:00
sunbin0530 342e58a413
Fix: franzKafka sink cannot automatically create topics (#641)
Co-authored-by: sunbink <o1Haxpgz3>
2023-11-06 19:33:45 +08:00
zhanglei 8e89d72644
fix: comma split bug (#642) 2023-11-06 19:22:33 +08:00
dongjiang ae4a7250f9
support replace regex (#632)
Signed-off-by: dongjiang1989 <dongjiang1989@126.com>
2023-09-13 09:45:47 +08:00
dongjiang fa19c9ef53
Add transformer replace action (#631)
Signed-off-by: dongjiang1989 <dongjiang1989@126.com>
2023-09-06 20:41:24 +08:00
zhu733756 c7cdbac92a
[interceptor/transformer] feat:support setFloat/setInt/setBool (#620)
* support setFloat/setInt/setBool
---------

Signed-off-by: zhuhan <zhuhan@cestc.cn>
2023-09-01 15:32:12 +08:00
ethfoo 95a00ef1dd
Fix: readFromTail when existAckOffset is zero (#624) (#625) 2023-08-30 11:33:42 +08:00
linshiyx 2f3a2104d1
Fix: support collect log files from pod pv when container runtime is containerd (#615) 2023-08-09 15:42:28 +08:00
Co1a 083d9601a6
Feat/json engine select (#2) (#616)
* feat: allow users to select the json engine and initialize it

* feat: add a new interface MarshalIndent

* chore: replace other packages' use of json/jsoniter with util/json

* feat: replace the project's json lib with util/json

---------

Co-authored-by: ChengRui <chengrui@thinkingdata.cn>
2023-08-09 15:42:05 +08:00
ethfoo 71e3a7ea7f
Fix: add podNamespace as prefix of source name when pod is matched by clusterlogconfig (#613) 2023-08-04 14:58:20 +08:00
ethfoo d933083227
Feat: add discoveryNodes params in elasticsearch sink (#612) 2023-08-04 14:38:02 +08:00
ethfoo 350b21ec57
Feat: optimize elasticsearch sink request buffer (#608) 2023-08-01 19:39:46 +08:00
ethfoo d024a5052a
Fix: cfg npe in query pipelines (#604) 2023-08-01 10:27:05 +08:00
ethfoo 9d7b9a54b5
Fix: invalid pipelines will not stop all the pipelines running (#602) 2023-08-01 09:59:20 +08:00
dependabot[bot] 2520f55bfd
Chore(deps): Bump github.com/docker/distribution (#546)
Bumps [github.com/docker/distribution](https://github.com/docker/distribution) from 2.7.1+incompatible to 2.8.2+incompatible.
- [Release notes](https://github.com/docker/distribution/releases)
- [Commits](https://github.com/docker/distribution/compare/v2.7.1...v2.8.2)

---
updated-dependencies:
- dependency-name: github.com/docker/distribution
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-28 14:44:41 +08:00
ethfoo 6d18d0f847
Release: add v1.5.0-rc.0 changelog (#597) 2023-07-26 19:42:58 +08:00
ethfoo 6a5b1e5ce6
Feat: add timestamp and bodyKey in source (#600) 2023-07-25 17:29:05 +08:00
ethfoo c452784e11
Feat: add addonMetaSchema in file source (#599) 2023-07-25 10:27:38 +08:00
ethfoo 96516da425
Feat: support build core components of Loggie (#598) 2023-07-24 17:39:53 +08:00
snowsi 1a7b7dc24c
Fix: Convert to string type when returning []byte type (#596)
Co-authored-by: jishengjie <jishengjie@kylinos.cn>
2023-07-21 11:14:11 +08:00
ethfoo a577e5944b
Feat: support copytruncate in file source (#571)
* Feat: support copytruncate in file source
2023-07-13 09:58:23 +08:00
ethfoo 71ce680008
Fix: check if bulk response has errors (#592) 2023-07-11 13:52:31 +08:00
ethfoo 9fcbacb47c
Feat: change userName to username in Kafka sasl (#591) 2023-07-11 11:13:10 +08:00
ethfoo a18cfbc306
Fix: cleanUnfinished npe in cleanFiles (#590) 2023-07-10 19:20:56 +08:00
ethfoo 56aa85cdc8
Feat: add watcher config compatibility in file source (#589) 2023-07-10 16:12:46 +08:00
ethfoo fd73e56df2
Feat: add target in maxbyte interceptor (#588) 2023-07-07 17:32:17 +08:00
ethfoo aa68ae5ef9
Feat: optimize the log print in container runtime (#586) 2023-07-04 15:00:40 +08:00
ethfoo 8f8c9756e9
Feat: move FdHoldTimeoutWhenInactive and FdHoldTimeoutWhenRemove to collect config (#585) 2023-07-03 20:34:09 +08:00
ethfoo 641f9a1002
Feat: change olivere/elastic to the official go client for es (#581) 2023-07-03 20:29:16 +08:00
ethfoo b441fef23f
Feat: add sample logger for ack chain (#583) 2023-07-03 19:10:56 +08:00
ethfoo ad1722b704
Fix: update some badger opts, add badger period gc (#584) 2023-07-03 19:00:08 +08:00
hansedong f83e4a46c8
feat: support rocketmq sink (#530)
Signed-off-by: hansedong <admin@yinxiaoluo.com>
2023-07-03 17:16:43 +08:00
cnxiafc d7dc45a124
Feat 230607 franz kafka partition key (#562)
* add partition key
2023-07-03 17:15:26 +08:00
ethfoo aa321c79d0
Feat: add subLogger and sample logger (#582) 2023-07-03 17:08:38 +08:00
ethfoo 09ed74eea9
Feat: add cleanUnfinished in cleanFiles (#580) 2023-06-29 20:22:47 +08:00
ethfoo 5cf14ad81c
Feat: move readFromTail and CleanFiles from watcher config to collect config in filesource (#579) 2023-06-29 18:24:14 +08:00
ethfoo 51151c567c
Fix: add error return in genfiles and inspect subcmd (#578) 2023-06-29 15:29:23 +08:00
ethfoo c61efab59d
Feat: remove "has no consumer listener" debug level log (#577) 2023-06-28 19:19:49 +08:00
ethfoo 744889f0af
Feat: add total count in dev sink (#576) 2023-06-28 17:37:42 +08:00
kytool 254c32df33
fix(interceptor): Optimize maxbytes interception (#575)
* fix(interceptor): Optimize maxbytes interception
2023-06-28 17:22:00 +08:00
ethfoo d24a44d36e
Fix: set topic when commit in kafka source (#574) 2023-06-26 15:25:46 +08:00
ethfoo c10e519cd5
Feat: add franzKafka source (#573) 2023-06-26 15:13:13 +08:00
ethfoo 0fc45e513d
Fix: drop events if partial error in elasticsearch sink (#572) 2023-06-25 16:15:05 +08:00
ethfoo 2988233d90
Feat: support omit empty fields in k8s discovery (#570) 2023-06-19 10:00:11 +08:00
ethfoo 8f07f0fba1
Feat: support render ${_k8s.clusterlogconfig} in fields (#569) 2023-06-16 15:15:23 +08:00
kytool 5521209f35
fix(sink): set kafka Writer AllowAutoTopicCreation to true; its default is false (#567)
Co-authored-by: yuanxingcheng <yuanxingcheng@wps.cn>
2023-06-15 10:24:01 +08:00
ethfoo 0bdef8263d
Feat: support multiple topics in kafka source (#548) 2023-06-08 14:13:55 +08:00
wchy1001 3fbbaeced7
feat: Add kata runtime support for log collection (#554)
Currently, when rootFsCollectionEnabled is enabled and the containerRuntime
is containerd, Loggie collects container logs by accessing the mapped host
directory ("/proc/{pid}/root"). However, when containerd's runtime
is not "runc", this approach may not work. This PR adds support
specifically for kata as the containerd runtime.

Co-authored-by: wuchunyang <wchy1001@gmail.com>
2023-06-08 10:03:35 +08:00
ethfoo 3cdbc1295e
Fix: sink concurrency deepCopy failed (#563) 2023-06-07 18:25:43 +08:00
mmaxiaolei f3610dd97c
fix: release fd to prevent excessive memory usage (#528)
* fix: release fd to prevent excessive memory usage

* fix: seek release fd
2023-06-07 15:43:13 +08:00
ethfoo c013a2db08
Feat: add ignoreUnknownTopicOrPartition in kafka sink (#560) 2023-06-07 14:50:11 +08:00
ethfoo c2be4c2a42
Feat: support default sinkRef in kubernetes discovery (#555) 2023-06-07 14:49:54 +08:00
cnxiafc 3d96c1e3a1
Fix: logger listener may cause blocking (#561)
* logger listener may cause blocking
2023-06-07 14:49:20 +08:00
ethfoo 78ab6ef12f
Feat: set default maxOpenFds to 4096 (#559) 2023-06-06 20:23:39 +08:00
ethfoo 3760c38f83
Feat: support elasticsearch sink default index pattern (#553) 2023-06-02 10:31:09 +08:00
ethfoo 5cfbc97a4a
Fix: update eventbus.Publish to PublishOrDrop (#552) 2023-06-01 16:50:29 +08:00
ethfoo 37f22655e2
Feat: add default index if render elasticsearch index failed (#551) 2023-06-01 16:22:01 +08:00
ethfoo 389c28a780
Feat: add default topic if render kafka topic failed (#550) 2023-06-01 14:43:50 +08:00
ethfoo 4356df9dc6
Fix: npe when type:vm in clusterlogconfig (#549) 2023-06-01 14:33:11 +08:00
ethfoo b54cc313b7
Fix: multiple registrations for /api/v1/pipeline/local/sink/dev (#547) 2023-05-31 11:38:58 +08:00
Co1a c2f69ca735
fix: other commands can't be used in the multi-arch image (#541) 2023-05-29 10:25:06 +08:00
ethfoo b8420c43a7
Feat: pretty error when unpack yaml config failed (#539) 2023-05-24 16:19:03 +08:00
ethfoo 749ae0dfc0
Fix: sqlite is locked (#524) 2023-05-05 19:44:52 +08:00
ethfoo 41046da74b
Feat: add resultStatus in dev sink which can be used to simulate failure, drop (#531) 2023-05-05 19:44:20 +08:00
ethfoo 771da3f696
Fix: duplicated batchSize in queue (#533) 2023-05-05 19:42:08 +08:00
mmaxiaolei 497f1a4625
fix: large line may cause oom (#529)
* fix: large line may cause oom
2023-04-25 20:37:21 +08:00
mmaxiaolei e22c55b503
fix: ratelimit may lose precision on Linux (#525)
* fix: ratelimit may lose precision on Linux
2023-04-21 17:29:53 +08:00
ethfoo 2588a0e515
Fix: update build-in-badger in makefile (#522) 2023-04-13 17:25:31 +08:00
ethfoo e311f4859d
Feat: alert adv (#512)
- logAlerts can be grouped by groupkey.
- The logAlert listener and alertWebhook can buffer alerts until the number of alerts reaches a configured threshold.
- Error logs of Loggie can be sent to the logAlert listener or alertWebhook.
- Error logs of Loggie can be configured as a loggie source.

---------

Co-authored-by: ziyu-zhao <zhao.ziyu2@northeastern.edu>
2023-04-13 14:08:15 +08:00
dongjiang 1ba463a206
Fix: grpc batch out-of-order data streams (#517)
* fix grpc batch out-of-order data streams
2023-04-13 10:48:31 +08:00
ethfoo b099468646
Feat: update dockerfile and makefile with badger engine (#520) 2023-04-12 16:30:31 +08:00
ethfoo bb1428c2ff
Fix: parse condition failed when contain ERROR (#514) (#515) 2023-04-07 17:43:43 +08:00
ethfoo 0f03167202
Fix: set defaults in fieldsUnderKey (#513) 2023-04-07 17:43:01 +08:00
ethfoo a0877f44e3
Feat: add loggie version sub command (#508) 2023-03-24 10:02:48 +08:00
ethfoo 1522f8c3b8
Feat: add clientId in Kafka source (#507) 2023-03-24 10:02:31 +08:00
ethfoo cb765921b3
Feat: optimize kafka source (#506)
- add worker in kafka source
- add AddonMeta in kafka source
- add printEventsInterval in dev sink
- add printMetrics in dev sink
- upgrade kafka-go version
- fix list topic to support sasl
2023-03-22 20:52:10 +08:00
曾浩 86ca94f8a1
Feat: get loggie version with api (#496)
* Feat: get loggie version with api
2023-03-15 20:25:54 +08:00
dongjiang b405987648
[Fixbug] update automaxprocs version to v1.5.1 (#488) 2023-03-10 10:17:12 +08:00
ziyu-zhao f146b8fb95
Feat mount root (#460)
* mount root
2023-03-09 17:42:19 +08:00
ethfoo 65020bb373
Fix: sync vm only when vm mode is enabled (#489) 2023-03-09 11:44:38 +08:00
ethfoo 66280111ce
docs: update Readme (#485) 2023-03-07 14:25:24 +08:00
ethfoo 09bb2070ad
docs: update logo in Readme (#484) 2023-03-06 20:29:08 +08:00
lei.ch1941 e729b9f8d8
Feat: add toStr function in transformer interceptor (#482)
* Feat: support toStr(key) and toStr(key,srcType) usage in transformer interceptor
2023-03-06 20:13:02 +08:00
ethfoo 42665897b2
docs: update Readme.md (#483) 2023-03-06 20:01:28 +08:00
ethfoo eb1e54a79d
Chore: add test-* on push branches in workflows (#481) 2023-03-03 14:16:19 +08:00
ethfoo 1a321f3abf
Feat: ignore logconfig with sidecar injection annotation (#478) 2023-02-24 15:41:33 +08:00
ethfoo dbcd30d864
Feat: add build tag to remove sqlite (#476)
* Feat: add build tag to remove sqlite

* Fix: subcmd return error
2023-02-24 11:12:08 +08:00
ethfoo 726f6dcf6a
Feat: add persistence driver badger (#475)
1. Move persistence from /source/file to a standalone package
2. Move db config to loggie.yml
3. Replace sqlite with badger
2023-02-21 18:50:11 +08:00
ethfoo b2d8667587
Feat: add host metadata in addHostMeta interceptor (#474) 2023-02-20 16:29:32 +08:00
ethfoo ac994a3dc4
Feat: discovery support VM and type:vm when loggie running in node outside kubernetes cluster (#449) 2023-02-20 14:44:12 +08:00
ethfoo 9ec308a2f8
Feat: added sortBy field in es source (#473) 2023-02-17 14:25:58 +08:00
ethfoo 1a7cdc72f9
Feat: add genfiles sub command (#471) 2023-02-16 18:23:58 +08:00
ethfoo 97468872a9
Feat: rename kafka source config SASL to sasl (#464) 2023-02-07 17:43:23 +08:00
ethfoo 54eb253074
Fix: update UserName to username in franzKafka sink config (#462) 2023-02-03 15:01:01 +08:00
ziyu-zhao 9726a97560
add logconfig/clusterlogconfig queue (#457)
* add logconfig/clusterlogconfig queue
2023-02-01 14:18:36 +08:00
Fatpa cfc5e945ca
fix(source): panic when kubeEvent Series is nil (#459) 2023-01-31 10:01:40 +08:00
ethfoo 6fbcd54949
Fix: refactor kubernetes events record (#456) 2023-01-17 16:44:53 +08:00
ethfoo d1740a6f99
Feat: sink codec support printEvents (#448) 2023-01-16 14:32:22 +08:00
ethfoo 015f84474f
Feat: add typePodFields and typeNodeFields to LogConfig/ClusterLogConfig (#450) 2023-01-16 14:30:15 +08:00
ziyu-zhao b5d6634560
fix pipeline restart (#454) 2023-01-16 13:38:58 +08:00
mmaxiaolei fb5cf7668d
fix: create dir soft link job (#453) 2023-01-16 13:37:55 +08:00
ethfoo 9af388e138
Release v1.4 (#452)
* Chore: add release-v1.4.0-rc.0 changelog

* Fix: remove elasticsearch sink failed return

* Feat: Optimize the configuration parameters to remove the redundancy generated by rendering
2023-01-16 13:37:05 +08:00
1924 changed files with 478631 additions and 41711 deletions


@ -5,6 +5,7 @@ on:
branches:
- main
- release-*
- test-*
tags:
- v*


@ -14,7 +14,7 @@ RUN if [ "$TARGETARCH" = "arm64" ]; then apt-get update && apt-get install -y gc
&& GOOS=$TARGETOS GOARCH=$TARGETARCH CC=$CC CC_FOR_TARGET=$CC_FOR_TARGET make build
# Run
FROM --platform=$BUILDPLATFORM debian:buster-slim
FROM debian:buster-slim
WORKDIR /
COPY --from=builder /loggie .

Dockerfile.badger (new file, 18 lines)

@ -0,0 +1,18 @@
# Build the binary
FROM --platform=$BUILDPLATFORM golang:1.18 as builder
ARG TARGETARCH
ARG TARGETOS
# Copy in the go src
WORKDIR /
COPY . .
# Build
RUN make build-in-badger
# Run
FROM debian:buster-slim
WORKDIR /
COPY --from=builder /loggie .
ENTRYPOINT ["/loggie"]


@ -82,8 +82,13 @@ benchmark: ## Run benchmark
##@ Build
build: ## go build
CGO_ENABLED=1 GOOS=${GOOS} GOARCH=${GOARCH} go build -mod=vendor -a ${extra_flags} -o loggie cmd/loggie/main.go
build: ## go build, EXT_BUILD_TAGS=include_core would only build core package
CGO_ENABLED=1 GOOS=${GOOS} GOARCH=${GOARCH} go build -tags ${EXT_BUILD_TAGS} -mod=vendor -a ${extra_flags} -o loggie cmd/loggie/main.go
##@ Build(without sqlite)
build-in-badger: ## go build without sqlite, EXT_BUILD_TAGS=include_core would only build core package
GOOS=${GOOS} GOARCH=${GOARCH} go build -tags driver_badger,${EXT_BUILD_TAGS} -mod=vendor -a -ldflags '-X github.com/loggie-io/loggie/pkg/core/global._VERSION_=${TAG} -X github.com/loggie-io/loggie/pkg/util/persistence._DRIVER_=badger -s -w' -o loggie cmd/loggie/main.go
##@ Images
@ -93,6 +98,16 @@ docker-build: ## Docker build -t ${REPO}:${TAG}, try: make docker-build REPO=<Yo
docker-push: ## Docker push ${REPO}:${TAG}
docker push ${REPO}:${TAG}
docker-multi-arch: ## Docker buildx, try: make docker-build REPO=<YourRepoHost>, ${TAG} generated by git
docker-multi-arch: ## Docker buildx, try: make docker-multi-arch REPO=<YourRepoHost>, ${TAG} generated by git
docker buildx build --platform linux/amd64,linux/arm64 -t ${REPO}:${TAG} . --push
LOG_DIR ?= /tmp/log ## log directory
LOG_MAXSIZE ?= 10 ## max size in MB of the logfile before it's rolled
LOG_QPS ?= 0 ## qps of line generate
LOG_TOTAL ?= 5 ## total line count
LOG_LINE_BYTES ?= 1024 ## bytes per line
LOG_MAX_BACKUPS ?= 5 ## max number of rolled files to keep
genfiles: ## generate log files, try: make genfiles LOG_TOTAL=30000
go run cmd/loggie/main.go genfiles -totalCount=${LOG_TOTAL} -lineBytes=${LOG_LINE_BYTES} -qps=${LOG_QPS} \
-log.maxBackups=${LOG_MAX_BACKUPS} -log.maxSize=${LOG_MAXSIZE} -log.directory=${LOG_DIR} -log.noColor=true \
-log.enableStdout=false -log.enableFile=true -log.timeFormat="2006-01-02 15:04:05.000"

README.md (172 changed lines)

@ -1,5 +1,5 @@
<img src="https://github.com/loggie-io/loggie/blob/main/logo/loggie.svg" width="250">
<img src="https://github.com/loggie-io/loggie/blob/main/logo/loggie-draw.png" width="250">
[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white)](https://loggie-io.github.io/docs/)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/569/badge)](https://bestpractices.coreinfrastructure.org/projects/569)
@ -9,14 +9,169 @@
Loggie is a lightweight, high-performance, cloud-native agent and aggregator based on Golang.
- Supports multiple pipeline and pluggable components, including data transfer, filtering, parsing, and alarm functions.
- Supports multiple pipeline and pluggable components, including data transfer, filtering, parsing, and alerting.
- Uses native Kubernetes CRD for operation and management.
- Offers a range of observability, reliability, and automation features suitable for production environments.
## Architecture
Based on Loggie, we can build a cloud-native scalable log data platform.
![](https://loggie-io.github.io/docs/user-guide/enterprise-practice/imgs/loggie-extend.png)
## Features
### Next-generation cloud-native log collection and transmission
#### Building pipelines based on CRD
Loggie includes LogConfig/ClusterLogConfig/Interceptor/Sink CRDs, allowing for the creation of data collection, transfer, processing, and sending pipelines through simple YAML file creation.
e.g.:
```yaml
apiVersion: loggie.io/v1beta1
kind: LogConfig
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    type: pod
    labelSelector:
      app: tomcat
  pipeline:
    sources: |
      - type: file
        name: common
        paths:
          - stdout
          - /usr/local/tomcat/logs/*.log
    sinkRef: default
    interceptorRef: default
```
![crd-usage](https://loggie-io.github.io/docs/user-guide/use-in-kubernetes/imgs/loggie-crd-usage.png)
#### Multiple architectures
- **Agent**: Deployed via DaemonSet, Loggie can collect log files without the need for containers to mount volumes.
- **Sidecar**: Supports non-intrusive auto-injection of Loggie sidecars, without the need to manually add them to the Deployment/StatefulSet templates.
- **Aggregator**: Supports deployment as an independent intermediate machine, which can receive aggregated data sent by Loggie Agent and can also be used to consume and process various data sources.
But regardless of the deployment architecture, Loggie still maintains a simple and intuitive internal design.
![](https://loggie-io.github.io/docs/getting-started/imgs/loggie-arch.png)
### High Performance
#### Benchmark
Configure Filebeat and Loggie to collect logs and send them to a Kafka topic, without client compression and with the Kafka topic configured with 3 partitions.
With sufficient resources guaranteed for the agent, vary the number of collected files and the sending-client concurrency (Filebeat worker and Loggie parallelism), and observe their respective CPU, memory, and pod network card transmission rates.
| Agent | File Size | File Count | Sink Concurrency | CPU | MEM (rss) | Transmission Rates |
|----------|-----------|------------|------------------|----------|-----------|--------------------|
| Filebeat | 3.2G | 1 | 3 | 7.5~8.5c | 63.8MiB | 75.9MiB/s |
| Filebeat | 3.2G | 1 | 8 | 10c | 65MiB | 70MiB/s |
| Filebeat | 3.2G | 10 | 8 | 11c | 65MiB | 80MiB/s |
| | | | | | | |
| Loggie | 3.2G | 1 | 3 | 2.1c | 60MiB | 120MiB/s |
| Loggie | 3.2G | 1 | 8 | 2.4c | 68.7MiB | 120MiB/s |
| Loggie | 3.2G | 10 | 8 | 3.5c | 70MiB | 210MiB/s |
#### Adaptive Sink Concurrency
With sink concurrency configuration enabled, Loggie can (a configuration sketch follows this list):
- Automatically adjust the parallelism of downstream data sending based on the actual downstream response, making full use of the downstream server's capacity without degrading its performance.
- Appropriately adjust the downstream sending speed when upstream data collection is blocked, relieving the upstream blockage.
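A minimal sketch of what enabling this might look like on a pipeline sink; the `concurrency` block below is an illustrative assumption, not the authoritative schema (check the Loggie sink reference for the real options):
```yaml
sink:
  type: elasticsearch
  hosts: ["http://elasticsearch.default:9200"]
  # assumed field: turn on adaptive concurrency so Loggie tunes the number
  # of parallel send requests from the observed downstream response times
  concurrency:
    enabled: true
```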
### Lightweight Streaming Data Analysis and Monitoring
Logs are a universal data type, independent of any platform or system; making better use of this data is the core capability that Loggie focuses on and develops.
![](https://loggie-io.github.io/docs/user-guide/enterprise-practice/imgs/loggie-chain.png)
#### Real-time parsing and transformation
By configuring the transformer interceptor with functional actions, Loggie can achieve:
- Parsing of various data formats (json, grok, regex, split, etc.)
- Conversion of various fields (add, copy, move, set, del, fmt, etc.)
- Support for conditional judgment and processing logic (if, else, return, dropEvent, ignoreError, etc.)
e.g.:
```yaml
interceptors:
  - type: transformer
    actions:
      - action: regex(body)
        pattern: (?<ip>\S+) (?<id>\S+) (?<u>\S+) (?<time>\[.*?\]) (?<url>\".*?\") (?<status>\S+) (?<size>\S+)
      - if: equal(status, 404)
        then:
          - action: add(topic, not_found)
          - action: return()
      - if: equal(status, 500)
        then:
          - action: dropEvent()
```
#### Detection, recognition, and alerting
Helps you quickly detect potential problems and anomalies in the data and issue timely alerts. Custom webhooks are supported to connect to various alert channels; an illustrative sketch follows the list below.
Supports matching methods such as:
- No data: no log data generated within the configured time period.
- Fuzzy matching
- Regular expression matching
- Conditional judgment
- Field comparison: equal/less/greater...
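As an illustration only (the `logAlert` interceptor is referenced in the docs links below, but the matcher fields here are assumptions rather than the documented schema), a simple in-pipeline alert rule could look roughly like this:
```yaml
interceptors:
  - type: logAlert
    # assumed matcher fields: alert when a line contains any of these
    # substrings or matches the regular expression
    matcher:
      contains: ["ERROR", "Exception"]
      regexp: ['status=5\d{2}']
```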
#### Log data aggregation and monitoring
Often, metric data is not only exposed through prometheus exporters, but log data itself can also provide a source of metrics. For example, by counting the access logs of a gateway, you can calculate the number of 5xx or 4xx status codes within a certain time interval, aggregate the qps of a certain interface, and calculate the total amount of body data, etc.
e.g.:
```yaml
- type: aggregator
  interval: 1m
  select:
    # operator: COUNT/COUNT-DISTINCT/SUM/AVG/MAX/MIN
    - {key: amount, operator: SUM, as: amount_total}
    - {key: quantity, operator: SUM, as: qty_total}
  groupBy: ["city"]
  calculate:
    - {expression: " ${amount_total} / ${qty_total} ", as: avg_amount}
```
### Observability and fast troubleshooting
- Loggie provides configurable and rich metrics, and dashboards that can be imported into Grafana with one click.
<img src="https://loggie-io.github.io/docs/user-guide/monitor/img/grafana-agent-1.png" width="1000"/>
- Quickly troubleshoot Loggie itself and any problems in data transmission using the Loggie terminal.
<img src="https://loggie-io.github.io/docs/user-guide/troubleshot/img/loggie-dashboard.png" width="1000"/>
## FAQs
### Loggie vs Filebeat/Fluentd/Logstash/Flume
| | Loggie | Filebeat | Fluentd | Logstash | Flume |
|-------------------------------------|------------------------------------------------------------------------------------------------------------------|--------------|--------------------|---------------|---------------|
| Language | Golang | Golang | Ruby | JRuby | Java |
| Multiple Pipelines | ✓ | single queue | single queue | ✓ | ✓ |
| Multiple output | ✓ | one output | copy | ✓ | ✓ |
| Aggregator | ✓ | | ✓ | ✓ | ✓ |
| Log Alarm | ✓ | | | | |
| Kubernetes container log collection | supports container stdout and log files inside containers | stdout | stdout | | |
| Configuration delivery | through CRD | manual | manual | manual | manual |
| Monitoring | supports Prometheus metrics and can be configured to output metric log files separately, send metrics, etc. | | prometheus metrics | need exporter | need exporter |
| Resource Usage | low | low | average | high | high |
## [Documentation](https://loggie-io.github.io/docs-en/)
@ -36,10 +191,13 @@ Loggie is a lightweight, high-performance, cloud-native agent and aggregator bas
- [Args](https://loggie-io.github.io/docs-en/reference/global/args/)
- [System](https://loggie-io.github.io/docs-en/reference/global/monitor/)
- Pipelines
- source: [file](https://loggie-io.github.io/docs-en/reference/pipelines/source/file/), [kafka](https://loggie-io.github.io/docs-en/reference/pipelines/source/kafka/), [kubeEvent](https://loggie-io.github.io/docs-en/reference/pipelines/source/kube-event/), [grpc](https://loggie-io.github.io/docs-en/reference/pipelines/source/grpc/)..
- sink: [elasticsearch](https://loggie-io.github.io/docs-en/reference/pipelines/sink/elasticsearch/), [kafka](https://loggie-io.github.io/docs-en/reference/pipelines/sink/kafka/), [grpc](https://loggie-io.github.io/docs-en/reference/pipelines/sink/grpc/), [dev](https://loggie-io.github.io/docs-en/reference/pipelines/sink/dev/)..
- interceptor: [transformer](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/transformer/), [limit](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/limit/), [logAlert](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/logalert/), [maxbytes](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/maxbytes/)..
- CRD ([logConfig](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/logconfig/), [sink](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/sink/), [interceptor](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/interceptors/))
- Source: [file](https://loggie-io.github.io/docs-en/reference/pipelines/source/file/), [kafka](https://loggie-io.github.io/docs-en/reference/pipelines/source/kafka/), [kubeEvent](https://loggie-io.github.io/docs-en/reference/pipelines/source/kubeEvent/), [grpc](https://loggie-io.github.io/docs-en/reference/pipelines/source/grpc/), [elasticsearch](https://loggie-io.github.io/docs-en/reference/pipelines/source/elasticsearch/), [prometheusExporter](https://loggie-io.github.io/docs-en/reference/pipelines/source/prometheus-exporter/)..
- Sink: [elasticsearch](https://loggie-io.github.io/docs-en/reference/pipelines/sink/elasticsearch/), [kafka](https://loggie-io.github.io/docs-en/reference/pipelines/sink/kafka/), [grpc](https://loggie-io.github.io/docs-en/reference/pipelines/sink/grpc/), [loki](https://loggie-io.github.io/docs-en/reference/pipelines/sink/loki/), [zinc](https://loggie-io.github.io/docs-en/reference/pipelines/sink/zinc/), [alertWebhook](https://loggie-io.github.io/docs-en/reference/pipelines/sink/webhook/), [dev](https://loggie-io.github.io/docs-en/reference/pipelines/sink/dev/)..
- Interceptor: [transformer](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/transformer/), [schema](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/schema/), [limit](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/limit/), [logAlert](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/logalert/), [maxbytes](https://loggie-io.github.io/docs-en/reference/pipelines/interceptor/maxbytes/)..
- CRD: [LogConfig](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/logconfig/), [ClusterLogConfig](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/clusterlogconfig/), [Sink](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/sink/), [Interceptor](https://loggie-io.github.io/docs-en/reference/discovery/kubernetes/interceptors/)
## RoadMap
[RoadMap 2023](https://loggie-io.github.io/docs-en/getting-started/roadmap/roadmap-2023/)
## License


@ -1,5 +1,5 @@
<img src="https://github.com/loggie-io/loggie/blob/main/logo/loggie.svg" width="250">
<img src="https://github.com/loggie-io/loggie/blob/main/logo/loggie-draw.png" width="250">
[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white)](https://loggie-io.github.io/docs/)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/569/badge)](https://bestpractices.coreinfrastructure.org/projects/569)
@ -11,34 +11,197 @@ Loggie is a lightweight, high-performance, cloud-native log collection agent based on Golang
- **Cloud-native log collection**: quick and convenient container log collection, with native Kubernetes CRDs for dynamic configuration delivery
- **Production-grade features**: built on long-term, large-scale operations experience, with comprehensive observability, fast troubleshooting, anomaly alerting, and automated operations capabilities
## Architecture
Based on Loggie, we can build a cloud-native, scalable, full-chain log data platform.
![](https://loggie-io.github.io/docs/user-guide/enterprise-practice/imgs/loggie-extend.png)
## Features
### Next-generation cloud-native log collection and transmission
#### Quick configuration and usage based on CRDs
Loggie includes LogConfig/ClusterLogConfig/Interceptor/Sink CRDs; by simply creating a few YAML files you can build pipelines for data collection, transfer, processing, and sending.
Example:
```yaml
apiVersion: loggie.io/v1beta1
kind: LogConfig
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    type: pod
    labelSelector:
      app: tomcat
  pipeline:
    sources: |
      - type: file
        name: common
        paths:
          - stdout
          - /usr/local/tomcat/logs/*.log
    sinkRef: default
    interceptorRef: default
```
![crd-usage](https://loggie-io.github.io/docs/user-guide/use-in-kubernetes/imgs/loggie-crd-usage.png)
#### Multiple deployment architectures
- **Agent**: deployed as a DaemonSet; log files can be collected without business containers having to mount volumes
- **Sidecar**: supports non-intrusive auto-injection of the Loggie sidecar, with no need to manually add it to Deployment/StatefulSet templates
- **Aggregator**: supports standalone Deployment as an aggregation layer, which can receive and aggregate data sent by Loggie Agents and can also be used on its own to consume and process various data sources
Whichever deployment architecture is used, Loggie keeps a simple and intuitive internal design.
![](https://loggie-io.github.io/docs/getting-started/imgs/loggie-arch.png)
### Lightweight and high performance
#### Benchmark comparison
Configure Filebeat and Loggie to collect logs and send them to a Kafka topic, without client compression and with the Kafka topic configured with 3 partitions.
With sufficient resources guaranteed for the agent, vary the number of collected files and the sending-client concurrency (Filebeat worker and Loggie parallelism), and observe their respective CPU, memory, and pod network card transmission rates.
| Agent | File Size | File Count | Sink Concurrency | CPU | MEM (rss) | Transmission Rate |
|----------|-----------|------------|------------------|----------|-----------|-------------------|
| Filebeat | 3.2G | 1 | 3 | 7.5~8.5c | 63.8MiB | 75.9MiB/s |
| Filebeat | 3.2G | 1 | 8 | 10c | 65MiB | 70MiB/s |
| Filebeat | 3.2G | 10 | 8 | 11c | 65MiB | 80MiB/s |
| | | | | | | |
| Loggie | 3.2G | 1 | 3 | 2.1c | 60MiB | 120MiB/s |
| Loggie | 3.2G | 1 | 8 | 2.4c | 68.7MiB | 120MiB/s |
| Loggie | 3.2G | 10 | 8 | 3.5c | 70MiB | 210MiB/s |
#### Adaptive sink concurrency
With sink concurrency enabled, Loggie can:
- Automatically adjust the parallelism of downstream data sending based on the actual downstream response, making full use of the downstream server's capacity without degrading its performance.
- Appropriately adjust the downstream sending speed when upstream data collection is blocked, relieving the upstream blockage.
### Lightweight streaming data analysis and monitoring
Logs are a general-purpose kind of data, independent of any platform or system; making better use of this data is the core capability that Loggie focuses on and develops.
![](https://loggie-io.github.io/docs/user-guide/enterprise-practice/imgs/loggie-chain.png)
#### Real-time parsing and transformation
By simply configuring the transformer interceptor with functional actions, you can achieve:
- Parsing of various data formats (json, grok, regex, split...)
- Conversion of various fields (add, copy, move, set, del, fmt...)
- Conditional judgment and processing logic (if, else, return, dropEvent, ignoreError...)
This can be used, for example, to:
- Extract the log level from logs and drop DEBUG logs
- Detect and process JSON-formatted logs when logs mix JSON and plain-text formats
- Add different topic fields according to the status code in access logs
Example:
```yaml
interceptors:
  - type: transformer
    actions:
      - action: regex(body)
        pattern: (?<ip>\S+) (?<id>\S+) (?<u>\S+) (?<time>\[.*?\]) (?<url>\".*?\") (?<status>\S+) (?<size>\S+)
      - if: equal(status, 404)
        then:
          - action: add(topic, not_found)
          - action: return()
      - if: equal(status, 500)
        then:
          - action: dropEvent()
```
#### Detection, recognition, and alerting
Helps you quickly detect potential problems and anomalies in the data and send alerts in time.
Supported matching methods:
- No data: no log data produced within the configured time period
- Matching
  - Fuzzy matching
  - Regular expression matching
- Conditional judgment
  - Field comparison: equal/less/greater...
Supported deployment forms:
- Detection on the data collection pipeline: simple and easy to use, with no extra deployment required
- Detection on an independent pipeline: deploy an Aggregator separately to consume Kafka/Elasticsearch and so on, then match the data and send alerts
Both forms support custom webhooks to connect to various alert channels.
#### Business data aggregation and monitoring
In many cases metric data is not only exposed through Prometheus exporters; log data itself can also serve as a source of metrics.
For example, by analyzing a gateway's access logs you can count the number of 5xx or 4xx status codes within a time interval, aggregate the QPS of an interface, calculate the total amount of body data transferred, and so on.
This feature is in internal beta; stay tuned.
Example:
```yaml
- type: aggregator
  interval: 1m
  select:
    # operators: COUNT/COUNT-DISTINCT/SUM/AVG/MAX/MIN
    - {key: amount, operator: SUM, as: amount_total}
    - {key: quantity, operator: SUM, as: qty_total}
  groupby: ["city"]
  # calculate: further processing computed from the selected field values
  calculate:
    - {expression: " ${amount_total} / ${qty_total} ", as: avg_amount}
```
### Full-chain fast troubleshooting and observability
- Loggie provides rich, configurable metrics, plus dashboards that can be imported into Grafana with one click
<img src="https://loggie-io.github.io/docs/user-guide/monitor/img/grafana-agent-1.png" width="1000"/>
- Use the Loggie terminal and the help API to quickly and conveniently troubleshoot problems in Loggie itself and in data transmission
<img src="https://loggie-io.github.io/docs/user-guide/troubleshot/img/loggie-dashboard.png" width="1000"/>
## FAQs
### Loggie vs Filebeat/Fluentd/Logstash/Flume
| | Loggie | Filebeat | Fluentd | Logstash | Flume |
|------------------|-----------------------------------------------------|------------------------------------|--------------------------|----------------|----------------|
| Language | Golang | Golang | Ruby | JRuby | Java |
| Multiple pipelines | Supported | Single queue | Single queue | Supported | Supported |
| Multiple outputs | Supported | Not supported (one output only) | Via copy | Supported | Supported |
| Aggregator | Supported | Not supported | Supported | Supported | Supported |
| Log alerting | Supported | Not supported | Not supported | Not supported | Not supported |
| Kubernetes container log collection | Supports container stdout and log files inside containers | stdout only | stdout only | Not supported | Not supported |
| Configuration delivery | Via CRD in Kubernetes (a config center for host scenarios is in progress) | Manual | Manual | Manual | Manual |
| Monitoring | Native Prometheus metrics; can also be configured to output metric log files, send metrics, etc. | Exposes an API; requires an extra exporter for Prometheus | Supports API and Prometheus metrics | Requires an extra exporter | Requires an extra exporter |
| Resource usage | Low | Low | Average | High | High |
## Documentation
Please refer to the Loggie [documentation](https://loggie-io.github.io/docs/).
### Getting started
## Quick start
- [Quick start](https://loggie-io.github.io/docs/getting-started/quick-start/quick-start/)
- Installation ([Kubernetes](https://loggie-io.github.io/docs/getting-started/install/kubernetes/), [Host](https://loggie-io.github.io/docs/getting-started/install/node/))
### User guide
- [Design and architecture](https://loggie-io.github.io/docs/user-guide/architecture/core-arch/)
- [Using in Kubernetes](https://loggie-io.github.io/docs/user-guide/use-in-kubernetes/general-usage/)
- [Monitoring and alerting](https://loggie-io.github.io/docs/user-guide/monitor/loggie-monitor/)
### Component configuration
## Component configuration
- [Startup arguments](https://loggie-io.github.io/docs/reference/global/args/)
- [System configuration](https://loggie-io.github.io/docs/reference/global/system/)
- Pipeline configuration
- source: [file](https://loggie-io.github.io/docs/reference/pipelines/source/file/), [kafka](https://loggie-io.github.io/docs/reference/pipelines/source/kafka/), [kubeEvent](https://loggie-io.github.io/docs/reference/pipelines/source/kubeEvent/), [grpc](https://loggie-io.github.io/docs/reference/pipelines/source/grpc/)..
- sink: [elasticsearch](https://loggie-io.github.io/docs/reference/pipelines/sink/elasticsearch/), [kafka](https://loggie-io.github.io/docs/reference/pipelines/sink/kafka/), [grpc](https://loggie-io.github.io/docs/reference/pipelines/sink/grpc/), [dev](https://loggie-io.github.io/docs/reference/pipelines/sink/dev/)..
- interceptor: [normalize](https://loggie-io.github.io/docs/reference/pipelines/interceptor/normalize/), [limit](https://loggie-io.github.io/docs/reference/pipelines/interceptor/limit/), [logAlert](https://loggie-io.github.io/docs/reference/pipelines/interceptor/logalert/), [maxbytes](https://loggie-io.github.io/docs/reference/pipelines/interceptor/maxbytes/)..
- CRD([logConfig](https://loggie-io.github.io/docs/reference/discovery/kubernetes/logconfig/), [sink](https://loggie-io.github.io/docs/reference/discovery/kubernetes/sink/), [interceptor](https://loggie-io.github.io/docs/reference/discovery/kubernetes/interceptors/))
- Source: [file](https://loggie-io.github.io/docs/reference/pipelines/source/file/), [kafka](https://loggie-io.github.io/docs/reference/pipelines/source/kafka/), [kubeEvent](https://loggie-io.github.io/docs/reference/pipelines/source/kubeEvent/), [grpc](https://loggie-io.github.io/docs/reference/pipelines/source/grpc/), [elasticsearch](https://loggie-io.github.io/docs/reference/pipelines/source/elasticsearch/), [prometheusExporter](https://loggie-io.github.io/docs/reference/pipelines/source/prometheus-exporter/)..
- Sink: [elasticsearch](https://loggie-io.github.io/docs/reference/pipelines/sink/elasticsearch/), [kafka](https://loggie-io.github.io/docs/reference/pipelines/sink/kafka/), [grpc](https://loggie-io.github.io/docs/reference/pipelines/sink/grpc/), [loki](https://loggie-io.github.io/docs/reference/pipelines/sink/loki/), [zinc](https://loggie-io.github.io/docs/reference/pipelines/sink/zinc/), [alertWebhook](https://loggie-io.github.io/docs/reference/pipelines/sink/webhook/), [dev](https://loggie-io.github.io/docs/reference/pipelines/sink/dev/)..
- Interceptor: [transformer](https://loggie-io.github.io/docs/reference/pipelines/interceptor/transformer/), [schema](https://loggie-io.github.io/docs/reference/pipelines/interceptor/schema/), [limit](https://loggie-io.github.io/docs/reference/pipelines/interceptor/limit/), [logAlert](https://loggie-io.github.io/docs/reference/pipelines/interceptor/logalert/), [maxbytes](https://loggie-io.github.io/docs/reference/pipelines/interceptor/maxbytes/)..
- CRD: [LogConfig](https://loggie-io.github.io/docs/reference/discovery/kubernetes/logconfig/), [ClusterLogConfig](https://loggie-io.github.io/docs/reference/discovery/kubernetes/clusterlogconfig/), [Sink](https://loggie-io.github.io/docs/reference/discovery/kubernetes/sink/), [Interceptor](https://loggie-io.github.io/docs/reference/discovery/kubernetes/interceptors/)
## RoadMap
- [RoadMap 2023](https://loggie-io.github.io/docs/getting-started/roadmap/roadmap-2023/)
## Discussion
If you run into problems while using Loggie, please file an issue or contact us.


@ -19,6 +19,13 @@ package main
import (
"flag"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
"github.com/loggie-io/loggie/cmd/subcmd"
"github.com/loggie-io/loggie/pkg/control"
"github.com/loggie-io/loggie/pkg/core/cfg"
@ -30,15 +37,13 @@ import (
"github.com/loggie-io/loggie/pkg/discovery/kubernetes"
"github.com/loggie-io/loggie/pkg/eventbus"
_ "github.com/loggie-io/loggie/pkg/include"
"github.com/loggie-io/loggie/pkg/ops"
"github.com/loggie-io/loggie/pkg/ops/helper"
"github.com/loggie-io/loggie/pkg/util/json"
"github.com/loggie-io/loggie/pkg/util/persistence"
"github.com/loggie-io/loggie/pkg/util/yaml"
"github.com/pkg/errors"
"go.uber.org/automaxprocs/maxprocs"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
)
var (
@ -68,7 +73,7 @@ func main() {
log.Info("version: %s", global.GetVersion())
// set up signals so we handle the first shutdown signal gracefully
// set up signals, so we handle the first shutdown signal gracefully
stopCh := signals.SetupSignalHandler()
// Automatically set GOMAXPROCS to match Linux container CPU quota
@ -83,11 +88,13 @@ func main() {
// system config file
syscfg := sysconfig.Config{}
cfg.UnpackTypeDefaultsAndValidate(strings.ToLower(configType), globalConfigFile, &syscfg)
// register jsonEngine
json.SetDefaultEngine(syscfg.Loggie.JSONEngine)
// start eventBus listeners
eventbus.StartAndRun(syscfg.Loggie.MonitorEventBus)
// init log after error func
log.AfterError = eventbus.AfterErrorFunc
log.AfterErrorConfig = syscfg.Loggie.ErrorAlertConfig
log.Info("pipelines config path: %s", pipelineConfigPath)
// pipeline config file
@ -106,6 +113,9 @@ func main() {
log.Fatal("unpack config.pipeline config file err: %v", err)
}
persistence.SetConfig(syscfg.Loggie.Db)
defer persistence.StopDbHandler()
controller := control.NewController()
controller.Start(pipecfgs)
@ -125,11 +135,23 @@ func main() {
// api for debugging
helper.Setup(controller)
// api for get loggie Version
ops.Setup(controller)
if syscfg.Loggie.Http.Enabled {
go func() {
if err = http.ListenAndServe(fmt.Sprintf("%s:%d", syscfg.Loggie.Http.Host, syscfg.Loggie.Http.Port), nil); err != nil {
log.Fatal("http listen and serve err: %v", err)
if syscfg.Loggie.Http.RandPort {
syscfg.Loggie.Http.Port = 0
}
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", syscfg.Loggie.Http.Host, syscfg.Loggie.Http.Port))
if err != nil {
log.Fatal("http listen err: %v", err)
}
log.Info("http listen addr %s", listener.Addr().String())
if err = http.Serve(listener, nil); err != nil {
log.Fatal("http serve err: %v", err)
}
}()
}


@ -0,0 +1,60 @@
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package genfiles
import (
    "errors"
    "flag"
    "github.com/loggie-io/loggie/pkg/core/log"
    "github.com/loggie-io/loggie/pkg/source/dev"
    "os"
)

const (
    SubCommandGenFiles = "genfiles"
)

var (
    genFilesCmd *flag.FlagSet
    totalCount  int64
    lineBytes   int
    qps         int
)

func init() {
    genFilesCmd = flag.NewFlagSet(SubCommandGenFiles, flag.ExitOnError)
    genFilesCmd.Int64Var(&totalCount, "totalCount", -1, "total line count")
    genFilesCmd.IntVar(&lineBytes, "lineBytes", 1024, "bytes per line")
    genFilesCmd.IntVar(&qps, "qps", 10, "line qps")
    log.SetFlag(genFilesCmd)
}

func RunGenFiles() error {
    if len(os.Args) > 2 {
        if err := genFilesCmd.Parse(os.Args[2:]); err != nil {
            return err
        }
    }
    log.InitDefaultLogger()

    stop := make(chan struct{})
    dev.GenLines(stop, totalCount, lineBytes, qps, func(content []byte, index int64) {
        log.Info("%d %s", index, content)
    })
    // Returning an error makes the caller exit instead of starting the agent.
    return errors.New("exit")
}


@ -14,13 +14,13 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package subcmd
package inspect
import (
"errors"
"flag"
"github.com/loggie-io/loggie/pkg/ops/dashboard"
"github.com/loggie-io/loggie/pkg/ops/dashboard/gui"
"github.com/pkg/errors"
"os"
)
@ -28,31 +28,15 @@ const SubCommandInspect = "inspect"
var (
inspectCmd *flag.FlagSet
LoggiePort int
loggiePort int
)
func init() {
inspectCmd = flag.NewFlagSet(SubCommandInspect, flag.ExitOnError)
inspectCmd.IntVar(&LoggiePort, "loggiePort", 9196, "Loggie http port")
inspectCmd.IntVar(&loggiePort, "loggiePort", 9196, "Loggie http port")
}
func SwitchSubCommand() error {
if len(os.Args) == 1 {
return nil
}
switch os.Args[1] {
case SubCommandInspect:
if err := runInspect(); err != nil {
return err
}
return errors.New("exit")
}
return nil
}
func runInspect() error {
func RunInspect() error {
if len(os.Args) > 2 {
if err := inspectCmd.Parse(os.Args[2:]); err != nil {
return err
@ -60,12 +44,12 @@ func runInspect() error {
}
d := dashboard.New(&gui.Config{
LoggiePort: LoggiePort,
LoggiePort: loggiePort,
})
if err := d.Start(); err != nil {
d.Stop()
return err
}
return nil
return errors.New("exit")
}

cmd/subcmd/switch.go (new file, 48 lines)

@ -0,0 +1,48 @@
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package subcmd
import (
    "github.com/loggie-io/loggie/cmd/subcmd/genfiles"
    "github.com/loggie-io/loggie/cmd/subcmd/inspect"
    "github.com/loggie-io/loggie/cmd/subcmd/version"
    "os"
)

// SwitchSubCommand dispatches to the sub command named in os.Args[1], if any.
func SwitchSubCommand() error {
    if len(os.Args) == 1 {
        return nil
    }

    switch os.Args[1] {
    case inspect.SubCommandInspect:
        if err := inspect.RunInspect(); err != nil {
            return err
        }

    case genfiles.SubCommandGenFiles:
        if err := genfiles.RunGenFiles(); err != nil {
            return err
        }

    case version.SubCommandVersion:
        if err := version.RunVersion(); err != nil {
            return err
        }
    }

    return nil
}


@ -0,0 +1,48 @@
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package version
import (
    "errors"
    "flag"
    "fmt"
    "github.com/loggie-io/loggie/pkg/core/global"
    "os"
)

const (
    SubCommandVersion = "version"
)

var (
    versionCmd *flag.FlagSet
)

func init() {
    versionCmd = flag.NewFlagSet(SubCommandVersion, flag.ExitOnError)
}

func RunVersion() error {
    if len(os.Args) > 2 {
        if err := versionCmd.Parse(os.Args[2:]); err != nil {
            return err
        }
    }

    fmt.Printf("Loggie version: %s\n", global.GetVersion())
    return errors.New("exit")
}


@ -1,3 +1,127 @@
# Release v1.5.0-rc.0
### :star2: Features
- [breaking]: The `db` in `file source` is moved to [`loggie.yml`](https://loggie-io.github.io/docs/main/reference/global/db/). If upgrading from an earlier version to v1.5, check whether `db` has been configured for `file source`. If it is not configured, you can ignore this change, and the default values remain compatible. (A rough sketch follows this feature list.)
- Added rocketmq sink (#530)
- Added franzKafka source (#573)
- Added kata runtime (#554)
- `typePodFields`/`typeNodeFields` is supported in LogConfig/ClusterLogConfig (#450)
- sink codec support printEvents (#448)
- Added queue in LogConfig/ClusterLogConfig (#457)
- Changed `olivere/elastic` to the official elasticsearch go client (#581)
- Supported `copytruncate` in file source (#571)
- Added `genfiles` sub command (#471)
- Added `sortBy` field in elasticsearch source (#473)
- Added host VM mode with Kubernetes as the configuration center (#449) (#489)
- New `addHostMeta` interceptor (#474)
- Added persistence driver `badger` (#475) (#584)
- Ignore LogConfig with sidecar injection annotation (#478)
- Added `toStr` action in transformer interceptor (#482)
- You can mount the root directory of a node to the Loggie container without mounting additional Loggie volumes (#460)
- Get loggie version with api and sub command (#496) (#508)
- Added the `worker` and the `clientId` in Kafka source (#506) (#507)
- Upgrade `kafka-go` version (#506) (#567)
- Added resultStatus in dev sink which can be used to simulate failure, drop (#531)
- Pretty-print errors when unmarshalling yaml configuration fails (#539)
- Added default topic if render kafka topic failed (#550)
- Added `ignoreUnknownTopicOrPartition` in kafka sink (#560)
- Supported multiple topics in kafka source (#548)
- Added default index if render elasticsearch index failed (#551) (#553)
- The default `maxOpenFds` is set to 4096 (#559)
- Supported default `sinkRef` in kubernetes discovery (#555)
- Added `${_k8s.clusterlogconfig}` in `typePodFields` (#569)
- Supported omit empty fields in Kubernetes discovery (#570)
- Optimized the `maxbytes` interceptor (#575)
- Moved `readFromTail`, `cleanFiles`, `fdHoldTimeoutWhenInactive`, `fdHoldTimeoutWhenRemove` from watcher to outer layer in `file source` (#579) (#585)
- Added `cleanUnfinished` in cleanFiles (#580)
- Added `target` in `maxbytes` interceptor (#588)
- Added `partitionKey` in franzKafka (#562)
- Added `highPrecision` in `rateLimit` interceptor (#525)
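For the `db` relocation called out at the top of this list, a rough sketch of where the settings now live; the keys under `db` below are assumptions for illustration only, and the authoritative schema is in the linked db reference:
```yaml
# loggie.yml (system configuration); keys under db are illustrative assumptions
loggie:
  db:
    file: ./data/loggie.db
    flushTimeout: 2s
```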
### :bug: Bug Fixes
- Fixed panic when kubeEvent Series is nil (#459)
- Upgraded `automaxprocs` version to v1.5.1 (#488)
- Fixed set defaults failed in `fieldsUnderKey` (#513)
- Fixed parse condition failed when contain ERROR in transformer interceptor (#514) (#515)
- Fixed grpc batch out-of-order data streams (#517)
- Fixed large lines possibly causing OOM (#529)
- Fixed duplicated batchSize in queue (#533)
- Fixed sqlite locked panic (#524)
- Fixed command can't be used in multi-arch container (#541)
- Fixed `logger listener` possibly causing blocking (#561) (#552)
- Fixed `sink concurrency` deepCopy failed (#563)
- Drop events when partial error in elasticsearch sink (#572)
# Release v1.4.0
### :star2: Features
- Added Loggie dashboard feature for easier troubleshooting (#416)
- Enhanced log alerting function with more flexible log alert detection rules and added alertWebhook sink (#392)
- Added sink concurrency support for automatic adaptation based on downstream delay (#376)
- Added franzKafka sink for users who prefer the franz kafka library (#423)
- Added elasticsearch source (#345)
- Added zinc sink (#254)
- Added pulsar sink (#417)
- Added grok action to transformer interceptor (#418)
- Added split action to transformer interceptor (#411)
- Added jsonEncode action to transformer interceptor (#421)
- Added fieldsFromPath configuration to source for obtaining fields from file content (#401)
- Added fieldsRef parameter to filesource listener for obtaining key value from fields configuration and adding to metrics as label (#402)
- In transformer interceptor, added dropIfError support to drop event if action execution fails (#409)
- Added info listener which currently exposes loggie_info_stat metrics and displays version label (#410)
- Added support for customized kafka sink partition key
- Added sasl support to Kafka source (#415)
- Added https insecureSkipVerify support to loki sink (#422)
- Optimized file source for large files (#430)
- Changed default value of file source maxOpenFds to 1024 (#437)
- ContainerRuntime can now be set to none (#439)
- Upgraded to go 1.18 (#440)
- Optimize the configuration parameters to remove the redundancy generated by rendering
### :bug: Bug Fixes
- Added source fields to filesource listener (#402)
- Fixed issue of transformer copy action not copying non-string body (#420)
- Added fetching of logs file from UpperDir when rootfs collection is enabled (#414)
- Fix pipeline restart npe (#454)
- Fix create dir soft link job (#453)
# Release v1.4.0-rc.0
### :star2: Features
- Added Loggie dashboard feature for easier troubleshooting (#416)
- Enhanced log alerting function with more flexible log alert detection rules and added alertWebhook sink (#392)
- Added sink concurrency support for automatic adaptation based on downstream delay (#376)
- Added franzKafka sink for users who prefer the franz kafka library (#423)
- Added elasticsearch source (#345)
- Added zinc sink (#254)
- Added pulsar sink (#417)
- Added grok action to transformer interceptor (#418)
- Added split action to transformer interceptor (#411)
- Added jsonEncode action to transformer interceptor (#421)
- Added fieldsFromPath configuration to source for obtaining fields from file content (#401)
- Added fieldsRef parameter to filesource listener for obtaining key value from fields configuration and adding to metrics as label (#402)
- In transformer interceptor, added dropIfError support to drop event if action execution fails (#409)
- Added info listener which currently exposes loggie_info_stat metrics and displays version label (#410)
- Added support for customized kafka sink partition key
- Added sasl support to Kafka source (#415)
- Added https insecureSkipVerify support to loki sink (#422)
- Optimized file source for large files (#430)
- Changed default value of file source maxOpenFds to 1024 (#437)
- ContainerRuntime can now be set to none (#439)
- Upgraded to go 1.18 (#440)
### :bug: Bug Fixes
- Added source fields to filesource listener (#402)
- Fixed issue of transformer copy action not copying non-string body (#420)
- Added fetching of logs file from UpperDir when rootfs collection is enabled (#414)
# Release v1.3.0
### :star2: Features

go.mod (70 changed lines)

@ -6,18 +6,18 @@ require (
github.com/aliyun/aliyun-log-go-sdk v0.1.35
github.com/andres-erbsen/clock v0.0.0-20160526145045-9e14626cd129
github.com/bmatcuk/doublestar/v4 v4.0.2
github.com/creasty/defaults v1.5.1
github.com/creasty/defaults v1.7.0
github.com/dgraph-io/badger/v3 v3.2103.5
github.com/docker/docker v17.12.0-ce-rc1.0.20200706150819-a40b877fbb9e+incompatible
github.com/fsnotify/fsnotify v1.5.4
github.com/gdamore/tcell/v2 v2.4.1-0.20210905002822-f057f0a857a1
github.com/go-playground/validator/v10 v10.4.1
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v0.0.3
github.com/google/go-cmp v0.5.8
github.com/google/go-cmp v0.5.9
github.com/hpcloud/tail v1.0.0
github.com/jcmturner/gokrb5/v8 v8.4.3
github.com/json-iterator/go v1.1.12
github.com/mattn/go-sqlite3 v1.14.6
github.com/mattn/go-zglob v0.0.3
github.com/mmaxiaolei/backoff v0.0.0-20210104115436-e015e09efaba
github.com/olivere/elastic/v7 v7.0.28
@ -29,27 +29,25 @@ require (
github.com/prometheus/prometheus v1.8.2-0.20201028100903-3245b3267b24
github.com/rivo/tview v0.0.0-20221029100920-c4a7e501810d
github.com/rs/zerolog v1.20.0
github.com/segmentio/kafka-go v0.4.23
github.com/segmentio/kafka-go v0.4.39
github.com/shirou/gopsutil/v3 v3.22.2
github.com/smartystreets-prototypes/go-disruptor v0.0.0-20200316140655-c96477fd7a6a
github.com/stretchr/testify v1.8.0
github.com/thinkeridea/go-extend v1.3.2
github.com/stretchr/testify v1.8.2
github.com/twmb/franz-go v1.10.4
github.com/twmb/franz-go/pkg/sasl/kerberos v1.1.0
go.uber.org/atomic v1.7.0
go.uber.org/automaxprocs v0.0.0-20200415073007-b685be8c1c23
golang.org/x/net v0.0.0-20220812174116-3211cb980234
golang.org/x/text v0.3.7
go.uber.org/automaxprocs v1.5.1
golang.org/x/net v0.17.0
golang.org/x/text v0.13.0
golang.org/x/time v0.0.0-20220609170525-579cf78fd858
google.golang.org/grpc v1.47.0
google.golang.org/protobuf v1.28.0
google.golang.org/grpc v1.54.0
google.golang.org/protobuf v1.30.0
gopkg.in/natefinch/lumberjack.v2 v2.0.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.25.4
k8s.io/apimachinery v0.25.4
k8s.io/client-go v0.25.4
k8s.io/code-generator v0.25.4
k8s.io/cri-api v0.24.0
sigs.k8s.io/yaml v1.3.0
)
@ -59,10 +57,18 @@ require (
github.com/DataDog/zstd v1.5.0 // indirect
github.com/apache/pulsar-client-go/oauth2 v0.0.0-20220120090717-25e59572242e // indirect
github.com/ardielle/ardielle-go v1.5.2 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/danieljoos/wincred v1.0.2 // indirect
github.com/dgraph-io/ristretto v0.1.1 // indirect
github.com/dvsekhvalnov/jose2go v0.0.0-20200901110807-248326c1351b // indirect
github.com/emirpasic/gods v1.12.0 // indirect
github.com/fatih/color v1.10.0 // indirect
github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect
github.com/golang-jwt/jwt v3.2.2+incompatible // indirect
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b // indirect
github.com/golang/mock v1.5.0 // indirect
github.com/google/flatbuffers v1.12.1 // indirect
github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/jcmturner/aescts/v2 v2.0.0 // indirect
@ -71,19 +77,32 @@ require (
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
github.com/keybase/go-keychain v0.0.0-20190712205309-48d3d31d256d // indirect
github.com/klauspost/compress v1.15.9 // indirect
github.com/klauspost/cpuid/v2 v2.0.9 // indirect
github.com/konsorten/go-windows-terminal-sequences v1.0.3 // indirect
github.com/linkedin/goavro/v2 v2.9.8 // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-isatty v0.0.12 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mtibben/percent v0.2.1 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
github.com/pierrec/lz4/v4 v4.1.15 // indirect
github.com/satori/go.uuid v1.2.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/stathat/consistent v1.0.0 // indirect
github.com/tidwall/gjson v1.13.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/twmb/franz-go/pkg/kmsg v1.2.0 // indirect
golang.org/x/crypto v0.0.0-20220817201139-bc19a97f63c8 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/arch v0.0.0-20210923205945-b76863e36670 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/tools v0.1.12 // indirect
golang.org/x/sys v0.13.0 // indirect
golang.org/x/term v0.13.0 // indirect
golang.org/x/tools v0.6.0 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368 // indirect
)
@ -97,7 +116,7 @@ require (
github.com/cenkalti/backoff v2.2.1+incompatible // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
@ -113,7 +132,7 @@ require (
github.com/go-playground/locales v0.13.0 // indirect
github.com/go-playground/universal-translator v0.17.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
@ -140,8 +159,8 @@ require (
github.com/spf13/pflag v1.0.5 // indirect
github.com/tklauser/go-sysconf v0.3.9 // indirect
github.com/tklauser/numcpus v0.3.0 // indirect
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c // indirect
github.com/xdg/stringprep v1.0.0 // indirect
github.com/xdg/scram v1.0.5 // indirect
github.com/xdg/stringprep v1.0.3 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
gopkg.in/fsnotify.v1 v1.4.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
@ -156,12 +175,21 @@ require (
)
require (
github.com/apache/rocketmq-client-go/v2 v2.1.1
github.com/bytedance/sonic v1.9.2
github.com/dustin/go-humanize v1.0.0
github.com/elastic/go-elasticsearch/v7 v7.17.10
github.com/goccy/go-json v0.10.2
github.com/goccy/go-yaml v1.11.0
github.com/mattn/go-sqlite3 v1.11.0
k8s.io/cri-api v0.28.3
k8s.io/metrics v0.25.4
sigs.k8s.io/controller-runtime v0.13.1
)
replace (
github.com/docker/docker => github.com/docker/docker v1.13.1
github.com/elastic/go-elasticsearch/v7 => github.com/loggie-io/go-elasticsearch/v7 v7.17.11-0.20230703032733-f33cec60fa85
google.golang.org/grpc => google.golang.org/grpc v1.33.2
google.golang.org/protobuf => google.golang.org/protobuf v1.26.0
gopkg.in/natefinch/lumberjack.v2 v2.0.0 => github.com/machine3/lumberjack v0.2.0

go.sum

@ -66,8 +66,9 @@ github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6L
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.1.0 h1:ksErzDEI1khOiGPgpwuI7x2ebx/uXQNw7xJpn9Eq1+I=
github.com/BurntSushi/toml v1.1.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.3.3/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
@ -78,6 +79,7 @@ github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@ -108,12 +110,15 @@ github.com/apache/pulsar-client-go v0.8.1 h1:UZINLbH3I5YtNzqkju7g9vrl4CKrEgYSx2r
github.com/apache/pulsar-client-go v0.8.1/go.mod h1:yJNcvn/IurarFDxwmoZvb2Ieylg630ifxeO/iXpk27I=
github.com/apache/pulsar-client-go/oauth2 v0.0.0-20220120090717-25e59572242e h1:EqiJ0Xil8NmcXyupNqXV9oYDBeWntEIegxLahrTr8DY=
github.com/apache/pulsar-client-go/oauth2 v0.0.0-20220120090717-25e59572242e/go.mod h1:Xee4tgYLFpYcPMcTfBYWE1uKRzeciodGTSEDMzsR6i8=
github.com/apache/rocketmq-client-go/v2 v2.1.1 h1:WY/LkOYSQaVyV+HOqdiIgF4LE3beZ/jwdSLKZlzpabw=
github.com/apache/rocketmq-client-go/v2 v2.1.1/go.mod h1:GZzExtXY9zpI6FfiVJYAhw2IXQtgnHUuWpULo7nr5lw=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/ardielle/ardielle-go v1.5.2 h1:TilHTpHIQJ27R1Tl/iITBzMwiUGSlVfiVhwDNGM3Zj4=
github.com/ardielle/ardielle-go v1.5.2/go.mod h1:I4hy1n795cUhaVt/ojz83SNVCYIGsAFAONtv2Dr7HUI=
github.com/ardielle/ardielle-tools v1.5.4/go.mod h1:oZN+JRMnqGiIhrzkRN9l26Cej9dEx4jeNG6A+AdkShk=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.3.3/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
@ -140,16 +145,23 @@ github.com/bmatcuk/doublestar/v4 v4.0.2/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTS
github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40/go.mod h1:8rLXio+WjiTceGBHIoTvn60HIbs7Hm7bcHjyrSqYB9c=
github.com/bmizerany/perks v0.0.0-20141205001514-d9a9656a3a4b/go.mod h1:ac9efd0D1fsDb3EJvhqgXRbFx7bs2wqZ10HQPeU8U/Q=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1O2AihPM=
github.com/bytedance/sonic v1.9.2 h1:GDaNjuWSGu09guE9Oql0MSTNhNCLlWwO8y/xM5BzcbM=
github.com/bytedance/sonic v1.9.2/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
github.com/c-bata/go-prompt v0.2.2/go.mod h1:VzqtzE2ksDBcdln8G7mk2RX9QyGjH+OVqOCSiVIqS34=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible h1:tNowT99t7UNflLxfYYSlKYsBpXdEet03Pg2g16Swow4=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.0.2/go.mod h1:eEew/i+1Q6OrCDZh3WiXYv3+nJwBASZ8Bog/87DQnVg=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
@ -162,32 +174,42 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/containerd/containerd v1.3.4/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creasty/defaults v1.5.1 h1:j8WexcS3d/t4ZmllX4GEkl4wIB/trOr035ajcLHCISM=
github.com/creasty/defaults v1.5.1/go.mod h1:FPZ+Y0WNrbqOVw+c6av63eyHUAl6pMHZwqLPvXUZGfY=
github.com/creasty/defaults v1.7.0 h1:eNdqZvc5B509z18lD8yc212CAqJNvfT1Jq6L8WowdBA=
github.com/creasty/defaults v1.7.0/go.mod h1:iGzKe6pbEHnpMPtfDXZEr0NVxWnPTjb1bbDy08fPzYM=
github.com/danieljoos/wincred v1.0.2 h1:zf4bhty2iLuwgjgpraD2E9UbvO+fe54XXGJbOwe23fU=
github.com/danieljoos/wincred v1.0.2/go.mod h1:SnuYRW9lp1oJrZX/dXJqr0cPK5gYXqx3EJbmjhLdK9U=
github.com/dave/jennifer v1.2.0/go.mod h1:fIb+770HOpJ2fmN9EPPKOqm1vMGhB+TwXKMZhrIygKg=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgraph-io/badger/v3 v3.2103.5 h1:ylPa6qzbjYRQMU6jokoj4wzcaweHylt//CH0AKt0akg=
github.com/dgraph-io/badger/v3 v3.2103.5/go.mod h1:4MPiseMeDQ3FNCYwRbbcBOGJLf5jsE0PPFzRiKjtcdw=
github.com/dgraph-io/ristretto v0.1.1 h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8=
github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-bitstream v0.0.0-20180413035011-3522498ce2c8/go.mod h1:VMaSuZ+SZcx/wljOQKvp5srsbCiKDEb6K2wC4+PiBmQ=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/digitalocean/godo v1.46.0/go.mod h1:p7dOjjtSBqCTUksqtA5Fd3uaKs9kyTq2xcz76ulEJRU=
github.com/dimfeld/httptreemux v5.0.1+incompatible h1:Qj3gVcDNoOthBAqftuD596rm4wg/adLLz5xh5CmpiCA=
github.com/dimfeld/httptreemux v5.0.1+incompatible/go.mod h1:rbUlSV+CCpv/SuqUTP/8Bk2O3LyUV436/yaRGkhP6Z0=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v1.13.1 h1:IkZjBSIc8hBjLpqeAbeE5mca5mNgeatLHBy3GO78BWo=
github.com/docker/docker v1.13.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@ -198,10 +220,11 @@ github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDD
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dvsekhvalnov/jose2go v0.0.0-20200901110807-248326c1351b h1:HBah4D48ypg3J7Np4N+HY/ZR76fx3HEUGxDU6Uk39oQ=
github.com/dvsekhvalnov/jose2go v0.0.0-20200901110807-248326c1351b/go.mod h1:7BvyPhdbLxMXIYTFPLsyJRFMsKmOZnQmzh6Gb+uquuM=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/eclipse/paho.mqtt.golang v1.2.0/go.mod h1:H9keYFcgq3Qr5OUJm/JZI/i6U7joQ8SYLhZwfeOo6Ts=
@ -210,6 +233,8 @@ github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkg
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.8.0 h1:eCZ8ulSerjdAiaNpF7GxXIE7ZCMo1moN1qX+S609eVw=
github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg=
github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@ -217,15 +242,16 @@ github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.10.0 h1:s36xzo75JdqLaaWoiEHk767eHiwo0598uUxyfiPkDsg=
github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/frankban/quicktest v1.10.2 h1:19ARM85nVi4xH7xPXuc5eM/udya5ieh7b/Sv+d844Tk=
github.com/frankban/quicktest v1.10.2/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
@ -361,6 +387,10 @@ github.com/gobuffalo/packd v0.1.0/go.mod h1:M2Juc+hhDXf/PnmBANFCqx4DM3wRbgDvnVWe
github.com/gobuffalo/packr/v2 v2.0.9/go.mod h1:emmyGweYTm6Kdper+iywB6YK5YzuKchGtJQZ0Odn4pQ=
github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/VCm/3ptBN+0=
github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-yaml v1.11.0 h1:n7Z+zx8S9f9KgzG6KtQKf+kwqXZlLNR2F6018Dgau54=
github.com/goccy/go-yaml v1.11.0/go.mod h1:H+mJrWtjPTJAHvRbV09MCK9xYwODM+wRTVFFTWckfng=
github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 h1:ZpnhV/YsD2/4cESfV5+Hoeu/iUR3ruzNvZ+yQfO03a0=
github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
@ -376,6 +406,7 @@ github.com/golang-jwt/jwt v3.2.2+incompatible h1:IfV12K8xAKAnZqdXVzCZ+TOjboZ2keL
github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@ -389,6 +420,7 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0 h1:jlYHihg//f7RRwuPfptm04yp4s7O6Kw8EZiVYIGcH0g=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@ -402,8 +434,9 @@ github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.2/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
@ -412,6 +445,8 @@ github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEW
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/flatbuffers v1.11.0/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/flatbuffers v1.12.1 h1:MVlul7pQNoDzWRLTw5imwYsl+usrS1TXG2H4jg6ImGw=
github.com/google/flatbuffers v1.12.1/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@ -427,8 +462,8 @@ github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
@ -457,6 +492,7 @@ github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gophercloud/gophercloud v0.13.0/go.mod h1:VX0Ibx85B60B5XOrZr6kaNwrmPUzcmMpwxvQ1WQIIWM=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
@ -568,6 +604,7 @@ github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHm
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jsternberg/zap-logfmt v1.0.0/go.mod h1:uvPs/4X51zdkcm5jXl5SYoN+4RK21K8mysFmDaM/h+o=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
@ -583,11 +620,13 @@ github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.9.8/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.10.8/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.12.3/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.15.9 h1:wKRjX6JRtDdrE9qwa4b/Cip7ACOshUI4smpCQanqjSY=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid/v2 v2.0.9 h1:lgaqFMSdTdQYdZ04uHyN2d/eKdOMyi2YLSvlQIBFYa4=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@ -598,8 +637,8 @@ github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
@ -613,12 +652,15 @@ github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-b
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/linkedin/goavro/v2 v2.9.8 h1:jN50elxBsGBDGVDEKqUlDuU1cFwJ11K/yrJCBMe/7Wg=
github.com/linkedin/goavro/v2 v2.9.8/go.mod h1:UgQUb2N/pmueQYH9bfqFioWxzYCZXSfF8Jw03O5sjqA=
github.com/loggie-io/go-elasticsearch/v7 v7.17.11-0.20230703032733-f33cec60fa85 h1:t9Y/0SeQK0xVLMW9zT4hvko0ZfuG6xygaTZVsp4sVoE=
github.com/loggie-io/go-elasticsearch/v7 v7.17.11-0.20230703032733-f33cec60fa85/go.mod h1:TZeb+NAt2QlTMff6gshmn3T/VzCU6QbfLOBIwglrFsE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/machine3/lumberjack v0.2.0 h1:rkHdJafL6TvRlrDZLkVn2uSaRoGg3O9jh1KdF/VTb5c=
github.com/machine3/lumberjack v0.2.0/go.mod h1:waeKSoFFQ3bvIC6e3VbRgewexbAuoDigWF6YN6EOudw=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@ -635,19 +677,21 @@ github.com/markbates/safe v1.0.1/go.mod h1:nAqgmRi7cY2nqMc92/bSEeQA+R4OheNU2T1kN
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.11.0 h1:LDdKkqtYlom37fkvqs8rMPFKAMe8+SgjbwZ6ex1/A/Q=
github.com/mattn/go-sqlite3 v1.11.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.14.6 h1:dNPt6NO46WmLVt2DLNpwczCmdV5boIZ6g/tlDrlRUbg=
github.com/mattn/go-sqlite3 v1.14.6/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/mattn/go-tty v0.0.0-20180907095812-13ff1204f104/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE=
github.com/mattn/go-zglob v0.0.3 h1:6Ry4EYsScDyt5di4OI6xw1bYhOqfE5S33Z1OPy+d+To=
github.com/mattn/go-zglob v0.0.3/go.mod h1:9fxibJccNxU2cnpIKLRRFA7zX7qhkJIQWBb449FYHOo=
@ -698,7 +742,6 @@ github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzE
github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
@ -744,8 +787,11 @@ github.com/panjf2000/ants/v2 v2.4.7 h1:MZnw2JRyTJxFwtaMtUJcwE618wKD04POWk2gwwP4E
github.com/panjf2000/ants/v2 v2.4.7/go.mod h1:f6F0NZVFsGCp5A7QW/Zj/m92atWwOkY0OIhFxRNFr4A=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/paulbellamy/ratecounter v0.2.0/go.mod h1:Hfx1hDpSGoqxkVVpBi/IlYD7kChlfo5C6hzIHwPqfFE=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
@ -771,6 +817,7 @@ github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndr
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prometheus/alertmanager v0.21.0/go.mod h1:h7tJ81NA0VLWvWEayi1QltevFkLF3KxmC/malTcT8Go=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
@ -825,21 +872,24 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.20.0 h1:38k9hgtUBdxFwE34yS8rTHmHBa4eN16E4DJlv177LNs=
github.com/rs/zerolog v1.20.0/go.mod h1:IzD0RJ65iWH0w97OQQebJEvTZYvsCUm9WVLWBQrJRjo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/samuel/go-zookeeper v0.0.0-20200724154423-2164a8ac840e/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
github.com/segmentio/kafka-go v0.4.23 h1:jjacNjmn1fPvkVGFs6dej98fa7UT/bYF8wZBFMMIld4=
github.com/segmentio/kafka-go v0.4.23/go.mod h1:XzMcoMjSzDGHcIwpWUI7GB43iKZ2fTVmryPSGLf/MPg=
github.com/segmentio/kafka-go v0.4.39 h1:75smaomhvkYRwtuOwqLsdhgCG30B82NsbdkdDfFbvrw=
github.com/segmentio/kafka-go v0.4.39/go.mod h1:T0MLgygYvmqmBvC+s8aCcbVNfJN4znVne5j0Pzowp/Q=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shirou/gopsutil/v3 v3.22.2 h1:wCrArWFkHYIdDxx/FSfF5RB4dpJYW6t7rcp3+zL8uks=
github.com/shirou/gopsutil/v3 v3.22.2/go.mod h1:WapW1AOOPlHyXr+yOyw3uYx36enocrtSoSBy0L5vUHY=
@ -856,8 +906,10 @@ github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrf
github.com/smartystreets-prototypes/go-disruptor v0.0.0-20200316140655-c96477fd7a6a h1:mHEYm/fBGwtGwgIW/tlprwUd2syvcue8oosLVuQscic=
github.com/smartystreets-prototypes/go-disruptor v0.0.0-20200316140655-c96477fd7a6a/go.mod h1:slFCjqF2v0VgmCeB+J4uEy0d7HAgLkgEjVrG0DPO67M=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.1.1 h1:T/YLemO5Yp7KPzS+lVtu+WsHn8yoSwTfItdAd1r3cck=
github.com/smartystreets/assertions v1.1.1/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYlVhC/LOxJk7iOWnoo=
github.com/smartystreets/go-aws-auth v0.0.0-20180515143844-0c1422d1fdb9/go.mod h1:SnhjPscd9TpLiy1LpzGSKh3bXCfxxXuqd9xmQJy3slM=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/gunit v1.4.2/go.mod h1:ZjM1ozSIMJlAz/ay4SG8PeKF00ckUp+zMHZXV9/bvak=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
@ -865,19 +917,25 @@ github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJ
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
github.com/stathat/consistent v1.0.0 h1:ZFJ1QTRn8npNBKW065raSZ8xfOqhpb8vLOkfp4CcL/U=
github.com/stathat/consistent v1.0.0/go.mod h1:uajTPbgSygZBJ+V+0mY7meZ8i0XAcZs7AQ6V121XSxw=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
@ -885,8 +943,9 @@ github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5J
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0 h1:M2gUjqZET1qApGOWNSnZ49BAIMX4F/1plDv3+l31EJ4=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.0/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@ -895,12 +954,18 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/thinkeridea/go-extend v1.3.2 h1:0ZImRXpJc+wBNIrNEMbTuKwIvJ6eFoeuNAewvzONrI0=
github.com/thinkeridea/go-extend v1.3.2/go.mod h1:xqN1e3y1PdVSij1VZp6iPKlO8I4jLbS8CUuTySj981g=
github.com/tidwall/gjson v1.13.0 h1:3TFY9yxOQShrvmjdM76K+jc66zJeT6D3/VFFYCGQf7M=
github.com/tidwall/gjson v1.13.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tklauser/go-sysconf v0.3.9 h1:JeUVdAOWhhxVcU6Eqr/ATFHgXk/mmiItdKeJPev3vTo=
github.com/tklauser/go-sysconf v0.3.9/go.mod h1:11DU/5sG7UexIrp/O6g35hrWzu0JxlwQ3LSFUzyeuhs=
@ -908,6 +973,8 @@ github.com/tklauser/numcpus v0.3.0 h1:ILuRUQBtssgnxw0XXIjKUC56fgnOrFoQQ/4+DeU2bi
github.com/tklauser/numcpus v0.3.0/go.mod h1:yFGUr7TUHQRAhyqBcEg0Ge34zDBAsIvJJcyE6boqnA8=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/twmb/franz-go v1.7.0/go.mod h1:PMze0jNfNghhih2XHbkmTFykbMF5sJqmNJB31DOOzro=
github.com/twmb/franz-go v1.10.4 h1:1PGpRG0uGTSSZCBV6lAMYcuVsyReMqdNBQRd8QCzw9U=
github.com/twmb/franz-go v1.10.4/go.mod h1:PMze0jNfNghhih2XHbkmTFykbMF5sJqmNJB31DOOzro=
@ -917,18 +984,21 @@ github.com/twmb/franz-go/pkg/sasl/kerberos v1.1.0 h1:alKdbddkPw3rDh+AwmUEwh6HNYg
github.com/twmb/franz-go/pkg/sasl/kerberos v1.1.0/go.mod h1:k8BoBjyUbFj34f0rRbn+Ky12sZFAPbmShrg0karAIMo=
github.com/uber/jaeger-client-go v2.25.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v2.4.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
github.com/willf/bitset v1.1.3/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c h1:u40Z8hqBAAQyv+vATcGgV0YCnDjqSL7/q/JyPhhJSPk=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/scram v1.0.5 h1:TuS0RFmt5Is5qm9Tm2SoD89OPqe4IRiFtyFY4iwWXsw=
github.com/xdg/scram v1.0.5/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xdg/stringprep v1.0.0 h1:d9X0esnoa3dFsV0FG35rAT0RIhYFlPq7MiP+DW89La0=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xdg/stringprep v1.0.3 h1:cmL5Enob4W83ti/ZHuZLuKD/xqJfus4fVPwE+/BDm+4=
github.com/xdg/stringprep v1.0.3/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xlab/treeprint v1.0.0/go.mod h1:IoImgRak9i3zJyuxOKUP1v4UZd1tMoKkq/Cimt1uhCg=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@ -954,13 +1024,15 @@ go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.5.1/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/automaxprocs v0.0.0-20200415073007-b685be8c1c23 h1:aT7f36RISw9ldPFbbefPwQpQtToX6MblzQXJZE96yTM=
go.uber.org/automaxprocs v0.0.0-20200415073007-b685be8c1c23/go.mod h1:9CWT6lKIep8U41DDaPiH6eFscnTyjfTANNQNx6LrIcA=
go.uber.org/automaxprocs v1.5.1 h1:e1YG66Lrk73dn4qhg8WFSvhF0JuFQF0ERIp4rpuV8Qk=
go.uber.org/automaxprocs v1.5.1/go.mod h1:BF4eumQw0P9GtnuxxovUd06vwm1o18oMzFtK66vU6XU=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
@ -973,12 +1045,14 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.21.0 h1:WefMeulhovoZ2sYXz7st6K0sLj7bBhpiFaud4r4zST8=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670 h1:18EFjUmQOcUvxNYSkA6jO9VAiXCnxFY6NyDX0bHDmkU=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190506204251-e1dfcc566284/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@ -992,9 +1066,11 @@ golang.org/x/crypto v0.0.0-20191202143827-86a70503ff7e/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220817201139-bc19a97f63c8 h1:GIAS/yBem/gq2MUqgNIzUHW7cJMmx3TGZOrnyYaNQ6c=
golang.org/x/crypto v0.0.0-20220817201139-bc19a97f63c8/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@ -1033,8 +1109,8 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -1087,10 +1163,11 @@ golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220706163947-c90051bbdb60/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220725212005-46097bf591d3/go.mod h1:AaygXjzTFtRAg2ttMY5RMuhpJ3cNnI0XpyFJD1iQRSM=
golang.org/x/net v0.0.0-20220812174116-3211cb980234 h1:RDqmgfe7SvlMWoqC3xwQ2blLO3fcWcxMa3eBLRdRW7E=
golang.org/x/net v0.0.0-20220812174116-3211cb980234/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -1119,7 +1196,7 @@ golang.org/x/sync v0.0.0-20200930132711-30421366ff76/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -1127,6 +1204,7 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -1199,18 +1277,20 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210816074244-15123e1e1f71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220111092808-5a964db01320/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 h1:WIoqL4EROvwiPdUtaip4VcDdpZ4kha7wBWZrbVKCIZg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20221010170243-090e33056c14/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -1219,8 +1299,9 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@ -1301,11 +1382,12 @@ golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.0.0-20181121035319-3f7ecaa7e8ca/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
@ -1400,8 +1482,9 @@ gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLks
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
@ -1451,8 +1534,8 @@ k8s.io/client-go v0.25.4 h1:3RNRDffAkNU56M/a7gUfXaEzdhZlYhoW8dgViGy5fn8=
k8s.io/client-go v0.25.4/go.mod h1:8trHCAC83XKY0wsBIpbirZU4NTUpbuhc2JnI7OruGZw=
k8s.io/code-generator v0.25.4 h1:tjQ7/+9eN7UOiU2DP+0v4ntTI4JZLi2c1N0WllpFhTc=
k8s.io/code-generator v0.25.4/go.mod h1:9F5fuVZOMWRme7MYj2YT3L9ropPWPokd9VRhVyD3+0w=
k8s.io/cri-api v0.24.0 h1:PZ/MqhgYq4rxCarYe2rGNmd8G9ZuyS1NU9igolbkqlI=
k8s.io/cri-api v0.24.0/go.mod h1:t3tImFtGeStN+ES69bQUX9sFg67ek38BM9YIJhMmuig=
k8s.io/cri-api v0.28.3 h1:84ifk56rAy7yYI1zYqTjLLishpFgs3q7BkCKhoLhmFA=
k8s.io/cri-api v0.28.3/go.mod h1:MTdJO2fikImnX+YzE2Ccnosj3Hw2Cinw2fXYV3ppUIE=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 h1:TT1WdmqqXareKxZ/oNXEUSwKlLiHzPMyB0t8BaFeBYI=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
@ -1486,3 +1569,5 @@ sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
stathat.com/c/consistent v1.0.0 h1:ezyc51EGcRPJUxfHGSgJjWzJdj3NiMU9pNfLNGiXV0c=
stathat.com/c/consistent v1.0.0/go.mod h1:QkzMWzcbB+yQBL2AttO6sgsQS/JSTapcDISJalmCDS0=

View File

@ -14,7 +14,7 @@ loggie:
sink: ~
queue: ~
pipeline: ~
normalize: ~
sys: ~
discovery:
enabled: false
@ -31,14 +31,18 @@ loggie:
defaults:
sink:
type: dev
printEvents: true
codec:
type: json
pretty: true
sources:
- type: file
timestampKey: "@timestamp"
bodyKey: "message"
fieldsUnderRoot: true
addonMeta: true
addonMetaSchema:
underRoot: true
fields:
filename: "${_meta.filename}"
line: "${_meta.line}"
watcher:
cleanFiles:
maxHistory: 1
maxOpenFds: 6000
http:
enabled: true

BIN logo/loggie-draw.png: new binary file, 15 KiB (not shown)
View File

@ -9,8 +9,7 @@ pipelines:
topic: "loggie"
sink:
type: dev
printEvents: true
codec:
pretty: true
type: json
printEvents: true
pretty: true

View File

@ -35,6 +35,23 @@ type PipelineConfig struct {
Pipelines []pipeline.Config `yaml:"pipelines" validate:"dive,required"`
}
func (c *PipelineConfig) DeepCopy() *PipelineConfig {
if c == nil {
return nil
}
out := new(PipelineConfig)
if len(c.Pipelines) > 0 {
pip := make([]pipeline.Config, 0)
for _, p := range c.Pipelines {
pip = append(pip, *p.DeepCopy())
}
out.Pipelines = pip
}
return out
}
func (c *PipelineConfig) Validate() error {
if err := c.ValidateUniquePipeName(); err != nil {
return err
@ -113,11 +130,21 @@ func ReadPipelineConfigFromFile(path string, ignore FileIgnore) (*PipelineConfig
for _, fn := range all {
pipes := &PipelineConfig{}
unpack := cfg.UnPackFromFile(fn, pipes)
if err = unpack.Defaults().Validate().Do(); err != nil {
log.Error("invalid pipeline configs: %v, \n%s", err, unpack.Contents())
if err = unpack.Do(); err != nil {
log.Error("read pipeline configs from path %s failed: %v", path, err)
continue
}
pipecfgs.AddPipelines(pipes.Pipelines)
for _, p := range pipes.Pipelines {
pip := p
if err := cfg.NewUnpack(nil, &pip, nil).Defaults().Validate().Do(); err != nil {
// ignore invalid pipelines, but continue to read the others;
// invalid pipelines will be re-checked by the reloader later
log.Error("pipeline: %s configs invalid: %v", p.Name, err)
continue
}
pipecfgs.AddPipelines([]pipeline.Config{pip})
}
}
return pipecfgs, nil
}
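
For orientation, a rough caller-side sketch of the reader above. The import path of this package and the underlying type of FileIgnore (taken here to be a func(string) bool predicate) are assumptions, not taken from this diff:

package example

import (
    "github.com/loggie-io/loggie/pkg/control" // import path assumed
    "github.com/loggie-io/loggie/pkg/core/log"
)

func loadPipelines() {
    // Assumed: FileIgnore is a func(string) bool predicate; here nothing is ignored.
    ignoreNone := func(path string) bool { return false }
    cfgs, err := control.ReadPipelineConfigFromFile("/etc/loggie/pipelines/*.yml", ignoreNone)
    if err != nil {
        log.Error("read pipeline configs failed: %v", err)
        return
    }
    for _, p := range cfgs.Pipelines {
        log.Info("loaded pipeline %s", p.Name) // invalid pipelines were skipped during reading
    }
}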

View File

@ -129,7 +129,7 @@ func (c *Controller) reportMetric(p pipeline.Config, eventType eventbus.Componen
Category: api.INTERCEPTOR,
})
}
eventbus.Publish(eventbus.PipelineTopic, eventbus.PipelineMetricData{
eventbus.PublishOrDrop(eventbus.PipelineTopic, eventbus.PipelineMetricData{
EventType: eventType,
Name: p.Name,
Time: time.Now(),

View File

@ -167,7 +167,7 @@ func (u *UnPack) unpack() *UnPack {
return u
}
err := yaml.Unmarshal(u.content, u.config)
err := yaml.UnmarshalWithPrettyError(u.content, u.config)
u.err = err
return u
}

pkg/core/event/alert.go: new file, 142 lines
View File

@ -0,0 +1,142 @@
/*
Copyright 2022 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package event
import (
"strings"
"time"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/log"
)
const (
sourceName = "sourceName"
pipelineName = "pipelineName"
timestamp = "timestamp"
meta = "_meta"
DefaultAlertKey = "_defaultAlertKey"
NoDataKey = "NoDataAlert"
Addition = "additions"
Fields = "fields"
ReasonKey = "reason"
AlertOriginDataKey = "Events"
loggieError = "LoggieError"
)
type Alert map[string]interface{}
type AlertMap map[string][]Alert
type PackageAndSendAlerts func(alerts []Alert)
func NewAlert(e api.Event, lineLimit int) Alert {
allMeta := e.Meta().GetAll()
if allMeta == nil {
allMeta = make(map[string]interface{})
}
if value, ok := allMeta[SystemSourceKey]; ok {
allMeta[sourceName] = value
}
if value, ok := allMeta[SystemPipelineKey]; ok {
allMeta[pipelineName] = value
}
if value, ok := allMeta[SystemProductTimeKey]; ok {
t, valueToTime := value.(time.Time)
if !valueToTime {
allMeta[timestamp] = value
} else {
textTime, err := t.MarshalText()
if err == nil {
allMeta[timestamp] = string(textTime)
} else {
allMeta[timestamp] = value
}
}
}
alert := Alert{
meta: allMeta,
}
if len(e.Body()) > 0 {
s := string(e.Body())
alert[Body] = splitBody(s, lineLimit)
}
for k, v := range e.Header() {
alert[k] = v
}
return alert
}
func splitBody(s string, lineLimit int) []string {
split := make([]string, 0)
for i, s := range strings.Split(s, "\n") {
if i > lineLimit {
log.Info("body exceeds line limit %d", lineLimit)
break
}
split = append(split, strings.TrimSpace(s))
}
return split
}
func ErrorToEvent(message string) *api.Event {
header := make(map[string]interface{})
var e api.Event
var meta api.Meta
meta = NewDefaultMeta()
meta.Set(SystemProductTimeKey, time.Now())
e = NewEvent(header, []byte(message))
e.Header()[ReasonKey] = loggieError
if len(log.AfterErrorConfig.Additions) > 0 {
e.Header()[Addition] = log.AfterErrorConfig.Additions
}
e.Fill(meta, header, e.Body())
return &e
}
func ErrorIntoEvent(event api.Event, message string) api.Event {
header := make(map[string]interface{})
var meta api.Meta
meta = NewDefaultMeta()
meta.Set(SystemProductTimeKey, time.Now())
header[ReasonKey] = loggieError
if len(log.AfterErrorConfig.Additions) > 0 {
header[Addition] = log.AfterErrorConfig.Additions
}
event.Fill(meta, header, []byte(message))
return event
}
func GenAlertsOriginData(alerts []Alert) map[string]interface{} {
return map[string]interface{}{
AlertOriginDataKey: alerts,
}
}
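
A minimal sketch of building an Alert from an event with the helpers above. The header key and body text are invented; NewEvent and NewAlert come from the event package extended by this file:

package example

import (
    "fmt"

    "github.com/loggie-io/loggie/pkg/core/event"
)

func buildAlert() {
    e := event.NewEvent(map[string]interface{}{"level": "error"}, []byte("panic: nil pointer\ngoroutine 1 [running]:"))
    // Keep at most 10 body lines; header fields and the _meta map are copied into the alert.
    alert := event.NewAlert(e, 10)
    fmt.Println(alert[event.Body])
}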

View File

@ -18,7 +18,7 @@ package event
import (
"fmt"
jsoniter "github.com/json-iterator/go"
"github.com/loggie-io/loggie/pkg/util/json"
"github.com/pkg/errors"
"strings"
"sync"
@ -128,7 +128,7 @@ func (de *DefaultEvent) Release() {
func (de *DefaultEvent) String() string {
var sb strings.Builder
sb.WriteString(`header:`)
header, _ := jsoniter.Marshal(de.Header())
header, _ := json.Marshal(de.Header())
sb.Write(header)
sb.WriteString(`, body:"`)
sb.WriteString(string(de.Body()))

View File

@ -28,6 +28,20 @@ type Config struct {
Properties cfg.CommonCfg `yaml:",inline"`
}
func (c *Config) DeepCopy() *Config {
if c == nil {
return nil
}
out := new(Config)
out.Enabled = c.Enabled
out.Name = c.Name
out.Type = c.Type
out.Properties = c.Properties.DeepCopy()
return out
}
func (c *Config) GetExtension() (*ExtensionConfig, error) {
ext := &ExtensionConfig{}

View File

@ -19,35 +19,39 @@ package log
import (
"flag"
"fmt"
"github.com/loggie-io/loggie/pkg/core/log/spi"
"github.com/rs/zerolog"
"gopkg.in/natefinch/lumberjack.v2"
"io"
"os"
"path"
"github.com/rs/zerolog"
"gopkg.in/natefinch/lumberjack.v2"
"github.com/loggie-io/loggie/pkg/core/log/spi"
"time"
)
var (
defaultLogger *Logger
AfterError spi.AfterError
gLoggerConfig = &LoggerConfig{}
defaultLogger *Logger
AfterError spi.AfterError
AfterErrorConfig AfterErrorConfiguration
gLoggerConfig = &LoggerConfig{}
)
func init() {
flag.StringVar(&gLoggerConfig.Level, "log.level", "info", "Global log output level")
flag.BoolVar(&gLoggerConfig.JsonFormat, "log.jsonFormat", false, "Parses the JSON log format")
flag.BoolVar(&gLoggerConfig.EnableStdout, "log.enableStdout", true, "EnableStdout enables printing logs to stdout")
flag.BoolVar(&gLoggerConfig.EnableFile, "log.enableFile", false, "EnableFile makes the framework log to a file")
flag.StringVar(&gLoggerConfig.Directory, "log.directory", "/var/log", "Directory to log to when log.enableFile is enabled")
flag.StringVar(&gLoggerConfig.Filename, "log.filename", "loggie.log", "Filename is the name of the logfile which will be placed inside the directory")
flag.IntVar(&gLoggerConfig.MaxSize, "log.maxSize", 1024, "Max size in MB of the logfile before it's rolled")
flag.IntVar(&gLoggerConfig.MaxBackups, "log.maxBackups", 3, "Max number of rolled files to keep")
flag.IntVar(&gLoggerConfig.MaxAge, "log.maxAge", 7, "Max age in days to keep a logfile")
flag.StringVar(&gLoggerConfig.TimeFormat, "log.timeFormat", "2006-01-02 15:04:05", "TimeFormat log time format")
flag.IntVar(&gLoggerConfig.CallerSkipCount, "log.callerSkipCount", 4, "CallerSkipCount is the number of stack frames to skip to find the caller")
flag.BoolVar(&gLoggerConfig.NoColor, "log.noColor", false, "NoColor disables the colorized output")
SetFlag(flag.CommandLine)
}
func SetFlag(f *flag.FlagSet) {
f.StringVar(&gLoggerConfig.Level, "log.level", "info", "Global log output level")
f.BoolVar(&gLoggerConfig.JsonFormat, "log.jsonFormat", false, "Parses the JSON log format")
f.BoolVar(&gLoggerConfig.EnableStdout, "log.enableStdout", true, "EnableStdout enables printing logs to stdout")
f.BoolVar(&gLoggerConfig.EnableFile, "log.enableFile", false, "EnableFile makes the framework log to a file")
f.StringVar(&gLoggerConfig.Directory, "log.directory", "/var/log", "Directory to log to when log.enableFile is enabled")
f.StringVar(&gLoggerConfig.Filename, "log.filename", "loggie.log", "Filename is the name of the logfile which will be placed inside the directory")
f.IntVar(&gLoggerConfig.MaxSize, "log.maxSize", 1024, "Max size in MB of the logfile before it's rolled")
f.IntVar(&gLoggerConfig.MaxBackups, "log.maxBackups", 3, "Max number of rolled files to keep")
f.IntVar(&gLoggerConfig.MaxAge, "log.maxAge", 7, "Max age in days to keep a logfile")
f.StringVar(&gLoggerConfig.TimeFormat, "log.timeFormat", "2006-01-02 15:04:05", "TimeFormat log time format")
f.IntVar(&gLoggerConfig.CallerSkipCount, "log.callerSkipCount", 4, "CallerSkipCount is the number of stack frames to skip to find the caller")
f.BoolVar(&gLoggerConfig.NoColor, "log.noColor", false, "NoColor disables the colorized output")
}
type LoggerConfig struct {
@ -171,6 +175,25 @@ func (logger *Logger) Fatal(format string, a ...interface{}) {
}
}
func (logger *Logger) SubLogger(name string) *Logger {
subLogger := logger.l.With().Str("component", name).CallerWithSkipFrameCount(gLoggerConfig.CallerSkipCount - 1).Logger()
return &Logger{
l: &subLogger,
}
}
// Sample returns a logger with a sampler.
// max: the maximum number of events to be logged per period
func (logger *Logger) Sample(max uint32, period time.Duration) *Logger {
s := logger.l.Sample(&zerolog.BurstSampler{
Burst: max,
Period: period,
})
return &Logger{
l: &s,
}
}
func (logger *Logger) GetLevel() string {
return logger.l.GetLevel().String()
}
@ -213,6 +236,10 @@ func Fatal(format string, a ...interface{}) {
defaultLogger.Fatal(format, a...)
}
func SubLogger(name string) *Logger {
return defaultLogger.SubLogger(name)
}
func afterErrorOpt(format string, a ...interface{}) {
if AfterError == nil {
return
@ -229,3 +256,7 @@ func afterErrorOpt(format string, a ...interface{}) {
func Level() zerolog.Level {
return defaultLogger.l.GetLevel()
}
type AfterErrorConfiguration struct {
Additions map[string]interface{} `yaml:"additions,omitempty"`
}
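
A short sketch of the new SubLogger and Sample helpers. The Info method on *Logger is an assumption; only Fatal and GetLevel appear in this hunk:

package example

import (
    "time"

    "github.com/loggie-io/loggie/pkg/core/log"
)

func sampledComponentLog(scanned int) {
    comp := log.SubLogger("file-source")      // adds component=file-source to every record
    sampled := comp.Sample(5, time.Second)    // at most 5 records per second
    sampled.Info("scanned %d files", scanned) // Info on *Logger is assumed to exist
}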

View File

@ -0,0 +1,95 @@
/*
Copyright 2022 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package logalert
import (
"github.com/loggie-io/loggie/pkg/core/event"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/util/pattern"
"github.com/loggie-io/loggie/pkg/util/runtime"
)
const (
DefaultAlertKey = event.DefaultAlertKey
)
type GroupConfig struct {
Pattern *pattern.Pattern
AlertSendingThreshold int
}
func GroupAlerts(alertMap event.AlertMap, alerts []event.Alert, sendFunc event.PackageAndSendAlerts, p GroupConfig) {
checkAlertsLists := func(key string) {
list := alertMap[key]
if len(list) >= p.AlertSendingThreshold {
sendFunc(list)
alertMap[key] = nil
}
}
if p.Pattern == nil {
alertMap[DefaultAlertKey] = append(alertMap[DefaultAlertKey], alerts...)
checkAlertsLists(DefaultAlertKey)
return
}
keyMap := make(map[string]struct{})
for _, alert := range alerts {
obj := runtime.NewObject(alert)
render, err := p.Pattern.WithObject(obj).Render()
if err != nil {
log.Warn("fail to render group key. Put alert in default group")
alertMap[DefaultAlertKey] = append(alertMap[DefaultAlertKey], alert)
keyMap[DefaultAlertKey] = struct{}{}
continue
}
log.Debug("alert group key %s", render)
alertMap[render] = append(alertMap[render], alert)
keyMap[render] = struct{}{}
}
for key := range keyMap {
checkAlertsLists(key)
}
}
func GroupAlertsAtOnce(alerts []event.Alert, sendFunc event.PackageAndSendAlerts, p GroupConfig) {
if p.Pattern == nil {
sendFunc(alerts)
return
}
tempMap := make(event.AlertMap)
for _, alert := range alerts {
obj := runtime.NewObject(alert)
render, err := p.Pattern.WithObject(obj).Render()
if err != nil {
log.Warn("fail to render group key. Put alert in default group")
tempMap[DefaultAlertKey] = append(tempMap[DefaultAlertKey], alert)
continue
}
log.Debug("alert group key %s", render)
tempMap[render] = append(tempMap[render], alert)
}
for _, list := range tempMap {
sendFunc(list)
}
}
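
A hedged sketch of driving the grouping helpers above. The send function is a stand-in, the logalert import path is a guess, and only GroupAlertsAtOnce, GroupConfig and event.Alert come from this diff:

package example

import (
    "github.com/loggie-io/loggie/pkg/core/event"
    "github.com/loggie-io/loggie/pkg/core/log"
    "github.com/loggie-io/loggie/pkg/util/eventbus/logalert" // import path assumed
)

func flushAlerts(alerts []event.Alert) {
    send := func(batch []event.Alert) {
        // Stand-in: serialize and POST the batch to an alerting webhook.
        log.Info("sending %d alerts", len(batch))
    }
    // With a nil Pattern, all alerts are sent as a single batch.
    logalert.GroupAlertsAtOnce(alerts, send, logalert.GroupConfig{})
}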

View File

@ -0,0 +1,41 @@
/*
Copyright 2022 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package logalert
import "time"
type Config struct {
Addr []string `yaml:"addr,omitempty"`
BufferSize int `yaml:"bufferSize,omitempty" default:"100"`
BatchTimeout time.Duration `yaml:"batchTimeout,omitempty" default:"10s"`
BatchSize int `yaml:"batchSize,omitempty" default:"10"`
AlertConfig `yaml:",inline"`
}
type AlertConfig struct {
Template string `yaml:"template,omitempty"`
Timeout time.Duration `yaml:"timeout,omitempty" default:"30s"`
Headers map[string]string `yaml:"headers,omitempty"`
Method string `yaml:"method,omitempty"`
LineLimit int `yaml:"lineLimit,omitempty" default:"10"`
GroupKey string `yaml:"groupKey,omitempty" default:"${_meta.pipelineName}-${_meta.sourceName}"`
AlertSendingThreshold int `yaml:"alertSendingThreshold,omitempty" default:"1"`
SendLogAlertAtOnce bool `yaml:"sendLogAlertAtOnce"`
SendNoDataAlertAtOnce bool `yaml:"sendNoDataAlertAtOnce" default:"true"`
SendLoggieError bool `yaml:"sendLoggieError"`
SendLoggieErrorAtOnce bool `yaml:"sendLoggieErrorAtOnce"`
}

View File

@ -20,12 +20,13 @@ import (
"github.com/loggie-io/loggie/pkg/core/cfg"
)
const defaultBatchSize = 2048
type Config struct {
Enabled *bool `yaml:"enabled,omitempty"`
Name string `yaml:"name,omitempty"`
Type string `yaml:"type,omitempty" validate:"required"`
Properties cfg.CommonCfg `yaml:",inline"`
BatchSize int `yaml:"batchSize,omitempty" default:"2048" validate:"required,gte=1"`
}
func (c *Config) Merge(from *Config) {
@ -36,9 +37,33 @@ func (c *Config) Merge(from *Config) {
return
}
if c.BatchSize == 0 {
c.BatchSize = from.BatchSize
}
c.Properties = cfg.MergeCommonCfg(c.Properties, from.Properties, false)
}
func (c *Config) DeepCopy() *Config {
if c == nil {
return nil
}
out := new(Config)
out.Enabled = c.Enabled
out.Name = c.Name
out.Type = c.Type
out.Properties = c.Properties.DeepCopy()
return out
}
func (c *Config) GetBatchSize() int {
batchSize, ok := c.Properties["batchSize"]
if !ok {
return defaultBatchSize
}
intBatchSize, ok := batchSize.(int)
if !ok {
return defaultBatchSize
}
return intBatchSize
}
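
A small sketch of GetBatchSize's fallback behaviour; the cfg.CommonCfg literal mirrors the test fixtures elsewhere in this diff:

package example

import (
    "fmt"

    "github.com/loggie-io/loggie/pkg/core/cfg"
    "github.com/loggie-io/loggie/pkg/core/queue"
)

func batchSizes() {
    q := &queue.Config{Type: "channel", Properties: cfg.CommonCfg{"batchSize": 4096}}
    fmt.Println(q.GetBatchSize()) // 4096

    empty := &queue.Config{Type: "channel"}
    fmt.Println(empty.GetBatchSize()) // falls back to defaultBatchSize (2048) when unset or not an int
}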

View File

@ -19,7 +19,6 @@ func TestConfig_Unmarshal(t *testing.T) {
name: bar
type: channel
batchAggTimeout: 1s
batchSize: 1024
`),
want: Config{
Name: "bar",
@ -27,7 +26,6 @@ func TestConfig_Unmarshal(t *testing.T) {
Properties: cfg.CommonCfg{
"batchAggTimeout": "1s",
},
BatchSize: 1024,
},
},
}
@ -55,13 +53,11 @@ func TestConfig_Marshal(t *testing.T) {
Properties: cfg.CommonCfg{
"batchAggTimeout": "1s",
},
BatchSize: 1024,
},
want: `
name: bar
type: channel
batchAggTimeout: 1s
batchSize: 1024
`,
},
}
@ -88,17 +84,14 @@ func TestConfig_Merge(t *testing.T) {
name: "common ok",
args: args{
base: &Config{
Type: "channel",
BatchSize: 1024,
Type: "channel",
},
from: &Config{
Type: "channel",
BatchSize: 2048,
Type: "channel",
},
},
want: &Config{
Type: "channel",
BatchSize: 1024,
Type: "channel",
},
},
{
@ -108,13 +101,11 @@ func TestConfig_Merge(t *testing.T) {
Type: "channel",
},
from: &Config{
Type: "channel",
BatchSize: 2048,
Type: "channel",
},
},
want: &Config{
Type: "channel",
BatchSize: 2048,
Type: "channel",
},
},
{
@ -124,8 +115,7 @@ func TestConfig_Merge(t *testing.T) {
Type: "channel",
},
from: &Config{
Type: "memory",
BatchSize: 2048,
Type: "memory",
},
},
want: &Config{

View File

@ -17,7 +17,6 @@ limitations under the License.
package reloader
import (
"io/ioutil"
"net/http"
"os"
"path/filepath"
@ -49,7 +48,7 @@ func (r *reloader) readPipelineConfigHandler(writer http.ResponseWriter, request
continue
}
content, err := ioutil.ReadFile(m)
content, err := os.ReadFile(m)
if err != nil {
log.Warn("read config error. err: %v", err)
return

View File

@ -95,7 +95,7 @@ func (r *reloader) Run(stopCh <-chan struct{}) {
}
}
eventbus.Publish(eventbus.ReloadTopic, eventbus.ReloadMetricData{
eventbus.PublishOrDrop(eventbus.ReloadTopic, eventbus.ReloadMetricData{
Tick: 1,
})
}

View File

@ -35,6 +35,22 @@ type Config struct {
Concurrency concurrency.Config `yaml:"concurrency,omitempty"`
}
func (c *Config) DeepCopy() *Config {
if c == nil {
return nil
}
out := new(Config)
out.Enabled = c.Enabled
out.Name = c.Name
out.Type = c.Type
out.Properties = c.Properties.DeepCopy()
out.Parallelism = c.Parallelism
out.Codec = *c.Codec.DeepCopy()
return out
}
func (c *Config) Validate() error {
if c.Type == "" {
return ErrSinkTypeRequired

View File

@ -38,6 +38,11 @@ type Config struct {
FieldsFromEnv map[string]string `yaml:"fieldsFromEnv,omitempty"`
FieldsFromPath map[string]string `yaml:"fieldsFromPath,omitempty"`
Codec *codec.Config `yaml:"codec,omitempty"`
TimestampKey string `yaml:"timestampKey,omitempty"`
TimestampLocation string `yaml:"timestampLocation,omitempty"`
TimestampLayout string `yaml:"timestampLayout,omitempty"`
BodyKey string `yaml:"bodyKey,omitempty"`
}
func (c *Config) DeepCopy() *Config {
@ -82,6 +87,11 @@ func (c *Config) DeepCopy() *Config {
FieldsFromEnv: newFieldsFromEnv,
FieldsFromPath: newFieldsFromPath,
Codec: c.Codec.DeepCopy(),
TimestampKey: c.TimestampKey,
TimestampLocation: c.TimestampLocation,
TimestampLayout: c.TimestampLayout,
BodyKey: c.BodyKey,
}
return out
@ -113,7 +123,7 @@ func (c *Config) Merge(from *Config) {
c.FieldsUnderRoot = from.FieldsUnderRoot
}
if c.FieldsUnderKey == "" {
if c.FieldsUnderKey != "" {
c.FieldsUnderKey = from.FieldsUnderKey
}
@ -155,6 +165,19 @@ func (c *Config) Merge(from *Config) {
} else {
c.Codec.Merge(from.Codec)
}
if c.TimestampKey == "" {
c.TimestampKey = from.TimestampKey
}
if c.TimestampLocation == "" {
c.TimestampLocation = from.TimestampLocation
}
if c.TimestampLayout == "" {
c.TimestampLayout = from.TimestampLayout
}
if c.BodyKey == "" {
c.BodyKey = from.BodyKey
}
}
func MergeSourceList(base []*Config, from []*Config) []*Config {

View File

@ -18,6 +18,7 @@ package sysconfig
import (
"github.com/loggie-io/loggie/pkg/core/interceptor"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/queue"
"github.com/loggie-io/loggie/pkg/core/reloader"
"github.com/loggie-io/loggie/pkg/core/sink"
@ -29,6 +30,7 @@ import (
"github.com/loggie-io/loggie/pkg/interceptor/retry"
"github.com/loggie-io/loggie/pkg/pipeline"
"github.com/loggie-io/loggie/pkg/queue/channel"
"github.com/loggie-io/loggie/pkg/util/persistence"
)
type Config struct {
@ -36,11 +38,14 @@ type Config struct {
}
type Loggie struct {
Reload reloader.ReloadConfig `yaml:"reload"`
Discovery discovery.Config `yaml:"discovery"`
Http Http `yaml:"http" validate:"dive"`
MonitorEventBus eventbus.Config `yaml:"monitor"`
Defaults Defaults `yaml:"defaults"`
Reload reloader.ReloadConfig `yaml:"reload"`
Discovery discovery.Config `yaml:"discovery"`
Http Http `yaml:"http" validate:"dive"`
MonitorEventBus eventbus.Config `yaml:"monitor"`
Defaults Defaults `yaml:"defaults"`
Db persistence.DbConfig `yaml:"db"`
ErrorAlertConfig log.AfterErrorConfiguration `yaml:"errorAlert"`
JSONEngine string `yaml:"jsonEngine,omitempty" default:"jsoniter" validate:"oneof=jsoniter sonic std go-json"`
}
type Defaults struct {
@ -66,8 +71,6 @@ func (d *Defaults) SetDefaults() {
if d.Queue == nil {
d.Queue = &queue.Config{
Type: channel.Type,
//Name: "default",
BatchSize: 2048,
}
}
if len(d.Interceptors) == 0 {
@ -85,7 +88,8 @@ func (d *Defaults) SetDefaults() {
}
type Http struct {
Enabled bool `yaml:"enabled" default:"false"`
Host string `yaml:"host" default:"0.0.0.0"`
Port int `yaml:"port" default:"9196"`
Enabled bool `yaml:"enabled" default:"false"`
Host string `yaml:"host" default:"0.0.0.0"`
Port int `yaml:"port" default:"9196"`
RandPort bool `yaml:"randPort" default:"false"`
}

View File

@ -20,7 +20,6 @@ import (
kubernetes "github.com/loggie-io/loggie/pkg/discovery/kubernetes/controller"
)
// TODO validate and defaults
type Config struct {
Enabled bool `yaml:"enabled"`
Kubernetes kubernetes.Config `yaml:"kubernetes" validate:"dive"`

View File

@ -18,14 +18,17 @@ package v1beta1
import (
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
SelectorTypePod = "pod"
SelectorTypeNode = "node"
SelectorTypeCluster = "cluster"
SelectorTypeAll = "all"
SelectorTypePod = "pod"
SelectorTypeNode = "node"
SelectorTypeCluster = "cluster"
SelectorTypeVm = "vm"
SelectorTypeWorkload = "workload"
SelectorTypeAll = "all"
)
// +genclient
@ -46,10 +49,12 @@ type Spec struct {
}
type Selector struct {
Cluster string `json:"cluster,omitempty"`
Type string `json:"type,omitempty"`
PodSelector `json:",inline"`
NodeSelector `json:",inline"`
Cluster string `json:"cluster,omitempty"`
Type string `json:"type,omitempty"`
PodSelector `json:",inline"`
NodeSelector `json:",inline"`
NamespaceSelector `json:",inline"`
WorkloadSelector []WorkloadSelector `json:"workloadSelector,omitempty"`
}
type PodSelector struct {
@ -60,6 +65,18 @@ type NodeSelector struct {
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
type NamespaceSelector struct {
NamespaceSelector []string `json:"namespaceSelector,omitempty"`
ExcludeNamespaceSelector []string `json:"excludeNamespaceSelector,omitempty"`
}
type WorkloadSelector struct {
Type []string `json:"type,omitempty"`
NameSelector []string `json:"nameSelector,omitempty"`
NamespaceSelector []string `json:"namespaceSelector,omitempty"`
ExcludeNamespaceSelector []string `json:"excludeNamespaceSelector,omitempty"`
}
type Pipeline struct {
Name string `json:"name,omitempty"`
Sources string `json:"sources,omitempty"`
@ -67,6 +84,7 @@ type Pipeline struct {
Interceptors string `json:"interceptors,omitempty"`
SinkRef string `json:"sinkRef,omitempty"`
InterceptorRef string `json:"interceptorRef,omitempty"`
Queue string `json:"queue,omitempty"`
}
type Status struct {
@ -88,8 +106,8 @@ func (in *ClusterLogConfig) Validate() error {
}
tp := in.Spec.Selector.Type
if tp != SelectorTypePod && tp != SelectorTypeNode && tp != SelectorTypeCluster {
return errors.New("spec.selector.type is invalid")
if tp != SelectorTypePod && tp != SelectorTypeNode && tp != SelectorTypeCluster && tp != SelectorTypeVm && tp != SelectorTypeWorkload {
return errors.New("spec.selector.type is invalidate")
}
if tp == SelectorTypeCluster && in.Spec.Selector.Cluster == "" {
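
To illustrate the new selector types, a hypothetical workload selector built from the structs above; the concrete workload kind, name and namespaces are invented:

package example

import (
    "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
)

func workloadSelectorExample() v1beta1.Selector {
    return v1beta1.Selector{
        Type: v1beta1.SelectorTypeWorkload,
        WorkloadSelector: []v1beta1.WorkloadSelector{{
            Type:                     []string{"Deployment"},
            NameSelector:             []string{"nginx"},
            NamespaceSelector:        []string{"prod"},
            ExcludeNamespaceSelector: []string{"kube-system"},
        }},
    }
}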

View File

@ -59,6 +59,8 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&SinkList{},
&Interceptor{},
&InterceptorList{},
&Vm{},
&VmList{},
)
// register the type in the scheme

View File

@ -0,0 +1,72 @@
/*
Copyright 2022 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
IndicateChinese = "loggie-cn"
AnnotationCnPrefix = "loggie.io/"
)
// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Vm represents a virtual machine, similar to a Node in Kubernetes, but used for hosts outside the Kubernetes cluster.
type Vm struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec VmSpec `json:"spec"`
Status VmStatus `json:"status"`
}
type VmSpec struct {
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type VmList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Vm `json:"items"`
}
type VmStatus struct {
Addresses []NodeAddress `json:"addresses,omitempty"`
}
// NodeAddress contains information for the node's address.
type NodeAddress struct {
// Node address type, one of Hostname, ExternalIP or InternalIP.
Type string `json:"type,omitempty"`
// The node address.
Address string `json:"address,omitempty"`
}
func (in *Vm) ConvertChineseLabels() {
for k, v := range in.Labels {
if v == IndicateChinese {
// get Chinese from annotations
in.Labels[k] = in.Annotations[AnnotationCnPrefix+k]
}
}
}
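
A small sketch of what ConvertChineseLabels does; the label key and the Chinese value are made-up examples:

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
)

func chineseLabelExample() *v1beta1.Vm {
    vm := &v1beta1.Vm{
        ObjectMeta: metav1.ObjectMeta{
            Labels:      map[string]string{"region": v1beta1.IndicateChinese},
            Annotations: map[string]string{v1beta1.AnnotationCnPrefix + "region": "杭州"},
        },
    }
    vm.ConvertChineseLabels()
    // vm.Labels["region"] is now "杭州", pulled from the loggie.io/region annotation.
    return vm
}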

View File

@ -238,6 +238,48 @@ func (in *Message) DeepCopy() *Message {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NamespaceSelector) DeepCopyInto(out *NamespaceSelector) {
*out = *in
if in.NamespaceSelector != nil {
in, out := &in.NamespaceSelector, &out.NamespaceSelector
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ExcludeNamespaceSelector != nil {
in, out := &in.ExcludeNamespaceSelector, &out.ExcludeNamespaceSelector
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NamespaceSelector.
func (in *NamespaceSelector) DeepCopy() *NamespaceSelector {
if in == nil {
return nil
}
out := new(NamespaceSelector)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeAddress) DeepCopyInto(out *NodeAddress) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAddress.
func (in *NodeAddress) DeepCopy() *NodeAddress {
if in == nil {
return nil
}
out := new(NodeAddress)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeSelector) DeepCopyInto(out *NodeSelector) {
*out = *in
@ -305,6 +347,14 @@ func (in *Selector) DeepCopyInto(out *Selector) {
*out = *in
in.PodSelector.DeepCopyInto(&out.PodSelector)
in.NodeSelector.DeepCopyInto(&out.NodeSelector)
in.NamespaceSelector.DeepCopyInto(&out.NamespaceSelector)
if in.WorkloadSelector != nil {
in, out := &in.WorkloadSelector, &out.WorkloadSelector
*out = make([]WorkloadSelector, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
@ -436,3 +486,137 @@ func (in *Status) DeepCopy() *Status {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Vm) DeepCopyInto(out *Vm) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Status.DeepCopyInto(&out.Status)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Vm.
func (in *Vm) DeepCopy() *Vm {
if in == nil {
return nil
}
out := new(Vm)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Vm) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VmList) DeepCopyInto(out *VmList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Vm, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VmList.
func (in *VmList) DeepCopy() *VmList {
if in == nil {
return nil
}
out := new(VmList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *VmList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VmSpec) DeepCopyInto(out *VmSpec) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VmSpec.
func (in *VmSpec) DeepCopy() *VmSpec {
if in == nil {
return nil
}
out := new(VmSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VmStatus) DeepCopyInto(out *VmStatus) {
*out = *in
if in.Addresses != nil {
in, out := &in.Addresses, &out.Addresses
*out = make([]NodeAddress, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VmStatus.
func (in *VmStatus) DeepCopy() *VmStatus {
if in == nil {
return nil
}
out := new(VmStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadSelector) DeepCopyInto(out *WorkloadSelector) {
*out = *in
if in.Type != nil {
in, out := &in.Type, &out.Type
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.NameSelector != nil {
in, out := &in.NameSelector, &out.NameSelector
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.NamespaceSelector != nil {
in, out := &in.NamespaceSelector, &out.NamespaceSelector
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ExcludeNamespaceSelector != nil {
in, out := &in.ExcludeNamespaceSelector, &out.ExcludeNamespaceSelector
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadSelector.
func (in *WorkloadSelector) DeepCopy() *WorkloadSelector {
if in == nil {
return nil
}
out := new(WorkloadSelector)
in.DeepCopyInto(out)
return out
}

View File

@ -19,6 +19,7 @@ package versioned
import (
"fmt"
"net/http"
loggiev1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned/typed/loggie/v1beta1"
discovery "k8s.io/client-go/discovery"
@ -54,22 +55,45 @@ func (c *Clientset) Discovery() discovery.DiscoveryInterface {
// NewForConfig creates a new Clientset for the given config.
// If config's RateLimiter is not set and QPS and Burst are acceptable,
// NewForConfig will generate a rate-limiter in configShallowCopy.
// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
// where httpClient was generated with rest.HTTPClientFor(c).
func NewForConfig(c *rest.Config) (*Clientset, error) {
configShallowCopy := *c
if configShallowCopy.UserAgent == "" {
configShallowCopy.UserAgent = rest.DefaultKubernetesUserAgent()
}
// share the transport between all clients
httpClient, err := rest.HTTPClientFor(&configShallowCopy)
if err != nil {
return nil, err
}
return NewForConfigAndClient(&configShallowCopy, httpClient)
}
// NewForConfigAndClient creates a new Clientset for the given config and http client.
// Note the http client provided takes precedence over the configured transport values.
// If config's RateLimiter is not set and QPS and Burst are acceptable,
// NewForConfigAndClient will generate a rate-limiter in configShallowCopy.
func NewForConfigAndClient(c *rest.Config, httpClient *http.Client) (*Clientset, error) {
configShallowCopy := *c
if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 {
if configShallowCopy.Burst <= 0 {
return nil, fmt.Errorf("burst is required to be greater than 0 when RateLimiter is not set and QPS is set to greater than 0")
}
configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst)
}
var cs Clientset
var err error
cs.loggieV1beta1, err = loggiev1beta1.NewForConfig(&configShallowCopy)
cs.loggieV1beta1, err = loggiev1beta1.NewForConfigAndClient(&configShallowCopy, httpClient)
if err != nil {
return nil, err
}
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfigAndClient(&configShallowCopy, httpClient)
if err != nil {
return nil, err
}
@ -79,11 +103,11 @@ func NewForConfig(c *rest.Config) (*Clientset, error) {
// NewForConfigOrDie creates a new Clientset for the given config and
// panics if there is an error in the config.
func NewForConfigOrDie(c *rest.Config) *Clientset {
var cs Clientset
cs.loggieV1beta1 = loggiev1beta1.NewForConfigOrDie(c)
cs.DiscoveryClient = discovery.NewDiscoveryClientForConfigOrDie(c)
return &cs
cs, err := NewForConfig(c)
if err != nil {
panic(err)
}
return cs
}
// New creates a new Clientset for the given RESTClient.

View File

@ -36,14 +36,14 @@ var localSchemeBuilder = runtime.SchemeBuilder{
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
// of clientsets, like in:
//
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.

View File

@ -36,14 +36,14 @@ var localSchemeBuilder = runtime.SchemeBuilder{
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
// of clientsets, like in:
//
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.

View File

@ -109,7 +109,7 @@ func (c *FakeClusterLogConfigs) UpdateStatus(ctx context.Context, clusterLogConf
// Delete takes name of the clusterLogConfig and deletes it. Returns an error if one occurs.
func (c *FakeClusterLogConfigs) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(clusterlogconfigsResource, name), &v1beta1.ClusterLogConfig{})
Invokes(testing.NewRootDeleteActionWithOptions(clusterlogconfigsResource, name, opts), &v1beta1.ClusterLogConfig{})
return err
}

View File

@ -98,7 +98,7 @@ func (c *FakeInterceptors) Update(ctx context.Context, interceptor *v1beta1.Inte
// Delete takes name of the interceptor and deletes it. Returns an error if one occurs.
func (c *FakeInterceptors) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(interceptorsResource, name), &v1beta1.Interceptor{})
Invokes(testing.NewRootDeleteActionWithOptions(interceptorsResource, name, opts), &v1beta1.Interceptor{})
return err
}

View File

@ -116,7 +116,7 @@ func (c *FakeLogConfigs) UpdateStatus(ctx context.Context, logConfig *v1beta1.Lo
// Delete takes name of the logConfig and deletes it. Returns an error if one occurs.
func (c *FakeLogConfigs) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewDeleteAction(logconfigsResource, c.ns, name), &v1beta1.LogConfig{})
Invokes(testing.NewDeleteActionWithOptions(logconfigsResource, c.ns, name, opts), &v1beta1.LogConfig{})
return err
}

View File

@ -43,6 +43,10 @@ func (c *FakeLoggieV1beta1) Sinks() v1beta1.SinkInterface {
return &FakeSinks{c}
}
func (c *FakeLoggieV1beta1) Vms() v1beta1.VmInterface {
return &FakeVms{c}
}
// RESTClient returns a RESTClient that is used to communicate
// with API server by this client implementation.
func (c *FakeLoggieV1beta1) RESTClient() rest.Interface {

View File

@ -98,7 +98,7 @@ func (c *FakeSinks) Update(ctx context.Context, sink *v1beta1.Sink, opts v1.Upda
// Delete takes name of the sink and deletes it. Returns an error if one occurs.
func (c *FakeSinks) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(sinksResource, name), &v1beta1.Sink{})
Invokes(testing.NewRootDeleteActionWithOptions(sinksResource, name, opts), &v1beta1.Sink{})
return err
}

View File

@ -0,0 +1,132 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
)
// FakeVms implements VmInterface
type FakeVms struct {
Fake *FakeLoggieV1beta1
}
var vmsResource = schema.GroupVersionResource{Group: "loggie.io", Version: "v1beta1", Resource: "vms"}
var vmsKind = schema.GroupVersionKind{Group: "loggie.io", Version: "v1beta1", Kind: "Vm"}
// Get takes name of the vm, and returns the corresponding vm object, and an error if there is any.
func (c *FakeVms) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.Vm, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootGetAction(vmsResource, name), &v1beta1.Vm{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.Vm), err
}
// List takes label and field selectors, and returns the list of Vms that match those selectors.
func (c *FakeVms) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.VmList, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootListAction(vmsResource, vmsKind, opts), &v1beta1.VmList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1beta1.VmList{ListMeta: obj.(*v1beta1.VmList).ListMeta}
for _, item := range obj.(*v1beta1.VmList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested vms.
func (c *FakeVms) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewRootWatchAction(vmsResource, opts))
}
// Create takes the representation of a vm and creates it. Returns the server's representation of the vm, and an error, if there is any.
func (c *FakeVms) Create(ctx context.Context, vm *v1beta1.Vm, opts v1.CreateOptions) (result *v1beta1.Vm, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootCreateAction(vmsResource, vm), &v1beta1.Vm{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.Vm), err
}
// Update takes the representation of a vm and updates it. Returns the server's representation of the vm, and an error, if there is any.
func (c *FakeVms) Update(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (result *v1beta1.Vm, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateAction(vmsResource, vm), &v1beta1.Vm{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.Vm), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeVms) UpdateStatus(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (*v1beta1.Vm, error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateSubresourceAction(vmsResource, "status", vm), &v1beta1.Vm{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.Vm), err
}
// Delete takes name of the vm and deletes it. Returns an error if one occurs.
func (c *FakeVms) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteActionWithOptions(vmsResource, name, opts), &v1beta1.Vm{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeVms) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewRootDeleteCollectionAction(vmsResource, listOpts)
_, err := c.Fake.Invokes(action, &v1beta1.VmList{})
return err
}
// Patch applies the patch and returns the patched vm.
func (c *FakeVms) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.Vm, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootPatchSubresourceAction(vmsResource, name, pt, data, subresources...), &v1beta1.Vm{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.Vm), err
}
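
The fake client is normally reached through the generated fake.NewSimpleClientset constructor, which is standard client-gen output and assumed to exist in this repository's fake package; a hedged test-style sketch:

package example

import (
    "context"
    "testing"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
    "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned/fake" // import path assumed
)

func TestGetVm(t *testing.T) {
    cs := fake.NewSimpleClientset(&v1beta1.Vm{ObjectMeta: metav1.ObjectMeta{Name: "vm-1"}})
    got, err := cs.LoggieV1beta1().Vms().Get(context.TODO(), "vm-1", metav1.GetOptions{})
    if err != nil || got.Name != "vm-1" {
        t.Fatalf("unexpected result: %v, %v", got, err)
    }
}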

View File

@ -24,3 +24,5 @@ type InterceptorExpansion interface{}
type LogConfigExpansion interface{}
type SinkExpansion interface{}
type VmExpansion interface{}

View File

@ -18,6 +18,8 @@ limitations under the License.
package v1beta1
import (
"net/http"
v1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned/scheme"
rest "k8s.io/client-go/rest"
@ -29,6 +31,7 @@ type LoggieV1beta1Interface interface {
InterceptorsGetter
LogConfigsGetter
SinksGetter
VmsGetter
}
// LoggieV1beta1Client is used to interact with features provided by the loggie.io group.
@ -52,13 +55,33 @@ func (c *LoggieV1beta1Client) Sinks() SinkInterface {
return newSinks(c)
}
func (c *LoggieV1beta1Client) Vms() VmInterface {
return newVms(c)
}
// NewForConfig creates a new LoggieV1beta1Client for the given config.
// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
// where httpClient was generated with rest.HTTPClientFor(c).
func NewForConfig(c *rest.Config) (*LoggieV1beta1Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientFor(&config)
httpClient, err := rest.HTTPClientFor(&config)
if err != nil {
return nil, err
}
return NewForConfigAndClient(&config, httpClient)
}
// NewForConfigAndClient creates a new LoggieV1beta1Client for the given config and http client.
// Note the http client provided takes precedence over the configured transport values.
func NewForConfigAndClient(c *rest.Config, h *http.Client) (*LoggieV1beta1Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientForConfigAndClient(&config, h)
if err != nil {
return nil, err
}

View File

@ -0,0 +1,183 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1beta1
import (
"context"
"time"
v1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
scheme "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
)
// VmsGetter has a method to return a VmInterface.
// A group's client should implement this interface.
type VmsGetter interface {
Vms() VmInterface
}
// VmInterface has methods to work with Vm resources.
type VmInterface interface {
Create(ctx context.Context, vm *v1beta1.Vm, opts v1.CreateOptions) (*v1beta1.Vm, error)
Update(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (*v1beta1.Vm, error)
UpdateStatus(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (*v1beta1.Vm, error)
Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.Vm, error)
List(ctx context.Context, opts v1.ListOptions) (*v1beta1.VmList, error)
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.Vm, err error)
VmExpansion
}
// vms implements VmInterface
type vms struct {
client rest.Interface
}
// newVms returns a Vms
func newVms(c *LoggieV1beta1Client) *vms {
return &vms{
client: c.RESTClient(),
}
}
// Get takes name of the vm, and returns the corresponding vm object, and an error if there is any.
func (c *vms) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.Vm, err error) {
result = &v1beta1.Vm{}
err = c.client.Get().
Resource("vms").
Name(name).
VersionedParams(&options, scheme.ParameterCodec).
Do(ctx).
Into(result)
return
}
// List takes label and field selectors, and returns the list of Vms that match those selectors.
func (c *vms) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.VmList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1beta1.VmList{}
err = c.client.Get().
Resource("vms").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do(ctx).
Into(result)
return
}
// Watch returns a watch.Interface that watches the requested vms.
func (c *vms) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Resource("vms").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch(ctx)
}
// Create takes the representation of a vm and creates it. Returns the server's representation of the vm, and an error, if there is any.
func (c *vms) Create(ctx context.Context, vm *v1beta1.Vm, opts v1.CreateOptions) (result *v1beta1.Vm, err error) {
result = &v1beta1.Vm{}
err = c.client.Post().
Resource("vms").
VersionedParams(&opts, scheme.ParameterCodec).
Body(vm).
Do(ctx).
Into(result)
return
}
// Update takes the representation of a vm and updates it. Returns the server's representation of the vm, and an error, if there is any.
func (c *vms) Update(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (result *v1beta1.Vm, err error) {
result = &v1beta1.Vm{}
err = c.client.Put().
Resource("vms").
Name(vm.Name).
VersionedParams(&opts, scheme.ParameterCodec).
Body(vm).
Do(ctx).
Into(result)
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *vms) UpdateStatus(ctx context.Context, vm *v1beta1.Vm, opts v1.UpdateOptions) (result *v1beta1.Vm, err error) {
result = &v1beta1.Vm{}
err = c.client.Put().
Resource("vms").
Name(vm.Name).
SubResource("status").
VersionedParams(&opts, scheme.ParameterCodec).
Body(vm).
Do(ctx).
Into(result)
return
}
// Delete takes name of the vm and deletes it. Returns an error if one occurs.
func (c *vms) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
return c.client.Delete().
Resource("vms").
Name(name).
Body(&opts).
Do(ctx).
Error()
}
// DeleteCollection deletes a collection of objects.
func (c *vms) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
var timeout time.Duration
if listOpts.TimeoutSeconds != nil {
timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Resource("vms").
VersionedParams(&listOpts, scheme.ParameterCodec).
Timeout(timeout).
Body(&opts).
Do(ctx).
Error()
}
// Patch applies the patch and returns the patched vm.
func (c *vms) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.Vm, err error) {
result = &v1beta1.Vm{}
err = c.client.Patch(pt).
Resource("vms").
Name(name).
SubResource(subresources...).
VersionedParams(&opts, scheme.ParameterCodec).
Body(data).
Do(ctx).
Into(result)
return
}
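
Typical use of the generated typed client, sketched under the assumption that the Clientset exposes the usual LoggieV1beta1() accessor and that the rest.Config is built elsewhere:

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/rest"

    "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned"
)

func listVms(restCfg *rest.Config) error {
    cs, err := versioned.NewForConfig(restCfg)
    if err != nil {
        return err
    }
    vms, err := cs.LoggieV1beta1().Vms().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, vm := range vms.Items {
        fmt.Println(vm.Name)
    }
    return nil
}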

View File

@ -60,6 +60,8 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource
return &genericInformer{resource: resource.GroupResource(), informer: f.Loggie().V1beta1().LogConfigs().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("sinks"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Loggie().V1beta1().Sinks().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("vms"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Loggie().V1beta1().Vms().Informer()}, nil
}

View File

@ -31,6 +31,8 @@ type Interface interface {
LogConfigs() LogConfigInformer
// Sinks returns a SinkInformer.
Sinks() SinkInformer
// Vms returns a VmInformer.
Vms() VmInformer
}
type version struct {
@ -63,3 +65,8 @@ func (v *version) LogConfigs() LogConfigInformer {
func (v *version) Sinks() SinkInformer {
return &sinkInformer{factory: v.factory, tweakListOptions: v.tweakListOptions}
}
// Vms returns a VmInformer.
func (v *version) Vms() VmInformer {
return &vmInformer{factory: v.factory, tweakListOptions: v.tweakListOptions}
}

View File

@ -0,0 +1,88 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1beta1
import (
"context"
time "time"
loggiev1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
versioned "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned"
internalinterfaces "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/informers/externalversions/internalinterfaces"
v1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/listers/loggie/v1beta1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
)
// VmInformer provides access to a shared informer and lister for
// Vms.
type VmInformer interface {
Informer() cache.SharedIndexInformer
Lister() v1beta1.VmLister
}
type vmInformer struct {
factory internalinterfaces.SharedInformerFactory
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// NewVmInformer constructs a new informer for Vm type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewVmInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
return NewFilteredVmInformer(client, resyncPeriod, indexers, nil)
}
// NewFilteredVmInformer constructs a new informer for Vm type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFilteredVmInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
return cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.LoggieV1beta1().Vms().List(context.TODO(), options)
},
WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.LoggieV1beta1().Vms().Watch(context.TODO(), options)
},
},
&loggiev1beta1.Vm{},
resyncPeriod,
indexers,
)
}
func (f *vmInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
return NewFilteredVmInformer(client, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
}
func (f *vmInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&loggiev1beta1.Vm{}, f.defaultInformer)
}
func (f *vmInformer) Lister() v1beta1.VmLister {
return v1beta1.NewVmLister(f.Informer().GetIndexer())
}
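
A minimal sketch of wiring the new shared Vm informer through the generated factory. NewSharedInformerFactory and Start come from the generated externalversions package, which is outside this diff, so treat those names as assumed; the Loggie().V1beta1().Vms() accessors match the factory changes above.

import (
	"fmt"
	"time"

	loggiev1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
	versioned "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned"
	externalversions "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/informers/externalversions"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/tools/cache"
)

func watchVms(client versioned.Interface, stopCh <-chan struct{}) {
	factory := externalversions.NewSharedInformerFactory(client, 30*time.Second)
	vms := factory.Loggie().V1beta1().Vms()

	vms.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("vm added:", obj.(*loggiev1beta1.Vm).Name)
		},
	})

	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, vms.Informer().HasSynced)

	cached, _ := vms.Lister().List(labels.Everything())
	fmt.Println("cached vms:", len(cached))
}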

View File

@ -36,3 +36,7 @@ type LogConfigNamespaceListerExpansion interface{}
// SinkListerExpansion allows custom methods to be added to
// SinkLister.
type SinkListerExpansion interface{}
// VmListerExpansion allows custom methods to be added to
// VmLister.
type VmListerExpansion interface{}

View File

@ -0,0 +1,67 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1beta1
import (
v1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
)
// VmLister helps list Vms.
// All objects returned here must be treated as read-only.
type VmLister interface {
// List lists all Vms in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1beta1.Vm, err error)
// Get retrieves the Vm from the index for a given name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1beta1.Vm, error)
VmListerExpansion
}
// vmLister implements the VmLister interface.
type vmLister struct {
indexer cache.Indexer
}
// NewVmLister returns a new VmLister.
func NewVmLister(indexer cache.Indexer) VmLister {
return &vmLister{indexer: indexer}
}
// List lists all Vms in the indexer.
func (s *vmLister) List(selector labels.Selector) (ret []*v1beta1.Vm, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1beta1.Vm))
})
return ret, err
}
// Get retrieves the Vm from the index for a given name.
func (s *vmLister) Get(name string) (*v1beta1.Vm, error) {
obj, exists, err := s.indexer.GetByKey(name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1beta1.Resource("vm"), name)
}
return obj.(*v1beta1.Vm), nil
}
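
A brief sketch of consuming the lister, e.g. from a reconcile loop; the NotFound handling matches the error constructed in Get above.

import (
	"fmt"

	loggielisters "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/listers/loggie/v1beta1"
	kerrors "k8s.io/apimachinery/pkg/api/errors"
)

// lookupVm shows the read-only access pattern the lister provides: Get
// returns the NotFound error constructed above when the Vm is absent.
func lookupVm(lister loggielisters.VmLister, name string) {
	vm, err := lister.Get(name)
	if kerrors.IsNotFound(err) {
		fmt.Println("vm not found:", name)
		return
	}
	if err != nil {
		fmt.Println("lister error:", err)
		return
	}
	// Lister objects are shared cache entries; DeepCopy before mutating.
	fmt.Println("vm labels:", vm.DeepCopy().Labels)
}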

View File

@ -36,15 +36,29 @@ type Config struct {
PodLogDirPrefix string `yaml:"podLogDirPrefix" default:"/var/log/pods"`
KubeletRootDir string `yaml:"kubeletRootDir" default:"/var/lib/kubelet"`
Fields Fields `yaml:"fields"` // Deprecated: use typePodFields
K8sFields map[string]string `yaml:"k8sFields"` // Deprecated: use typePodFields
TypePodFields map[string]string `yaml:"typePodFields"`
TypeNodeFields map[string]string `yaml:"typeNodeFields"`
ParseStdout bool `yaml:"parseStdout"`
HostRootMountPath string `yaml:"hostRootMountPath"`
Fields Fields `yaml:"fields"` // Deprecated: use typePodFields
K8sFields map[string]string `yaml:"k8sFields"` // Deprecated: use typePodFields
TypePodFields KubeMetaFields `yaml:"typePodFields"`
TypeNodeFields KubeMetaFields `yaml:"typeNodeFields"`
TypeVmFields KubeMetaFields `yaml:"typeVmFields"`
FieldsOmitEmpty bool `yaml:"fieldsOmitEmpty"`
ParseStdout bool `yaml:"parseStdout"`
Defaults Defaults `yaml:"defaults"`
// If set to true, the generated pipeline configuration does not contain specific Pod paths and meta information.
// This data is obtained dynamically by the file source, reducing the number of configuration changes and reloads.
DynamicContainerLog bool `yaml:"dynamicContainerLog"`
VmMode bool `yaml:"vmMode"` // only used when Loggie runs on a virtual machine and the Vm CRD provides the configuration
}
type KubeMetaFields map[string]string
type Defaults struct {
SinkRef string `yaml:"sinkRef"`
}
// Fields Deprecated
@ -87,15 +101,19 @@ func (c *Config) Validate() error {
}
if c.TypePodFields != nil {
for _, v := range c.TypePodFields {
if err := pattern.Validate(v); err != nil {
return err
}
if err := c.TypePodFields.validate(); err != nil {
return err
}
}
if c.TypeNodeFields != nil {
for _, v := range c.TypeNodeFields {
if err := c.TypeNodeFields.validate(); err != nil {
return err
}
}
if c.TypeVmFields != nil {
for _, v := range c.TypeVmFields {
if err := pattern.Validate(v); err != nil {
return err
}
@ -104,3 +122,21 @@ func (c *Config) Validate() error {
return nil
}
func (f KubeMetaFields) validate() error {
for _, v := range f {
if err := pattern.Validate(v); err != nil {
return err
}
}
return nil
}
func (f KubeMetaFields) initPattern() map[string]*pattern.Pattern {
typePattern := make(map[string]*pattern.Pattern)
for k, v := range f {
p, _ := pattern.Init(v)
typePattern[k] = p
}
return typePattern
}
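
A minimal package-internal sketch (validate and initPattern are unexported) of how these helpers fit together; the field value is a hypothetical placeholder, since the concrete pattern syntax is not part of this diff.

import "github.com/loggie-io/loggie/pkg/util/pattern"

func compileMetaFields() (map[string]*pattern.Pattern, error) {
	// "loggie-demo" is a hypothetical placeholder; real values use whatever
	// template syntax util/pattern accepts.
	fields := KubeMetaFields{"cluster": "loggie-demo"}
	if err := fields.validate(); err != nil {
		return nil, err
	}
	// Compile once; the controller renders the compiled patterns per source.
	return fields.initPattern(), nil
}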

View File

@ -19,10 +19,10 @@ package controller
import (
"context"
"fmt"
"github.com/loggie-io/loggie/pkg/core/global"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
"github.com/loggie-io/loggie/pkg/util/pattern"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"reflect"
"time"
"github.com/loggie-io/loggie/pkg/core/log"
@ -52,9 +52,13 @@ const (
EventPod = "pod"
EventLogConf = "logConfig"
EventNode = "node"
EventVm = "vm"
EventClusterLogConf = "clusterLogConfig"
EventSink = "sink"
EventInterceptor = "interceptor"
InjectorAnnotationKey = "sidecar.loggie.io/inject"
InjectorAnnotationValueTrue = "true"
)
// Element the item add to queue
@ -72,28 +76,27 @@ type Controller struct {
logConfigClientset logconfigClientset.Interface
podsLister corev1Listers.PodLister
podsSynced cache.InformerSynced
logConfigLister logconfigLister.LogConfigLister
logConfigSynced cache.InformerSynced
clusterLogConfigLister logconfigLister.ClusterLogConfigLister
clusterLogConfigSynced cache.InformerSynced
sinkLister logconfigLister.SinkLister
sinkSynced cache.InformerSynced
interceptorLister logconfigLister.InterceptorLister
interceptorSynced cache.InformerSynced
nodeLister corev1Listers.NodeLister
nodeSynced cache.InformerSynced
// only in Vm mode
vmLister logconfigLister.VmLister
typePodIndex *index.LogConfigTypePodIndex
typeClusterIndex *index.LogConfigTypeClusterIndex
typeNodeIndex *index.LogConfigTypeNodeIndex
nodeInfo *corev1.Node
vmInfo *logconfigv1beta1.Vm
record record.EventRecorder
runtime runtime.Runtime
extraTypePodFieldsPattern map[string]*pattern.Pattern
extraTypeNodeFieldsPattern map[string]*pattern.Pattern
extraTypeVmFieldsPattern map[string]*pattern.Pattern
}
func NewController(
@ -106,6 +109,7 @@ func NewController(
sinkInformer logconfigInformers.SinkInformer,
interceptorInformer logconfigInformers.InterceptorInformer,
nodeInformer corev1Informers.NodeInformer,
vmInformer logconfigInformers.VmInformer,
runtime runtime.Runtime,
) *Controller {
@ -114,32 +118,46 @@ func NewController(
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeClientset.CoreV1().Events("")})
recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "loggie/" + config.NodeName})
controller := &Controller{
config: config,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "logConfig"),
var controller *Controller
if config.VmMode {
controller = &Controller{
config: config,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "logConfig"),
kubeClientset: kubeClientset,
logConfigClientset: logConfigClientset,
kubeClientset: kubeClientset,
logConfigClientset: logConfigClientset,
podsLister: podInformer.Lister(),
podsSynced: podInformer.Informer().HasSynced,
logConfigLister: logConfigInformer.Lister(),
logConfigSynced: logConfigInformer.Informer().HasSynced,
clusterLogConfigLister: clusterLogConfigInformer.Lister(),
clusterLogConfigSynced: clusterLogConfigInformer.Informer().HasSynced,
sinkLister: sinkInformer.Lister(),
sinkSynced: sinkInformer.Informer().HasSynced,
interceptorLister: interceptorInformer.Lister(),
interceptorSynced: interceptorInformer.Informer().HasSynced,
nodeLister: nodeInformer.Lister(),
nodeSynced: nodeInformer.Informer().HasSynced,
clusterLogConfigLister: clusterLogConfigInformer.Lister(),
sinkLister: sinkInformer.Lister(),
interceptorLister: interceptorInformer.Lister(),
vmLister: vmInformer.Lister(),
typePodIndex: index.NewLogConfigTypePodIndex(),
typeClusterIndex: index.NewLogConfigTypeLoggieIndex(),
typeNodeIndex: index.NewLogConfigTypeNodeIndex(),
typeNodeIndex: index.NewLogConfigTypeNodeIndex(),
record: recorder,
runtime: runtime,
record: recorder,
}
} else {
controller = &Controller{
config: config,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "logConfig"),
kubeClientset: kubeClientset,
logConfigClientset: logConfigClientset,
podsLister: podInformer.Lister(),
logConfigLister: logConfigInformer.Lister(),
clusterLogConfigLister: clusterLogConfigInformer.Lister(),
sinkLister: sinkInformer.Lister(),
interceptorLister: interceptorInformer.Lister(),
nodeLister: nodeInformer.Lister(),
typePodIndex: index.NewLogConfigTypePodIndex(),
typeClusterIndex: index.NewLogConfigTypeLoggieIndex(),
typeNodeIndex: index.NewLogConfigTypeNodeIndex(),
record: recorder,
runtime: runtime,
}
}
controller.InitK8sFieldsPattern()
@ -147,12 +165,21 @@ func NewController(
log.Info("Setting up event handlers")
utilruntime.Must(logconfigSchema.AddToScheme(scheme.Scheme))
// Since type node logic depends on node labels, we get and set node info at first.
node, err := kubeClientset.CoreV1().Nodes().Get(context.Background(), global.NodeName, metav1.GetOptions{})
if err != nil {
log.Panic("get node %s failed: %+v", global.NodeName, err)
if config.VmMode {
vm, err := logConfigClientset.LoggieV1beta1().Vms().Get(context.Background(), config.NodeName, metav1.GetOptions{})
if err != nil {
log.Panic("get vm %s failed: %+v", config.NodeName, err)
}
controller.vmInfo = vm.DeepCopy()
} else {
// Since type node logic depends on node labels, we get and set node info at first.
node, err := kubeClientset.CoreV1().Nodes().Get(context.Background(), config.NodeName, metav1.GetOptions{})
if err != nil {
log.Panic("get node %s failed: %+v", config.NodeName, err)
}
controller.nodeInfo = node.DeepCopy()
}
controller.nodeInfo = node.DeepCopy()
clusterLogConfigInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@ -160,7 +187,7 @@ func NewController(
if config.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(config.Spec.Selector.Cluster) {
if !controller.belongOfCluster(config.Spec.Selector.Cluster, config.Annotations) {
return
}
@ -178,7 +205,7 @@ func NewController(
if newConfig.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(newConfig.Spec.Selector.Cluster) {
if !controller.belongOfCluster(newConfig.Spec.Selector.Cluster, newConfig.Annotations) {
return
}
@ -193,7 +220,7 @@ func NewController(
if config.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(config.Spec.Selector.Cluster) {
if !controller.belongOfCluster(config.Spec.Selector.Cluster, config.Annotations) {
return
}
@ -201,13 +228,66 @@ func NewController(
},
})
interceptorInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
controller.enqueue(obj, EventInterceptor, logconfigv1beta1.SelectorTypeAll)
},
UpdateFunc: func(old, new interface{}) {
newConfig := new.(*logconfigv1beta1.Interceptor)
oldConfig := old.(*logconfigv1beta1.Interceptor)
if newConfig.ResourceVersion == oldConfig.ResourceVersion {
return
}
controller.enqueue(new, EventInterceptor, logconfigv1beta1.SelectorTypeAll)
},
})
sinkInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
controller.enqueue(obj, EventSink, logconfigv1beta1.SelectorTypeAll)
},
UpdateFunc: func(old, new interface{}) {
newConfig := new.(*logconfigv1beta1.Sink)
oldConfig := old.(*logconfigv1beta1.Sink)
if newConfig.ResourceVersion == oldConfig.ResourceVersion {
return
}
controller.enqueue(new, EventSink, logconfigv1beta1.SelectorTypeAll)
},
})
if config.VmMode {
vmInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
controller.enqueue(obj, EventVm, logconfigv1beta1.SelectorTypeAll)
},
UpdateFunc: func(old, new interface{}) {
newConfig := new.(*logconfigv1beta1.Vm)
oldConfig := old.(*logconfigv1beta1.Vm)
if newConfig.ResourceVersion == oldConfig.ResourceVersion {
return
}
if reflect.DeepEqual(newConfig.Labels, oldConfig.Labels) {
return
}
controller.enqueue(new, EventVm, logconfigv1beta1.SelectorTypeAll)
},
})
return controller
}
logConfigInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
config := obj.(*logconfigv1beta1.LogConfig)
if config.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(config.Spec.Selector.Cluster) {
if !controller.belongOfCluster(config.Spec.Selector.Cluster, config.Annotations) {
return
}
@ -226,7 +306,7 @@ func NewController(
if newConfig.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(newConfig.Spec.Selector.Cluster) {
if !controller.belongOfCluster(newConfig.Spec.Selector.Cluster, newConfig.Annotations) {
return
}
@ -242,7 +322,7 @@ func NewController(
if config.Spec.Selector == nil {
return
}
if !controller.belongOfCluster(config.Spec.Selector.Cluster) {
if !controller.belongOfCluster(config.Spec.Selector.Cluster, config.Annotations) {
return
}
@ -289,60 +369,21 @@ func NewController(
},
})
interceptorInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
controller.enqueue(obj, EventInterceptor, logconfigv1beta1.SelectorTypeAll)
},
UpdateFunc: func(old, new interface{}) {
newConfig := new.(*logconfigv1beta1.Interceptor)
oldConfig := old.(*logconfigv1beta1.Interceptor)
if newConfig.ResourceVersion == oldConfig.ResourceVersion {
return
}
controller.enqueue(new, EventInterceptor, logconfigv1beta1.SelectorTypeAll)
},
})
sinkInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
controller.enqueue(obj, EventSink, logconfigv1beta1.SelectorTypeAll)
},
UpdateFunc: func(old, new interface{}) {
newConfig := new.(*logconfigv1beta1.Sink)
oldConfig := old.(*logconfigv1beta1.Sink)
if newConfig.ResourceVersion == oldConfig.ResourceVersion {
return
}
controller.enqueue(new, EventSink, logconfigv1beta1.SelectorTypeAll)
},
})
return controller
}
func (c *Controller) InitK8sFieldsPattern() {
typePodPattern := make(map[string]*pattern.Pattern)
for k, v := range c.config.TypePodFields {
p, _ := pattern.Init(v)
typePodPattern[k] = p
}
typePodPattern := c.config.TypePodFields.initPattern()
for k, v := range c.config.K8sFields {
p, _ := pattern.Init(v)
typePodPattern[k] = p
}
c.extraTypePodFieldsPattern = typePodPattern
typeNodePattern := make(map[string]*pattern.Pattern)
for k, v := range c.config.TypeNodeFields {
p, _ := pattern.Init(v)
typeNodePattern[k] = p
}
c.extraTypeNodeFieldsPattern = c.config.TypeNodeFields.initPattern()
c.extraTypeNodeFieldsPattern = typeNodePattern
c.extraTypeVmFieldsPattern = c.config.TypeVmFields.initPattern()
}
// handleSelectorHasChange
@ -356,7 +397,7 @@ func (c *Controller) handleLogConfigSelectorHasChange(new *logconfigv1beta1.LogC
lgcKey := helper.MetaNamespaceKey(old.Namespace, old.Name)
switch new.Spec.Selector.Type {
case logconfigv1beta1.SelectorTypePod:
case logconfigv1beta1.SelectorTypePod, logconfigv1beta1.SelectorTypeWorkload:
if !helper.MatchStringMap(new.Spec.Selector.LabelSelector,
old.Spec.Selector.LabelSelector) {
err = c.handleAllTypesDelete(lgcKey, logconfigv1beta1.SelectorTypePod)
@ -406,7 +447,7 @@ func (c *Controller) enqueueForDelete(obj interface{}, eleType string, selectorT
c.workqueue.Add(e)
}
func (c *Controller) Run(stopCh <-chan struct{}) error {
func (c *Controller) Run(stopCh <-chan struct{}, cacheSyncs ...cache.InformerSynced) error {
defer utilruntime.HandleCrash()
defer c.workqueue.ShutDown()
@ -415,7 +456,7 @@ func (c *Controller) Run(stopCh <-chan struct{}) error {
// Wait for the caches to be synced before starting workers
log.Info("Waiting for informer caches to sync")
if ok := cache.WaitForCacheSync(stopCh, c.podsSynced, c.logConfigSynced, c.clusterLogConfigSynced, c.sinkSynced, c.interceptorSynced); !ok {
if ok := cache.WaitForCacheSync(stopCh, cacheSyncs...); !ok {
return fmt.Errorf("failed to wait for caches to sync")
}
@ -530,6 +571,11 @@ func (c *Controller) syncHandler(element Element) error {
log.Warn("reconcile interceptor %s err: %v", element.Key, err)
}
case EventVm:
if err = c.reconcileVm(element.Key); err != nil {
log.Warn("reconcile interceptor %s err: %v", element.Key, err)
}
default:
utilruntime.HandleError(fmt.Errorf("element type: %s not supported", element.Type))
return nil
@ -538,6 +584,17 @@ func (c *Controller) syncHandler(element Element) error {
return nil
}
func (c *Controller) belongOfCluster(cluster string) bool {
return c.config.Cluster == cluster
func (c *Controller) belongOfCluster(cluster string, annotations map[string]string) bool {
if c.config.Cluster != cluster {
return false
}
// If the sidecar-inject annotation is present, just ignore this config
if annotations != nil {
if _, ok := annotations[InjectorAnnotationKey]; ok {
return false
}
}
return true
}
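
A package-internal sketch of the new annotation check; the concrete config values are illustrative.

func exampleBelongOfCluster(c *Controller) {
	annotations := map[string]string{InjectorAnnotationKey: InjectorAnnotationValueTrue}
	// false: the cluster name matches, but the sidecar-inject annotation
	// makes the agent skip this config.
	fmt.Println(c.belongOfCluster(c.config.Cluster, annotations))
}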

View File

@ -35,8 +35,8 @@ const (
ReasonFailed = "syncFailed"
ReasonSuccess = "syncSuccess"
MessageSyncSuccess = "Sync type %s %v success"
MessageSyncFailed = "Sync type %s %v failed: %s"
MessageSyncSuccess = "Sync %s %v success"
MessageSyncFailed = "Sync %s failed: %s"
)
func (c *Controller) reconcileClusterLogConfig(element Element) error {
@ -55,11 +55,14 @@ func (c *Controller) reconcileClusterLogConfig(element Element) error {
}
err, keys := c.reconcileClusterLogConfigAddOrUpdate(clusterLogConfig)
if len(keys) > 0 {
msg := fmt.Sprintf(MessageSyncSuccess, clusterLogConfig.Spec.Selector.Type, keys)
c.record.Event(clusterLogConfig, corev1.EventTypeNormal, ReasonSuccess, msg)
if err != nil {
c.record.Eventf(clusterLogConfig, corev1.EventTypeWarning, ReasonFailed, MessageSyncFailed, clusterLogConfig.Spec.Selector.Type, err.Error())
return err
}
// no need to record failed event here because we recorded events when received pod create/update
if len(keys) > 0 {
c.record.Eventf(clusterLogConfig, corev1.EventTypeNormal, ReasonSuccess, MessageSyncSuccess, clusterLogConfig.Spec.Selector.Type, keys)
}
return err
}
@ -71,7 +74,6 @@ func (c *Controller) reconcileLogConfig(element Element) error {
}
logConf, err := c.logConfigLister.LogConfigs(namespace).Get(name)
if kerrors.IsNotFound(err) {
return c.reconcileLogConfigDelete(element.Key, element.SelectorType)
} else if err != nil {
@ -80,11 +82,15 @@ func (c *Controller) reconcileLogConfig(element Element) error {
}
err, keys := c.reconcileLogConfigAddOrUpdate(logConf)
if len(keys) > 0 {
msg := fmt.Sprintf(MessageSyncSuccess, logConf.Spec.Selector.Type, keys)
c.record.Event(logConf, corev1.EventTypeNormal, ReasonSuccess, msg)
if err != nil {
c.record.Eventf(logConf, corev1.EventTypeWarning, ReasonFailed, MessageSyncFailed, logConf.Spec.Selector.Type, err.Error())
return err
}
return err
if len(keys) > 0 {
c.record.Eventf(logConf, corev1.EventTypeNormal, ReasonSuccess, MessageSyncSuccess, logConf.Spec.Selector.Type, keys)
}
return nil
}
func (c *Controller) reconcilePod(key string) error {
@ -152,24 +158,7 @@ func (c *Controller) reconcileInterceptor(name string) error {
return nil
}
for lgcKey, pip := range c.typePodIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile interceptor %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
for lgcKey, pip := range c.typeClusterIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile interceptor %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
for lgcKey, pip := range c.typeNodeIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile interceptor %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
c.syncWithLogConfigReconcile(reconcile, "interceptor/"+name)
return nil
}
@ -203,24 +192,52 @@ func (c *Controller) reconcileSink(name string) error {
return nil
}
for lgcKey, pip := range c.typePodIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile sink %s and update logConfig %s error: %v", name, lgcKey, err)
c.syncWithLogConfigReconcile(reconcile, "sink/"+name)
return nil
}
type syncLogConfigReconcile func(lgc *logconfigv1beta1.LogConfig) error
func (c *Controller) syncWithLogConfigReconcile(reconcile syncLogConfigReconcile, name string) {
if c.typePodIndex != nil {
for lgcKey, pip := range c.typePodIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
}
for lgcKey, pip := range c.typeClusterIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile sink %s and update logConfig %s error: %v", name, lgcKey, err)
if c.typeClusterIndex != nil {
for lgcKey, pip := range c.typeClusterIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
}
for lgcKey, pip := range c.typeNodeIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile sink %s and update logConfig %s error: %v", name, lgcKey, err)
if c.typeNodeIndex != nil {
for lgcKey, pip := range c.typeNodeIndex.GetAllConfigMap() {
if err := reconcile(pip.Lgc); err != nil {
log.Info("reconcile %s and update logConfig %s error: %v", name, lgcKey, err)
}
}
}
}
func (c *Controller) reconcileVm(name string) error {
vm, err := c.vmLister.Get(name)
if kerrors.IsNotFound(err) {
log.Warn("vm %s is not found", name)
return nil
} else if err != nil {
runtime.HandleError(fmt.Errorf("failed to get vm %s by lister", name))
return nil
}
// update vm labels
n := vm.DeepCopy()
c.vmInfo = n
log.Info("vm label %v is set", n.Labels)
return nil
}
@ -245,17 +262,27 @@ func (c *Controller) reconcileLogConfigAddOrUpdate(lgc *logconfigv1beta1.LogConf
}
func (c *Controller) handleAllTypesAddOrUpdate(lgc *logconfigv1beta1.LogConfig) (err error, keys []string) {
// set defaults
c.setDefaultsLogConfigFields(lgc)
lgc = lgc.DeepCopy()
switch lgc.Spec.Selector.Type {
case logconfigv1beta1.SelectorTypePod:
case logconfigv1beta1.SelectorTypePod, logconfigv1beta1.SelectorTypeWorkload:
return c.handleLogConfigTypePodAddOrUpdate(lgc)
case logconfigv1beta1.SelectorTypeNode:
err := c.handleLogConfigTypeNode(lgc)
return err, nil
case logconfigv1beta1.SelectorTypeCluster:
err := c.handleLogConfigTypeCluster(lgc)
return err, nil
case logconfigv1beta1.SelectorTypeVm:
err := c.handleLogConfigTypeVm(lgc)
return err, nil
default:
log.Warn("logConfig %s/%s selector type is not supported", lgc.Namespace, lgc.Name)
return errors.Errorf("logConfig %s/%s selector type is not supported", lgc.Namespace, lgc.Name), nil
@ -284,7 +311,7 @@ func (c *Controller) reconcileLogConfigDelete(key string, selectorType string) e
func (c *Controller) handleAllTypesDelete(key string, selectorType string) error {
switch selectorType {
case logconfigv1beta1.SelectorTypePod:
case logconfigv1beta1.SelectorTypePod, logconfigv1beta1.SelectorTypeWorkload:
if ok := c.typePodIndex.DeletePipeConfigsByLogConfigKey(key); !ok {
return nil
}
@ -299,6 +326,11 @@ func (c *Controller) handleAllTypesDelete(key string, selectorType string) error
return nil
}
case logconfigv1beta1.SelectorTypeVm:
if ok := c.typeNodeIndex.DeleteConfig(key); !ok {
return nil
}
default:
return errors.Errorf("selector.type %s unsupported", selectorType)
}
@ -351,6 +383,10 @@ func (c *Controller) syncConfigToFile(selectorType string) error {
cfgRaws = c.typeNodeIndex.GetAll()
fileName = GenerateTypeNodeConfigName
case logconfigv1beta1.SelectorTypeVm:
cfgRaws = c.typeNodeIndex.GetAll() // we reuse typeNodeIndex in type: Vm
fileName = GenerateTypeVmConfigName
default:
return errors.New("selector.type unsupported")
}
@ -366,3 +402,10 @@ func (c *Controller) syncConfigToFile(selectorType string) error {
}
return nil
}
func (c *Controller) setDefaultsLogConfigFields(lgc *logconfigv1beta1.LogConfig) {
// set defaults
if lgc.Spec.Pipeline.Sink == "" && lgc.Spec.Pipeline.SinkRef == "" && c.config.Defaults.SinkRef != "" {
lgc.Spec.Pipeline.SinkRef = c.config.Defaults.SinkRef
}
}
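
A short package-internal sketch of the new defaults behaviour, under the assumption that defaults.sinkRef is configured and the LogConfig declares neither sink nor sinkRef.

func exampleDefaultSinkRef(c *Controller, lgc *logconfigv1beta1.LogConfig) {
	// With an empty sink and sinkRef on the LogConfig, the configured
	// defaults.sinkRef is filled in.
	c.setDefaultsLogConfigFields(lgc)
	fmt.Println(lgc.Spec.Pipeline.SinkRef) // now equals c.config.Defaults.SinkRef
}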

View File

@ -31,7 +31,8 @@ func (c *Controller) handleLogConfigTypeCluster(lgc *logconfigv1beta1.LogConfig)
return errors.WithMessage(err, "convert to pipeline config failed")
}
if err := cfg.NewUnpack(nil, pipRaws, nil).Defaults().Validate().Do(); err != nil {
pipRawsCopy := pipRaws.DeepCopy()
if err := cfg.NewUnpack(nil, pipRawsCopy, nil).Defaults().Validate().Do(); err != nil {
return err
}

View File

@ -17,16 +17,36 @@ limitations under the License.
package controller
import (
"path/filepath"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"github.com/loggie-io/loggie/pkg/core/cfg"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/source"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/helper"
"github.com/loggie-io/loggie/pkg/util/pattern"
"github.com/pkg/errors"
)
type KubeTypeNodeExtra struct {
TypeNodeFields KubeMetaFields `yaml:"typeNodeFields,omitempty"`
}
func GetKubeTypeNodeExtraSource(src *source.Config) (*KubeTypeNodeExtra, error) {
extra := &KubeTypeNodeExtra{}
if err := cfg.UnpackFromCommonCfg(src.Properties, extra).Do(); err != nil {
return nil, err
}
return extra, nil
}
func (c *Controller) handleLogConfigTypeNode(lgc *logconfigv1beta1.LogConfig) error {
if c.nodeInfo == nil {
return nil
}
// check node selector
if lgc.Spec.Selector.NodeSelector.NodeSelector != nil {
if !helper.LabelsSubset(lgc.Spec.Selector.NodeSelector.NodeSelector, c.nodeInfo.Labels) {
@ -40,7 +60,8 @@ func (c *Controller) handleLogConfigTypeNode(lgc *logconfigv1beta1.LogConfig) er
return errors.WithMessage(err, "convert to pipeline config failed")
}
if err := cfg.NewUnpack(nil, pipRaws, nil).Defaults().Validate().Do(); err != nil {
pipRawsCopy := pipRaws.DeepCopy()
if err := cfg.NewUnpack(nil, pipRawsCopy, nil).Defaults().Validate().Do(); err != nil {
return err
}
@ -51,7 +72,13 @@ func (c *Controller) handleLogConfigTypeNode(lgc *logconfigv1beta1.LogConfig) er
for i := range pipRaws.Pipelines {
for _, s := range pipRaws.Pipelines[i].Sources {
c.injectTypeNodeFields(s, lgc.Name)
if err := c.injectTypeNodeFields(s, lgc.Name); err != nil {
return err
}
if err := c.modifyNodePath(s); err != nil {
return err
}
}
}
@ -62,19 +89,69 @@ func (c *Controller) handleLogConfigTypeNode(lgc *logconfigv1beta1.LogConfig) er
return nil
}
func (c *Controller) injectTypeNodeFields(src *source.Config, clusterlogconfig string) {
func (c *Controller) injectTypeNodeFields(src *source.Config, clusterlogconfig string) error {
if src.Fields == nil {
src.Fields = make(map[string]interface{})
}
extra, err := GetKubeTypeNodeExtraSource(src)
if err != nil {
return err
}
if len(c.extraTypeNodeFieldsPattern) > 0 {
for k, p := range c.extraTypeNodeFieldsPattern {
res, err := p.WithK8sNode(pattern.NewTypeNodeFieldsData(c.nodeInfo, clusterlogconfig)).Render()
if err != nil {
log.Warn("add extra k8s node fields %s failed: %v", k, err)
continue
}
src.Fields[k] = res
np := renderTypeNodeFieldsPattern(c.extraTypeNodeFieldsPattern, c.nodeInfo, clusterlogconfig)
for k, v := range np {
src.Fields[k] = v
}
}
if len(extra.TypeNodeFields) > 0 {
if err := extra.TypeNodeFields.validate(); err != nil {
return err
}
p := extra.TypeNodeFields.initPattern()
np := renderTypeNodeFieldsPattern(p, c.nodeInfo, clusterlogconfig)
for k, v := range np {
src.Fields[k] = v
}
}
return nil
}
func renderTypeNodeFieldsPattern(pm map[string]*pattern.Pattern, node *corev1.Node, clusterlogconfig string) map[string]interface{} {
fields := make(map[string]interface{}, len(pm))
for k, p := range pm {
res, err := p.WithK8sNode(pattern.NewTypeNodeFieldsData(node, clusterlogconfig)).Render()
if err != nil {
log.Warn("add extra k8s node fields %s failed: %v", k, err)
continue
}
fields[k] = res
}
return fields
}
func (c *Controller) modifyNodePath(src *source.Config) error {
if len(c.config.HostRootMountPath) == 0 {
return nil
}
fileSource, err := getFileSource(src)
if err != nil {
log.Warn("fail to convert source to file source")
return nil
}
paths := fileSource.CollectConfig.Paths
newPaths := make([]string, 0, len(paths))
rootPath := c.config.HostRootMountPath
for _, path := range paths {
newPaths = append(newPaths, filepath.Join(rootPath, path))
}
fileSource.CollectConfig.Paths = newPaths
return setFileSource(src, fileSource)
}
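
A tiny standalone sketch of what the hostRootMountPath remapping does to a collect path; the values are illustrative.

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Illustrative values: with hostRootMountPath = "/host", a collect path on
	// the node is rewritten to the path visible inside the Loggie container.
	fmt.Println(filepath.Join("/host", "/var/log/app/access.log"))
	// Output: /host/var/log/app/access.log
}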

View File

@ -1,18 +1,21 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/loggie-io/loggie/pkg/core/cfg"
"github.com/loggie-io/loggie/pkg/core/interceptor"
"github.com/loggie-io/loggie/pkg/core/queue"
"github.com/loggie-io/loggie/pkg/core/sink"
"github.com/loggie-io/loggie/pkg/core/source"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned/fake"
lgcInformer "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/informers/externalversions"
"github.com/loggie-io/loggie/pkg/pipeline"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestGetConfigFromPodAndLogConfig(t *testing.T) {
@ -62,6 +65,10 @@ func TestGetConfigFromPodAndLogConfig(t *testing.T) {
Sink: `
type: dev
printEvents: false
`,
Queue: `
type: channel
name: queue
`,
},
},
@ -138,6 +145,12 @@ func TestGetConfigFromPodAndLogConfig(t *testing.T) {
},
},
},
Queue: &queue.Config{
Enabled: nil,
Name: "queue",
Type: "channel",
Properties: nil,
},
}
got, err := ctrl.getConfigFromPodAndLogConfig(&lgc, &pod, lgcInf.Loggie().V1beta1().Sinks().Lister(), lgcInf.Loggie().V1beta1().Interceptors().Lister())

View File

@ -17,42 +17,46 @@ limitations under the License.
package controller
import (
"fmt"
"path/filepath"
"regexp"
"github.com/google/go-cmp/cmp"
"github.com/loggie-io/loggie/pkg/core/interceptor"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/index"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
"github.com/loggie-io/loggie/pkg/source/codec"
"github.com/loggie-io/loggie/pkg/source/codec/json"
"github.com/loggie-io/loggie/pkg/source/codec/regex"
"github.com/loggie-io/loggie/pkg/util/pattern"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
"k8s.io/apimachinery/pkg/util/sets"
"regexp"
"github.com/loggie-io/loggie/pkg/core/cfg"
"github.com/loggie-io/loggie/pkg/core/interceptor"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/queue"
"github.com/loggie-io/loggie/pkg/core/source"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/listers/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/helper"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/index"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
"github.com/loggie-io/loggie/pkg/pipeline"
"github.com/loggie-io/loggie/pkg/source/codec"
"github.com/loggie-io/loggie/pkg/source/codec/json"
"github.com/loggie-io/loggie/pkg/source/codec/regex"
"github.com/loggie-io/loggie/pkg/source/file"
"github.com/loggie-io/loggie/pkg/util"
"github.com/loggie-io/loggie/pkg/util/pattern"
)
const (
GenerateConfigName = "kube-loggie.yml"
GenerateTypeLoggieConfigName = "cluster-config.yml"
GenerateTypeNodeConfigName = "node-config.yml"
GenerateTypeVmConfigName = "vm-config.yml"
)
type KubeFileSourceExtra struct {
ContainerName string `yaml:"containerName,omitempty"`
MatchFields *matchFields `yaml:"matchFields,omitempty"`
TypePodFields KubeMetaFields `yaml:"typePodFields,omitempty"`
ExcludeContainerPatterns []string `yaml:"excludeContainerPatterns,omitempty"` // regular pattern
excludeContainerRegexps []*regexp.Regexp `yaml:"-,omitempty"`
}
@ -128,7 +132,7 @@ type fmtKey struct {
func (c *Controller) handleLogConfigTypePodAddOrUpdate(lgc *logconfigv1beta1.LogConfig) (err error, podsName []string) {
// find pods related in the node
podList, err := helper.GetLogConfigRelatedPod(lgc, c.podsLister)
podList, err := helper.GetLogConfigRelatedPod(lgc, c.podsLister, c.kubeClientset)
if err != nil {
return err, nil
}
@ -145,12 +149,17 @@ func (c *Controller) handleLogConfigTypePodAddOrUpdate(lgc *logconfigv1beta1.Log
}
if err := c.handleLogConfigPerPod(lgc, pod); err != nil {
errs = append(errs, errors.WithMessagef(err, "match pod %s/%s", pod.Namespace, pod.Name))
errs = append(errs, errors.WithMessagef(err, "pod %s/%s", pod.Namespace, pod.Name))
continue
}
successPodNames = append(successPodNames, pod.Name)
}
if len(errs) > 0 {
// To keep the aggregated event message from becoming too long, only show part of the errors here
if len(errs) > 2 {
errs = errs[:2]
errs = append(errs, errors.New("..."))
}
return utilerrors.NewAggregate(errs), successPodNames
}
return nil, successPodNames
@ -173,13 +182,13 @@ func (c *Controller) handlePodAddOrUpdate(pod *corev1.Pod) error {
func (c *Controller) handlePodAddOrUpdateOfLogConfig(pod *corev1.Pod) {
// label selected logConfigs
lgcList, err := helper.GetPodRelatedLogConfigs(pod, c.logConfigLister)
lgcList, err := helper.GetPodRelatedLogConfigs(pod, c.logConfigLister, c.kubeClientset)
if err != nil || len(lgcList) == 0 {
return
}
for _, lgc := range lgcList {
if !c.belongOfCluster(lgc.Spec.Selector.Cluster) {
if !c.belongOfCluster(lgc.Spec.Selector.Cluster, lgc.Annotations) {
continue
}
@ -188,26 +197,23 @@ func (c *Controller) handlePodAddOrUpdateOfLogConfig(pod *corev1.Pod) {
}
if err := c.handleLogConfigPerPod(lgc, pod); err != nil {
msg := fmt.Sprintf(MessageSyncFailed, lgc.Spec.Selector.Type, pod.Name, err.Error())
c.record.Event(lgc, corev1.EventTypeWarning, ReasonFailed, msg)
log.Warn("sync %s %v failed: %s", lgc.Spec.Selector.Type, pod.Name, err.Error())
return
}
log.Info("handle pod %s/%s addOrUpdate event and sync config file success, related logConfig is %s", pod.Namespace, pod.Name, lgc.Name)
msg := fmt.Sprintf(MessageSyncSuccess, lgc.Spec.Selector.Type, pod.Name)
c.record.Event(lgc, corev1.EventTypeNormal, ReasonSuccess, msg)
c.record.Eventf(lgc, corev1.EventTypeNormal, ReasonSuccess, MessageSyncSuccess, lgc.Spec.Selector.Type, pod.Name)
}
}
func (c *Controller) handlePodAddOrUpdateOfClusterLogConfig(pod *corev1.Pod) {
// label selected clusterLogConfigs
clgcList, err := helper.GetPodRelatedClusterLogConfigs(pod, c.clusterLogConfigLister)
clgcList, err := helper.GetPodRelatedClusterLogConfigs(pod, c.clusterLogConfigLister, c.kubeClientset)
if err != nil || len(clgcList) == 0 {
return
}
for _, clgc := range clgcList {
if !c.belongOfCluster(clgc.Spec.Selector.Cluster) {
if !c.belongOfCluster(clgc.Spec.Selector.Cluster, clgc.Annotations) {
continue
}
@ -216,17 +222,17 @@ func (c *Controller) handlePodAddOrUpdateOfClusterLogConfig(pod *corev1.Pod) {
}
if err := c.handleLogConfigPerPod(clgc.ToLogConfig(), pod); err != nil {
msg := fmt.Sprintf(MessageSyncFailed, clgc.Spec.Selector.Type, pod.Name, err.Error())
c.record.Event(clgc, corev1.EventTypeWarning, ReasonFailed, msg)
log.Warn("sync %s %v failed: %s", clgc.Spec.Selector.Type, pod.Name, err.Error())
return
}
log.Info("handle pod %s/%s addOrUpdate event and sync config file success, related clusterLogConfig is %s", pod.Namespace, pod.Name, clgc.Name)
msg := fmt.Sprintf(MessageSyncSuccess, clgc.Spec.Selector.Type, pod.Name)
c.record.Event(clgc, corev1.EventTypeNormal, ReasonSuccess, msg)
c.record.Eventf(clgc, corev1.EventTypeNormal, ReasonSuccess, MessageSyncSuccess, clgc.Spec.Selector.Type, pod.Name)
}
}
func (c *Controller) handleLogConfigPerPod(lgc *logconfigv1beta1.LogConfig, pod *corev1.Pod) error {
// set defaults
c.setDefaultsLogConfigFields(lgc)
// generate pod related pipeline configs
pipeRaw, err := c.getConfigFromPodAndLogConfig(lgc, pod, c.sinkLister, c.interceptorLister)
@ -237,7 +243,8 @@ func (c *Controller) handleLogConfigPerPod(lgc *logconfigv1beta1.LogConfig, pod
return nil
}
if err := cfg.NewUnpack(nil, pipeRaw, nil).Defaults().Validate().Do(); err != nil {
pipeRawCopy := pipeRaw.DeepCopy()
if err := cfg.NewUnpack(nil, pipeRawCopy, nil).Defaults().Validate().Do(); err != nil {
return err
}
@ -295,7 +302,7 @@ func (c *Controller) getConfigFromContainerAndLogConfig(lgc *logconfigv1beta1.Lo
return nil, errors.WithMessagef(err, "unpack logConfig %s sources failed", lgc.Namespace)
}
filesources, err := c.updateSources(sourceConfList, pod, logConf.Name)
filesources, err := c.updateSources(sourceConfList, pod, logConf)
if err != nil {
return nil, err
}
@ -306,10 +313,10 @@ func (c *Controller) getConfigFromContainerAndLogConfig(lgc *logconfigv1beta1.Lo
return pipecfg, nil
}
func (c *Controller) updateSources(sourceConfList []*source.Config, pod *corev1.Pod, logConfigName string) ([]*source.Config, error) {
func (c *Controller) updateSources(sourceConfList []*source.Config, pod *corev1.Pod, lgc *logconfigv1beta1.LogConfig) ([]*source.Config, error) {
filesources := make([]*source.Config, 0)
for _, sourceConf := range sourceConfList {
filesrc, err := c.makeConfigPerSource(sourceConf, pod, logConfigName)
filesrc, err := c.makeConfigPerSource(sourceConf, pod, lgc)
if err != nil {
return nil, err
}
@ -318,7 +325,7 @@ func (c *Controller) updateSources(sourceConfList []*source.Config, pod *corev1.
return filesources, nil
}
func (c *Controller) makeConfigPerSource(s *source.Config, pod *corev1.Pod, logconfigName string) ([]*source.Config, error) {
func (c *Controller) makeConfigPerSource(s *source.Config, pod *corev1.Pod, lgc *logconfigv1beta1.LogConfig) ([]*source.Config, error) {
extra, err := GetKubeExtraFromFileSource(s)
if err != nil {
return nil, err
@ -357,10 +364,10 @@ func (c *Controller) makeConfigPerSource(s *source.Config, pod *corev1.Pod, logc
}
// change the source name, add pod.Name-containerName as prefix, since there may be multiple containers in the pod
filesrc.Name = helper.GenTypePodSourceName(pod.Name, status.Name, filesrc.Name)
filesrc.Name = helper.GenTypePodSourceName(lgc.Namespace, pod.Namespace, pod.Name, status.Name, filesrc.Name)
// inject default pod metadata
if err := c.injectTypePodFields(c.config.DynamicContainerLog, filesrc, extra.MatchFields, pod, logconfigName, status.Name); err != nil {
if err := c.injectTypePodFields(c.config.DynamicContainerLog, filesrc, extra, pod, lgc, status.Name); err != nil {
return nil, err
}
@ -425,14 +432,27 @@ func (c *Controller) getPathsInNode(containerPaths []string, pod *corev1.Pod, co
return nil, errors.New("path is empty")
}
return helper.PathsInNode(c.config.PodLogDirPrefix, c.config.KubeletRootDir, c.config.RootFsCollectionEnabled, c.runtime, containerPaths, pod, containerId, containerName)
paths, err := helper.PathsInNode(c.config.PodLogDirPrefix, c.config.KubeletRootDir, c.config.RootFsCollectionEnabled, c.runtime, containerPaths, pod, containerId, containerName)
if err != nil || len(c.config.HostRootMountPath) == 0 {
return paths, err
}
newPaths := make([]string, 0, len(paths))
rootPath := c.config.HostRootMountPath
for _, path := range paths {
newPaths = append(newPaths, filepath.Join(rootPath, path))
}
return newPaths, err
}
func (c *Controller) injectTypePodFields(dynamicContainerLogs bool, src *source.Config, match *matchFields, pod *corev1.Pod, lgcName string, containerName string) error {
func (c *Controller) injectTypePodFields(dynamicContainerLogs bool, src *source.Config, extra *KubeFileSourceExtra, pod *corev1.Pod, lgc *logconfigv1beta1.LogConfig, containerName string) error {
if src.Fields == nil {
src.Fields = make(map[string]interface{})
}
omitempty := c.config.FieldsOmitEmpty
k8sFields := make(map[string]interface{})
// Deprecated
@ -456,33 +476,50 @@ func (c *Controller) injectTypePodFields(dynamicContainerLogs bool, src *source.
k8sFields[m.ContainerName] = containerName
}
if m.LogConfig != "" {
k8sFields[m.LogConfig] = lgcName
k8sFields[m.LogConfig] = lgc.Name
}
if len(c.extraTypePodFieldsPattern) > 0 {
for k, p := range c.extraTypePodFieldsPattern {
res, err := p.WithK8sPod(pattern.NewTypePodFieldsData(pod, containerName, lgcName)).Render()
if err != nil {
log.Warn("add extra k8s fields %s failed: %v", k, err)
continue
}
k8sFields[k] = res
for k, v := range renderTypePodFieldsPattern(c.extraTypePodFieldsPattern, pod, containerName, lgc, omitempty) {
k8sFields[k] = v
}
}
podFields := extra.TypePodFields
if len(podFields) > 0 {
if err := podFields.validate(); err != nil {
return err
}
podPattern := podFields.initPattern()
for k, v := range renderTypePodFieldsPattern(podPattern, pod, containerName, lgc, omitempty) {
k8sFields[k] = v
}
}
// inject pod labels, annotations, envs as fields
match := extra.MatchFields
if match != nil {
if len(match.LabelKey) > 0 {
for k, v := range helper.GetMatchedPodLabel(match.LabelKey, pod) {
if omitempty && v == "" {
continue
}
k8sFields[k] = v
}
}
if len(match.AnnotationKey) > 0 {
for k, v := range helper.GetMatchedPodAnnotation(match.AnnotationKey, pod) {
if omitempty && v == "" {
continue
}
k8sFields[k] = v
}
}
if len(match.Env) > 0 {
for k, v := range helper.GetMatchedPodEnv(match.Env, pod, containerName) {
if omitempty && v == "" {
continue
}
k8sFields[k] = v
}
}
@ -493,6 +530,9 @@ func (c *Controller) injectTypePodFields(dynamicContainerLogs bool, src *source.
return err
}
for k, v := range ret {
if omitempty && v == "" {
continue
}
k8sFields[k] = v
}
}
@ -596,6 +636,12 @@ func toPipeConfig(dynamicContainerLog bool, lgcNamespace string, lgcName string,
pipecfg.Name = helper.MetaNamespaceKey(lgcNamespace, lgcName)
pipecfg.Sources = filesources
queueConf, err := toPipelineQueue(lgcPipe.Queue)
if err != nil {
return pipecfg, err
}
pipecfg.Queue = queueConf
sink, err := helper.ToPipelineSink(lgcPipe.Sink, lgcPipe.SinkRef, sinkLister)
if err != nil {
return pipecfg, err
@ -675,3 +721,33 @@ func toPipelineInterceptorWithPodInject(dynamicContainerLog bool, interceptorRaw
return icpConfList, nil
}
func renderTypePodFieldsPattern(pm map[string]*pattern.Pattern, pod *corev1.Pod, containerName string, lgc *logconfigv1beta1.LogConfig, omitempty bool) map[string]interface{} {
fields := make(map[string]interface{}, len(pm))
for k, p := range pm {
res, err := p.WithK8sPod(pattern.NewTypePodFieldsData(pod, containerName, lgc)).Render()
if err != nil {
log.Warn("add extra k8s fields %s failed: %v", k, err)
continue
}
if omitempty && res == "" {
continue
}
fields[k] = res
}
return fields
}
func toPipelineQueue(queueRaw string) (*queue.Config, error) {
if len(queueRaw) == 0 {
return nil, nil
}
queueConf := queue.Config{}
err := cfg.UnPackFromRaw([]byte(queueRaw), &queueConf).Do()
if err != nil {
return nil, err
}
return &queueConf, nil
}
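
A package-internal sketch of the queue block toPipelineQueue unpacks, mirroring the queue YAML used in the updated test earlier in this diff.

func exampleQueue() (*queue.Config, error) {
	// The raw string normally comes from the logConfig pipeline spec
	// (lgcPipe.Queue); the result here has Type "channel" and Name "queue".
	raw := `
type: channel
name: queue
`
	return toPipelineQueue(raw)
}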

View File

@ -0,0 +1,95 @@
/*
Copyright 2022 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"github.com/loggie-io/loggie/pkg/core/cfg"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/source"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/helper"
"github.com/loggie-io/loggie/pkg/util/pattern"
"github.com/pkg/errors"
"strings"
)
func (c *Controller) handleLogConfigTypeVm(lgc *logconfigv1beta1.LogConfig) error {
if !c.config.VmMode {
return nil
}
// check node selector
if lgc.Spec.Selector.NodeSelector.NodeSelector != nil {
if !helper.LabelsSubset(lgc.Spec.Selector.NodeSelector.NodeSelector, c.vmInfo.Labels) {
log.Debug("clusterLogConfig %s/%s is not belong to this vm", lgc.Namespace, lgc.Name)
return nil
}
}
pipRaws, err := helper.ToPipeline(lgc, c.sinkLister, c.interceptorLister)
if err != nil {
return errors.WithMessage(err, "convert to pipeline config failed")
}
if err := cfg.NewUnpack(nil, pipRaws, nil).Defaults().Validate().Do(); err != nil {
return err
}
lgcKey := helper.MetaNamespaceKey(lgc.Namespace, lgc.Name)
if err = c.typeNodeIndex.ValidateAndSetConfig(lgcKey, pipRaws.Pipelines, lgc); err != nil {
return err
}
for i := range pipRaws.Pipelines {
for _, s := range pipRaws.Pipelines[i].Sources {
c.injectTypeVmFields(s, lgc.Name)
}
}
if err = c.syncConfigToFile(logconfigv1beta1.SelectorTypeVm); err != nil {
return errors.WithMessage(err, "failed to sync config to file")
}
log.Info("handle clusterLogConfig %s addOrUpdate event and sync config file success", lgc.Name)
return nil
}
func (c *Controller) injectTypeVmFields(src *source.Config, clusterlogconfig string) {
if src.Fields == nil {
src.Fields = make(map[string]interface{})
}
if len(c.extraTypeVmFieldsPattern) > 0 {
for k, p := range c.extraTypeVmFieldsPattern {
// inject all vm labels
if strings.Contains(k, pattern.AllLabelToken) {
c.vmInfo.ConvertChineseLabels()
for labelKey, labelVal := range c.vmInfo.Labels {
src.Fields[strings.Replace(k, pattern.AllLabelToken, labelKey, -1)] = labelVal
}
continue
}
res, err := p.WithVm(pattern.NewTypeVmFieldsData(c.vmInfo, clusterlogconfig)).Render()
if err != nil {
log.Warn("add extra vm fields %s failed: %v", k, err)
continue
}
src.Fields[k] = res
}
}
}

View File

@ -122,7 +122,12 @@ func ToPipelineInterceptor(interceptorsRaw string, interceptorRef string, interc
return interConfList, nil
}
func GenTypePodSourceName(podName string, containerName string, sourceName string) string {
func GenTypePodSourceName(lgcNamespace string, podNamespace string, podName string, containerName string, sourceName string) string {
// If lgcNamespace is empty, the pod was matched by a clusterLogConfig, so we use podNamespace as the first part of
// the source name; without the pod namespace, sources from same-named pods in different namespaces could collide.
if lgcNamespace == "" {
return fmt.Sprintf("%s/%s/%s/%s", podNamespace, podName, containerName, sourceName)
}
return fmt.Sprintf("%s/%s/%s", podName, containerName, sourceName)
}
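
Two illustrative calls showing the naming change (the pod, container and source names are made up).

func exampleSourceNames() {
	// Namespaced LogConfig: the source name keeps the original
	// podName/containerName/sourceName form.
	a := GenTypePodSourceName("default", "default", "nginx-abc", "nginx", "mylog")
	// a == "nginx-abc/nginx/mylog"

	// ClusterLogConfig (empty lgcNamespace): the pod namespace is prepended so
	// same-named pods in different namespaces cannot collide.
	b := GenTypePodSourceName("", "team-a", "nginx-abc", "nginx", "mylog")
	// b == "team-a/nginx-abc/nginx/mylog"
	_, _ = a, b
}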

View File

@ -0,0 +1,210 @@
package helper
import (
"context"
"github.com/loggie-io/loggie/pkg/core/log"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kubeclientset "k8s.io/client-go/kubernetes"
)
type workloadFilterInfo struct {
// namespace list
namespaces map[string]struct{}
// exclude namespace list
excludeNamespaces map[string]struct{}
// workload name
names map[string]struct{}
}
type filterCacheChecker struct {
namespaces map[string]string
excludeNamespaces map[string]string
// workload(type) => workloadFilterInfo
workloadSelector map[string]workloadFilterInfo
clientSet kubeclientset.Interface
lgc *logconfigv1beta1.LogConfig
}
func newFilterCacheChecker(lgc *logconfigv1beta1.LogConfig, clientSet kubeclientset.Interface) *filterCacheChecker {
f := &filterCacheChecker{
clientSet: clientSet,
lgc: lgc,
}
if lgc.Spec.Selector == nil {
return f
}
if len(lgc.Spec.Selector.NamespaceSelector.NamespaceSelector) != 0 {
f.namespaces = make(map[string]string)
for _, v := range lgc.Spec.Selector.NamespaceSelector.NamespaceSelector {
f.namespaces[v] = v
}
}
if len(lgc.Spec.Selector.NamespaceSelector.ExcludeNamespaceSelector) != 0 {
f.excludeNamespaces = make(map[string]string)
for _, v := range lgc.Spec.Selector.NamespaceSelector.ExcludeNamespaceSelector {
f.excludeNamespaces[v] = v
}
}
if len(lgc.Spec.Selector.WorkloadSelector) != 0 {
f.workloadSelector = make(map[string]workloadFilterInfo)
for _, v := range lgc.Spec.Selector.WorkloadSelector {
for _, workloadType := range v.Type {
_, ok := f.workloadSelector[workloadType]
if !ok {
f.workloadSelector[workloadType] = workloadFilterInfo{
namespaces: make(map[string]struct{}),
excludeNamespaces: make(map[string]struct{}),
names: make(map[string]struct{}),
}
}
if len(v.NamespaceSelector) != 0 {
for _, namespace := range v.NamespaceSelector {
f.workloadSelector[workloadType].namespaces[namespace] = struct{}{}
}
}
if len(v.ExcludeNamespaceSelector) != 0 {
for _, namespace := range v.ExcludeNamespaceSelector {
f.workloadSelector[workloadType].excludeNamespaces[namespace] = struct{}{}
}
}
if len(v.NameSelector) != 0 {
for _, name := range v.NameSelector {
f.workloadSelector[workloadType].names[name] = struct{}{}
}
}
}
}
}
return f
}
// checkNamespace checks whether the pod namespace is allowed by the selector
func (p *filterCacheChecker) checkNamespace(pod *corev1.Pod) bool {
if len(p.namespaces) == 0 && len(p.excludeNamespaces) == 0 {
return true
}
if len(p.excludeNamespaces) != 0 {
_, ok := p.excludeNamespaces[pod.GetNamespace()]
if ok {
return false
}
}
if len(p.namespaces) == 0 {
return true
}
_, ok := p.namespaces[pod.GetNamespace()]
if ok {
return true
}
return false
}
func (p *filterCacheChecker) checkOwner(owner metav1.OwnerReference, namespace string) (bool, error) {
// If no workloadSelector is specified, every workload matches by default.
if len(p.workloadSelector) == 0 {
return true, nil
}
kind := owner.Kind
name := owner.Name
if owner.Kind == "ReplicaSet" {
rs, err := p.clientSet.AppsV1().ReplicaSets(namespace).Get(context.TODO(), owner.Name, metav1.GetOptions{})
if err != nil {
return false, err
}
if len(rs.GetOwnerReferences()) == 0 {
return false, nil
}
deploymentOwner := rs.GetOwnerReferences()[0]
if deploymentOwner.Kind != "Deployment" {
return false, nil
}
kind = "Deployment"
name = deploymentOwner.Name
}
workloadInfo, ok := p.workloadSelector[kind]
if !ok {
return false, nil
}
if len(workloadInfo.namespaces) != 0 {
_, ok = workloadInfo.namespaces[namespace]
if !ok {
return false, nil
}
}
if len(workloadInfo.excludeNamespaces) != 0 {
_, ok = workloadInfo.excludeNamespaces[namespace]
if ok {
return false, nil
}
}
if len(workloadInfo.names) != 0 {
_, ok = workloadInfo.names[name]
if !ok {
return false, nil
}
}
return true, nil
}
func (p *filterCacheChecker) checkWorkload(pod *corev1.Pod) bool {
owners := pod.GetOwnerReferences()
if len(owners) == 0 {
return false
}
for _, owner := range owners {
ret, err := p.checkOwner(owner, pod.GetNamespace())
if err != nil {
log.Error("check owner error:%s", err)
return false
}
if !ret {
return false
}
}
return true
}
func (p *filterCacheChecker) checkLabels(pod *corev1.Pod) bool {
if p.lgc.Spec.Selector == nil {
return true
}
if len(p.lgc.Spec.Selector.LabelSelector) != 0 {
if LabelsSubset(p.lgc.Spec.Selector.LabelSelector, pod.Labels) {
return true
}
return false
}
return true
}

View File

@ -19,14 +19,19 @@ package helper
import (
"bytes"
"context"
"encoding/json"
"fmt"
dockerclient "github.com/docker/docker/client"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"path/filepath"
"strings"
"time"
"k8s.io/client-go/kubernetes"
dockerclient "github.com/docker/docker/client"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
criapi "k8s.io/cri-api/pkg/apis/runtime/v1"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
@ -63,18 +68,13 @@ func MetaNamespaceKey(namespace string, name string) string {
type FuncGetRelatedPod func() ([]*corev1.Pod, error)
func GetLogConfigRelatedPod(lgc *logconfigv1beta1.LogConfig, podsLister corev1listers.PodLister) ([]*corev1.Pod, error) {
sel, err := Selector(lgc.Spec.Selector.LabelSelector)
func GetLogConfigRelatedPod(lgc *logconfigv1beta1.LogConfig, podsLister corev1listers.PodLister, clientSet kubernetes.Interface) ([]*corev1.Pod, error) {
filter := NewPodFilter(lgc, podsLister, clientSet)
pods, err := filter.Filter()
if err != nil {
return nil, err
}
ret, err := podsLister.Pods(lgc.Namespace).List(sel)
if err != nil {
return nil, errors.WithMessagef(err, "%s/%s cannot find pod by labelSelector %#v", lgc.Namespace, lgc.Name, lgc.Spec.Selector.PodSelector.LabelSelector)
}
return ret, nil
return pods, nil
}
func Selector(labelSelector map[string]string) (labels.Selector, error) {
@ -107,7 +107,7 @@ func Selector(labelSelector map[string]string) (labels.Selector, error) {
return selector, nil
}
func GetPodRelatedLogConfigs(pod *corev1.Pod, lgcLister logconfigLister.LogConfigLister) ([]*logconfigv1beta1.LogConfig, error) {
func GetPodRelatedLogConfigs(pod *corev1.Pod, lgcLister logconfigLister.LogConfigLister, clientSet kubernetes.Interface) ([]*logconfigv1beta1.LogConfig, error) {
lgcList, err := lgcLister.LogConfigs(pod.Namespace).List(labels.Everything())
if err != nil {
return nil, err
@ -119,14 +119,19 @@ func GetPodRelatedLogConfigs(pod *corev1.Pod, lgcLister logconfigLister.LogConfi
continue
}
if LabelsSubset(lgc.Spec.Selector.LabelSelector, pod.Labels) {
confirm := NewPodsConfirm(lgc, clientSet)
result, err := confirm.Confirm(pod)
if err != nil {
return nil, err
}
if result {
ret = append(ret, lgc)
}
}
return ret, nil
}
func GetPodRelatedClusterLogConfigs(pod *corev1.Pod, clgcLister logconfigLister.ClusterLogConfigLister) ([]*logconfigv1beta1.ClusterLogConfig, error) {
func GetPodRelatedClusterLogConfigs(pod *corev1.Pod, clgcLister logconfigLister.ClusterLogConfigLister, clientSet kubernetes.Interface) ([]*logconfigv1beta1.ClusterLogConfig, error) {
clgcList, err := clgcLister.List(labels.Everything())
if err != nil {
return nil, err
@ -134,11 +139,18 @@ func GetPodRelatedClusterLogConfigs(pod *corev1.Pod, clgcLister logconfigLister.
ret := make([]*logconfigv1beta1.ClusterLogConfig, 0)
for _, lgc := range clgcList {
if lgc.Spec.Selector == nil || lgc.Spec.Selector.Type != logconfigv1beta1.SelectorTypePod {
if lgc.Spec.Selector.Type != logconfigv1beta1.SelectorTypeWorkload && (lgc.Spec.Selector == nil || lgc.Spec.Selector.Type != logconfigv1beta1.SelectorTypePod) {
continue
}
if LabelsSubset(lgc.Spec.Selector.LabelSelector, pod.Labels) {
logConfig := lgc.ToLogConfig()
confirm := NewPodsConfirm(logConfig, clientSet)
result, err := confirm.Confirm(pod)
if err != nil {
log.Error("filter pod error:%s", err)
continue
}
if result {
ret = append(ret, lgc)
}
}
@ -437,8 +449,12 @@ func nodePathByContainerPath(pathPattern string, pod *corev1.Pod, volumeName str
return getEmptyDirNodePath(pathPattern, pod, volumeName, volumeMountPath, kubeletRootDir, subPathRes), nil
}
// If a pod mounts a PVC as the log path, we need to set rootFsCollectionEnabled to true, and the container runtime should be docker.
if vol.PersistentVolumeClaim != nil && rootFsCollectionEnabled && containerRuntime.Name() == runtime.RuntimeDocker {
if vol.NFS != nil && containerRuntime.Name() == runtime.RuntimeDocker {
return getNfsPath(pathPattern, pod, volumeName, volumeMountPath, kubeletRootDir, subPathRes), nil
}
// If a pod mounts a PVC as the log path, we need to set rootFsCollectionEnabled to true.
if vol.PersistentVolumeClaim != nil && rootFsCollectionEnabled {
return getPVNodePath(pathPattern, volumeMountPath, containerId, containerRuntime)
}
@ -460,28 +476,98 @@ func getEmptyDirNodePath(pathPattern string, pod *corev1.Pod, volumeName string,
return filepath.Join(emptyDirPath, subPath, pathSuffix)
}
// refers to https://github.com/kubernetes/kubernetes/blob/6aac45ff1e99068e834ba3b93b673530cf62c007/pkg/volume/nfs/nfs.go#L202
func getNfsPath(pathPattern string, pod *corev1.Pod, volumeName string, volumeMountPath string, kubeletRootDir string, subPath string) string {
emptyDirPath := filepath.Join(kubeletRootDir, "pods", string(pod.UID), "volumes/kubernetes.io~nfs", volumeName)
pathSuffix := strings.TrimPrefix(pathPattern, volumeMountPath)
return filepath.Join(emptyDirPath, subPath, pathSuffix)
}
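For illustration, with hypothetical values the NFS resolution above works out as follows:

nodePath := getNfsPath("/data/app/*.log", pod, "logs", "/data", "/var/lib/kubelet", "")
// nodePath == "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~nfs/logs/app/*.log"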
// Find the actual path on the node based on pvc.
func getPVNodePath(pathPattern string, volumeMountPath string, containerId string, containerRuntime runtime.Runtime) (string, error) {
ctx := context.Background()
if containerRuntime == nil {
return "", errors.New("docker runtime is not initial")
}
cli := containerRuntime.Client().(*dockerclient.Client)
containerJson, err := cli.ContainerInspect(ctx, containerId)
if err != nil {
return "", errors.Errorf("containerId: %s, docker inspect error: %s", containerId, err)
}
for _, mnt := range containerJson.Mounts {
if !PathEqual(mnt.Destination, volumeMountPath) {
continue
if containerRuntime.Name() == runtime.RuntimeDocker {
cli := containerRuntime.Client().(*dockerclient.Client)
containerJson, err := cli.ContainerInspect(ctx, containerId)
if err != nil {
return "", errors.Errorf("containerId: %s, docker inspect error: %s", containerId, err)
}
pathSuffix := strings.TrimPrefix(pathPattern, volumeMountPath)
return filepath.Join(mnt.Source, pathSuffix), nil
for _, mnt := range containerJson.Mounts {
if !PathEqual(mnt.Destination, volumeMountPath) {
continue
}
pathSuffix := strings.TrimPrefix(pathPattern, volumeMountPath)
return filepath.Join(mnt.Source, pathSuffix), nil
}
return "", errors.New("cannot find pv volume path in node")
} else if containerRuntime.Name() == runtime.RuntimeContainerd {
cli := containerRuntime.Client().(criapi.RuntimeServiceClient)
request := &criapi.ContainerStatusRequest{
ContainerId: containerId,
Verbose: true,
}
response, err := cli.ContainerStatus(ctx, request)
if err != nil {
return "", errors.WithMessagef(err, "get container(id: %s) status failed", containerId)
}
infoStr, ok := response.GetInfo()["info"]
if !ok {
if log.IsDebugLevel() {
info, _ := json.Marshal(response.GetInfo())
log.Debug("get info: %s from container(id: %s)", string(info), containerId)
}
return "", errors.Errorf("cannot get info from container(id: %s) status", containerId)
}
infoMap := make(map[string]interface{})
if err := json.Unmarshal([]byte(infoStr), &infoMap); err != nil {
return "", errors.WithMessagef(err, "get info from container(id: %s)", containerId)
}
configIf, ok := infoMap["config"]
if !ok {
return "", errors.Errorf("cannot get config from container(id: %s) status", containerId)
}
configMap, ok := configIf.(map[string]interface{})
if !ok {
return "", errors.Errorf("cannot get config map from container(id: %s) status", containerId)
}
mountsIf, ok := configMap["mounts"]
if !ok {
return "", errors.Errorf("cannot get mounts from container(id: %s) status", containerId)
}
mountsSlice, ok := mountsIf.([]interface{})
if !ok {
return "", errors.Errorf("cannot get mounts slice from container(id: %s) status", containerId)
}
for _, mntIf := range mountsSlice {
mnt, ok := mntIf.(map[string]interface{})
if !ok {
return "", errors.Errorf("cannot get mount from container(id: %s) status", containerId)
}
containerPath, ok := mnt["container_path"].(string)
if !ok {
return "", errors.Errorf("cannot get container_path from container(id: %s) status", containerId)
}
hostPath, ok := mnt["host_path"].(string)
if !ok {
return "", errors.Errorf("cannot get host_path from container(id: %s) status", containerId)
}
if !PathEqual(containerPath, volumeMountPath) {
continue
}
pathSuffix := strings.TrimPrefix(pathPattern, volumeMountPath)
return filepath.Join(hostPath, pathSuffix), nil
}
return "", errors.New("cannot find pv volume path in node")
} else {
return "", errors.New("docker or containerd runtime is not initial")
}
return "", errors.New("cannot find pv volume path in node")
}
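The containerd branch above parses the verbose CRI container status; the "info" payload it expects is shaped roughly like this (a hedged sketch, field names taken from the parsing code, values hypothetical):

// {
//   "pid": 4321,
//   "config": {
//     "mounts": [
//       { "container_path": "/data",
//         "host_path": "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv>/mount" }
//     ]
//   }
// }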
func GetMatchedPodLabel(labelKeys []string, pod *corev1.Pod) map[string]string {

View File

@ -0,0 +1,49 @@
package helper
import (
"errors"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
corev1 "k8s.io/api/core/v1"
kubeclientset "k8s.io/client-go/kubernetes"
)
// PodsConfirm checks whether a pod matches the rules of a logConfig
type PodsConfirm struct {
lgc *logconfigv1beta1.LogConfig
clientSet kubeclientset.Interface
cache *filterCacheChecker
}
func NewPodsConfirm(lgc *logconfigv1beta1.LogConfig, clientSet kubeclientset.Interface) *PodsConfirm {
return &PodsConfirm{
lgc: lgc,
clientSet: clientSet,
cache: newFilterCacheChecker(lgc, clientSet),
}
}
// Confirm reports whether the pod meets the lgc rules
func (p *PodsConfirm) Confirm(pod *corev1.Pod) (bool, error) {
if pod == nil {
return false, errors.New("confirm pod error;pod is nil")
}
if !IsPodReady(pod) {
return false, nil
}
// check label
if !p.cache.checkLabels(pod) {
return false, nil
}
if !p.cache.checkNamespace(pod) {
return false, nil
}
if !p.cache.checkWorkload(pod) {
return false, nil
}
return true, nil
}
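A minimal usage sketch (the wiring is hypothetical; lgc, clientSet and pod are assumed to come from the caller):

confirm := NewPodsConfirm(lgc, clientSet)
matched, err := confirm.Confirm(pod)
if err != nil {
	log.Error("confirm pod failed: %v", err)
} else if matched {
	// the pod passed the label, namespace and workload checks for this LogConfig
}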

View File

@ -0,0 +1,119 @@
package helper
import (
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
kubeclientset "k8s.io/client-go/kubernetes"
corev1listers "k8s.io/client-go/listers/core/v1"
)
type PodsFilter struct {
podsLister corev1listers.PodLister
lgc *logconfigv1beta1.LogConfig
clientSet kubeclientset.Interface
cache *filterCacheChecker
}
func NewPodFilter(lgc *logconfigv1beta1.LogConfig, podsLister corev1listers.PodLister, clientSet kubeclientset.Interface) *PodsFilter {
p := &PodsFilter{
lgc: lgc,
clientSet: clientSet,
podsLister: podsLister,
cache: newFilterCacheChecker(lgc, clientSet),
}
return p
}
func (p *PodsFilter) getLabelSelector(lgc *logconfigv1beta1.LogConfig) (labels.Selector, error) {
var matchExpressions []metav1.LabelSelectorRequirement
for key, val := range lgc.Spec.Selector.LabelSelector {
if val != MatchAllToken {
continue
}
sel := metav1.LabelSelectorRequirement{
Key: key,
Operator: metav1.LabelSelectorOpExists,
}
matchExpressions = append(matchExpressions, sel)
}
for k, v := range lgc.Spec.Selector.LabelSelector {
if v == MatchAllToken {
delete(lgc.Spec.Selector.LabelSelector, k)
}
}
selector, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
MatchLabels: lgc.Spec.Selector.LabelSelector,
MatchExpressions: matchExpressions,
})
if err != nil {
return nil, errors.WithMessagef(err, "make LabelSelector error")
}
return selector, nil
}
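As an illustration (assuming MatchAllToken is the wildcard value in a labelSelector), a LogConfig labelSelector such as {app: "*", env: "prod"} is turned into an Exists requirement for app plus an equality match for env, i.e. it selects pods that carry an app label with any value and env=prod. A hedged sketch of the equivalent selector construction:

func exampleSelector() (labels.Selector, error) {
	return metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
		MatchLabels:      map[string]string{"env": "prod"},
		MatchExpressions: []metav1.LabelSelectorRequirement{{Key: "app", Operator: metav1.LabelSelectorOpExists}},
	})
}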
// getPodsByLabelSelector selects pods by labelSelector
func (p *PodsFilter) getPodsByLabelSelector() ([]*corev1.Pod, error) {
// By default read all
if p.lgc.Spec.Selector == nil || (len(p.lgc.Spec.Selector.PodSelector.LabelSelector) == 0) {
selector, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{})
if err != nil {
return nil, errors.WithMessagef(err, "make LabelSelector error")
}
ret, err := p.podsLister.List(selector)
if err != nil {
return nil, errors.WithMessagef(err, "%s/%s cannot find pod by labelSelector %#v", p.lgc.Namespace, p.lgc.Name, p.lgc.Spec.Selector.PodSelector.LabelSelector)
}
return ret, nil
}
// Prefer labelSelector
labelSelectors, err := p.getLabelSelector(p.lgc)
if err != nil {
return nil, err
}
ret, err := p.podsLister.List(labelSelectors)
if err != nil {
return nil, errors.WithMessagef(err, "%s/%s cannot find pod by labelSelector %#v", p.lgc.Namespace, p.lgc.Name, p.lgc.Spec.Selector.PodSelector.LabelSelector)
}
return ret, nil
}
// Filter returns the pods that match the LogConfig selector rules
func (p *PodsFilter) Filter() ([]*corev1.Pod, error) {
pods, err := p.getPodsByLabelSelector()
if err != nil {
return nil, err
}
if len(p.cache.namespaces) == 0 && len(p.cache.excludeNamespaces) == 0 && len(p.cache.workloadSelector) == 0 {
return pods, nil
}
result := make([]*corev1.Pod, 0)
for _, pod := range pods {
if !IsPodReady(pod) {
continue
}
if !p.cache.checkNamespace(pod) {
continue
}
if !p.cache.checkWorkload(pod) {
continue
}
result = append(result, pod)
}
return result, nil
}

View File

@ -19,6 +19,7 @@ import (
"github.com/loggie-io/loggie/pkg/pipeline"
)
// LogConfigTypeNodeIndex caches the mapping from logConfig key to pipeline configurations for type: Node and type: Vm
type LogConfigTypeNodeIndex struct {
pipeConfigs map[string]*TypeNodePipeConfig // key: logConfigNamespace/Name, value: pipeline configs
}

View File

@ -230,6 +230,9 @@ func (p *LogConfigTypePodIndex) mergePodsSources(dynamicContainerLog bool, lgcKe
// sink is same
aggCfg.Sink = cfgRaw.Raw.Sink
// queue is same
aggCfg.Queue = cfgRaw.Raw.Queue
// normally the interceptors are the same, but we may need to append interceptor.belongTo
mergeInterceptors(icpSets, cfgRaw.Raw.Interceptors)
}
@ -288,6 +291,7 @@ func (p *LogConfigTypePodIndex) getDynamicPipelines(lgcKey string, pods []string
aggCfg.Name = latestPodPipeline.Name
aggCfg.Interceptors = latestPodPipeline.Interceptors
aggCfg.Sink = latestPodPipeline.Sink
aggCfg.Queue = latestPodPipeline.Queue
return aggCfg
}

View File

@ -17,14 +17,19 @@ limitations under the License.
package kubernetes
import (
"context"
"github.com/loggie-io/loggie/pkg/core/log"
logconfigclientset "github.com/loggie-io/loggie/pkg/discovery/kubernetes/client/clientset/versioned"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/controller"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/external"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/runtime"
netutils "github.com/loggie-io/loggie/pkg/util/net"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
kubeclientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kubeinformers "k8s.io/client-go/informers"
@ -52,7 +57,6 @@ func (d *Discovery) scanRunTime() {
name, err := runtime.GetRunTimeName(d.config.RuntimeEndpoints)
if err != nil {
log.Error("%s", err)
return
}
if name == "" {
@ -100,13 +104,19 @@ func (d *Discovery) Start(stopCh <-chan struct{}) {
lo.FieldSelector = fields.OneTermEqualSelector("spec.nodeName", d.config.NodeName).String()
}))
// In vmMode, Loggie runs on a virtual machine rather than in Kubernetes
if d.config.VmMode {
d.VmModeRun(stopCh, kubeClient, logConfigClient, logConfInformerFactory, kubeInformerFactory)
return
}
nodeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(kubeClient, 0, kubeinformers.WithTweakListOptions(func(lo *metav1.ListOptions) {
lo.FieldSelector = fields.OneTermEqualSelector("metadata.name", d.config.NodeName).String()
}))
ctrl := controller.NewController(d.config, kubeClient, logConfigClient, kubeInformerFactory.Core().V1().Pods(),
logConfInformerFactory.Loggie().V1beta1().LogConfigs(), logConfInformerFactory.Loggie().V1beta1().ClusterLogConfigs(), logConfInformerFactory.Loggie().V1beta1().Sinks(),
logConfInformerFactory.Loggie().V1beta1().Interceptors(), nodeInformerFactory.Core().V1().Nodes(), d.runtime)
logConfInformerFactory.Loggie().V1beta1().Interceptors(), nodeInformerFactory.Core().V1().Nodes(), logConfInformerFactory.Loggie().V1beta1().Vms(), d.runtime)
logConfInformerFactory.Start(stopCh)
kubeInformerFactory.Start(stopCh)
@ -118,7 +128,76 @@ func (d *Discovery) Start(stopCh <-chan struct{}) {
}
external.Cluster = d.config.Cluster
if err := ctrl.Run(stopCh); err != nil {
synced := []cache.InformerSynced{
kubeInformerFactory.Core().V1().Pods().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().LogConfigs().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().ClusterLogConfigs().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().Sinks().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().Interceptors().Informer().HasSynced,
}
if d.config.VmMode {
synced = append(synced, logConfInformerFactory.Loggie().V1beta1().Vms().Informer().HasSynced)
}
if err := ctrl.Run(stopCh, synced...); err != nil {
log.Panic("Error running controller: %s", err.Error())
}
}
func (d *Discovery) VmModeRun(stopCh <-chan struct{}, kubeClient kubeclientset.Interface, logConfigClient logconfigclientset.Interface,
logConfInformerFactory logconfigInformer.SharedInformerFactory, kubeInformerFactory kubeinformers.SharedInformerFactory) {
metadataName := tryToFindRelatedVm(logConfigClient, d.config.NodeName)
if metadataName == "" {
log.Panic("cannot find loggie agent related vm")
return
}
d.config.NodeName = metadataName
vmInformerFactory := logconfigInformer.NewSharedInformerFactoryWithOptions(logConfigClient, 0, logconfigInformer.WithTweakListOptions(func(lo *metav1.ListOptions) {
lo.FieldSelector = fields.OneTermEqualSelector("metadata.name", d.config.NodeName).String()
}))
ctrl := controller.NewController(d.config, kubeClient, logConfigClient, nil,
nil, logConfInformerFactory.Loggie().V1beta1().ClusterLogConfigs(), logConfInformerFactory.Loggie().V1beta1().Sinks(),
logConfInformerFactory.Loggie().V1beta1().Interceptors(), nil, vmInformerFactory.Loggie().V1beta1().Vms(), d.runtime)
logConfInformerFactory.Start(stopCh)
kubeInformerFactory.Start(stopCh)
vmInformerFactory.Start(stopCh)
if err := ctrl.Run(stopCh,
logConfInformerFactory.Loggie().V1beta1().ClusterLogConfigs().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().Sinks().Informer().HasSynced,
logConfInformerFactory.Loggie().V1beta1().Interceptors().Informer().HasSynced,
vmInformerFactory.Loggie().V1beta1().Vms().Informer().HasSynced); err != nil {
log.Panic("Error running controller: %s", err.Error())
}
}
func tryToFindRelatedVm(logConfigClient logconfigclientset.Interface, nodeName string) string {
ipList, err := netutils.GetHostIPv4()
if err != nil {
log.Error("cannot get host IPs: %+v", err)
return ""
}
// If no Vm has the same name as nodeName, try the node's IP addresses (with dots replaced by dashes) to discover the related Vm.
for _, ip := range ipList {
name := strings.ReplaceAll(ip, ".", "-")
log.Info("try to get related vm name %s", name)
vm, err := logConfigClient.LoggieV1beta1().Vms().Get(context.Background(), name, metav1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
continue
}
log.Warn("get vm name %s error: %+v", name, err)
return ""
}
if vm.Name != "" {
return name
}
}
return ""
}
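For example (hypothetical address), a host IPv4 of 10.0.3.15 is looked up as the Vm resource named:

name := strings.ReplaceAll("10.0.3.15", ".", "-") // "10-0-3-15"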

View File

@ -18,15 +18,22 @@ package runtime
import (
"context"
"encoding/json"
"fmt"
"github.com/loggie-io/loggie/pkg/core/log"
logconfigv1beta1 "github.com/loggie-io/loggie/pkg/discovery/kubernetes/apis/loggie/v1beta1"
"github.com/loggie-io/loggie/pkg/util/json"
"github.com/pkg/errors"
criapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
criapi "k8s.io/cri-api/pkg/apis/runtime/v1"
"path"
)
type RuntimeType string
const (
KataRuntimeType RuntimeType = "io.containerd.kata.v2"
RuncRuntimeType RuntimeType = "io.containerd.runc.v2"
)
type ContainerD struct {
cli criapi.RuntimeServiceClient
}
@ -49,6 +56,31 @@ func (c *ContainerD) Client() interface{} {
return c.cli
}
func (c *ContainerD) getKataRuntimeRootfsPath(infoMap map[string]any, containerId string) (string, error) {
sandboxID, ok := infoMap["sandboxID"]
if !ok {
im, _ := json.Marshal(infoMap)
log.Debug("get info map: %s from container(id: %s)", string(im), containerId)
return "", errors.Errorf("cannot get sandboxID from container(id: %s) status", containerId)
}
// refer to https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/kata_agent.go#L83
rootfs := fmt.Sprintf("/run/kata-containers/shared/sandboxes/%v/shared/%v/rootfs", sandboxID, containerId)
return rootfs, nil
}
func (c *ContainerD) getRuncRuntimeRootfsPath(infoMap map[string]any, containerId string) (string, error) {
pid, ok := infoMap["pid"]
if !ok {
if log.IsDebugLevel() {
im, _ := json.Marshal(infoMap)
log.Debug("get info map: %s from container(id: %s)", string(im), containerId)
}
return "", errors.Errorf("cannot get pid from container(id: %s) status", containerId)
}
return fmt.Sprintf("/proc/%.0f/root", pid), nil
}
func (c *ContainerD) GetRootfsPath(ctx context.Context, containerId string, containerPaths []string) ([]string, error) {
request := &criapi.ContainerStatusRequest{
ContainerId: containerId,
@ -73,17 +105,22 @@ func (c *ContainerD) GetRootfsPath(ctx context.Context, containerId string, cont
if err := json.Unmarshal([]byte(infoStr), &infoMap); err != nil {
return nil, errors.WithMessagef(err, "get pid from container(id: %s)", containerId)
}
pid, ok := infoMap["pid"]
runtime, ok := infoMap["runtimeType"]
if !ok {
if log.IsDebugLevel() {
im, _ := json.Marshal(infoMap)
log.Debug("get info map: %s from container(id: %s)", string(im), containerId)
}
return nil, errors.Errorf("cannot get pid from container(id: %s) status", containerId)
return nil, errors.Errorf("can not get runtime type from container(id: %s) status", containerId)
}
var prefix string
if runtime == string(KataRuntimeType) {
prefix, err = c.getKataRuntimeRootfsPath(infoMap, containerId)
if err != nil {
return nil, err
}
} else {
prefix, err = c.getRuncRuntimeRootfsPath(infoMap, containerId)
if err != nil {
return nil, err
}
}
prefix := fmt.Sprintf("/proc/%.0f/root", pid)
var rootfsPaths []string
for _, p := range containerPaths {

View File

@ -21,7 +21,7 @@ import (
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/pkg/errors"
"google.golang.org/grpc"
criapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
criapi "k8s.io/cri-api/pkg/apis/runtime/v1"
"net"
"net/url"
"time"
@ -46,6 +46,7 @@ func Init(endpoints []string, runtime string) Runtime {
runtimeName, err := getRuntimeName(endpoints)
if err != nil {
// fallback to Docker runtime
log.Warn("get runtime name failed, fallback to docker runtime")
return NewDocker()
}
runtime = runtimeName

View File

@ -39,13 +39,12 @@ var (
ReloadTopic = "reload"
ErrorTopic = "error"
LogAlertTopic = "log"
AlertTempTopic = "logTemp"
QueueMetricTopic = "queue"
PipelineTopic = "pipeline"
ComponentBaseTopic = "component"
SystemTopic = "sys"
NormalizeTopic = "normalize"
WebhookTopic = "alertwebhook"
NoDataTopic = "noDataAlert"
InfoTopic = "info"
)

View File

@ -27,10 +27,6 @@ import (
// asyncConsumerSize should always be 1 because concurrency may cause panic
var defaultEventCenter = NewEventCenter(2048, 1)
func Publish(topic string, data interface{}) {
defaultEventCenter.publish(NewEvent(topic, data))
}
func PublishOrDrop(topic string, data interface{}) {
defaultEventCenter.publishOrDrop(NewEvent(topic, data))
}
@ -58,7 +54,7 @@ func UnRegistrySubscribeTemporary(subscribe *Subscribe) {
}
func AfterErrorFunc(errorMsg string) {
Publish(ErrorTopic, ErrorMetricData{
PublishOrDrop(ErrorTopic, ErrorMetricData{
ErrorMsg: errorMsg,
})
}
@ -125,10 +121,6 @@ func (ec *EventCenter) unRegistryTemporary(subscribe *Subscribe) {
ec.inActiveSubscribe(subscribe)
}
func (ec *EventCenter) publish(event Event) {
ec.eventChan <- event
}
func (ec *EventCenter) publishOrDrop(event Event) {
select {
case ec.eventChan <- event:
@ -147,7 +139,9 @@ func (ec *EventCenter) start(config Config) {
}
subscribe.listener = subscribe.factory()
listener := subscribe.listener
listener.Init(&context.DefaultContext{})
if err := listener.Init(&context.DefaultContext{}); err != nil {
log.Panic("init listener %s failed: %v", name, err)
}
if conf == nil {
conf = cfg.NewCommonCfg()
@ -157,7 +151,9 @@ func (ec *EventCenter) start(config Config) {
log.Panic("unpack listener %s config error: %v", name, err)
}
config.ListenerConfigs[name] = conf
listener.Start()
if err := listener.Start(); err != nil {
log.Panic("start listener %s failed: %v", name, err)
}
log.Info("listener(%s) start", listener.Name())
ec.activeSubscribe(subscribe)
@ -210,8 +206,6 @@ func (ec *EventCenter) run() {
for _, subscribe := range metas {
subscribe.listener.Subscribe(e)
}
} else {
log.Debug("topic(%s) has no consumer listener", topic)
}
}
}

View File

@ -18,61 +18,81 @@ package alertmanager
import (
"bytes"
"encoding/json"
"io/ioutil"
"github.com/loggie-io/loggie/pkg/util/json"
"github.com/pkg/errors"
"io"
"net/http"
"strconv"
"strings"
"sync"
"text/template"
"time"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/event"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/eventbus"
"github.com/loggie-io/loggie/pkg/sink/alertwebhook"
"github.com/loggie-io/loggie/pkg/core/logalert"
"github.com/loggie-io/loggie/pkg/util"
"github.com/loggie-io/loggie/pkg/util/bufferpool"
"github.com/loggie-io/loggie/pkg/util/pattern"
)
type AlertManager struct {
Address []string
Client http.Client
temp *template.Template
bp *BufferPool
headers map[string]string
method string
LineLimit int
Client http.Client
temp *template.Template
bp *bufferpool.BufferPool
headers map[string]string
method string
LineLimit int
groupConfig logalert.GroupConfig
lock sync.Mutex
alertMap map[string][]event.Alert
}
type ResetTempEvent struct {
}
func NewAlertManager(addr []string, timeout, lineLimit int, temp *string, headers map[string]string, method string) *AlertManager {
func NewAlertManager(config *logalert.Config) (*AlertManager, error) {
cli := http.Client{
Timeout: time.Duration(timeout) * time.Second,
Timeout: config.Timeout,
}
manager := &AlertManager{
Address: addr,
Address: config.Addr,
Client: cli,
headers: headers,
bp: newBufferPool(1024),
LineLimit: lineLimit,
headers: config.Headers,
bp: bufferpool.NewBufferPool(1024),
LineLimit: config.LineLimit,
method: http.MethodPost,
alertMap: make(map[string][]event.Alert),
}
if strings.ToUpper(method) == http.MethodPut {
manager.method = http.MethodPost
if strings.ToUpper(config.Method) == http.MethodPut {
manager.method = http.MethodPut
}
if temp != nil {
t, err := template.New("alertTemplate").Parse(*temp)
if len(config.Template) > 0 {
t, err := makeTemplate(config.Template)
if err != nil {
log.Error("fail to generate temp %s", *temp)
return nil, err
}
manager.temp = t
}
return manager
manager.groupConfig.AlertSendingThreshold = config.AlertSendingThreshold
log.Debug("alertManager groupKey: %s", config.GroupKey)
if len(config.GroupKey) > 0 {
p, err := pattern.Init(config.GroupKey)
if err != nil {
return nil, errors.WithMessagef(err, "fail to init group key %s", config.GroupKey)
}
manager.groupConfig.Pattern = p
}
return manager, nil
}
type AlertEvent struct {
@ -82,30 +102,29 @@ type AlertEvent struct {
Annotations map[string]string `json:"Annotations,omitempty"`
}
func (a *AlertManager) SendAlert(events []*eventbus.Event) {
func (a *AlertManager) SendAlert(events []*api.Event, sendAtOnce bool) {
var alerts []*alertwebhook.Alert
var alerts []event.Alert
for _, e := range events {
if e.Data == nil {
continue
}
data, ok := e.Data.(*api.Event)
if !ok {
log.Info("fail to convert data to event")
return
}
alert := alertwebhook.NewAlert(*data, a.LineLimit)
alerts = append(alerts, &alert)
alert := event.NewAlert(*e, a.LineLimit)
alerts = append(alerts, alert)
}
alertCenterObj := map[string]interface{}{
"Alerts": alerts,
if sendAtOnce {
logalert.GroupAlertsAtOnce(alerts, a.packageAndSendAlerts, a.groupConfig)
return
}
logalert.GroupAlerts(a.alertMap, alerts, a.packageAndSendAlerts, a.groupConfig)
}
func (a *AlertManager) packageAndSendAlerts(alerts []event.Alert) {
if len(alerts) == 0 {
return
}
alertCenterObj := event.GenAlertsOriginData(alerts)
var request []byte
a.lock.Lock()
@ -128,7 +147,7 @@ func (a *AlertManager) SendAlert(events []*eventbus.Event) {
}
a.lock.Unlock()
log.Debug("sending alert %s", request)
log.Debug("sending alert:\n %s", request)
for _, address := range a.Address { // send alerts to alertManager cluster, no need to retry
a.send(address, request)
}
@ -153,8 +172,8 @@ func (a *AlertManager) send(address string, alert []byte) {
}
defer resp.Body.Close()
if !alertwebhook.Is2xxSuccess(resp.StatusCode) {
r, err := ioutil.ReadAll(resp.Body)
if !util.Is2xxSuccess(resp.StatusCode) {
r, err := io.ReadAll(resp.Body)
if err != nil {
log.Warn("read response body error: %v", err)
}
@ -162,10 +181,24 @@ func (a *AlertManager) send(address string, alert []byte) {
}
}
func (a *AlertManager) UpdateTemp(temp string) {
t, err := template.New("alertTemplate").Parse(temp)
func makeTemplate(temp string) (*template.Template, error) {
t, err := template.New("alertTemplate").Funcs(template.FuncMap{
"escape": escape,
"pruneEscape": pruneEscape,
}).Parse(temp)
if err != nil {
log.Error("fail to generate temp %s", temp)
return nil, errors.WithMessagef(err, "fail to generate template %s", temp)
}
a.temp = t
return t, nil
}
func escape(s string) string {
return strconv.Quote(s)
}
func pruneEscape(s string) string {
raw := strconv.Quote(s)
raw = strings.TrimPrefix(raw, `"`)
raw = strings.TrimSuffix(raw, `"`)
return raw
}
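Illustrative behaviour of the two template helpers, both built on strconv.Quote:

// escape(`say "hi"`)      returns `"say \"hi\""` - the value as a quoted, JSON-safe string literal
// pruneEscape(`say "hi"`) returns `say \"hi\"`   - the same, with the surrounding double quotes stripped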

View File

@ -17,7 +17,7 @@ limitations under the License.
package logger
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"time"
"github.com/loggie-io/loggie/pkg/core/log"
@ -88,6 +88,8 @@ func newLogger() *logger {
}
func Run(config Config) {
lg.config = config
if !config.Enabled {
return
}
@ -151,6 +153,10 @@ func (l *logger) clean() {
}
func Export(topic string, data []byte) {
if !lg.config.Enabled {
return
}
e := &Event{
Topic: topic,
Data: data,

View File

@ -17,7 +17,7 @@ limitations under the License.
package filesource
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"os"
"strings"
"time"

View File

@ -17,7 +17,7 @@ limitations under the License.
package filewatcher
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"strings"
"time"

View File

@ -94,7 +94,7 @@ func (l *Listener) exportPrometheus() {
metric := promeExporter.ExportedMetrics{
{
Desc: prometheus.NewDesc(
prometheus.BuildFQName(promeExporter.Loggie, eventbus.InfoTopic, "stat"),
prometheus.BuildFQName(promeExporter.Loggie, eventbus.InfoTopic, "status"),
"Loggie info",
nil, labels,
),

View File

@ -20,7 +20,9 @@ import (
"time"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/event"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/logalert"
"github.com/loggie-io/loggie/pkg/eventbus"
"github.com/loggie-io/loggie/pkg/eventbus/export/alertmanager"
)
@ -28,36 +30,25 @@ import (
const name = "logAlert"
func init() {
eventbus.Registry(name, makeListener, eventbus.WithTopic(eventbus.LogAlertTopic), eventbus.WithTopic(eventbus.AlertTempTopic))
eventbus.Registry(name, makeListener, eventbus.WithTopic(eventbus.LogAlertTopic),
eventbus.WithTopic(eventbus.ErrorTopic), eventbus.WithTopic(eventbus.NoDataTopic))
}
func makeListener() eventbus.Listener {
l := &Listener{
config: &Config{},
config: &logalert.Config{},
done: make(chan struct{}),
}
return l
}
type Config struct {
Addr []string `yaml:"addr,omitempty" validate:"required"`
BufferSize int `yaml:"bufferSize,omitempty" default:"100"`
BatchTimeout time.Duration `yaml:"batchTimeout,omitempty" default:"10s"`
BatchSize int `yaml:"batchSize,omitempty" default:"10"`
Template *string `yaml:"template,omitempty"`
Timeout int `yaml:"timeout,omitempty" default:"30"`
Headers map[string]string `yaml:"headers,omitempty"`
Method string `yaml:"method,omitempty"`
LineLimit int `yaml:"lineLimit,omitempty" default:"10"`
}
type Listener struct {
config *Config
config *logalert.Config
done chan struct{}
bufferChan chan *eventbus.Event
SendBatch []*eventbus.Event
SendBatch []*api.Event
alertCli *alertmanager.AlertManager
}
@ -76,9 +67,13 @@ func (l *Listener) Config() interface{} {
func (l *Listener) Start() error {
l.bufferChan = make(chan *eventbus.Event, l.config.BufferSize)
l.SendBatch = make([]*eventbus.Event, 0)
l.SendBatch = make([]*api.Event, 0)
l.alertCli = alertmanager.NewAlertManager(l.config.Addr, l.config.Timeout, l.config.LineLimit, l.config.Template, l.config.Headers, l.config.Method)
cli, err := alertmanager.NewAlertManager(l.config)
if err != nil {
return err
}
l.alertCli = cli
log.Info("starting logAlert listener")
go l.run()
@ -111,17 +106,68 @@ func (l *Listener) run() {
}
}
func (l *Listener) process(event *eventbus.Event) {
log.Debug("process event %s", event)
if event.Topic == eventbus.AlertTempTopic {
s, ok := event.Data.(string)
if ok {
l.alertCli.UpdateTemp(s)
}
func (l *Listener) process(e *eventbus.Event) {
if e.Topic == eventbus.ErrorTopic {
l.processErrorTopic(e)
} else if e.Topic == eventbus.NoDataTopic {
l.processNoDataTopic(e)
} else {
l.processLogAlertTopic(e)
}
}
func (l *Listener) processErrorTopic(e *eventbus.Event) {
if !l.config.SendLoggieError {
return
}
l.SendBatch = append(l.SendBatch, event)
errorData, ok := e.Data.(eventbus.ErrorMetricData)
if !ok {
log.Warn("fail to convert loggie error to event, ignore...")
return
}
apiEvent := event.ErrorToEvent(errorData.ErrorMsg)
if l.config.SendLoggieErrorAtOnce {
l.alertCli.SendAlert([]*api.Event{apiEvent}, true)
return
}
l.sendToBatch(apiEvent)
}
func (l *Listener) processLogAlertTopic(e *eventbus.Event) {
apiEvent, ok := e.Data.(*api.Event)
if !ok {
log.Warn("fail to convert data to event, ignore...")
return
}
if (*apiEvent).Header()[event.ReasonKey] == event.NoDataKey && l.config.SendNoDataAlertAtOnce {
l.alertCli.SendAlert([]*api.Event{apiEvent}, true)
return
}
l.sendToBatch(apiEvent)
}
func (l *Listener) processNoDataTopic(e *eventbus.Event) {
apiEvent, ok := e.Data.(*api.Event)
if !ok {
log.Warn("fail to convert data to event, ignore...")
return
}
if l.config.SendNoDataAlertAtOnce {
l.alertCli.SendAlert([]*api.Event{apiEvent}, true)
return
}
l.sendToBatch(apiEvent)
}
func (l *Listener) sendToBatch(e *api.Event) {
l.SendBatch = append(l.SendBatch, e)
if len(l.SendBatch) >= l.config.BatchSize {
l.flush()
@ -133,10 +179,9 @@ func (l *Listener) flush() {
return
}
events := make([]*eventbus.Event, len(l.SendBatch))
events := make([]*api.Event, len(l.SendBatch))
copy(events, l.SendBatch)
l.alertCli.SendAlert(events)
l.alertCli.SendAlert(events, l.config.SendLogAlertAtOnce)
l.SendBatch = l.SendBatch[:0]
}

View File

@ -0,0 +1,35 @@
# example configuration for the logAlert monitor listener (used together with the logAlert interceptor)
loggie:
monitor:
listeners:
logAlert:
addr: [ "http://127.0.0.1:8080/loggie" ]
bufferSize: 100
batchTimeout: 10s
batchSize: 1
lineLimit: 10
template: |
{
"alerts":
[
{{- $first := true -}}
{{- range .Events -}}
{{- if $first}}{{$first = false}}{{else}},{{end}}
{
"labels": {
"logconfig": "{{.logconfig}}",
"namespace": "{{.namespace}}"
"nodename": "{{.nodename}}",
"podname": "{{.pod_name}}",
"file": "{{.state.filename}}"
},
"annotations": {
"reason": {{escape ._meta.reason}},
"message": {{range .body}}{{escape .}}{{end}}
},
"startsAt": "{{._meta.timestamp}}",
"endsAt": "{{._meta.timestamp}}"
}
{{- end}}
]
}
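For a single matched event, the request body rendered by this template would look roughly like the following (all field values here are hypothetical):

{
  "alerts": [
    {
      "labels": {
        "logconfig": "tomcat-logs",
        "namespace": "default",
        "nodename": "node-1",
        "podname": "tomcat-7d4c9d6b8-x2k4p",
        "file": "/var/log/catalina.out"
      },
      "annotations": {
        "reason": "matched alert rule",
        "message": "java.lang.NullPointerException at com.example.Demo"
      },
      "startsAt": "2024-01-01T00:00:00Z",
      "endsAt": "2024-01-01T00:00:00Z"
    }
  ]
}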

View File

@ -17,10 +17,10 @@ limitations under the License.
package normalize
import (
"encoding/json"
"fmt"
"github.com/loggie-io/loggie/pkg/eventbus/export/logger"
promeExporter "github.com/loggie-io/loggie/pkg/eventbus/export/prometheus"
"github.com/loggie-io/loggie/pkg/util/json"
"github.com/prometheus/client_golang/prometheus"
"time"

View File

@ -17,7 +17,7 @@ limitations under the License.
package queue
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"strings"
"time"

View File

@ -17,7 +17,7 @@ limitations under the License.
package reload
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"time"
"github.com/loggie-io/loggie/pkg/core/api"

View File

@ -17,7 +17,7 @@ limitations under the License.
package sink
import (
"encoding/json"
"github.com/loggie-io/loggie/pkg/util/json"
"strings"
"time"

View File

@ -17,8 +17,9 @@ limitations under the License.
package sys
import (
"encoding/json"
"fmt"
"github.com/dustin/go-humanize"
"github.com/loggie-io/loggie/pkg/util/json"
"os"
"strconv"
"time"
@ -48,8 +49,9 @@ func makeListener() eventbus.Listener {
}
type sysData struct {
MemoryRss uint64 `json:"memRss"`
CPUPercent float64 `json:"cpuPercent"`
MemoryRss uint64 `json:"-"`
MemoryRssHumanize string `json:"memRss"`
CPUPercent float64 `json:"cpuPercent"`
}
type Config struct {
@ -122,6 +124,7 @@ func (l *Listener) getSysStat() error {
return err
}
l.data.MemoryRss = mem.RSS
// export RSS in human-readable SI units as well, e.g. humanize.Bytes(82854982) -> "83 MB"
l.data.MemoryRssHumanize = humanize.Bytes(mem.RSS)
cpuPer, err := l.proc.Percent(1 * time.Second)
if err != nil {

View File

@ -1,5 +1,7 @@
//go:build !include_core
/*
Copyright 2021 Loggie Authors
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -28,6 +30,7 @@ import (
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/reload"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/sink"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/sys"
_ "github.com/loggie-io/loggie/pkg/interceptor/addhostmeta"
_ "github.com/loggie-io/loggie/pkg/interceptor/addk8smeta"
_ "github.com/loggie-io/loggie/pkg/interceptor/cost"
_ "github.com/loggie-io/loggie/pkg/interceptor/json_decode"
@ -55,6 +58,7 @@ import (
_ "github.com/loggie-io/loggie/pkg/sink/kafka"
_ "github.com/loggie-io/loggie/pkg/sink/loki"
_ "github.com/loggie-io/loggie/pkg/sink/pulsar"
_ "github.com/loggie-io/loggie/pkg/sink/rocketmq"
_ "github.com/loggie-io/loggie/pkg/sink/sls"
_ "github.com/loggie-io/loggie/pkg/sink/zinc"
_ "github.com/loggie-io/loggie/pkg/source/codec/json"
@ -63,6 +67,7 @@ import (
_ "github.com/loggie-io/loggie/pkg/source/elasticsearch"
_ "github.com/loggie-io/loggie/pkg/source/file"
_ "github.com/loggie-io/loggie/pkg/source/file/process"
_ "github.com/loggie-io/loggie/pkg/source/franz"
_ "github.com/loggie-io/loggie/pkg/source/grpc"
_ "github.com/loggie-io/loggie/pkg/source/kafka"
_ "github.com/loggie-io/loggie/pkg/source/kubernetes_event"

pkg/include/core.go (new file, 62 lines)
View File

@ -0,0 +1,62 @@
//go:build include_core
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package include
import (
_ "github.com/loggie-io/loggie/pkg/eventbus/export/prometheus"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/filesource"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/filewatcher"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/info"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/logalerting"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/pipeline"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/queue"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/reload"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/sink"
_ "github.com/loggie-io/loggie/pkg/eventbus/listener/sys"
_ "github.com/loggie-io/loggie/pkg/interceptor/addhostmeta"
_ "github.com/loggie-io/loggie/pkg/interceptor/addk8smeta"
_ "github.com/loggie-io/loggie/pkg/interceptor/limit"
_ "github.com/loggie-io/loggie/pkg/interceptor/logalert"
_ "github.com/loggie-io/loggie/pkg/interceptor/logalert/condition"
_ "github.com/loggie-io/loggie/pkg/interceptor/maxbytes"
_ "github.com/loggie-io/loggie/pkg/interceptor/metric"
_ "github.com/loggie-io/loggie/pkg/interceptor/retry"
_ "github.com/loggie-io/loggie/pkg/interceptor/schema"
_ "github.com/loggie-io/loggie/pkg/interceptor/transformer"
_ "github.com/loggie-io/loggie/pkg/interceptor/transformer/action"
_ "github.com/loggie-io/loggie/pkg/interceptor/transformer/condition"
_ "github.com/loggie-io/loggie/pkg/queue/channel"
_ "github.com/loggie-io/loggie/pkg/queue/memory"
_ "github.com/loggie-io/loggie/pkg/sink/alertwebhook"
_ "github.com/loggie-io/loggie/pkg/sink/codec/json"
_ "github.com/loggie-io/loggie/pkg/sink/codec/raw"
_ "github.com/loggie-io/loggie/pkg/sink/dev"
_ "github.com/loggie-io/loggie/pkg/sink/elasticsearch"
_ "github.com/loggie-io/loggie/pkg/sink/file"
_ "github.com/loggie-io/loggie/pkg/sink/franz"
_ "github.com/loggie-io/loggie/pkg/sink/kafka"
_ "github.com/loggie-io/loggie/pkg/source/codec/json"
_ "github.com/loggie-io/loggie/pkg/source/codec/regex"
_ "github.com/loggie-io/loggie/pkg/source/dev"
_ "github.com/loggie-io/loggie/pkg/source/elasticsearch"
_ "github.com/loggie-io/loggie/pkg/source/file"
_ "github.com/loggie-io/loggie/pkg/source/file/process"
_ "github.com/loggie-io/loggie/pkg/source/franz"
_ "github.com/loggie-io/loggie/pkg/source/kafka"
)

View File

@ -0,0 +1,26 @@
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package addhostmeta
import "github.com/loggie-io/loggie/pkg/core/interceptor"
type Config struct {
interceptor.ExtensionConfig `yaml:",inline"`
FieldsName string `yaml:"fieldsName" default:"host"`
AddFields map[string]string `yaml:"addFields,omitempty"`
}

View File

@ -0,0 +1,152 @@
/*
Copyright 2023 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package addhostmeta
import (
"fmt"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/source"
"github.com/loggie-io/loggie/pkg/pipeline"
netutils "github.com/loggie-io/loggie/pkg/util/net"
"github.com/pkg/errors"
"github.com/shirou/gopsutil/v3/host"
)
const (
Type = "addHostMeta"
)
func init() {
pipeline.Register(api.INTERCEPTOR, Type, makeInterceptor)
}
func makeInterceptor(info pipeline.Info) api.Component {
return &Interceptor{
config: &Config{},
metadata: make(map[string]interface{}),
}
}
type Interceptor struct {
config *Config
host *host.InfoStat
IPv4s []string
metadata map[string]interface{}
}
func (icp *Interceptor) Config() interface{} {
return icp.config
}
func (icp *Interceptor) Category() api.Category {
return api.INTERCEPTOR
}
func (icp *Interceptor) Type() api.Type {
return Type
}
func (icp *Interceptor) String() string {
return fmt.Sprintf("%s/%s", icp.Category(), icp.Type())
}
func (icp *Interceptor) Init(context api.Context) error {
return nil
}
func (icp *Interceptor) Start() error {
info, err := host.Info()
if err != nil {
return errors.WithMessagef(err, "get host info")
}
icp.host = info
ips, err := netutils.GetHostIPv4()
if err != nil {
return err
}
icp.IPv4s = ips
icp.metadata = icp.addFields()
return nil
}
func (icp *Interceptor) Stop() {
}
func (icp *Interceptor) Intercept(invoker source.Invoker, invocation source.Invocation) api.Result {
event := invocation.Event
header := event.Header()
metaClone := make(map[string]interface{})
for k, v := range icp.metadata {
metaClone[k] = v
}
header[icp.config.FieldsName] = metaClone
return invoker.Invoke(invocation)
}
func (icp *Interceptor) Order() int {
return icp.config.Order
}
func (icp *Interceptor) BelongTo() (componentTypes []string) {
return icp.config.BelongTo
}
func (icp *Interceptor) IgnoreRetry() bool {
return true
}
func (icp *Interceptor) addFields() map[string]interface{} {
result := make(map[string]interface{})
for k, v := range icp.config.AddFields {
result[k] = icp.metadatas(v)
}
return result
}
func (icp *Interceptor) metadatas(fields string) interface{} {
switch fields {
case "${hostname}":
return icp.host.Hostname
case "${os}":
return icp.host.OS
case "${platform}":
return icp.host.Platform
case "${platformFamily}":
return icp.host.PlatformFamily
case "${platformVersion}":
return icp.host.PlatformVersion
case "${kernelVersion}":
return icp.host.KernelVersion
case "${kernelArch}":
return icp.host.KernelArch
case "${ip}":
return icp.IPv4s
}
return ""
}

View File

@ -0,0 +1,9 @@
interceptors:
- type: addHostMeta
addFields:
hostname: "${hostname}"
ip: "${ip}"
os: "${os}"
platform: "${platform}"
kernelVersion: "${kernelVersion}"
kernel_arch: "${kernelArch}"
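
With this configuration, each event header gains a map under the default fieldsName "host"; rendered as YAML it would look roughly like the following (values are hypothetical):

host:
  hostname: node-1
  ip:
    - 10.0.3.15
  os: linux
  platform: centos
  kernelVersion: 5.10.0-1.el8
  kernel_arch: x86_64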

View File

@ -18,19 +18,21 @@ package addk8smeta
import (
"fmt"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/helper"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/global"
"github.com/loggie-io/loggie/pkg/core/log"
"github.com/loggie-io/loggie/pkg/core/source"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/external"
"github.com/loggie-io/loggie/pkg/discovery/kubernetes/helper"
"github.com/loggie-io/loggie/pkg/pipeline"
"github.com/loggie-io/loggie/pkg/source/file"
"github.com/loggie-io/loggie/pkg/util/pattern"
"github.com/loggie-io/loggie/pkg/util/persistence"
"github.com/loggie-io/loggie/pkg/util/runtime"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
const Type = "addK8sMeta"
@ -139,7 +141,7 @@ func valueOfPatternFields(event api.Event, patternFields string) (string, error)
return "", errors.New("get systemState from meta is null")
}
stat, ok := state.(*file.State)
stat, ok := state.(*persistence.State)
if !ok {
return "", errors.New("assert file.State failed")
}

View File

@ -18,9 +18,9 @@ package json_decode
import (
"fmt"
"github.com/loggie-io/loggie/pkg/util/json"
"strings"
jsoniter "github.com/json-iterator/go"
"github.com/loggie-io/loggie/pkg/core/api"
"github.com/loggie-io/loggie/pkg/core/event"
"github.com/loggie-io/loggie/pkg/core/log"
@ -35,10 +35,6 @@ func init() {
pipeline.Register(api.INTERCEPTOR, Type, makeInterceptor)
}
var (
json = jsoniter.ConfigFastest
)
func makeInterceptor(info pipeline.Info) api.Component {
return &Interceptor{
config: &Config{},

View File

@ -0,0 +1,50 @@
/*
Copyright 2021 Loggie Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package limit
import (
"runtime"
"time"
)
type clockDescriptor struct {
clock Clock
}
func NewHighPrecisionClockDescriptor(clock Clock) Clock {
return &clockDescriptor{clock: clock}
}
func (c *clockDescriptor) Sleep(d time.Duration) {
// Sleeps shorter than 10ms busy-wait (spin) instead of delegating to the underlying clock, to improve precision
if d < time.Millisecond*10 {
start := time.Now()
needYield := d >= time.Millisecond*5
for time.Since(start) < d {
if needYield {
runtime.Gosched()
}
// do nothing
}
return
}
c.clock.Sleep(d)
}
func (c *clockDescriptor) Now() time.Time {
return c.clock.Now()
}
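A minimal sketch of how the wrapper is used (sysClock below is a hypothetical Clock implementation backed by the standard library; it is not part of the repository):

type sysClock struct{}

func (sysClock) Sleep(d time.Duration) { time.Sleep(d) }
func (sysClock) Now() time.Time        { return time.Now() }

func exampleSleep() {
	clk := NewHighPrecisionClockDescriptor(sysClock{})
	clk.Sleep(2 * time.Millisecond)  // under 5ms: spins without yielding
	clk.Sleep(7 * time.Millisecond)  // between 5ms and 10ms: spins, yielding via runtime.Gosched
	clk.Sleep(20 * time.Millisecond) // 10ms or more: delegates to the wrapped clock's Sleep
}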

View File

@ -20,5 +20,6 @@ import "github.com/loggie-io/loggie/pkg/core/interceptor"
type Config struct {
interceptor.ExtensionConfig `yaml:",inline"`
Qps int `yaml:"qps,omitempty" default:"2048" validate:"gte=0"`
Qps int `yaml:"qps,omitempty" default:"2048" validate:"gte=0"`
HighPrecision bool `yaml:"highPrecision,omitempty" default:"false"`
}

View File

@ -68,7 +68,12 @@ func (i *Interceptor) Init(context api.Context) error {
func (i *Interceptor) Start() error {
log.Info("rate limit: qps->%d", i.qps)
i.l = newUnsafeBased(i.qps, WithoutLock())
ops := make([]Option, 0)
ops = append(ops, WithoutLock())
if i.config.HighPrecision {
ops = append(ops, WithHighPrecision())
}
i.l = newUnsafeBased(i.qps, ops...)
return nil
}

View File

@ -40,10 +40,11 @@ type Clock interface {
// config configures a limiter.
type config struct {
clock Clock
maxSlack time.Duration
per time.Duration
lock bool
clock Clock
maxSlack time.Duration
per time.Duration
lock bool
highPrecision bool
}
// Option configures a Limiter.
@ -76,6 +77,18 @@ func WithoutLock() Option {
return unLockOption{}
}
type highPrecisionOption struct {
}
func (hpo highPrecisionOption) apply(c *config) {
c.highPrecision = true
c.clock = NewHighPrecisionClockDescriptor(c.clock)
}
func WithHighPrecision() Option {
return highPrecisionOption{}
}
type slackOption int
func (o slackOption) apply(c *config) {

View File

@ -35,9 +35,9 @@ import (
)
const Type = "logAlert"
const NoDataKey = "NoDataAlert"
const addition = "_additions"
const reasonKey = "reason"
const NoDataKey = event.NoDataKey
const addition = event.Addition
const reasonKey = event.ReasonKey
func init() {
pipeline.Register(api.INTERCEPTOR, Type, makeInterceptor)
@ -146,9 +146,6 @@ func (i *Interceptor) Start() error {
if i.nodataMode {
i.runTicker()
}
if i.config.Template != nil {
eventbus.PublishOrDrop(eventbus.AlertTempTopic, *i.config.Template)
}
return nil
}
@ -190,9 +187,7 @@ func (i *Interceptor) runTicker() {
}
e.Fill(meta, header, e.Body())
eventbus.PublishOrDrop(eventbus.LogAlertTopic, &e)
eventbus.PublishOrDrop(eventbus.WebhookTopic, &e)
eventbus.PublishOrDrop(eventbus.NoDataTopic, &e)
case <-i.eventFlag:
i.ticker.Reset(duration)
@ -227,13 +222,19 @@ func (i *Interceptor) Intercept(invoker source.Invoker, invocation source.Invoca
}
log.Debug("logAlert matched: %s, %s", message, reason)
// do fire alert
ev.Header()[reasonKey] = reason
if len(i.config.Additions) > 0 {
ev.Header()[addition] = i.config.Additions
}
e := ev.DeepCopy()
// do fire alert
meta := e.Meta()
if meta == nil {
meta = event.NewDefaultMeta()
}
meta.Set(reasonKey, reason)
if len(i.config.Additions) > 0 {
meta.Set(addition, i.config.Additions)
}
e.Fill(meta, e.Header(), e.Body())
eventbus.PublishOrDrop(eventbus.LogAlertTopic, &e)
return invoker.Invoke(invocation)

View File

@ -37,13 +37,11 @@ const (
type Config struct {
interceptor.ExtensionConfig `yaml:",inline"`
Matcher Matcher `yaml:"matcher,omitempty"`
Labels Labels `yaml:"labels,omitempty"`
Additions map[string]string `yaml:"additions,omitempty"`
Ignore []string `yaml:"ignore,omitempty"`
Advanced Advanced `yaml:"advanced,omitempty"`
Template *string `yaml:"template,omitempty"`
SendOnlyMatched bool `yaml:"sendOnlyMatched,omitempty"`
Matcher Matcher `yaml:"matcher,omitempty"`
Additions map[string]interface{} `yaml:"additions,omitempty"`
Ignore []string `yaml:"ignore,omitempty"`
Advanced Advanced `yaml:"advanced,omitempty"`
SendOnlyMatched bool `yaml:"sendOnlyMatched,omitempty"`
}
type Matcher struct {
@ -52,10 +50,6 @@ type Matcher struct {
TargetHeader string `yaml:"target,omitempty"`
}
type Labels struct {
FromHeader []string `yaml:"from,omitempty"`
}
type Advanced struct {
Enable bool `yaml:"enabled"`
Mode []string `yaml:"mode,omitempty"`

Some files were not shown because too many files have changed in this diff.