Compare commits

...

156 Commits

Author SHA1 Message Date
ChrisLiu 293619796d
bugfix(Kubernetes-HostPort): allow pod update when node not found (#260)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-11 20:37:35 +08:00
ChrisLiu cac4ba793e
enhance: network trigger time adapts to different time zones (#259)
* enhance: network trigger time adapts to different time zones

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* update config of kruise-game manager for using cert-manager

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-10 16:39:53 +08:00
Kagaya 52d9a14c13
feat: add enable-cert-generation option (#245)
* add enable-cert-generation option

Signed-off-by: Kagaya <kagaya85@outlook.com>

* update webhook manifests config

Signed-off-by: Kagaya <kagaya85@outlook.com>

* e2e: install cert manager

Signed-off-by: Kagaya <kagaya85@outlook.com>

---------

Signed-off-by: Kagaya <kagaya85@outlook.com>
2025-07-08 21:33:54 +08:00
ChrisLiu f6d679cc75
bugfix: consider preDelete pods when scaling (#257)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-08 14:09:36 +08:00
Xuetao Song 90c0b68350
feat: support EnableMultiIngress for vke (#251) 2025-07-07 17:22:38 +08:00
ChrisLiu b82d7e34f7
feat: add PreDeleteReplicas for GameServerSet status (#254)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 22:13:17 +08:00
ChrisLiu 1a1c256460
fix the meaning of CURRENT printcolumn when using kubectl (#253)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 21:51:54 +08:00
ChrisLiu 19d8ce0b2c
bugfix: gs state should be changed from PreDelete to Deleting (#252)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 21:19:23 +08:00
ChrisLiu 5095740248
AlibabaCloud-AutoNLBs support multi intranet type eip (#248)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-30 10:55:28 +08:00
ChrisLiu 7dfe07097b
feat: support new plugin named AlibabaCloud-AutoNLBs (#246)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-26 17:27:03 +08:00
ChrisLiu 0ff70733c6
feat: support user-defined number of controller workers (#247)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-23 19:17:37 +08:00
roc fbcb3953c0
Add PersistentVolumeClaimRetentionPolicy support to GameServerSet (#243)
Signed-off-by: roc <roc@imroc.cc>
2025-06-20 18:09:20 +08:00
ChrisLiu 94a15fdb38
feat: add annotation of state-last-changed-time (#238)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-16 23:48:09 +08:00
roc a6ed1d95c4
feat(Kubernetes-HostPort): support TCPUDP protocol (#244)
Signed-off-by: roc <roc@imroc.cc>
2025-06-16 23:47:42 +08:00
Xuetao Song 9a04f87f5e
feat: volcengine-clb plugin support EnableClbScatter 2025-06-16 14:22:35 +08:00
Xuetao Song f175e0d73c
fix duplicated port for Volcengine-CLB plugin (#240) 2025-06-13 14:04:32 +08:00
roc 1414654f46
Upgrade TencentCloud-CLB plugin (#239)
* upgrade tencentcloud clb plugin

* deprecate DedicatedCLBListener CRD
* use CLBPortPool's pod annotation

Signed-off-by: roc <roc@imroc.cc>

* add comments

Signed-off-by: roc <roc@imroc.cc>

---------

Signed-off-by: roc <roc@imroc.cc>
2025-06-10 21:34:58 +08:00
lizhipeng629 7136738627
fix old svc remain after pod recreate when using Volcengine-CLB (#233)
feat(*): check pod uid in svc

fix: add pod create time in svc

Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2025-06-06 17:00:09 +08:00
ChrisLiu 6dbab6be15
fix go-lint err (#237)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 13:32:19 +08:00
ChrisLiu 1ca95a5c36
cancel the limit of Ali NLB port range (#235)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 12:08:42 +08:00
ChrisLiu 40c7bba35e
enhance: AlibabaCloud-SLB-SharedPort plugin support managed services (#224)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 11:55:53 +08:00
ChrisLiu 51a82bd107
enhance: Kubernetes-HostPort support container port same as host (#230)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 11:49:42 +08:00
ChrisLiu a64b21eab5
enhance: activity of externalscaler relate to minAvailable (#228)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 18:08:11 +08:00
ChrisLiu 4e6ae2e2d0
fix the external scaler error when minAvailable is 0 (#227)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 17:37:40 +08:00
ChrisLiu f2044b8f1a
fix: update ppmHash when ServiceQualities changed (#226)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 14:04:51 +08:00
ChrisLiu 9c4ce841c3
fix: support auto-scaling when replicas is 0 (#225)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 11:51:35 +08:00
Kagaya 5180743458
feat: support minAvailable percentage type (#222)
Signed-off-by: Kagaya <kagaya85@outlook.com>
2025-05-12 20:04:27 +08:00
陈欣宇 7c51b24e6e
feat(metrics): improve observability for GameServersOpsStateCount metrics (#221)
* feat(metrics): improve observability

add gssName and namespace labels for the okg_gameservers_opsState_count metric to improve observability

* fix: remove gssName Compare

---------

Co-authored-by: 陈欣宇 <chenxinyu@YJ-IT-02836.local>
2025-05-06 20:49:41 +08:00
Xuetao Song fc88742857
add doc of Volcengine-EIP (#219) 2025-04-30 17:26:11 +08:00
Xuetao Song d04f8d0a7a
feat(*): add eip provider of VKE (#218) 2025-04-27 15:33:36 +08:00
ChrisLiu 5a272eaec3
enhance: add network ready condition for AlibabaCloud-Multi-NLBs plugin (#214)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-24 15:32:54 +08:00
ChrisLiu 6d5f041afc
enhance: support svc external traffic policy for AlibabaCloud-Multi-NLBs (#216)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-24 15:32:09 +08:00
berg 3da984ff96
ServiceQualities support serverless pod (#212) 2025-04-22 20:21:32 +08:00
ChrisLiu 897e706a85
update ci workflow to ubuntu-24.04 (#215)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-21 16:22:27 +08:00
Kagaya 624d17ff11
feat: support range type for ReserveIDs (#209) 2025-04-21 15:31:16 +08:00
ChrisLiu d038737580
feat: support multi groups for nlbs (#213)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-14 16:40:38 +08:00
Kagaya f2d02a6ab2
deps: update to k8s 0.30.10 (#210) 2025-04-14 15:04:10 +08:00
ChrisLiu 0bfc500fec
enhance: create service of ali-multi-nlbs in parallel (#207)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-03-24 10:51:48 +08:00
ChrisLiu 6133bab818
update workflow ci go cache to v4 (#206)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-03-12 20:33:59 +08:00
LHB6540 a2a0864f27
Add index-offset-scheduler (#205)
Co-authored-by: 李海彬 <lihaibin@goatgames.com>
2025-03-12 18:22:50 +08:00
ChrisLiu 0b3575947b
Increase the upper limit of ali-nlb ports (#204)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-02-27 17:44:08 +08:00
Gao PeiLiang aaa63740a4
Add hwcloud provider and elb plugin (#201)
* add hwcloud ELB Network Plugin

* add hwcloud cloud provider register

* fix register error

* fix error

* add log

* fix hwcloud provider register error

* fix health check error

* only support using an existing ELB

* add docs

* add hwcloud elb config

* fix docs
2025-02-12 17:46:54 +08:00
Durgin 2ea11c1cb3
feat: add annotation of opsState-last-changed-time (#200)
- add annotation `game.kruise.io/opsState-last-changed-time`

#199
2025-02-08 18:01:06 +08:00
Gao PeiLiang 8079c29c22
alibabacloud slb supports mapping the same TCP and UDP port, e.g. 8000/TCPUDP (#197)
* create svc use port + protocol as name to fix when use same port but different protocol

* alibabacloud slb support TCP/UDP

* add log info

* fix alibabacloud slb init same port svc

* add doc

* clear log print, avoid too many info
2025-02-06 16:58:17 +08:00
Gao PeiLiang f0c82f1b1f
add support svc external traffic policy for alibabacloud slb (#194)
* add test log

* add support svc external traffic policy for alibabacloud slb

* fix error

* add e2e test timeout

* add aliyun slb param ExternalTrafficPolicyType doc
2025-01-16 10:44:24 +08:00
roc 8c229c1191
add rbac role for tencentcloud provider (#193)
Signed-off-by: roc <roc@imroc.cc>
2025-01-08 19:07:39 +08:00
roc 65d230658e
add tencentcloud in config.yaml (#192)
Signed-off-by: roc <roc@imroc.cc>
2025-01-08 15:51:18 +08:00
ChrisLiu 41c76a0d7a
Update CHANGELOG.md for v0.10.0 2025-01-08 15:21:50 +08:00
ChrisLiu be2b9065d8
feat: Add new networkType named AlibabaCloud-Multi-NLBs (#187)
* feat: Add new networkType named AlibabaCloud-Multi-NLBs

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* support same port of tcp&udp

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-01-07 17:36:44 +08:00
ChrisLiu ea98123211
feat: add maxAvailable param for external scaler (#190)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-01-07 17:36:10 +08:00
roc 7976e9002e
enhance: support network isolation for tencentcloud clb plugin (#183)
Signed-off-by: roc <roc@imroc.cc>
2024-11-12 23:04:23 +08:00
lizhipeng629 b841f0d313
fix: add block port in volc engine (#182)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-11-12 11:11:31 +08:00
ChrisLiu 51aad5b0a0
Semantic fixes for network port ranges (#181)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-06 19:59:25 +08:00
hhr 6bba287858
feat: add jdcloud provider and the nlb&eip plugin (#180) 2024-11-05 17:11:36 +08:00
ChrisLiu 468b2c77fb
enhance: add block ports config for AlibabaCloud LB network models (#175)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:11:00 +08:00
ChrisLiu c114781c7e
reconstruct the logic of GameServers scaling (#171)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:10:50 +08:00
ChrisLiu 41d902a8f2
update kruise-api to v1.7.1 (#173)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:10:37 +08:00
roc c680b411b7
feat(*): add tencent cloud provider and clb plugin (#179)
* feat(*): add tencent cloud provider and clb plugin

Signed-off-by: rockerchen <rockerchen@tencent.com>
2024-10-29 14:13:11 +08:00
ChrisLiu e121bcc109
add user logo of jjworld (#178)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-10-22 20:20:45 +08:00
ChrisLiu ecce453d7f
Update CHANGELOG.md for v0.9.0 2024-08-20 18:06:16 +08:00
ChrisLiu a1d0065e0c
enhance: service quality support patch labels & annotations (#159)
* enhance: service quality support patch labels & annotations

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* remove ci markdownlint check

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:45:03 +08:00
ChrisLiu 92475c1451
enhance: labels from gs can be synced to pod (#160)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:44:48 +08:00
ChrisLiu 26fcaf7889
feat: add lifecycle field for gameserverset (#162)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:44:26 +08:00
ChrisLiu 14e281dfa9
fix old svc remain after pod recreate when using ali-lb models (#165)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:43:50 +08:00
ChrisLiu ba65115f08
add users (#167)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-08 11:33:03 +08:00
clarklee b3991f24d5
fix: AmazonWebServices-NLB controller parameter modification and doc update (#164)
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-07-19 19:49:41 +08:00
ChrisLiu f9467003b5
add user yongshi (#158)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-07-03 14:23:46 +08:00
clarklee92 08ce8f6fcd Upgrade Golang to version 1.21
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
clarklee92 ad0744df7a Fix go-lint error
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
clarklee92 782a250dd7 feat: add AmazonWebServices-NLB network plugin
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
ChrisLiu 3c8ddbdd4e
Fix the allocation error when Ali load balancers reach the port number limit (#149)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:19:32 +08:00
ChrisLiu adff8bdd54
Enhance: support custom health checks for AlibabaCloud-SLB (#154)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:17:39 +08:00
ChrisLiu d911eb3cd8
enhance: Kubernetes-NodePort supports network disabled (#156)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:13:58 +08:00
ChrisLiu 7d8e169d0c
enhance: check networkType when create GameServerSet (#157)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:12:40 +08:00
ChrisLiu b6fdc2353e
Enhance: support custom health checks for AlibabaCloud-NLB (#147)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-05-22 19:20:35 +08:00
ChrisLiu cafaab3216
Add users of OKG (#145)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-05-10 12:43:29 +08:00
李志朋 88dc6f5f97 add changelog in v0.8.0 2024-04-26 19:01:59 +08:00
lizhipeng629 eb547228b6
add v0.8.0 changelog (#143)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-26 17:21:19 +08:00
lizhipeng629 ffccbc5023
enhance: add AllocateLoadBalancerNodePorts in clb plugin (#141)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-26 11:24:00 +08:00
ChrisLiu d67e058e0f
feat: sync annotations from gs to pod (#140)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:23:13 +08:00
ChrisLiu d547f61323
feat: add Kubernetes-NodePort network plugin (#138)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:45 +08:00
ChrisLiu 56d9071c01
enhance: Kubernetes-HostPort plugin support to wait for network ready (#136)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:29 +08:00
ChrisLiu 4829414955
feat: add AlibabaCloud-NLB network plugin (#135)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:02 +08:00
lizhipeng629 53b69204e3
enhance: add annotations config in clb plugin (#137)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-18 11:15:04 +08:00
ChrisLiu 69babe66fe
replace patch asts with update (#131)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-03-27 10:25:07 +08:00
ChrisLiu 2dd97c2567
FailurePolicy of PodMutatingWebhook turn to Fail (#129)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-02-22 15:48:20 +08:00
lizhipeng629 9c203d01c9
feat(*): add volcengine provider and clb plugin (#127)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-01-29 16:36:23 +08:00
ChrisLiu 2b6ce6fcfc
fix: avoid patching gameserver continuously (#124)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-01-18 17:00:31 +08:00
Whislly 02c00091d2
add "apiVersion: game.kruise.io/v1alpha1" (#122) 2024-01-03 15:08:40 +08:00
ChrisLiu 4eab53302e
Update CHANGELOG.md for v0.7.0 2023-12-29 11:42:50 +08:00
ChrisLiu b399cd104b
bugfix: patch pod image fail when gs image is nil (#121)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-28 19:49:54 +08:00
ChrisLiu 5a15888804
feat: add ReclaimPolicy for GameServer (#115)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-28 10:40:02 +08:00
ChrisLiu ecf81d9259
feat: differentiated updates to GameServers (#120)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-27 15:28:10 +08:00
ChrisLiu 4013d3e597
enhance: ServiceQuality supports multiple results returned by a single probe (#117)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-27 15:22:07 +08:00
ChrisLiu 1181b22e4e
add users wanglong&lbdj (#118)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-18 15:32:22 +08:00
ChrisLiu 250bd86ffb
Update CHANGELOG.md for v0.6.0 2023-10-27 11:09:04 +08:00
ChrisLiu 4ea376f9bc
feat: add GameServerConditions (#95)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:11:32 +08:00
ChrisLiu 58cd242780
feat: add network plugin AlibabaCloud-NLB-SharedPort & support AllowNotReadyContainers (#98)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:09:03 +08:00
ChrisLiu 758eb33911
hostport network not ready when no ports exist (#100)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:07:14 +08:00
hongdou 6513d8ef2a
add qps and burst settings (#108) 2023-10-26 11:06:06 +08:00
ChrisLiu bd41239718
update go version to 1.19 & e2e dependency version (#104)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-17 11:20:59 +08:00
ChrisLiu ddcfeb759f
add OKG users: wuduan & juren (#99)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-03 15:00:46 +08:00
ChrisLiu 4d003d6ee9
add gameserverset controller unit tests (#97)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-09-13 15:38:09 +08:00
ChrisLiu 72ee4ca111
feat: support auto scaling-up based on minAvailable (#88)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-09-06 20:38:33 +08:00
ChrisLiu 4ec1a65e3a
fix AlibabaCloud-NATGW network ready condition when multi-ports (#94)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-30 10:48:41 +08:00
ChrisLiu 3d523f9611
Update CHANGELOG.md for v0.5.0 2023-08-11 11:16:36 +08:00
ChrisLiu 649ab11773
Feat/svc name (#92)
* feat: add new field ServiceName for GameServerSet

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* fix log for kill gs

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-09 10:00:27 +08:00
ChrisLiu 3f5a1ea13f
feat: AlibabaCloud-EIP support to define EIP name & description (#91)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-04 15:25:46 +08:00
ChrisLiu acc12c12f7
feat: add new opsState type named Kill (#90)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-04 15:25:33 +08:00
ChrisLiu 1b05801484
feat: add new opsState named Allocated (#89)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-28 14:49:20 +08:00
ChrisLiu 6f89f6e637
add network plugin AlibabaCloud-EIP (#86)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-24 14:49:34 +08:00
ChrisLiu defcb15f02
refactor NetworkPortRange into a pointer (#87)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-18 16:28:16 +08:00
ChrisLiu dcc3d8260c
support to sync gs metadata from gsTemplate (#85)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-17 17:02:33 +08:00
ChrisLiu c4e4197b12
correct gs network status when pod network status is nil (#80)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:25:33 +08:00
ChrisLiu 0e6708dce9
enhance pod scaling efficiency (#81)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:24:39 +08:00
ChrisLiu 2be1717984
improve hostport cache to record allocated ports of pod (#82)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:20:20 +08:00
ChrisLiu d5d58eeb56
Update CHANGELOG.md for v0.4.0 2023-07-07 11:25:49 +08:00
ChrisLiu 06762612df
feat: AlibabaCloud-SLB support multi-slbIds (#69)
* feat: AlibabaCloud-SLB support multi-slbIds

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* fix: AlibabaCloud-SLB allocate and deallocate imprecisely

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-05 16:21:11 +08:00
ChrisLiu 6c6ad6b6cc
assume that slices without elements are equal (#74)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-05 16:20:47 +08:00
ChrisLiu 65558bd646
add reserveIds when init asts (#73)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-05 11:35:11 +08:00
ChrisLiu 2ccf2260f4
enhance: decouple triggering network update from Gs Ready (#71)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-04 11:53:31 +08:00
ChrisLiu f2b81a993f
add detailed explanation of GameServer hot-update (#70)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-29 16:06:33 +08:00
ChrisLiu 056cbe351c
Autoscaler Improvement (#64)
* update docs for network
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-29 16:04:46 +08:00
ChrisLiu b9af47ba84
Fix the problem that the hostPort plugin repeatedly allocates ports when pods with the same name are created (#66)
* update docs for network

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* update doc of gs scaling

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* Fix: the hostPort plugin repeatedly deallocates ports

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-19 10:51:12 +08:00
ChrisLiu 1192d4dc62
Add user xingzhe.ai (#68)
* update docs for network

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* update doc of gs scaling

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* add user xingzhe.ai

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-17 18:56:58 +08:00
ChrisLiu a6dfabc7ee
update docs (#60)
* update docs for network
* update doc of gs scaling
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-09 19:08:39 +08:00
ChrisLiu 6d4e6cafa2
improve Ingress Network Plugin (#61)
* fix: avoid metrics controller fatal when gss status is nil
* add param Fixed for ingress network
* avoid Network NotReady too long for Ingress NetworkType
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-06-09 19:07:56 +08:00
ChrisLiu 1c7fd16eb6
Merge pull request #59 from smartwang/master
Fix the issue of being unable to scale down from 1 to 0.
2023-05-31 16:15:12 +08:00
wangying 1eba95af78 fmt code 2023-05-31 15:50:11 +08:00
smartwang 66598607c9
Fix the issue of being unable to scale down from 1 to 0. 2023-05-29 17:13:56 +08:00
ChrisLiu 543e02a413
Merge pull request #57 from chrisliu1995/feat/network_params
add params for network plugins
2023-05-26 11:10:42 +08:00
ChrisLiu 7d4aefe300
Update CHANGELOG.md 2023-05-26 10:44:25 +08:00
ChrisLiu b13b0aa2e2
Merge branch 'openkruise:master' into feat/network_params 2023-05-26 10:41:05 +08:00
ChrisLiu 1d6c6e22b2 add params for network plugins
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-05-25 22:06:41 +08:00
ChrisLiu 7a4a40b6bd
Update CHANGELOG.md 2023-05-25 16:10:42 +08:00
ChrisLiu 07a26a9b81
feat: add networkType kubernetes-ingress (#54)
* feat: add networkType kubernetes-ingress

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* feat: network plugin kubernetes-ingress support multi paths

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* feat: add param <id> for host of Kubernetes-Ingress plugin

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-05-22 16:38:45 +08:00
skkkkkkk ab6aff7c0d
new grafana json (#56)
Co-authored-by: “skkkkkkk” <sk01199367@alibaba-inc.com>
2023-05-08 09:53:29 +08:00
ChrisLiu 9e596dd72a
add users of OKG (#55)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-05-06 10:49:48 +08:00
ChrisLiu e888ec00aa
update OKG users (#53)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-27 18:41:43 +08:00
ChrisLiu 4eaed8df57
feat: add ReserveIds ScaleDownStrategyType (#52)
* feat: add ReserveIds ScaleDownStrategyType

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* add docs for CRD description

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-27 14:05:16 +08:00
ChrisLiu af327dc8a6
fix: avoid gs status sync delay when network failure (#45)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-26 16:29:14 +08:00
ChrisLiu 8e90b10bb6
feat: export prometheus metrics (#40)
* Feat: export prometheus metrics

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* code indentation optimization

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-26 16:12:50 +08:00
ChrisLiu 7c072bc48e
Feat: add external scaler (#39)
* Feat: add external scaler

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* Add docs for autoscaling

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-26 15:47:42 +08:00
ChrisLiu 1cb53dea1b
update alibabacloud API group version to v1beta1 (#41)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-19 19:53:30 +08:00
ChrisLiu ba8ce6c92d
feat: add default serviceName for asts (#51)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-19 19:50:39 +08:00
ChrisLiu f283461b6c
fix: add marginal conditions to avoid fatal errors (#49)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-11 14:52:28 +08:00
ChrisLiu 92043599cf
add print columns for GameServerSet and GameServer (#48)
* add print columns for GameServerSet and GameServer
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-04 16:16:38 +08:00
ChrisLiu 9e5bbde674
fix: gss status sync failed when template metadata is not null (#46)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-04-03 15:32:20 +08:00
ChrisLiu b1f3d943be
add Unit Tests for controllers (#44)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-03-23 14:00:37 +08:00
ChrisLiu f585041c5d
Update CHANGELOG.md 2023-02-09 21:52:18 +08:00
ChrisLiu a069e9399c
Merge pull request #38 from chrisliu1995/master
update docs: rename HostPort to Kubernetes-HostPort
2023-02-09 21:48:41 +08:00
ChrisLiu c74e66c5da update docs: rename HostPort to Kubernetes-HostPort
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-02-09 21:40:04 +08:00
ChrisLiu 6b58728b74
Feat: add cloud provider & network plugin (#16)
* Feat: add cloud provider & network plugin
2023-02-08 16:04:40 +08:00
ChrisLiu cde5c22594
update README (#35)
* update README
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-01-03 16:58:08 +08:00
ChrisLiu 55479c8c8e
add docs(EN) (#32)
* add docs(EN)
2022-12-28 10:16:35 +08:00
ChrisLiu cef830862d
update docs (#31)
* add CRD field description
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2022-12-16 10:40:24 +08:00
212 changed files with 34481 additions and 2353 deletions


@@ -10,8 +10,8 @@ on:
 env:
 # Common versions
-GO_VERSION: '1.18'
-GOLANGCI_VERSION: 'v1.47'
+GO_VERSION: '1.22'
+GOLANGCI_VERSION: 'v1.58'
 DOCKER_BUILDX_VERSION: 'v0.4.2'
 # Common users. We can't run a step 'if secrets.AWS_USR != ""' but we can run
@@ -23,7 +23,7 @@ env:
 jobs:
 golangci-lint:
-runs-on: ubuntu-18.04
+runs-on: ubuntu-24.04
 steps:
 - name: Checkout
 uses: actions/checkout@v3
@@ -34,7 +34,7 @@ jobs:
 with:
 go-version: ${{ env.GO_VERSION }}
 - name: Cache Go Dependencies
-uses: actions/cache@v2
+uses: actions/cache@v4
 with:
 path: ~/go/pkg/mod
 key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -43,27 +43,14 @@ jobs:
 run: |
 make generate
 - name: Lint golang code
-uses: golangci/golangci-lint-action@v3.2.0
+uses: golangci/golangci-lint-action@v3.5.0
 with:
 version: ${{ env.GOLANGCI_VERSION }}
 args: --verbose --timeout=10m
 skip-pkg-cache: true
-markdownlint-misspell-shellcheck:
-runs-on: ubuntu-18.04
-# this image is build from Dockerfile
-# https://github.com/pouchcontainer/pouchlinter/blob/master/Dockerfile
-container: pouchcontainer/pouchlinter:v0.1.2
-steps:
-- name: Checkout
-uses: actions/checkout@v3
-- name: Run misspell
-run: find ./* -name "*" | grep -v vendor | xargs misspell -error
-- name: Run shellcheck
-run: find ./ -name "*.sh" | grep -v vendor | xargs shellcheck
 unit-tests:
-runs-on: ubuntu-18.04
+runs-on: ubuntu-24.04
 steps:
 - uses: actions/checkout@v3
 with:
@@ -75,7 +62,7 @@ jobs:
 with:
 go-version: ${{ env.GO_VERSION }}
 - name: Cache Go Dependencies
-uses: actions/cache@v2
+uses: actions/cache@v4
 with:
 path: ~/go/pkg/mod
 key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}


@@ -1,4 +1,4 @@
-name: E2E-1.24
+name: E2E-1.26
 on:
 push:
@@ -10,16 +10,16 @@ on:
 env:
 # Common versions
-GO_VERSION: '1.18'
-KIND_ACTION_VERSION: 'v1.3.0'
-KIND_VERSION: 'v0.14.0'
-KIND_IMAGE: 'kindest/node:v1.24.2'
+GO_VERSION: '1.22'
+KIND_VERSION: 'v0.18.0'
+KIND_IMAGE: 'kindest/node:v1.26.4'
 KIND_CLUSTER_NAME: 'ci-testing'
+CERT_MANAGER_VERSION: 'v1.18.2'
 jobs:
 game-kruise:
-runs-on: ubuntu-18.04
+runs-on: ubuntu-24.04
 steps:
 - uses: actions/checkout@v3
 with:
@@ -40,6 +40,10 @@ jobs:
 export IMAGE="openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID}"
 docker build --pull --no-cache . -t $IMAGE
 kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
+- name: Install Cert-Manager
+run: |
+kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${{ env.CERT_MANAGER_VERSION }}/cert-manager.yaml
+kubectl -n cert-manager rollout status deploy/cert-manager-webhook --timeout=180s
 - name: Install Kruise
 run: |
 set -ex
@@ -47,7 +51,7 @@
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise --version 1.3.0 --set featureGates="PodProbeMarkerGate=true"
+helm install kruise openkruise/kruise --version 1.5.0
 for ((i=1;i<10;i++));
 do
 set +e

.github/workflows/e2e-1.30.yaml (new file)

@@ -0,0 +1,117 @@
name: E2E-1.30
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.22'
KIND_VERSION: 'v0.22.0'
KIND_IMAGE: 'kindest/node:v1.30.8'
KIND_CLUSTER_NAME: 'ci-testing'
CERT_MANAGER_VERSION: 'v1.18.2'
jobs:
game-kruise:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.12.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
version: ${{ env.KIND_VERSION }}
- name: Build image
run: |
export IMAGE="openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Cert-Manager
run: |
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${{ env.CERT_MANAGER_VERSION }}/cert-manager.yaml
kubectl -n cert-manager rollout status deploy/cert-manager-webhook --timeout=180s
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.3
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Game
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-game-system | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-game-system | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-game-system -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-game ready successfully"
else
echo "Timeout to wait for kruise-game ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v test/e2e
retVal=$?
# kubectl get pod -n kruise-game-system --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-game-system
restartCount=$(kubectl get pod -n kruise-game-system --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-game has not restarted"
else
kubectl get pod -n kruise-game-system --no-headers
echo "Kruise-game has restarted, abort!!!"
kubectl get pod -n kruise-game-system --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-game-system
exit 1
fi
exit $retVal
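Every readiness wait in these workflows reduces to the same pipeline: count the pods whose READY column shows `1/1` and whose name matches a pattern, then compare against the expected replica count. A minimal standalone sketch of that helper follows; the sample `kubectl get pod` output and pod names are illustrative, not taken from a real cluster:

```shell
#!/usr/bin/env sh
# Count pods whose READY column shows 1/1 and whose name matches a pattern.
# Mirrors the `kubectl get pod | grep '1/1' | grep <name> | wc -l` pipeline
# used in the wait loops above; reads `kubectl get pod` output from stdin.
count_ready() {
  grep '1/1' | grep "$1" | wc -l | tr -d ' '
}

# Illustrative sample output (hypothetical pod names)
sample='NAME                            READY   STATUS    RESTARTS   AGE
kruise-controller-manager-abc   1/1     Running   0          1m
kruise-controller-manager-def   1/1     Running   0          1m
kruise-daemon-xyz               0/1     Pending   0          1m'

printf '%s\n' "$sample" | count_ready kruise-controller-manager   # prints 2
```

The loop then retries until the count reaches the expected value (2 manager replicas for Kruise, 1 for Kruise Game) or the attempt budget runs out.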

.gitignore

@@ -23,3 +23,4 @@ testbin/*
 *.swp
 *.swo
 *~
+.vscode


@@ -1,5 +1,140 @@
# Change Log
## v0.10.0
> Change log since v0.9.0
### Features & Enhancements
- Feat: add tencent cloud provider and clb plugin. https://github.com/openkruise/kruise-game/pull/179
- Enhance: update kruise-api to v1.7.1 https://github.com/openkruise/kruise-game/pull/173
- Enhance: add block ports config for AlibabaCloud LB network models. https://github.com/openkruise/kruise-game/pull/175
- Enhance: add block port in volc engine. https://github.com/openkruise/kruise-game/pull/182
- Feat: add jdcloud provider and the nlb&eip plugin. https://github.com/openkruise/kruise-game/pull/180
- Enhance: support network isolation for tencentcloud clb plugin. https://github.com/openkruise/kruise-game/pull/183
- Feat: add maxAvailable param for external scaler. https://github.com/openkruise/kruise-game/pull/190
- Feat: add new networkType named AlibabaCloud-Multi-NLBs. https://github.com/openkruise/kruise-game/pull/187
### Bug Fixes
- Reconstruct the logic of GameServers scaling. https://github.com/openkruise/kruise-game/pull/171
- Semantic fixes for network port ranges. https://github.com/openkruise/kruise-game/pull/181
## v0.9.0
> Change log since v0.8.0
### Features & Enhancements
- Enhance: support custom health checks for AlibabaCloud-NLB. https://github.com/openkruise/kruise-game/pull/147
- Feat: add AmazonWebServices-NLB network plugin. https://github.com/openkruise/kruise-game/pull/150
- Enhance: support custom health checks for AlibabaCloud-SLB. https://github.com/openkruise/kruise-game/pull/154
- Enhance: Kubernetes-NodePort supports network disabled. https://github.com/openkruise/kruise-game/pull/156
- Enhance: check networkType when create GameServerSet. https://github.com/openkruise/kruise-game/pull/157
- Enhance: service quality supports patching labels & annotations. https://github.com/openkruise/kruise-game/pull/159
- Enhance: labels from gs can be synced to pod. https://github.com/openkruise/kruise-game/pull/160
- Feat: add lifecycle field for gameserverset. https://github.com/openkruise/kruise-game/pull/162
### Bug Fixes
- Fix the allocation error when Ali load balancers reach the port number limit. https://github.com/openkruise/kruise-game/pull/149
- Fix: AmazonWebServices-NLB controller parameter modification. https://github.com/openkruise/kruise-game/pull/164
- Fix old svc remain after pod recreate when using ali-lb models. https://github.com/openkruise/kruise-game/pull/165
## v0.8.0
> Change log since v0.7.0
### Features & Enhancements
- Add AlibabaCloud-NLB network plugin. https://github.com/openkruise/kruise-game/pull/135
- Add Volcengine-CLB network plugin. https://github.com/openkruise/kruise-game/pull/127
- Add Kubernetes-NodePort network plugin. https://github.com/openkruise/kruise-game/pull/138
- Sync annotations from gs to pod. https://github.com/openkruise/kruise-game/pull/140
- FailurePolicy of PodMutatingWebhook turn to Fail. https://github.com/openkruise/kruise-game/pull/129
- Replace patch asts with update. https://github.com/openkruise/kruise-game/pull/131
- Kubernetes-HostPort plugin support to wait for network ready. https://github.com/openkruise/kruise-game/pull/136
- Add AllocateLoadBalancerNodePorts in clb plugin. https://github.com/openkruise/kruise-game/pull/141
### Bug Fixes
- Avoid patching gameserver continuously. https://github.com/openkruise/kruise-game/pull/124
## v0.7.0
> Change log since v0.6.0
### Features & Enhancements
- Add ReclaimPolicy for GameServer. https://github.com/openkruise/kruise-game/pull/115
- ServiceQuality supports multiple results returned by one probe. https://github.com/openkruise/kruise-game/pull/117
- Support differentiated updates to GameServers. https://github.com/openkruise/kruise-game/pull/120
### Bug Fixes
- Fix the error of patching pod image failure when gs image is nil. https://github.com/openkruise/kruise-game/pull/121
## v0.6.0
> Change log since v0.5.0
### Features & Enhancements
- Support auto scaling-up based on minAvailable. https://github.com/openkruise/kruise-game/pull/88
- Update go version to 1.19. https://github.com/openkruise/kruise-game/pull/104
- Add GameServerConditions. https://github.com/openkruise/kruise-game/pull/95
- Add network plugin AlibabaCloud-NLB-SharedPort. https://github.com/openkruise/kruise-game/pull/98
- Support AllowNotReadyContainers for network plugin. https://github.com/openkruise/kruise-game/pull/98
- Add qps and burst settings of controller-manager. https://github.com/openkruise/kruise-game/pull/108
### Bug Fixes
- Fix AlibabaCloud-NATGW network ready condition when multi-ports. https://github.com/openkruise/kruise-game/pull/94
- Hostport network should not be ready when no ports exist. https://github.com/openkruise/kruise-game/pull/100
## v0.5.0
> Change log since v0.4.0
### Features & Enhancements
- Improve hostport cache to record allocated ports of pod. https://github.com/openkruise/kruise-game/pull/82
- Enhance pod scaling efficiency. https://github.com/openkruise/kruise-game/pull/81
- Support syncing gs metadata from gsTemplate. https://github.com/openkruise/kruise-game/pull/85
- Refactor NetworkPortRange into a pointer. https://github.com/openkruise/kruise-game/pull/87
- Add network plugin AlibabaCloud-EIP. https://github.com/openkruise/kruise-game/pull/86
- Add new opsState type named Allocated. https://github.com/openkruise/kruise-game/pull/89
- Add new opsState type named Kill. https://github.com/openkruise/kruise-game/pull/90
- AlibabaCloud-EIP support to define EIP name & description. https://github.com/openkruise/kruise-game/pull/91
- Support customized serviceName. https://github.com/openkruise/kruise-game/pull/92
### Bug Fixes
- Correct gs network status when pod network status is nil. https://github.com/openkruise/kruise-game/pull/80
## v0.4.0
> Change log since v0.3.0
### Features & Enhancements
- Avoid Network staying NotReady for too long with the Kubernetes-Ingress NetworkType. https://github.com/openkruise/kruise-game/pull/61/commits/7f66be508004d393299e037cba052c5111d6678a
- Add param Fixed for Kubernetes-Ingress NetworkType. https://github.com/openkruise/kruise-game/pull/61/commits/00c8d5502f1813f0092f05ffb3b0c2dbca5a63f7
- Change autoscaler trigger metricType from Value to AverageValue. https://github.com/openkruise/kruise-game/pull/64
- Decouple triggering network update from Gs Ready. https://github.com/openkruise/kruise-game/pull/71
- Add reserveIds when initializing asts. https://github.com/openkruise/kruise-game/pull/73
- AlibabaCloud-SLB supports multiple slbIds. https://github.com/openkruise/kruise-game/pull/69/commits/42b8ab3e739c872f477d4faa7286ace3a87a07d6
### Bug Fixes
- Fix the issue of unable to complete scaling down from 1 to 0 when autoscaling. https://github.com/openkruise/kruise-game/pull/59
- Avoid metrics controller fatal when gss status is nil. https://github.com/openkruise/kruise-game/pull/61/commits/c56574ecaf5af1e5d0625f4555a470c90125f4d4
- Fix the problem that the hostPort plugin repeatedly allocates ports when pods with the same name are created. https://github.com/openkruise/kruise-game/pull/66
- Assume that slices without elements are equal to avoid marginal problem. https://github.com/openkruise/kruise-game/pull/74
- Fix AlibabaCloud-SLB allocate and deallocate imprecisely. https://github.com/openkruise/kruise-game/pull/69/commits/05fea83e08a39f496cb9f93a83a0775460cb160a
## v0.3.0
> Change log since v0.2.0
### Features & Enhancements
- Add prometheus metrics and monitor dashboard for game servers. https://github.com/openkruise/kruise-game/pull/40
- Add external scaler to make the game servers in the WaitToBeDeleted opsState automatically deleted. https://github.com/openkruise/kruise-game/pull/39
- Support ReserveIds ScaleDownStrategyType (backfill the deleted Gs ID to the Reserve field). https://github.com/openkruise/kruise-game/pull/52
- Update AlibabaCloud API Group Version from v1alpha1 to v1beta1. https://github.com/openkruise/kruise-game/pull/41
- Add more print columns for GameServer & GameServerSet. https://github.com/openkruise/kruise-game/pull/48
- Add default serviceName for GameServerSet. https://github.com/openkruise/kruise-game/pull/51
- Add new networkType Kubernetes-Ingress. https://github.com/openkruise/kruise-game/pull/54
- Add network-related environment variables to allow users to adjust the network waiting time and detection interval. https://github.com/openkruise/kruise-game/pull/57
### Bug Fixes
- Avoid GameServer status sync delay on network failure. https://github.com/openkruise/kruise-game/pull/45
- Avoid GameServerSet status sync failed when template metadata is not null. https://github.com/openkruise/kruise-game/pull/46
- Add marginal conditions to avoid fatal errors when scaling. https://github.com/openkruise/kruise-game/pull/49
## v0.2.0
> Change log since v0.1.0
@@ -11,13 +146,11 @@
- Supporting network types:
- HostPort
- Kubernetes-HostPort
- AlibabaCloud-NATGW
- AlibabaCloud-SLB
- AlibabaCloud-SLB-SharedPort
---
## v0.1.0
### Features

Dockerfile

@@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.18 as builder
FROM golang:1.22.12 AS builder
WORKDIR /workspace
# Copy the Go Modules manifests
@@ -13,6 +13,7 @@ RUN go mod download
COPY main.go main.go
COPY apis/ apis/
COPY pkg/ pkg/
COPY cloudprovider/ cloudprovider/
# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go

Makefile

@@ -2,7 +2,7 @@
# Image URL to use all building/pushing images targets
IMG ?= kruise-game-manager:test
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.24.1
ENVTEST_K8S_VERSION = 1.30.0
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -41,7 +41,7 @@ help: ## Display this help.
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
$(CONTROLLER_GEN) rbac:roleName=manager-role crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:crd:artifacts:config=config/crd/bases
.PHONY: generate
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
@@ -57,7 +57,7 @@ vet: ## Run go vet against code.
.PHONY: test
test: manifests generate fmt vet envtest ## Run tests.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test ./pkg/... -coverprofile cover.out
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test ./pkg/... ./cloudprovider/... -coverprofile cover.out
##@ Build
@@ -114,7 +114,7 @@ ENVTEST ?= $(LOCALBIN)/setup-envtest
## Tool Versions
KUSTOMIZE_VERSION ?= v4.5.5
CONTROLLER_TOOLS_VERSION ?= v0.9.0
CONTROLLER_TOOLS_VERSION ?= v0.16.5
KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
@@ -130,7 +130,7 @@ $(CONTROLLER_GEN): $(LOCALBIN)
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@c7e1dc9b5302d649d5531e19168dd7ea0013736d
HELM = $(shell pwd)/bin/helm
helm: ## Download helm locally if necessary.

README.md

@@ -1,54 +1,96 @@
# Kruise-game
English | <a href="docs/中文/说明文档.md">中文</a>
## Introduction
Kruise-Game is a subproject of OpenKruise that addresses the problem of running game servers on Kubernetes.
## kruise-game
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![Go Report Card](https://goreportcard.com/badge/github.com/openkruise/kruise-game)](https://goreportcard.com/report/github.com/openkruise/kruise-game)
[![codecov](https://codecov.io/gh/openkruise/kruise-game/branch/master/graph/badge.svg)](https://codecov.io/gh/openkruise/kruise-game)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](./CODE_OF_CONDUCT.md)
<img width="250px" src="docs/images/logo.jpg" alt="OpenKruiseGame logo"/>
English | [中文](./docs/中文/说明文档.md)
### Overview
Kruise-Game utilizes the features of [Kruise](https://github.com/openkruise/kruise), including:
- In-Place Update
- Update sequence
- Ordinal reservation (skip)
- Pod probe marker
- …
OpenKruiseGame (OKG) is a multicloud-oriented, open source Kubernetes workload specialized for game servers. It is a sub-project of OpenKruise, the Cloud Native Computing Foundation (CNCF) open source workload project, focused on the gaming field. OpenKruiseGame makes the cloud-native transformation of game servers easier, faster, and more stable.
## Why Kruise-Game?
Game servers are stateful services, and each game server differs in operations and maintenance; these differences also grow over time. In Kubernetes, general workloads manage a batch of game servers from a single pod template and cannot account for per-server state differences. Batch management and targeted management conflict in Kubernetes. **Kruise-Game** was born to resolve that conflict. Kruise-Game contains two CRDs, GameServer and GameServerSet:
<p align="center">
<img src="./docs/images/okg.png" width="90%"/>
</p>
- `GameServer` is responsible for the management of the game server status. Users can customize the game server status to reflect the differences between game servers;
- `GameServerSet` is responsible for batch management of game servers. Users can customize update and scale-down strategies according to the status of game servers.
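The split between the two CRDs can be made concrete with a minimal manifest sketch. This is illustrative only: the field names follow the API types in this diff, while the `game.kruise.io/v1alpha1` apiVersion, the object name, and the image are assumptions.

```yaml
# Hedged sketch of a minimal GameServerSet; apiVersion, name, and image are
# illustrative assumptions, not taken from this diff.
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
  name: minecraft
  namespace: default
spec:
  replicas: 3
  gameServerTemplate:
    spec:
      containers:
        - name: minecraft
          image: registry.example.com/minecraft:1.12.2
```

The controller then materializes one GameServer (and pod) per ordinal; per-server differences such as opsState and priorities live on the individual GameServer objects.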
### What is OpenKruiseGame?
OpenKruiseGame is a custom Kubernetes workload designed specially for game server scenarios. It simplifies the cloud-native transformation of game servers. Compared with the built-in workloads of Kubernetes, such as Deployment and StatefulSet, OpenKruiseGame provides common game server management features, such as hot update, in-place update, and management of specified game servers.
Features:
- Game server status management
- Mark a game server's status without affecting its lifecycle
- Flexible scaling/deletion mechanism
- Support scaling down by user-defined status & priority
- Support specifying the game server to delete directly
- Flexible update mechanism
- Support hot update (in-place update)
- Support updating game server by user-defined priority
- Can control the range of the game servers to be updated
- Can control the pace of the entire update process
- Custom service quality
- Support probing game server containers and marking game server status automatically
In addition, OpenKruiseGame connects game servers to cloud service providers, matchmaking services, and O&M platforms. It automatically integrates features such as logging, monitoring, network, storage, elasticity, and matching by using low-code or zero-code technologies during the cloud-native transformation of game servers. With the consistent delivery standard of Kubernetes, OpenKruiseGame implements centralized management of clusters on multiple clouds and hybrid clouds.
OpenKruiseGame is a fully open source project. It allows developers to customize workloads and build the release and O&M platforms for game servers by using custom development. OpenKruiseGame can use Kubernetes templates or call APIs to use or extend features. It can also connect to delivery systems, such as KubeVela, to implement the orchestration and full lifecycle management of game servers on a GUI.
## Quick Start
- [Installation](docs/en/getting_started/installation.md)
- [Basic Usage](docs/en/tutorials/basic_usage.md)
### Why is OpenKruiseGame needed?
Kubernetes is an application delivery and O&M standard in the cloud-native era. The capabilities of Kubernetes such as declarative resource management, auto scaling, and consistent delivery in a multi-cloud environment can provide support for game server scenarios that cover fast server activation, cost control, version management, and global reach. However, certain features of game servers make it difficult to adapt game servers for Kubernetes. For example:
* Hot update or hot reload
To ensure a better game experience for players, many game servers are updated by using hot update or hot reload. However, for various workloads of Kubernetes, the lifecycle of pods is consistent with that of images. When an image is published, pods are recreated. When pods are recreated, issues may occur such as interruptions to player battles and changes in the network metadata of player servers.
* O&M for specified game servers
Game servers are stateful in most scenarios. For example, when a player versus player (PVP) game is updated or goes offline, only game servers without online active players can be changed; when game servers for a player versus environment (PVE) game are suspended or merged, you can perform operations on game servers with specific IDs.
* Network models suitable for games
The network models in Kubernetes are implemented by declaring Services. In most cases, the network models are applicable to stateless scenarios. For network-sensitive game servers, a solution with high-performance gateways, fixed IP addresses and ports, or lossless direct connections is more suitable for actual business scenarios.
* Game server orchestration
The architecture of game servers has become increasingly complex. The player servers for many massive multiplayer online role-playing games (MMORPGs) are combinations of game servers with different features and purposes, such as a gateway server responsible for network access, a central server for running the game engine, and a policy server responsible for game scripts and gameplay. Different game servers have different capacities and management policies. Hence, it is difficult to describe and quickly deliver all the game servers by using a single workload type.
The preceding challenges make it difficult to implement cloud-native transformation of game servers. OpenKruiseGame aims to abstract the common requirements of the gaming industry and to use semantic methods that make the cloud-native transformation of various game servers simple, efficient, and secure.
## License
Copyright 2022.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
### List of core features
OpenKruiseGame has the following core features:
* Hot update based on images and hot reload of configurations
* Update, deletion, and isolation of specified game servers
* Multiple built-in network models (fixed IP address and port, lossless direct connection, and global acceleration)
* Auto scaling
* Automated O&M (service quality)
* Independent of cloud service providers
* Complex game server orchestration
### Users of OpenKruiseGame (OKG)
<table style="border-collapse: collapse;">
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/lilith-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/hypergryph-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/jjworld-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/bilibili-logo.png" width="80"> </center></td>
<td style="border: none;"><center><img src="./docs/images/shangyou-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/yahaha-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/xingzhe-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/juren-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/baibian-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/chillyroom-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/wuduan-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/yostar-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/bekko-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/xingchao-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/wanglong-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/guanying-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/booming-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/gsshosting-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/yongshi-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/360-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/vma-logo.png" width="80" ></center> </td>
</tr>
</table>
### What to do next
* Install and use OpenKruiseGame. For more information, see [Getting Started](./docs/en/getting_started) & [User Manuals](./docs/en/user_manuals).
* Submit code for OpenKruiseGame. For more information, see [Developer Guide](./docs/en/developer_manuals/contribution.md).
* Submit [issues](https://github.com/openkruise/kruise-game/issues) to offer suggestions for OpenKruiseGame or discuss the best practices of cloud-native transformation of games.
* Join the DingTalk group (ID: 44862615) to have a discussion with core contributors to OpenKruiseGame.
* Contact us by email at zhongwei.lzw@alibaba-inc.com.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -23,11 +23,18 @@ import (
)
const (
GameServerStateKey = "game.kruise.io/gs-state"
GameServerOpsStateKey = "game.kruise.io/gs-opsState"
GameServerUpdatePriorityKey = "game.kruise.io/gs-update-priority"
GameServerDeletePriorityKey = "game.kruise.io/gs-delete-priority"
GameServerDeletingKey = "game.kruise.io/gs-deleting"
GameServerStateKey = "game.kruise.io/gs-state"
GameServerOpsStateKey = "game.kruise.io/gs-opsState"
GameServerUpdatePriorityKey = "game.kruise.io/gs-update-priority"
GameServerDeletePriorityKey = "game.kruise.io/gs-delete-priority"
GameServerDeletingKey = "game.kruise.io/gs-deleting"
GameServerNetworkType = "game.kruise.io/network-type"
GameServerNetworkConf = "game.kruise.io/network-conf"
GameServerNetworkDisabled = "game.kruise.io/network-disabled"
GameServerNetworkStatus = "game.kruise.io/network-status"
GameServerNetworkTriggerTime = "game.kruise.io/network-trigger-time"
GameServerOpsStateLastChangedTime = "game.kruise.io/opsState-last-changed-time"
GameServerStateLastChangedTime = "game.kruise.io/state-last-changed-time"
)
// GameServerSpec defines the desired state of GameServer
@@ -36,18 +43,35 @@ type GameServerSpec struct {
UpdatePriority *intstr.IntOrString `json:"updatePriority,omitempty"`
DeletionPriority *intstr.IntOrString `json:"deletionPriority,omitempty"`
NetworkDisabled bool `json:"networkDisabled,omitempty"`
// Containers can be used to make the corresponding GameServer container fields
// different from the fields defined by GameServerTemplate in GameServerSetSpec.
Containers []GameServerContainer `json:"containers,omitempty"`
}
type GameServerContainer struct {
// Name indicates the name of the container to update.
Name string `json:"name"`
// Image indicates the image of the container to update.
// When Image is updated, pod.spec.containers[*].image is updated immediately.
Image string `json:"image,omitempty"`
// Resources indicates the resources of the container to update.
// When Resources is updated, pod.spec.containers[*].resources is not updated immediately;
// it is applied when the pod is recreated.
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
}
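Per the comments above, updating `Image` on a GameServer patches the pod image immediately (in-place), while `Resources` only takes effect on pod recreation. A hedged sketch of such a per-server override (apiVersion, object name, and image tag are illustrative assumptions):

```yaml
# Hedged sketch: patching a single GameServer's container image triggers an
# in-place image update of its pod. Name and image are hypothetical.
apiVersion: game.kruise.io/v1alpha1
kind: GameServer
metadata:
  name: minecraft-0
spec:
  containers:
    - name: minecraft
      image: registry.example.com/minecraft:1.12.2-hotfix  # applied to the pod immediately
```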
type GameServerState string
const (
Unknown GameServerState = "Unknown"
Creating GameServerState = "Creating"
Ready GameServerState = "Ready"
NotReady GameServerState = "NotReady"
Crash GameServerState = "Crash"
Updating GameServerState = "Updating"
Deleting GameServerState = "Deleting"
Unknown GameServerState = "Unknown"
Creating GameServerState = "Creating"
Ready GameServerState = "Ready"
NotReady GameServerState = "NotReady"
Crash GameServerState = "Crash"
Updating GameServerState = "Updating"
Deleting GameServerState = "Deleting"
PreDelete GameServerState = "PreDelete"
PreUpdate GameServerState = "PreUpdate"
)
type OpsState string
@@ -56,6 +80,8 @@ const (
Maintaining OpsState = "Maintaining"
WaitToDelete OpsState = "WaitToBeDeleted"
None OpsState = "None"
Allocated OpsState = "Allocated"
Kill OpsState = "Kill"
)
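Since `opsState` is a user-settable spec field (the printcolumn marker in this file reads `.spec.opsState`), marking a server for reclamation is a one-field change. A hedged sketch (apiVersion and object name are illustrative assumptions):

```yaml
# Hedged sketch: marking a GameServer WaitToBeDeleted so a scaler can
# reclaim it on the next scale-down. Name is hypothetical.
apiVersion: game.kruise.io/v1alpha1
kind: GameServer
metadata:
  name: minecraft-2
spec:
  opsState: WaitToBeDeleted
```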
type ServiceQuality struct {
@@ -70,16 +96,23 @@
}
type ServiceQualityCondition struct {
Name string `json:"name"`
Status string `json:"status,omitempty"`
Name string `json:"name"`
Status string `json:"status,omitempty"`
// Result indicates the probe message returned by the script.
Result string `json:"result,omitempty"`
LastProbeTime metav1.Time `json:"lastProbeTime,omitempty"`
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
LastActionTransitionTime metav1.Time `json:"lastActionTransitionTime,omitempty"`
}
type ServiceQualityAction struct {
State bool `json:"state"`
State bool `json:"state"`
// Result indicates the probe message returned by the script.
// When Result is defined, the action is executed only when the corresponding Result is actually returned.
Result string `json:"result,omitempty"`
GameServerSpec `json:",inline"`
Annotations map[string]string `json:"annotations,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
}
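The `Result` matching described above can be sketched as a `serviceQualities` entry. Note this is a hedged example: the probe-side fields (`containerName`, `exec`) are assumptions based on the upstream user manuals and are not shown in this diff; only `state`, `result`, the inlined GameServerSpec fields, and `labels` appear in the types above.

```yaml
# Hedged sketch of a service-quality probe whose action fires only when the
# probe script returns the matching Result string.
serviceQualities:
  - name: healthy                    # probe name
    containerName: minecraft         # assumption: container to probe
    exec:
      command: ["bash", "/check-idle.sh"]   # hypothetical script
    serviceQualityAction:
      - state: true
        result: idle                 # runs only when the script returns "idle"
        opsState: WaitToBeDeleted    # inlined GameServerSpec field
        labels:
          gs-state: idle
```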
// GameServerStatus defines the observed state of GameServer
@@ -95,8 +128,39 @@ type GameServerStatus struct {
UpdatePriority *intstr.IntOrString `json:"updatePriority,omitempty"`
DeletionPriority *intstr.IntOrString `json:"deletionPriority,omitempty"`
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
// Conditions is an array of current observed GameServer conditions.
// +optional
Conditions []GameServerCondition `json:"conditions,omitempty" `
}
type GameServerCondition struct {
// Type is the type of the condition.
Type GameServerConditionType `json:"type"`
// Status is the status of the condition.
// Can be True, False, Unknown.
Status corev1.ConditionStatus `json:"status"`
// Last time we probed the condition.
// +optional
LastProbeTime metav1.Time `json:"lastProbeTime,omitempty"`
// Last time the condition transitioned from one status to another.
// +optional
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
// Unique, one-word, CamelCase reason for the condition's last transition.
// +optional
Reason string `json:"reason,omitempty"`
// Human-readable message indicating details about last transition.
// +optional
Message string `json:"message,omitempty"`
}
type GameServerConditionType string
const (
NodeNormal GameServerConditionType = "NodeNormal"
PersistentVolumeNormal GameServerConditionType = "PersistentVolumeNormal"
PodNormal GameServerConditionType = "PodNormal"
)
type NetworkStatus struct {
NetworkType string `json:"networkType,omitempty"`
InternalAddresses []NetworkAddress `json:"internalAddresses,omitempty"`
@@ -109,12 +173,18 @@ type NetworkStatus struct {
type NetworkState string
const (
NetworkReady NetworkState = "Ready"
NetworkWaiting NetworkState = "Waiting"
NetworkNotReady NetworkState = "NotReady"
)
type NetworkAddress struct {
IP string `json:"ip"`
// TODO add IPv6
Ports []NetworkPort `json:"ports,omitempty"`
PortRange NetworkPortRange `json:"portRange,omitempty"`
EndPoint string `json:"endPoint,omitempty"`
Ports []NetworkPort `json:"ports,omitempty"`
PortRange *NetworkPortRange `json:"portRange,omitempty"`
EndPoint string `json:"endPoint,omitempty"`
}
type NetworkPort struct {
@@ -135,6 +205,7 @@ type NetworkPortRange struct {
//+kubebuilder:printcolumn:name="OPSSTATE",type="string",JSONPath=".spec.opsState",description="The operations state of GameServer"
//+kubebuilder:printcolumn:name="DP",type="string",JSONPath=".status.deletionPriority",description="The current deletionPriority of GameServer"
//+kubebuilder:printcolumn:name="UP",type="string",JSONPath=".status.updatePriority",description="The current updatePriority of GameServer"
//+kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp",description="The age of GameServer"
//+kubebuilder:resource:shortName=gs
// GameServer is the Schema for the gameservers API


@@ -33,6 +33,11 @@ const (
GameServerSetReserveIdsKey = "game.kruise.io/reserve-ids"
AstsHashKey = "game.kruise.io/asts-hash"
PpmHashKey = "game.kruise.io/ppm-hash"
GsTemplateMetadataHashKey = "game.kruise.io/gsTemplate-metadata-hash"
)
const (
InplaceUpdateNotReadyBlocker = "game.kruise.io/inplace-update-not-ready-blocker"
)
// GameServerSetSpec defines the desired state of GameServerSet
@@ -45,12 +50,19 @@ type GameServerSetSpec struct {
Replicas *int32 `json:"replicas"`
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
GameServerTemplate GameServerTemplate `json:"gameServerTemplate,omitempty"`
ReserveGameServerIds []int `json:"reserveGameServerIds,omitempty"`
ServiceQualities []ServiceQuality `json:"serviceQualities,omitempty"`
UpdateStrategy UpdateStrategy `json:"updateStrategy,omitempty"`
ScaleStrategy ScaleStrategy `json:"scaleStrategy,omitempty"`
Network *Network `json:"network,omitempty"`
GameServerTemplate GameServerTemplate `json:"gameServerTemplate,omitempty"`
ServiceName string `json:"serviceName,omitempty"`
ReserveGameServerIds []intstr.IntOrString `json:"reserveGameServerIds,omitempty"`
ServiceQualities []ServiceQuality `json:"serviceQualities,omitempty"`
UpdateStrategy UpdateStrategy `json:"updateStrategy,omitempty"`
ScaleStrategy ScaleStrategy `json:"scaleStrategy,omitempty"`
Network *Network `json:"network,omitempty"`
Lifecycle *appspub.Lifecycle `json:"lifecycle,omitempty"`
// PersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from
// the StatefulSet VolumeClaimTemplates. This requires the
// StatefulSetAutoDeletePVC feature gate to be enabled, which is alpha.
// +optional
PersistentVolumeClaimRetentionPolicy *kruiseV1beta1.StatefulSetPersistentVolumeClaimRetentionPolicy `json:"persistentVolumeClaimRetentionPolicy,omitempty"`
}
type GameServerTemplate struct {
@@ -58,8 +70,22 @@ type GameServerTemplate struct {
// +kubebuilder:validation:Schemaless
corev1.PodTemplateSpec `json:",inline"`
VolumeClaimTemplates []corev1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"`
// ReclaimPolicy indicates the reclaim policy for GameServer.
// Default is Cascade.
ReclaimPolicy GameServerReclaimPolicy `json:"reclaimPolicy,omitempty"`
}
type GameServerReclaimPolicy string
const (
// CascadeGameServerReclaimPolicy indicates that GameServer is deleted when the pod is deleted.
// The age of GameServer is exactly the same as that of the pod.
CascadeGameServerReclaimPolicy GameServerReclaimPolicy = "Cascade"
// DeleteGameServerReclaimPolicy indicates that GameServers will be deleted when replicas of GameServerSet decreases.
// The GameServer will not be deleted when the corresponding pod is deleted due to manual deletion, update, eviction, etc.
DeleteGameServerReclaimPolicy GameServerReclaimPolicy = "Delete"
)
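`reclaimPolicy` sits on the `gameServerTemplate`, next to the inlined pod template. A hedged sketch of the `Delete` policy described above (name and image are illustrative assumptions):

```yaml
# Hedged sketch: with reclaimPolicy Delete, GameServers survive pod deletion,
# update, or eviction, and are removed only when replicas decrease.
spec:
  replicas: 3
  gameServerTemplate:
    reclaimPolicy: Delete            # default is Cascade
    spec:
      containers:
        - name: minecraft            # hypothetical
          image: registry.example.com/minecraft:1.12.2
```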
type Network struct {
NetworkType string `json:"networkType,omitempty"`
NetworkConf []NetworkConfParams `json:"networkConf,omitempty"`
@@ -67,6 +93,10 @@ type Network struct {
type NetworkConfParams KVParams
const (
AllowNotReadyContainersNetworkConfName = "AllowNotReadyContainers"
)
type KVParams struct {
Name string `json:"name,omitempty"`
Value string `json:"value,omitempty"`
@@ -124,8 +154,29 @@ type RollingUpdateStatefulSetStrategy struct {
type ScaleStrategy struct {
kruiseV1beta1.StatefulSetScaleStrategy `json:",inline"`
// ScaleDownStrategyType indicates the scaling down strategy.
// Default is GeneralScaleDownStrategyType
// +optional
ScaleDownStrategyType ScaleDownStrategyType `json:"scaleDownStrategyType,omitempty"`
}
// ScaleDownStrategyType is a string enumeration type that enumerates
// all possible scale down strategies for the GameServerSet controller.
// +enum
type ScaleDownStrategyType string
const (
// GeneralScaleDownStrategyType will first consider the ReserveGameServerIds
// field when game servers scale down. When the number of reserved game servers
// does not meet the scale-down number, it continues to select and delete game
// servers from the current game server list.
GeneralScaleDownStrategyType ScaleDownStrategyType = "General"
// ReserveIdsScaleDownStrategyType will backfill the removed sequence numbers into
// the ReserveGameServerIds field when GameServers scale down, whether the removal
// was requested via the ReserveGameServerIds field or chosen by the GameServerSet controller.
ReserveIdsScaleDownStrategyType ScaleDownStrategyType = "ReserveIds"
)
// GameServerSetStatus defines the observed state of GameServerSet
type GameServerSetStatus struct {
// The generation observed by the controller.
@ -140,12 +191,21 @@ type GameServerSetStatus struct {
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
MaintainingReplicas *int32 `json:"maintainingReplicas,omitempty"`
WaitToBeDeletedReplicas *int32 `json:"waitToBeDeletedReplicas,omitempty"`
PreDeleteReplicas *int32 `json:"preDeleteReplicas,omitempty"`
// LabelSelector is label selectors for query over pods that should match the replica count used by HPA.
LabelSelector string `json:"labelSelector,omitempty"`
}
//+genclient
//+kubebuilder:object:root=true
//+kubebuilder:printcolumn:name="DESIRED",type="integer",JSONPath=".spec.replicas",description="The desired number of GameServers."
//+kubebuilder:printcolumn:name="CURRENT",type="integer",JSONPath=".status.currentReplicas",description="The number of GameServers that currently exist."
//+kubebuilder:printcolumn:name="UPDATED",type="integer",JSONPath=".status.updatedReplicas",description="The number of GameServers updated."
//+kubebuilder:printcolumn:name="READY",type="integer",JSONPath=".status.readyReplicas",description="The number of GameServers ready."
//+kubebuilder:printcolumn:name="Maintaining",type="integer",JSONPath=".status.maintainingReplicas",description="The number of GameServers Maintaining."
//+kubebuilder:printcolumn:name="WaitToBeDeleted",type="integer",JSONPath=".status.waitToBeDeletedReplicas",description="The number of GameServers WaitToBeDeleted."
//+kubebuilder:printcolumn:name="PreDelete",type="integer",JSONPath=".status.preDeleteReplicas",description="The number of GameServers PreDelete."
//+kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp",description="The age of GameServerSet."
//+kubebuilder:subresource:status
//+kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.labelSelector
//+kubebuilder:resource:shortName=gss

View File

@ -15,8 +15,8 @@ limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the game.kruise.io v1alpha1 API group
//+kubebuilder:object:generate=true
//+groupName=game.kruise.io
// +kubebuilder:object:generate=true
// +groupName=game.kruise.io
package v1alpha1
import (

View File

@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2022 The Kruise Authors.
@ -23,6 +22,7 @@ package v1alpha1
import (
"github.com/openkruise/kruise-api/apps/pub"
"github.com/openkruise/kruise-api/apps/v1beta1"
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
@ -55,6 +55,39 @@ func (in *GameServer) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerCondition) DeepCopyInto(out *GameServerCondition) {
*out = *in
in.LastProbeTime.DeepCopyInto(&out.LastProbeTime)
in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerCondition.
func (in *GameServerCondition) DeepCopy() *GameServerCondition {
if in == nil {
return nil
}
out := new(GameServerCondition)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerContainer) DeepCopyInto(out *GameServerContainer) {
*out = *in
in.Resources.DeepCopyInto(&out.Resources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerContainer.
func (in *GameServerContainer) DeepCopy() *GameServerContainer {
if in == nil {
return nil
}
out := new(GameServerContainer)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerList) DeepCopyInto(out *GameServerList) {
*out = *in
@ -157,7 +190,7 @@ func (in *GameServerSetSpec) DeepCopyInto(out *GameServerSetSpec) {
in.GameServerTemplate.DeepCopyInto(&out.GameServerTemplate)
if in.ReserveGameServerIds != nil {
in, out := &in.ReserveGameServerIds, &out.ReserveGameServerIds
*out = make([]int, len(*in))
*out = make([]intstr.IntOrString, len(*in))
copy(*out, *in)
}
if in.ServiceQualities != nil {
@ -174,6 +207,16 @@ func (in *GameServerSetSpec) DeepCopyInto(out *GameServerSetSpec) {
*out = new(Network)
(*in).DeepCopyInto(*out)
}
if in.Lifecycle != nil {
in, out := &in.Lifecycle, &out.Lifecycle
*out = new(pub.Lifecycle)
(*in).DeepCopyInto(*out)
}
if in.PersistentVolumeClaimRetentionPolicy != nil {
in, out := &in.PersistentVolumeClaimRetentionPolicy, &out.PersistentVolumeClaimRetentionPolicy
*out = new(v1beta1.StatefulSetPersistentVolumeClaimRetentionPolicy)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSetSpec.
@ -199,6 +242,11 @@ func (in *GameServerSetStatus) DeepCopyInto(out *GameServerSetStatus) {
*out = new(int32)
**out = **in
}
if in.PreDeleteReplicas != nil {
in, out := &in.PreDeleteReplicas, &out.PreDeleteReplicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSetStatus.
@ -224,6 +272,13 @@ func (in *GameServerSpec) DeepCopyInto(out *GameServerSpec) {
*out = new(intstr.IntOrString)
**out = **in
}
if in.Containers != nil {
in, out := &in.Containers, &out.Containers
*out = make([]GameServerContainer, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSpec.
@ -259,6 +314,13 @@ func (in *GameServerStatus) DeepCopyInto(out *GameServerStatus) {
**out = **in
}
in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]GameServerCondition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerStatus.
@ -339,7 +401,11 @@ func (in *NetworkAddress) DeepCopyInto(out *NetworkAddress) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.PortRange = in.PortRange
if in.PortRange != nil {
in, out := &in.PortRange, &out.PortRange
*out = new(NetworkPortRange)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkAddress.
@ -511,6 +577,20 @@ func (in *ServiceQuality) DeepCopy() *ServiceQuality {
func (in *ServiceQualityAction) DeepCopyInto(out *ServiceQualityAction) {
*out = *in
in.GameServerSpec.DeepCopyInto(&out.GameServerSpec)
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceQualityAction.

View File

@ -0,0 +1,61 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
AlibabaCloud = "AlibabaCloud"
)
var (
alibabaCloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return AlibabaCloud
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin registers a network plugin to this cloud provider.
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewAlibabaCloudProvider() (cloudprovider.CloudProvider, error) {
return alibabaCloudProvider, nil
}

View File

@ -0,0 +1,4 @@
// Package v1beta1 contains API Schema definitions for the alibabacloud v1beta1 API group
// +k8s:deepcopy-gen=package,register
// +groupName=alibabacloud.com
package v1beta1

View File

@ -0,0 +1,88 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// PodDNATSpec defines the desired state of PodDNAT
type PodDNATSpec struct {
VSwitch *string `json:"vswitch,omitempty"` // deprecated
ENI *string `json:"eni,omitempty"` // deprecated
ZoneID *string `json:"zoneID,omitempty"`
ExternalIP *string `json:"externalIP,omitempty"`
ExternalPort *string `json:"externalPort,omitempty"` // deprecated
InternalIP *string `json:"internalIP,omitempty"` // pod IP may change
InternalPort *string `json:"internalPort,omitempty"` // deprecated
Protocol *string `json:"protocol,omitempty"`
TableId *string `json:"tableId,omitempty"` // natGateway ID
EntryId *string `json:"entryId,omitempty"` // deprecated
PortMapping []PortMapping `json:"portMapping,omitempty"`
}
type PortMapping struct {
ExternalPort string `json:"externalPort,omitempty"`
InternalPort string `json:"internalPort,omitempty"`
}
// PodDNATStatus defines the observed state of PodDNAT
type PodDNATStatus struct {
// Created indicates the creation status
// +optional
Created *string `json:"created,omitempty"` // deprecated
// entries
// +optional
Entries []Entry `json:"entries,omitempty"`
}
// Entry record for forwardEntry
type Entry struct {
ExternalPort string `json:"externalPort,omitempty"`
ExternalIP string `json:"externalIP,omitempty"`
InternalPort string `json:"internalPort,omitempty"`
InternalIP string `json:"internalIP,omitempty"`
ForwardEntryID string `json:"forwardEntryId,omitempty"`
IPProtocol string `json:"ipProtocol,omitempty"`
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// PodDNAT is the Schema for the poddnats API
type PodDNAT struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec PodDNATSpec `json:"spec,omitempty"`
Status PodDNATStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// PodDNATList contains a list of PodDNAT
type PodDNATList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []PodDNAT `json:"items"`
}
func init() {
SchemeBuilder.Register(&PodDNAT{}, &PodDNATList{})
}

View File

@ -0,0 +1,96 @@
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// PodEIPSpec defines the desired state of PodEIP
type PodEIPSpec struct {
// +kubebuilder:validation:Required
AllocationID string `json:"allocationID"`
BandwidthPackageID string `json:"bandwidthPackageID,omitempty"`
// +kubebuilder:validation:Required
AllocationType AllocationType `json:"allocationType"`
}
// AllocationType ip type and release strategy
type AllocationType struct {
// +kubebuilder:default:=Auto
// +kubebuilder:validation:Required
Type IPAllocType `json:"type"`
// +kubebuilder:validation:Required
// +kubebuilder:validation:Enum=Follow;TTL;Never
ReleaseStrategy ReleaseStrategy `json:"releaseStrategy"`
ReleaseAfter string `json:"releaseAfter,omitempty"` // Go duration string, e.g. "5m0s"
}
// +kubebuilder:validation:Enum=Auto;Static
// IPAllocType is the type for eip alloc strategy
type IPAllocType string
// IPAllocType
const (
IPAllocTypeAuto IPAllocType = "Auto"
IPAllocTypeStatic IPAllocType = "Static"
)
// +kubebuilder:validation:Enum=Follow;TTL;Never
// ReleaseStrategy is the type for eip release strategy
type ReleaseStrategy string
// ReleaseStrategy
const (
ReleaseStrategyFollow ReleaseStrategy = "Follow" // default policy
ReleaseStrategyTTL ReleaseStrategy = "TTL"
ReleaseStrategyNever ReleaseStrategy = "Never"
)
// PodEIPStatus defines the observed state of PodEIP
type PodEIPStatus struct {
// eni
NetworkInterfaceID string `json:"networkInterfaceID,omitempty"`
PrivateIPAddress string `json:"privateIPAddress,omitempty"`
// eip
EipAddress string `json:"eipAddress,omitempty"`
ISP string `json:"isp,omitempty"`
InternetChargeType string `json:"internetChargeType,omitempty"`
ResourceGroupID string `json:"resourceGroupID,omitempty"`
Name string `json:"name,omitempty"`
PublicIpAddressPoolID string `json:"publicIpAddressPoolID,omitempty"`
Status string `json:"status,omitempty"`
// BandwidthPackageID
BandwidthPackageID string `json:"bandwidthPackageID,omitempty"`
// PodLastSeen is the timestamp when the pod resource was last seen
PodLastSeen metav1.Time `json:"podLastSeen,omitempty"`
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// PodEIP is the Schema for the podeips API
type PodEIP struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec PodEIPSpec `json:"spec,omitempty"`
Status PodEIPStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// PodEIPList contains a list of PodEIP
type PodEIPList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []PodEIP `json:"items"`
}
func init() {
SchemeBuilder.Register(&PodEIP{}, &PodEIPList{})
}

View File

@ -0,0 +1,23 @@
// NOTE: Boilerplate only. Ignore this file.
// Package v1beta1 contains API Schema definitions for the alibabacloud v1beta1 API group
// +k8s:deepcopy-gen=package,register
// +groupName=alibabacloud.com
package v1beta1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// SchemeGroupVersion is group version used to register these objects
SchemeGroupVersion = schema.GroupVersion{Group: "alibabacloud.com", Version: "v1beta1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: SchemeGroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)

View File

@ -0,0 +1,315 @@
//go:build !ignore_autogenerated
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1beta1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AllocationType) DeepCopyInto(out *AllocationType) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AllocationType.
func (in *AllocationType) DeepCopy() *AllocationType {
if in == nil {
return nil
}
out := new(AllocationType)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Entry) DeepCopyInto(out *Entry) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Entry.
func (in *Entry) DeepCopy() *Entry {
if in == nil {
return nil
}
out := new(Entry)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodDNAT) DeepCopyInto(out *PodDNAT) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodDNAT.
func (in *PodDNAT) DeepCopy() *PodDNAT {
if in == nil {
return nil
}
out := new(PodDNAT)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodDNAT) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodDNATList) DeepCopyInto(out *PodDNATList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]PodDNAT, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodDNATList.
func (in *PodDNATList) DeepCopy() *PodDNATList {
if in == nil {
return nil
}
out := new(PodDNATList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodDNATList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodDNATSpec) DeepCopyInto(out *PodDNATSpec) {
*out = *in
if in.VSwitch != nil {
in, out := &in.VSwitch, &out.VSwitch
*out = new(string)
**out = **in
}
if in.ENI != nil {
in, out := &in.ENI, &out.ENI
*out = new(string)
**out = **in
}
if in.ZoneID != nil {
in, out := &in.ZoneID, &out.ZoneID
*out = new(string)
**out = **in
}
if in.ExternalIP != nil {
in, out := &in.ExternalIP, &out.ExternalIP
*out = new(string)
**out = **in
}
if in.ExternalPort != nil {
in, out := &in.ExternalPort, &out.ExternalPort
*out = new(string)
**out = **in
}
if in.InternalIP != nil {
in, out := &in.InternalIP, &out.InternalIP
*out = new(string)
**out = **in
}
if in.InternalPort != nil {
in, out := &in.InternalPort, &out.InternalPort
*out = new(string)
**out = **in
}
if in.Protocol != nil {
in, out := &in.Protocol, &out.Protocol
*out = new(string)
**out = **in
}
if in.TableId != nil {
in, out := &in.TableId, &out.TableId
*out = new(string)
**out = **in
}
if in.EntryId != nil {
in, out := &in.EntryId, &out.EntryId
*out = new(string)
**out = **in
}
if in.PortMapping != nil {
in, out := &in.PortMapping, &out.PortMapping
*out = make([]PortMapping, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodDNATSpec.
func (in *PodDNATSpec) DeepCopy() *PodDNATSpec {
if in == nil {
return nil
}
out := new(PodDNATSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodDNATStatus) DeepCopyInto(out *PodDNATStatus) {
*out = *in
if in.Created != nil {
in, out := &in.Created, &out.Created
*out = new(string)
**out = **in
}
if in.Entries != nil {
in, out := &in.Entries, &out.Entries
*out = make([]Entry, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodDNATStatus.
func (in *PodDNATStatus) DeepCopy() *PodDNATStatus {
if in == nil {
return nil
}
out := new(PodDNATStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIP) DeepCopyInto(out *PodEIP) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIP.
func (in *PodEIP) DeepCopy() *PodEIP {
if in == nil {
return nil
}
out := new(PodEIP)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodEIP) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPList) DeepCopyInto(out *PodEIPList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]PodEIP, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPList.
func (in *PodEIPList) DeepCopy() *PodEIPList {
if in == nil {
return nil
}
out := new(PodEIPList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodEIPList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPSpec) DeepCopyInto(out *PodEIPSpec) {
*out = *in
out.AllocationType = in.AllocationType
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPSpec.
func (in *PodEIPSpec) DeepCopy() *PodEIPSpec {
if in == nil {
return nil
}
out := new(PodEIPSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPStatus) DeepCopyInto(out *PodEIPStatus) {
*out = *in
in.PodLastSeen.DeepCopyInto(&out.PodLastSeen)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPStatus.
func (in *PodEIPStatus) DeepCopy() *PodEIPStatus {
if in == nil {
return nil
}
out := new(PodEIPStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PortMapping) DeepCopyInto(out *PortMapping) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PortMapping.
func (in *PortMapping) DeepCopy() *PortMapping {
if in == nil {
return nil
}
out := new(PortMapping)
in.DeepCopyInto(out)
return out
}

View File

@ -0,0 +1,544 @@
/*
Copyright 2025 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
AutoNLBsNetwork = "AlibabaCloud-AutoNLBs"
AliasAutoNLBs = "Auto-NLBs-Network"
ReserveNlbNumConfigName = "ReserveNlbNum"
EipTypesConfigName = "EipTypes"
ZoneMapsConfigName = "ZoneMaps"
MinPortConfigName = "MinPort"
MaxPortConfigName = "MaxPort"
BlockPortsConfigName = "BlockPorts"
NLBZoneMapsServiceAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-zone-maps"
NLBAddressTypeAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type"
IntranetEIPType = "intranet"
DefaultEIPType = "default"
)
type AutoNLBsPlugin struct {
gssMaxPodIndex map[string]int
mutex sync.RWMutex
}
type autoNLBsConfig struct {
minPort int32
maxPort int32
blockPorts []int32
zoneMaps string
reserveNlbNum int
targetPorts []int
protocols []corev1.Protocol
eipTypes []string
externalTrafficPolicy corev1.ServiceExternalTrafficPolicyType
*nlbHealthConfig
}
func (a *AutoNLBsPlugin) Name() string {
return AutoNLBsNetwork
}
func (a *AutoNLBsPlugin) Alias() string {
return AliasAutoNLBs
}
func (a *AutoNLBsPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
gssList := &gamekruiseiov1alpha1.GameServerSetList{}
err := c.List(ctx, gssList, &client.ListOptions{})
if err != nil {
log.Errorf("cannot list gameserverset in cluster because %s", err.Error())
return err
}
for _, gss := range gssList.Items {
if gss.Spec.Network != nil && gss.Spec.Network.NetworkType == AutoNLBsNetwork {
a.gssMaxPodIndex[gss.GetNamespace()+"/"+gss.GetName()] = int(*gss.Spec.Replicas)
nc, err := parseAutoNLBsConfig(gss.Spec.Network.NetworkConf)
if err != nil {
log.Errorf("cannot parse network config because %s", err.Error())
return err
}
err = a.ensureServices(ctx, c, gss.GetNamespace(), gss.GetName(), nc)
if err != nil {
log.Errorf("ensure services error because %s", err.Error())
return err
}
}
}
return nil
}
func (a *AutoNLBsPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseAutoNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
a.ensureMaxPodIndex(pod)
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if err := a.ensureServices(ctx, c, pod.GetNamespace(), gssName, conf); err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
containerPorts := make([]corev1.ContainerPort, 0)
podIndex := util.GetIndexFromGsName(pod.GetName())
for i, port := range conf.targetPorts {
if conf.protocols[i] == ProtocolTCPUDP {
containerPortTCP := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: corev1.ProtocolTCP,
Name: "tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPortUDP := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: corev1.ProtocolUDP,
Name: "udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPorts = append(containerPorts, containerPortTCP, containerPortUDP)
} else {
containerPort := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: conf.protocols[i],
Name: strings.ToLower(string(conf.protocols[i])) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPorts = append(containerPorts, containerPort)
}
}
pod.Spec.Containers[0].Ports = containerPorts
lenRange := int(conf.maxPort) - int(conf.minPort) - len(conf.blockPorts) + 1
svcIndex := podIndex / (lenRange / len(conf.targetPorts))
for _, eipType := range conf.eipTypes {
svcName := gssName + "-" + eipType + "-" + strconv.Itoa(svcIndex)
pod.Spec.ReadinessGates = append(pod.Spec.ReadinessGates, corev1.PodReadinessGate{
ConditionType: corev1.PodConditionType(PrefixReadyReadinessGate + svcName),
})
}
return pod, nil
}
func (a *AutoNLBsPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseAutoNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
_, readyCondition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if readyCondition == nil || readyCondition.Status == corev1.ConditionFalse {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
var internalPorts []gamekruiseiov1alpha1.NetworkPort
var externalPorts []gamekruiseiov1alpha1.NetworkPort
endPoints := ""
podIndex := util.GetIndexFromGsName(pod.GetName())
lenRange := int(conf.maxPort) - int(conf.minPort) - len(conf.blockPorts) + 1
svcIndex := podIndex / (lenRange / len(conf.targetPorts))
for i, eipType := range conf.eipTypes {
svcName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey] + "-" + eipType + "-" + strconv.Itoa(svcIndex)
svc := &corev1.Service{}
err := c.Get(ctx, types.NamespacedName{
Name: svcName,
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if len(svc.Status.LoadBalancer.Ingress) == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
endPoints = endPoints + svc.Status.LoadBalancer.Ingress[0].Hostname + "/" + eipType
if i == len(conf.eipTypes)-1 {
for i, port := range conf.targetPorts {
if conf.protocols[i] == ProtocolTCPUDP {
portNameTCP := "tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port)
portNameUDP := "udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port)
iPort := intstr.FromInt(port)
internalPorts = append(internalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portNameTCP,
Protocol: corev1.ProtocolTCP,
Port: &iPort,
}, gamekruiseiov1alpha1.NetworkPort{
Name: portNameUDP,
Protocol: corev1.ProtocolUDP,
Port: &iPort,
})
for _, svcPort := range svc.Spec.Ports {
if svcPort.Name == portNameTCP || svcPort.Name == portNameUDP {
ePort := intstr.FromInt32(svcPort.Port)
externalPorts = append(externalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portNameTCP,
Protocol: corev1.ProtocolTCP,
Port: &ePort,
}, gamekruiseiov1alpha1.NetworkPort{
Name: portNameUDP,
Protocol: corev1.ProtocolUDP,
Port: &ePort,
})
break
}
}
} else {
portName := strings.ToLower(string(conf.protocols[i])) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port)
iPort := intstr.FromInt(port)
internalPorts = append(internalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portName,
Protocol: conf.protocols[i],
Port: &iPort,
})
for _, svcPort := range svc.Spec.Ports {
if svcPort.Name == portName {
ePort := intstr.FromInt32(svcPort.Port)
externalPorts = append(externalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portName,
Protocol: conf.protocols[i],
Port: &ePort,
})
break
}
}
}
}
} else {
endPoints = endPoints + ","
}
}
networkStatus = &gamekruiseiov1alpha1.NetworkStatus{
InternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Status.PodIP,
Ports: internalPorts,
},
},
ExternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
EndPoint: endPoints,
Ports: externalPorts,
},
},
CurrentNetworkState: gamekruiseiov1alpha1.NetworkReady,
}
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (a *AutoNLBsPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
autoNLBsPlugin := AutoNLBsPlugin{
mutex: sync.RWMutex{},
gssMaxPodIndex: make(map[string]int),
}
alibabaCloudProvider.registerPlugin(&autoNLBsPlugin)
}
func (a *AutoNLBsPlugin) ensureMaxPodIndex(pod *corev1.Pod) {
a.mutex.Lock()
defer a.mutex.Unlock()
podIndex := util.GetIndexFromGsName(pod.GetName())
gssNsName := pod.GetNamespace() + "/" + pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if podIndex > a.gssMaxPodIndex[gssNsName] {
a.gssMaxPodIndex[gssNsName] = podIndex
}
}
func (a *AutoNLBsPlugin) checkSvcNumToCreate(namespace, gssName string, config *autoNLBsConfig) int {
a.mutex.RLock()
defer a.mutex.RUnlock()
lenRange := int(config.maxPort) - int(config.minPort) - len(config.blockPorts) + 1
expectSvcNum := a.gssMaxPodIndex[namespace+"/"+gssName]/(lenRange/len(config.targetPorts)) + config.reserveNlbNum + 1
return expectSvcNum
}
func (a *AutoNLBsPlugin) ensureServices(ctx context.Context, client client.Client, namespace, gssName string, config *autoNLBsConfig) error {
expectSvcNum := a.checkSvcNumToCreate(namespace, gssName, config)
for _, eipType := range config.eipTypes {
for j := 0; j < expectSvcNum; j++ {
// get svc
svcName := gssName + "-" + eipType + "-" + strconv.Itoa(j)
svc := &corev1.Service{}
err := client.Get(ctx, types.NamespacedName{
Name: svcName,
Namespace: namespace,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// create svc
toAddSvc := a.consSvc(namespace, gssName, eipType, j, config)
if err := setSvcOwner(client, ctx, toAddSvc, namespace, gssName); err != nil {
return err
}
if err := client.Create(ctx, toAddSvc); err != nil {
return err
}
} else {
return err
}
}
}
}
return nil
}
func (a *AutoNLBsPlugin) consSvcPorts(svcIndex int, config *autoNLBsConfig) []corev1.ServicePort {
lenRange := int(config.maxPort) - int(config.minPort) - len(config.blockPorts) + 1
ports := make([]corev1.ServicePort, 0)
toAllocatedPort := config.minPort
portNumPerPod := lenRange / len(config.targetPorts)
for podIndex := svcIndex * portNumPerPod; podIndex < (svcIndex+1)*portNumPerPod; podIndex++ {
for i, protocol := range config.protocols {
if protocol == ProtocolTCPUDP {
svcPortTCP := corev1.ServicePort{
Name: "tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString("tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: corev1.ProtocolTCP,
}
svcPortUDP := corev1.ServicePort{
Name: "udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString("udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: corev1.ProtocolUDP,
}
ports = append(ports, svcPortTCP, svcPortUDP)
} else {
svcPort := corev1.ServicePort{
Name: strings.ToLower(string(protocol)) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString(strings.ToLower(string(protocol)) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: protocol,
}
ports = append(ports, svcPort)
}
toAllocatedPort++
for util.IsNumInListInt32(toAllocatedPort, config.blockPorts) {
toAllocatedPort++
}
}
}
return ports
}
func (a *AutoNLBsPlugin) consSvc(namespace, gssName, eipType string, svcIndex int, conf *autoNLBsConfig) *corev1.Service {
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
//SlbConfigHashKey: util.GetHash(conf),
NLBZoneMapsServiceAnnotationKey: conf.zoneMaps,
LBHealthCheckFlagAnnotationKey: conf.lBHealthCheckFlag,
}
if conf.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = conf.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = conf.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = conf.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = conf.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = conf.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = conf.lBUnhealthyThreshold
if conf.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = conf.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = conf.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = conf.lBHealthCheckMethod
}
}
if strings.Contains(eipType, IntranetEIPType) {
svcAnnotations[NLBAddressTypeAnnotationKey] = IntranetEIPType
}
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: gssName + "-" + eipType + "-" + strconv.Itoa(svcIndex),
Namespace: namespace,
Annotations: svcAnnotations,
},
Spec: corev1.ServiceSpec{
Ports: a.consSvcPorts(svcIndex, conf),
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
gamekruiseiov1alpha1.GameServerOwnerGssKey: gssName,
},
LoadBalancerClass: &loadBalancerClass,
AllocateLoadBalancerNodePorts: ptr.To[bool](false),
ExternalTrafficPolicy: conf.externalTrafficPolicy,
},
}
}
func setSvcOwner(c client.Client, ctx context.Context, svc *corev1.Service, namespace, gssName string) error {
gss := &gamekruiseiov1alpha1.GameServerSet{}
err := c.Get(ctx, types.NamespacedName{
Namespace: namespace,
Name: gssName,
}, gss)
if err != nil {
return err
}
ownerRef := []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
svc.OwnerReferences = ownerRef
return nil
}
func parseAutoNLBsConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*autoNLBsConfig, error) {
reserveNlbNum := 1
eipTypes := []string{"default"}
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeLocal
zoneMaps := ""
blockPorts := make([]int32, 0)
minPort := int32(1000)
maxPort := int32(1499)
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
return nil, fmt.Errorf("invalid PortProtocols %s", c.Value)
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeCluster)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeCluster
}
case ReserveNlbNumConfigName:
num, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid ReserveNlbNum %s", c.Value)
}
reserveNlbNum = num
case EipTypesConfigName:
eipTypes = strings.Split(c.Value, ",")
case ZoneMapsConfigName:
zoneMaps = c.Value
case BlockPortsConfigName:
blockPorts = util.StringToInt32Slice(c.Value, ",")
case MinPortConfigName:
val, err := strconv.ParseInt(c.Value, 10, 32)
if err != nil {
return nil, fmt.Errorf("invalid MinPort %s", c.Value)
}
minPort = int32(val)
case MaxPortConfigName:
val, err := strconv.ParseInt(c.Value, 10, 32)
if err != nil {
return nil, fmt.Errorf("invalid MaxPort %s", c.Value)
}
maxPort = int32(val)
}
}
if minPort > maxPort {
return nil, fmt.Errorf("invalid MinPort %d and MaxPort %d", minPort, maxPort)
}
if zoneMaps == "" {
return nil, fmt.Errorf("invalid ZoneMaps, which can not be empty")
}
// check ports & protocols
if len(ports) == 0 || len(protocols) == 0 {
return nil, fmt.Errorf("invalid PortProtocols, which can not be empty")
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &autoNLBsConfig{
blockPorts: blockPorts,
minPort: minPort,
maxPort: maxPort,
nlbHealthConfig: nlbHealthConfig,
reserveNlbNum: reserveNlbNum,
eipTypes: eipTypes,
protocols: protocols,
targetPorts: ports,
zoneMaps: zoneMaps,
externalTrafficPolicy: externalTrafficPolicy,
}, nil
}
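The AutoNLBs plugin above maps a pod to a Service index with `podIndex / (lenRange / len(targetPorts))`: each NLB Service owns a contiguous range of `maxPort - minPort - len(blockPorts) + 1` usable ports, and every pod consumes one port per target port. A standalone sketch of that arithmetic (not the plugin's actual code; `svcIndexFor` is an illustrative name), using the numbers from `TestIsNeedToCreateService` case 0 below:

```go
package main

import "fmt"

// svcIndexFor mirrors AutoNLBs' index math: with `usable` ports per Service
// and `targetPorts` ports consumed per pod, each Service hosts
// usable/targetPorts pods, so the pod's Service index is a simple division.
func svcIndexFor(podIndex int, minPort, maxPort int32, blockPorts, targetPorts int) int {
	usable := int(maxPort) - int(minPort) - blockPorts + 1 // ports available on one NLB
	return podIndex / (usable / targetPorts)
}

func main() {
	// 1501 usable ports, 2 target ports per pod -> 750 pods per NLB;
	// pod index 1499 therefore lands on NLB #1.
	fmt.Println(svcIndexFor(1499, 1000, 2500, 0, 2)) // 1
}
```

`checkSvcNumToCreate` then adds `reserveNlbNum + 1` to this index to keep spare load balancers warm ahead of scale-up.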

/*
Copyright 2025 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"reflect"
"sync"
"testing"
)
func TestIsNeedToCreateService(t *testing.T) {
tests := []struct {
ns string
gssName string
config *autoNLBsConfig
a *AutoNLBsPlugin
expectSvcNum int
}{
// case 0
{
ns: "default",
gssName: "pod",
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
reserveNlbNum: 2,
targetPorts: []int{
6666,
8888,
},
maxPort: 2500,
minPort: 1000,
blockPorts: []int32{},
},
a: &AutoNLBsPlugin{
gssMaxPodIndex: map[string]int{
"default/pod": 1499,
},
mutex: sync.RWMutex{},
},
expectSvcNum: 4,
},
// case 1
{
ns: "default",
gssName: "pod",
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
reserveNlbNum: 2,
targetPorts: []int{
6666,
7777,
8888,
},
maxPort: 1005,
minPort: 1000,
blockPorts: []int32{},
},
a: &AutoNLBsPlugin{
gssMaxPodIndex: map[string]int{
"default/pod": 1,
},
mutex: sync.RWMutex{},
},
expectSvcNum: 3,
},
}
for i, test := range tests {
a := test.a
expectSvcNum := a.checkSvcNumToCreate(test.ns, test.gssName, test.config)
if expectSvcNum != test.expectSvcNum {
t.Errorf("case %d: expect toAddSvcNum: %d, but got toAddSvcNum: %d", i, test.expectSvcNum, expectSvcNum)
}
}
}
func TestConsSvcPorts(t *testing.T) {
tests := []struct {
a *AutoNLBsPlugin
svcIndex int
config *autoNLBsConfig
expectSvcPorts []corev1.ServicePort
}{
// case 0
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 0,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
targetPorts: []int{
6666,
8888,
},
maxPort: 1003,
minPort: 1000,
blockPorts: []int32{},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-0-6666",
TargetPort: intstr.FromString("tcp-0-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-0-8888",
TargetPort: intstr.FromString("udp-0-8888"),
Port: 1001,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-1-6666",
TargetPort: intstr.FromString("tcp-1-6666"),
Port: 1002,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-1-8888",
TargetPort: intstr.FromString("udp-1-8888"),
Port: 1003,
Protocol: corev1.ProtocolUDP,
},
},
},
// case 1
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 1,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
targetPorts: []int{
6666,
7777,
8888,
},
maxPort: 1004,
minPort: 1000,
blockPorts: []int32{},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-1-6666",
TargetPort: intstr.FromString("tcp-1-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "tcp-1-7777",
TargetPort: intstr.FromString("tcp-1-7777"),
Port: 1001,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-1-8888",
TargetPort: intstr.FromString("udp-1-8888"),
Port: 1002,
Protocol: corev1.ProtocolUDP,
},
},
},
// case 2
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 3,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
ProtocolTCPUDP,
},
targetPorts: []int{
6666,
},
maxPort: 1004,
minPort: 1000,
blockPorts: []int32{1002},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-12-6666",
TargetPort: intstr.FromString("tcp-12-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-12-6666",
TargetPort: intstr.FromString("udp-12-6666"),
Port: 1000,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-13-6666",
TargetPort: intstr.FromString("tcp-13-6666"),
Port: 1001,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-13-6666",
TargetPort: intstr.FromString("udp-13-6666"),
Port: 1001,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-14-6666",
TargetPort: intstr.FromString("tcp-14-6666"),
Port: 1003,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-14-6666",
TargetPort: intstr.FromString("udp-14-6666"),
Port: 1003,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-15-6666",
TargetPort: intstr.FromString("tcp-15-6666"),
Port: 1004,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-15-6666",
TargetPort: intstr.FromString("udp-15-6666"),
Port: 1004,
Protocol: corev1.ProtocolUDP,
},
},
},
}
for i, test := range tests {
svcPorts := test.a.consSvcPorts(test.svcIndex, test.config)
if !reflect.DeepEqual(svcPorts, test.expectSvcPorts) {
t.Errorf("case %d: expect svcPorts: %v, but got svcPorts: %v", i, test.expectSvcPorts, svcPorts)
}
}
}
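Case 2 above exercises the block-port skip in `consSvcPorts`: port 1002 is listed in `blockPorts`, so allocation jumps from 1001 to 1003. A minimal sketch of that advance-past-blocked loop (`nextFree` is a hypothetical helper name, not from the plugin):

```go
package main

import "fmt"

// nextFree mirrors the allocator's skip loop: advance the candidate port
// until it no longer appears in the blocked list.
func nextFree(candidate int32, blocked []int32) int32 {
	inBlocked := func(p int32) bool {
		for _, b := range blocked {
			if b == p {
				return true
			}
		}
		return false
	}
	for inBlocked(candidate) {
		candidate++
	}
	return candidate
}

func main() {
	// With 1002 blocked, the port after 1001 is 1003, as case 2 expects.
	fmt.Println(nextFree(1002, []int32{1002})) // 1003
}
```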

package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud/apis/v1beta1"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "AlibabaCloud-EIP"
AliasSEIP = "EIP-Network"
ReleaseStrategyConfigName = "ReleaseStrategy"
PoolIdConfigName = "PoolId"
ResourceGroupIdConfigName = "ResourceGroupId"
BandwidthConfigName = "Bandwidth"
BandwidthPackageIdConfigName = "BandwidthPackageId"
ChargeTypeConfigName = "ChargeType"
DescriptionConfigName = "Description"
WithEIPAnnotationKey = "k8s.aliyun.com/pod-with-eip"
ReleaseStrategyAnnotationkey = "k8s.aliyun.com/pod-eip-release-strategy"
PoolIdAnnotationkey = "k8s.aliyun.com/eip-public-ip-address-pool-id"
ResourceGroupIdAnnotationkey = "k8s.aliyun.com/eip-resource-group-id"
BandwidthAnnotationkey = "k8s.aliyun.com/eip-bandwidth"
BandwidthPackageIdAnnotationkey = "k8s.aliyun.com/eip-common-bandwidth-package-id"
ChargeTypeConfigAnnotationkey = "k8s.aliyun.com/eip-internet-charge-type"
EIPNameAnnotationKey = "k8s.aliyun.com/eip-name"
EIPDescriptionAnnotationKey = "k8s.aliyun.com/eip-description"
)
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
conf := networkManager.GetNetworkConfig()
pod.Annotations[WithEIPAnnotationKey] = "true"
pod.Annotations[EIPNameAnnotationKey] = pod.GetNamespace() + "/" + pod.GetName()
// parse network configuration
for _, c := range conf {
switch c.Name {
case ReleaseStrategyConfigName:
pod.Annotations[ReleaseStrategyAnnotationkey] = c.Value
case PoolIdConfigName:
pod.Annotations[PoolIdAnnotationkey] = c.Value
case ResourceGroupIdConfigName:
pod.Annotations[ResourceGroupIdAnnotationkey] = c.Value
case BandwidthConfigName:
pod.Annotations[BandwidthAnnotationkey] = c.Value
case BandwidthPackageIdConfigName:
pod.Annotations[BandwidthPackageIdAnnotationkey] = c.Value
case ChargeTypeConfigName:
pod.Annotations[ChargeTypeConfigAnnotationkey] = c.Value
case DescriptionConfigName:
pod.Annotations[EIPDescriptionAnnotationKey] = c.Value
}
}
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
podEip := &v1beta1.PodEIP{}
err := client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, podEip)
if err != nil || podEip.Status.EipAddress == "" {
return pod, nil
}
networkStatus.InternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: podEip.Status.PrivateIPAddress,
},
}
networkStatus.ExternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: podEip.Status.EipAddress,
},
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
return nil
}
func init() {
alibabaCloudProvider.registerPlugin(&EipPlugin{})
}
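`EipPlugin.OnPodAdded` above is a one-to-one translation from OKG network config names to Terway EIP pod annotations. A table-driven sketch of the same mapping (annotation strings copied from the constants above; the map name is illustrative):

```go
package main

import "fmt"

// confToAnnotation mirrors the switch in EipPlugin.OnPodAdded: each OKG
// NetworkConfParams name maps directly onto a Terway EIP pod annotation.
var confToAnnotation = map[string]string{
	"ReleaseStrategy":    "k8s.aliyun.com/pod-eip-release-strategy",
	"PoolId":             "k8s.aliyun.com/eip-public-ip-address-pool-id",
	"ResourceGroupId":    "k8s.aliyun.com/eip-resource-group-id",
	"Bandwidth":          "k8s.aliyun.com/eip-bandwidth",
	"BandwidthPackageId": "k8s.aliyun.com/eip-common-bandwidth-package-id",
	"ChargeType":         "k8s.aliyun.com/eip-internet-charge-type",
	"Description":        "k8s.aliyun.com/eip-description",
}

func main() {
	// e.g. a "Bandwidth" config entry becomes the eip-bandwidth annotation.
	fmt.Println(confToAnnotation["Bandwidth"]) // k8s.aliyun.com/eip-bandwidth
}
```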

/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
MultiNlbsNetwork = "AlibabaCloud-Multi-NLBs"
AliasMultiNlbs = "Multi-NLBs-Network"
// ConfigNames defined by OKG
NlbIdNamesConfigName = "NlbIdNames"
// service annotation defined by OKG
LBIDBelongIndexKey = "game.kruise.io/lb-belong-index"
// service label defined by OKG
ServiceBelongNetworkTypeKey = "game.kruise.io/network-type"
ProtocolTCPUDP corev1.Protocol = "TCPUDP"
PrefixReadyReadinessGate = "service.readiness.alibabacloud.com/"
)
type MultiNlbsPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache [][]bool
// podAllocate maps {pod ns/name} to its allocation, e.g. {lbIds: [xxx-a, xxx-b], ports: [8001, 8002]}
podAllocate map[string]*lbsPorts
mutex sync.RWMutex
}
type lbsPorts struct {
index int
lbIds []string
ports []int32
targetPort []int
protocols []corev1.Protocol
}
func (m *MultiNlbsPlugin) Name() string {
return MultiNlbsNetwork
}
func (m *MultiNlbsPlugin) Alias() string {
return AliasMultiNlbs
}
func (m *MultiNlbsPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
m.mutex.Lock()
defer m.mutex.Unlock()
nlbOptions := options.(provideroptions.AlibabaCloudOptions).NLBOptions
m.minPort = nlbOptions.MinPort
m.maxPort = nlbOptions.MaxPort
m.blockPorts = nlbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList, client.MatchingLabels{ServiceBelongNetworkTypeKey: MultiNlbsNetwork})
if err != nil {
return err
}
m.podAllocate, m.cache = initMultiLBCache(svcList.Items, m.maxPort, m.minPort, m.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: ", MultiNlbsNetwork)
for podNsName, lps := range m.podAllocate {
log.Infof("[%s] pod %s: %v", MultiNlbsNetwork, podNsName, *lps)
}
return nil
}
func initMultiLBCache(svcList []corev1.Service, maxPort, minPort int32, blockPorts []int32) (map[string]*lbsPorts, [][]bool) {
podAllocate := make(map[string]*lbsPorts)
cache := make([][]bool, 0)
for _, svc := range svcList {
index, err := strconv.Atoi(svc.GetAnnotations()[LBIDBelongIndexKey])
if err != nil {
continue
}
lenCache := len(cache)
for i := lenCache; i <= index; i++ {
cacheLevel := make([]bool, int(maxPort-minPort)+1)
for _, p := range blockPorts {
cacheLevel[int(p-minPort)] = true
}
cache = append(cache, cacheLevel)
}
ports := make([]int32, 0)
protocols := make([]corev1.Protocol, 0)
targetPorts := make([]int, 0)
for _, port := range svc.Spec.Ports {
cache[index][(port.Port - minPort)] = true
ports = append(ports, port.Port)
protocols = append(protocols, port.Protocol)
targetPorts = append(targetPorts, port.TargetPort.IntValue())
}
nsName := svc.GetNamespace() + "/" + svc.Spec.Selector[SvcSelectorKey]
if podAllocate[nsName] == nil {
podAllocate[nsName] = &lbsPorts{
index: index,
lbIds: []string{svc.Labels[SlbIdLabelKey]},
ports: ports,
protocols: protocols,
targetPort: targetPorts,
}
} else {
podAllocate[nsName].lbIds = append(podAllocate[nsName].lbIds, svc.Labels[SlbIdLabelKey])
}
}
return podAllocate, cache
}
func (m *MultiNlbsPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var lbNames []string
for _, lbName := range conf.lbNames {
if !util.IsStringInList(lbName, lbNames) {
lbNames = append(lbNames, lbName)
}
}
for _, lbName := range lbNames {
pod.Spec.ReadinessGates = append(pod.Spec.ReadinessGates, corev1.PodReadinessGate{
ConditionType: corev1.PodConditionType(PrefixReadyReadinessGate + pod.GetName() + "-" + strings.ToLower(lbName)),
})
}
return pod, nil
}
func (m *MultiNlbsPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
podNsName := pod.GetNamespace() + "/" + pod.GetName()
podLbsPorts, err := m.allocate(conf, podNsName)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
for _, lbId := range conf.idList[podLbsPorts.index] {
// get svc
lbName := conf.lbNames[lbId]
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
}
endPoints := ""
for i, lbId := range conf.idList[podLbsPorts.index] {
// get svc
lbName := conf.lbNames[lbId]
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if len(svc.OwnerReferences) > 0 && svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waiting for old svc %s/%s to be deleted. old owner pod uid is %s, but now is %s", MultiNlbsNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(conf) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
_, readyCondition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if readyCondition == nil || readyCondition.Status == corev1.ConditionFalse {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
endPoints = endPoints + svc.Status.LoadBalancer.Ingress[0].Hostname + "/" + lbName
if i != len(conf.idList[podLbsPorts.index])-1 {
endPoints = endPoints + ","
}
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: port.Name,
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: endPoints,
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: port.Name,
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (m *MultiNlbsPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range m.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
m.deAllocate(podKey)
}
return nil
}
func init() {
multiNlbsPlugin := MultiNlbsPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&multiNlbsPlugin)
}
type multiNLBsConfig struct {
lbNames map[string]string
idList [][]string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
externalTrafficPolicy corev1.ServiceExternalTrafficPolicyType
*nlbHealthConfig
}
func (m *MultiNlbsPlugin) consSvc(podLbsPorts *lbsPorts, conf *multiNLBsConfig, pod *corev1.Pod, lbName string, c client.Client, ctx context.Context) (*corev1.Service, error) {
var selectId string
for _, lbId := range podLbsPorts.lbIds {
if conf.lbNames[lbId] == lbName {
selectId = lbId
break
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(podLbsPorts.ports); i++ {
if podLbsPorts.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(corev1.ProtocolTCP)),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: corev1.ProtocolTCP,
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(corev1.ProtocolUDP)),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: corev1.ProtocolUDP,
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(podLbsPorts.protocols[i])),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: podLbsPorts.protocols[i],
})
}
}
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: selectId,
SlbConfigHashKey: util.GetHash(conf),
LBHealthCheckFlagAnnotationKey: conf.lBHealthCheckFlag,
}
if conf.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = conf.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = conf.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = conf.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = conf.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = conf.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = conf.lBUnhealthyThreshold
if conf.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = conf.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = conf.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = conf.lBHealthCheckMethod
}
}
svcAnnotations[LBIDBelongIndexKey] = strconv.Itoa(podLbsPorts.index)
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
Labels: map[string]string{
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
OwnerReferences: getSvcOwnerReference(c, ctx, pod, conf.isFixed),
},
Spec: corev1.ServiceSpec{
AllocateLoadBalancerNodePorts: ptr.To[bool](false),
ExternalTrafficPolicy: conf.externalTrafficPolicy,
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}, nil
}
func (m *MultiNlbsPlugin) allocate(conf *multiNLBsConfig, nsName string) (*lbsPorts, error) {
m.mutex.Lock()
defer m.mutex.Unlock()
// check if pod is already allocated
if m.podAllocate[nsName] != nil {
return m.podAllocate[nsName], nil
}
// if the pod has not been allocated, allocate new ports to it
var ports []int32
needNum := len(conf.targetPorts)
index := -1
// init cache according to conf.idList
lenCache := len(m.cache)
for i := lenCache; i < len(conf.idList); i++ {
cacheLevel := make([]bool, int(m.maxPort-m.minPort)+1)
for _, p := range m.blockPorts {
cacheLevel[int(p-m.minPort)] = true
}
m.cache = append(m.cache, cacheLevel)
}
// find allocated ports
for i := 0; i < len(m.cache); i++ {
sum := 0
ports = make([]int32, 0)
for j := 0; j < len(m.cache[i]); j++ {
if !m.cache[i][j] {
ports = append(ports, int32(j)+m.minPort)
sum++
if sum == needNum {
index = i
break
}
}
}
if index != -1 {
break
}
}
if index == -1 {
return nil, fmt.Errorf("no available ports found")
}
if index >= len(conf.idList) {
return nil, fmt.Errorf("NlbIdNames configuration has not been synced")
}
for _, port := range ports {
m.cache[index][port-m.minPort] = true
}
m.podAllocate[nsName] = &lbsPorts{
index: index,
lbIds: conf.idList[index],
ports: ports,
protocols: conf.protocols,
targetPort: conf.targetPorts,
}
log.Infof("[%s] pod %s allocated: lbIds %v; ports %v", MultiNlbsNetwork, nsName, conf.idList[index], ports)
return m.podAllocate[nsName], nil
}
func (m *MultiNlbsPlugin) deAllocate(nsName string) {
m.mutex.Lock()
defer m.mutex.Unlock()
podLbsPorts := m.podAllocate[nsName]
if podLbsPorts == nil {
return
}
for _, port := range podLbsPorts.ports {
m.cache[podLbsPorts.index][port-m.minPort] = false
}
delete(m.podAllocate, nsName)
log.Infof("[%s] pod %s deallocated: lbIds %v; ports %v", MultiNlbsNetwork, nsName, podLbsPorts.lbIds, podLbsPorts.ports)
}
func parseMultiNLBsConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*multiNLBsConfig, error) {
// lbNames format {id}: {name}
lbNames := make(map[string]string)
idList := make([][]string, 0)
nameNums := make(map[string]int)
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeLocal
for _, c := range conf {
switch c.Name {
case NlbIdNamesConfigName:
for _, nlbIdNamesConfig := range strings.Split(c.Value, ",") {
if nlbIdNamesConfig != "" {
idName := strings.Split(nlbIdNamesConfig, "/")
if len(idName) != 2 {
return nil, fmt.Errorf("invalid NlbIdNames %s. You should input as the format {nlb-id-0}/{name-0}", c.Value)
}
id := idName[0]
name := idName[1]
nameNum := nameNums[name]
if nameNum >= len(idList) {
idList = append(idList, []string{id})
} else {
idList[nameNum] = append(idList[nameNum], id)
}
nameNums[name]++
lbNames[id] = name
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
return nil, fmt.Errorf("invalid PortProtocols %s", c.Value)
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid Fixed %s", c.Value)
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeCluster)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeCluster
}
}
}
// check idList
if len(idList) == 0 {
return nil, fmt.Errorf("invalid NlbIdNames. You should input as the format {nlb-id-0}/{name-0}")
}
num := len(idList[0])
for i := 1; i < len(idList); i++ {
if num != len(idList[i]) {
return nil, fmt.Errorf("invalid NlbIdNames. Each name should have the same number of ids")
}
num = len(idList[i])
}
// check ports & protocols
if len(ports) == 0 || len(protocols) == 0 {
return nil, fmt.Errorf("invalid PortProtocols, which can not be empty")
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &multiNLBsConfig{
lbNames: lbNames,
idList: idList,
targetPorts: ports,
protocols: protocols,
isFixed: isFixed,
externalTrafficPolicy: externalTrafficPolicy,
nlbHealthConfig: nlbHealthConfig,
}, nil
}
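The `NlbIdNames` grouping in `parseMultiNLBsConfig` above can be sketched in isolation. The following is a minimal, self-contained version (helper name `groupIdsByName` is illustrative, not from the source) that reproduces how ids sharing an operator name are spread across index levels, so that level `i` holds the i-th id of every name:

```go
package main

import (
	"fmt"
	"strings"
)

// groupIdsByName mirrors the NlbIdNames parsing above: each "{id}/{name}"
// pair is appended at the level equal to how many ids of that name have
// been seen so far, so level i collects the i-th id of every name.
func groupIdsByName(value string) ([][]string, error) {
	idList := make([][]string, 0)
	nameNums := make(map[string]int)
	for _, entry := range strings.Split(value, ",") {
		if entry == "" {
			continue
		}
		idName := strings.Split(entry, "/")
		if len(idName) != 2 {
			return nil, fmt.Errorf("invalid NlbIdNames entry %q", entry)
		}
		id, name := idName[0], idName[1]
		level := nameNums[name]
		if level >= len(idList) {
			idList = append(idList, []string{id})
		} else {
			idList[level] = append(idList[level], id)
		}
		nameNums[name]++
	}
	return idList, nil
}

func main() {
	idList, _ := groupIdsByName("id-A/dianxin,id-B/liantong,id-C/dianxin,id-D/liantong")
	fmt.Println(idList) // [[id-A id-B] [id-C id-D]]
}
```

Each level of `idList` is then one multi-operator NLB set, which is why the config check later requires every name to contribute the same number of ids.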

@@ -0,0 +1,453 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"reflect"
"sync"
"testing"
)
func TestParseMultiNLBsConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
multiNLBsConfig *multiNLBsConfig
}{
// case 0
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdNamesConfigName,
Value: "id-xx-A/dianxin,id-xx-B/liantong,id-xx-C/dianxin,id-xx-D/liantong",
},
{
Name: PortProtocolsConfigName,
Value: "80/TCP,80/UDP",
},
},
multiNLBsConfig: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
},
},
// case 1
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdNamesConfigName,
Value: "id-xx-A/dianxin,id-xx-B/dianxin,id-xx-C/dianxin,id-xx-D/liantong,id-xx-E/liantong,id-xx-F/liantong",
},
{
Name: PortProtocolsConfigName,
Value: "80/TCP,80/UDP",
},
},
multiNLBsConfig: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "dianxin",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
"id-xx-E": "liantong",
"id-xx-F": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-D",
},
{
"id-xx-B", "id-xx-E",
},
{
"id-xx-C", "id-xx-F",
},
},
},
},
}
for i, tt := range tests {
actual, err := parseMultiNLBsConfig(tt.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(actual.lbNames, tt.multiNLBsConfig.lbNames) {
t.Errorf("case %d: parseMultiNLBsConfig lbNames actual: %v, expect: %v", i, actual.lbNames, tt.multiNLBsConfig.lbNames)
}
if !reflect.DeepEqual(actual.idList, tt.multiNLBsConfig.idList) {
t.Errorf("case %d: parseMultiNLBsConfig idList actual: %v, expect: %v", i, actual.idList, tt.multiNLBsConfig.idList)
}
}
}
func TestAllocate(t *testing.T) {
tests := []struct {
plugin *MultiNlbsPlugin
conf *multiNLBsConfig
nsName string
lbsPorts *lbsPorts
cacheAfter [][]bool
podAllocateAfter map[string]*lbsPorts
}{
// case 0: cache is nil
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: make(map[string]*lbsPorts),
cache: make([][]bool, 0),
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-0",
lbsPorts: &lbsPorts{
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
cacheAfter: [][]bool{{true, true, true}, {false, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
},
// case 1: cache not nil & new pod
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{true, true, false}},
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-1",
lbsPorts: &lbsPorts{
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
cacheAfter: [][]bool{{true, true, false}, {true, true, true}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
"default/test-1": {
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
},
// case 2: cache not nil & old pod
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{true, true, false}},
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-0",
lbsPorts: &lbsPorts{
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
cacheAfter: [][]bool{{true, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
},
}
for i, tt := range tests {
plugin := tt.plugin
lbsPorts, err := plugin.allocate(tt.conf, tt.nsName)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(lbsPorts, tt.lbsPorts) {
t.Errorf("case %d: allocate actual: %v, expect: %v", i, lbsPorts, tt.lbsPorts)
}
if !reflect.DeepEqual(plugin.podAllocate, tt.podAllocateAfter) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, plugin.podAllocate, tt.podAllocateAfter)
}
if !reflect.DeepEqual(plugin.cache, tt.cacheAfter) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, plugin.cache, tt.cacheAfter)
}
}
}
func TestDeAllocate(t *testing.T) {
tests := []struct {
plugin *MultiNlbsPlugin
nsName string
cacheAfter [][]bool
podAllocateAfter map[string]*lbsPorts
}{
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
"default/test-1": {
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
cache: [][]bool{{true, true, false}, {true, true, true}},
},
nsName: "default/test-1",
cacheAfter: [][]bool{{true, true, false}, {false, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
},
}
for i, tt := range tests {
plugin := tt.plugin
plugin.deAllocate(tt.nsName)
if !reflect.DeepEqual(plugin.podAllocate, tt.podAllocateAfter) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, plugin.podAllocate, tt.podAllocateAfter)
}
if !reflect.DeepEqual(plugin.cache, tt.cacheAfter) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, plugin.cache, tt.cacheAfter)
}
}
}
func TestInitMultiLBCache(t *testing.T) {
tests := []struct {
svcList []corev1.Service
maxPort int32
minPort int32
blockPorts []int32
podAllocate map[string]*lbsPorts
cache [][]bool
}{
{
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
LBIDBelongIndexKey: "0",
},
Labels: map[string]string{
SlbIdLabelKey: "xxx-A",
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
LBIDBelongIndexKey: "0",
},
Labels: map[string]string{
SlbIdLabelKey: "xxx-B",
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
Namespace: "ns-0",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
maxPort: int32(667),
minPort: int32(665),
blockPorts: []int32{},
podAllocate: map[string]*lbsPorts{
"ns-0/pod-A": {
index: 0,
lbIds: []string{"xxx-A", "xxx-B"},
ports: []int32{666},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{false, true, false}},
},
}
for i, tt := range tests {
podAllocate, cache := initMultiLBCache(tt.svcList, tt.maxPort, tt.minPort, tt.blockPorts)
if !reflect.DeepEqual(podAllocate, tt.podAllocate) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, podAllocate, tt.podAllocate)
}
if !reflect.DeepEqual(cache, tt.cache) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, cache, tt.cache)
}
}
}
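The cache scanned by these tests is a per-level bitmap: index `j` stands for port `minPort+j`, block ports start out marked used, and `allocate` claims the first `needNum` free offsets. A minimal standalone sketch of that scan (function name `allocatePorts` is illustrative, not from the source):

```go
package main

import "fmt"

// allocatePorts mirrors the cache scan in MultiNlbsPlugin.allocate:
// cache[j] == true means port minPort+j is taken (block ports are
// pre-marked); the first needNum free offsets are claimed, or nil is
// returned if the level cannot satisfy the request.
func allocatePorts(cache []bool, minPort int32, needNum int) []int32 {
	ports := make([]int32, 0, needNum)
	for j := range cache {
		if !cache[j] {
			ports = append(ports, int32(j)+minPort)
			if len(ports) == needNum {
				break
			}
		}
	}
	if len(ports) < needNum {
		return nil // not enough free ports on this level
	}
	for _, p := range ports {
		cache[p-minPort] = true
	}
	return ports
}

func main() {
	// minPort 8000, maxPort 8002, blockPorts {8001} — as in the tests above.
	cache := []bool{false, true, false}
	fmt.Println(allocatePorts(cache, 8000, 2)) // [8000 8002]
	fmt.Println(cache)                         // [true true true]
}
```

This matches case 0 above: `{false, true, false}` yields ports 8000 and 8002 and leaves the level fully marked, which is why `cacheAfter` is `{true, true, true}`.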

@@ -0,0 +1,158 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud/apis/v1beta1"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strings"
)
const (
NATGWNetwork = "AlibabaCloud-NATGW"
AliasNATGW = "NATGW-Network"
FixedConfigName = "Fixed"
PortsConfigName = "Ports"
ProtocolConfigName = "Protocol"
DnatAnsKey = "k8s.aliyun.com/pod-dnat"
PortsAnsKey = "k8s.aliyun.com/pod-dnat-expose-port"
ProtocolAnsKey = "k8s.aliyun.com/pod-dnat-expose-protocol"
FixedAnsKey = "k8s.aliyun.com/pod-dnat-fixed"
)
type NatGwPlugin struct {
}
func (n NatGwPlugin) Name() string {
return NATGWNetwork
}
func (n NatGwPlugin) Alias() string {
return AliasNATGW
}
func (n NatGwPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (n NatGwPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
conf := networkManager.GetNetworkConfig()
ports, protocol, fixed := parseConfig(conf)
pod.Annotations[DnatAnsKey] = "true"
pod.Annotations[PortsAnsKey] = ports
if protocol != "" {
pod.Annotations[ProtocolAnsKey] = protocol
}
if fixed != "" {
pod.Annotations[FixedAnsKey] = fixed
}
return pod, nil
}
func (n NatGwPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
podDNat := &v1beta1.PodDNAT{}
err := c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, podDNat)
if err != nil || podDNat.Status.Entries == nil {
return pod, nil
}
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, entry := range podDNat.Status.Entries {
instrIPort := intstr.FromString(entry.InternalPort)
instrEPort := intstr.FromString(entry.ExternalPort)
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: entry.InternalIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: entry.InternalPort,
Port: &instrIPort,
Protocol: corev1.Protocol(strings.ToUpper(entry.IPProtocol)),
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: entry.ExternalIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: entry.InternalPort,
Port: &instrEPort,
Protocol: corev1.Protocol(strings.ToUpper(entry.IPProtocol)),
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
// NetworkReady when all ports have external addresses
if len(strings.Split(pod.Annotations[PortsAnsKey], ",")) == len(podDNat.Status.Entries) {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
}
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (n NatGwPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
return nil
}
func init() {
alibabaCloudProvider.registerPlugin(&NatGwPlugin{})
}
func parseConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (string, string, string) {
var ports string
var protocol string
var fixed string
for _, c := range conf {
switch c.Name {
case PortsConfigName:
ports = c.Value
case ProtocolConfigName:
protocol = c.Value
case FixedConfigName:
fixed = c.Value
}
}
return ports, protocol, fixed
}
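The NATGW plugin above is annotation-driven: `OnPodAdded` only translates the network config into the `k8s.aliyun.com/pod-dnat*` annotations consumed by the DNAT controller. A minimal standalone sketch of that mapping (helper name `dnatAnnotations` is illustrative; the annotation keys are copied from the constants above):

```go
package main

import "fmt"

// dnatAnnotations mirrors NatGwPlugin.OnPodAdded: the dnat flag and
// expose-port annotations are always set, while protocol and fixed are
// added only when the corresponding config value is non-empty.
func dnatAnnotations(ports, protocol, fixed string) map[string]string {
	ans := map[string]string{
		"k8s.aliyun.com/pod-dnat":             "true",
		"k8s.aliyun.com/pod-dnat-expose-port": ports,
	}
	if protocol != "" {
		ans["k8s.aliyun.com/pod-dnat-expose-protocol"] = protocol
	}
	if fixed != "" {
		ans["k8s.aliyun.com/pod-dnat-fixed"] = fixed
	}
	return ans
}

func main() {
	// Expose ports 80 and 8080 over TCP, without fixed DNAT entries.
	fmt.Println(dnatAnnotations("80,8080", "tcp", ""))
}
```

`OnPodUpdated` then reads the resulting `PodDNAT` status back and marks the network ready once every configured port has a DNAT entry.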

@@ -0,0 +1,640 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"regexp"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
NlbNetwork = "AlibabaCloud-NLB"
AliasNLB = "NLB-Network"
// annotations provided by AlibabaCloud Cloud Controller Manager
LBHealthCheckFlagAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-flag"
LBHealthCheckTypeAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-type"
LBHealthCheckConnectPortAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-connect-port"
LBHealthCheckConnectTimeoutAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-connect-timeout"
LBHealthyThresholdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-healthy-threshold"
LBUnhealthyThresholdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-unhealthy-threshold"
LBHealthCheckIntervalAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-interval"
LBHealthCheckUriAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-uri"
LBHealthCheckDomainAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-domain"
LBHealthCheckMethodAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-method"
// ConfigNames defined by OKG
LBHealthCheckFlagConfigName = "LBHealthCheckFlag"
LBHealthCheckTypeConfigName = "LBHealthCheckType"
LBHealthCheckConnectPortConfigName = "LBHealthCheckConnectPort"
LBHealthCheckConnectTimeoutConfigName = "LBHealthCheckConnectTimeout"
LBHealthCheckIntervalConfigName = "LBHealthCheckInterval"
LBHealthCheckUriConfigName = "LBHealthCheckUri"
LBHealthCheckDomainConfigName = "LBHealthCheckDomain"
LBHealthCheckMethodConfigName = "LBHealthCheckMethod"
LBHealthyThresholdConfigName = "LBHealthyThreshold"
LBUnhealthyThresholdConfigName = "LBUnhealthyThreshold"
)
type NlbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type nlbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
*nlbHealthConfig
}
type nlbHealthConfig struct {
lBHealthCheckFlag string
lBHealthCheckType string
lBHealthCheckConnectPort string
lBHealthCheckConnectTimeout string
lBHealthCheckInterval string
lBHealthCheckUri string
lBHealthCheckDomain string
lBHealthCheckMethod string
lBHealthyThreshold string
lBUnhealthyThreshold string
}
func (n *NlbPlugin) Name() string {
return NlbNetwork
}
func (n *NlbPlugin) Alias() string {
return AliasNLB
}
func (n *NlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
n.mutex.Lock()
defer n.mutex.Unlock()
slbOptions := options.(provideroptions.AlibabaCloudOptions).NLBOptions
n.minPort = slbOptions.MinPort
n.maxPort = slbOptions.MaxPort
n.blockPorts = slbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
if err != nil {
return err
}
n.cache, n.podAllocate = initLbCache(svcList.Items, n.minPort, n.maxPort, n.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: %v", NlbNetwork, n.podAllocate)
return nil
}
func (n *NlbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseNlbConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := n.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waiting for old svc %s/%s to be deleted. old owner pod uid is %s, but now is %s", NlbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := n.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: svc.Status.LoadBalancer.Ingress[0].Hostname,
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (n *NlbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseNlbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss does not exist in cluster, deAllocate all the ports related to it.
for key := range n.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
n.deAllocate(podKey)
}
return nil
}
func init() {
nlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&nlbPlugin)
}
func (n *NlbPlugin) consSvc(nc *nlbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := n.podAllocate[podKey]
if exist {
slbPorts := strings.Split(allocatedPorts, ":")
lbId = slbPorts[0]
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = n.allocate(nc.lbIds, len(nc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no available ports for %v", nc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(nc.targetPorts); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(nc.targetPorts[i]),
Port: ports[i],
Protocol: nc.protocols[i],
TargetPort: intstr.FromInt(nc.targetPorts[i]),
})
}
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: lbId,
SlbConfigHashKey: util.GetHash(nc),
LBHealthCheckFlagAnnotationKey: nc.lBHealthCheckFlag,
}
if nc.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = nc.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = nc.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = nc.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = nc.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = nc.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = nc.lBUnhealthyThreshold
if nc.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = nc.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = nc.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = nc.lBHealthCheckMethod
}
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, nc.isFixed),
},
Spec: corev1.ServiceSpec{
ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}
return svc, nil
}
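consSvc above attaches the health-check annotations conditionally: they are emitted only when `lBHealthCheckFlag` is "on", and the HTTP-specific keys only when the check type is "http". A minimal, dependency-free sketch of that rule; `buildHealthAnnotations` and its shortened key names are illustrative, not part of the plugin:

```go
package main

import "fmt"

// buildHealthAnnotations mirrors consSvc's rule: health-check annotations
// are attached only when the flag is "on", and the HTTP-only keys only
// when the check type is "http". Key names are shortened stand-ins for
// the CCM annotation keys used above.
func buildHealthAnnotations(flag, typ, domain, uri string) map[string]string {
	ann := map[string]string{"health-check-flag": flag}
	if flag != "on" {
		return ann
	}
	ann["health-check-type"] = typ
	if typ == "http" {
		ann["health-check-domain"] = domain
		ann["health-check-uri"] = uri
	}
	return ann
}

func main() {
	fmt.Println(len(buildHealthAnnotations("off", "tcp", "", "")))         // 1
	fmt.Println(len(buildHealthAnnotations("on", "tcp", "", "")))          // 2
	fmt.Println(len(buildHealthAnnotations("on", "http", "d.com", "/hc"))) // 4
}
```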
func (n *NlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
n.mutex.Lock()
defer n.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, slbId := range lbIds {
sum := 0
for i := n.minPort; i <= n.maxPort; i++ {
if !n.cache[slbId][i] {
sum++
}
if sum >= num {
lbId = slbId
break
}
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if n.cache[lbId] == nil {
// init cache for new lb
n.cache[lbId] = make(portAllocated, n.maxPort-n.minPort+1)
for i := n.minPort; i <= n.maxPort; i++ {
n.cache[lbId][i] = false
}
// block ports
for _, blockPort := range n.blockPorts {
n.cache[lbId][blockPort] = true
}
}
for p, allocated := range n.cache[lbId] {
if !allocated {
port = p
break
}
}
n.cache[lbId][port] = true
ports = append(ports, port)
}
n.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocated nlb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (n *NlbPlugin) deAllocate(nsName string) {
n.mutex.Lock()
defer n.mutex.Unlock()
allocatedPorts, exist := n.podAllocate[nsName]
if !exist {
return
}
slbPorts := strings.Split(allocatedPorts, ":")
lbId := slbPorts[0]
ports := util.StringToInt32Slice(slbPorts[1], ",")
for _, port := range ports {
n.cache[lbId][port] = false
}
// block ports
for _, blockPort := range n.blockPorts {
n.cache[lbId][blockPort] = true
}
delete(n.podAllocate, nsName)
log.Infof("pod %s deallocated nlb %s ports %v", nsName, lbId, ports)
}
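allocate and deAllocate above maintain a per-LB bitmap of taken ports plus a podAllocate index. A minimal sketch of the same bookkeeping, with locking omitted, an `allocation` struct in place of the "lbId:ports" string, and deterministic ascending port order instead of Go's randomized map iteration:

```go
package main

import "fmt"

// portAllocated mirrors the plugin's per-LB bitmap: true means the port is taken.
type portAllocated map[int32]bool

type allocation struct {
	lbId  string
	ports []int32
}

type allocator struct {
	minPort, maxPort int32
	blockPorts       []int32
	cache            map[string]portAllocated
	podAllocate      map[string]allocation
}

// allocate reserves num free ports on lbId, lazily initializing its bitmap
// and pre-marking blocked ports, as the plugin does for a new LB.
func (a *allocator) allocate(lbId string, num int, podKey string) []int32 {
	if a.cache[lbId] == nil {
		a.cache[lbId] = make(portAllocated, a.maxPort-a.minPort+1)
		for _, p := range a.blockPorts {
			a.cache[lbId][p] = true
		}
	}
	var ports []int32
	for p := a.minPort; p <= a.maxPort && len(ports) < num; p++ {
		if !a.cache[lbId][p] {
			a.cache[lbId][p] = true
			ports = append(ports, p)
		}
	}
	if len(ports) < num {
		return nil // not enough free ports on this LB
	}
	a.podAllocate[podKey] = allocation{lbId: lbId, ports: ports}
	return ports
}

// deAllocate frees a pod's ports again; blocked ports stay reserved
// because allocate never marks them free.
func (a *allocator) deAllocate(podKey string) {
	alloc, ok := a.podAllocate[podKey]
	if !ok {
		return
	}
	for _, p := range alloc.ports {
		a.cache[alloc.lbId][p] = false
	}
	delete(a.podAllocate, podKey)
}

func main() {
	a := &allocator{
		minPort: 500, maxPort: 505,
		blockPorts:  []int32{501},
		cache:       map[string]portAllocated{},
		podAllocate: map[string]allocation{},
	}
	fmt.Println(a.allocate("lb-a", 3, "default/pod-0")) // [500 502 503] (501 is blocked)
	a.deAllocate("default/pod-0")
	fmt.Println(a.allocate("lb-a", 3, "default/pod-1")) // [500 502 503] again
}
```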
func parseNlbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nlbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
for _, slbId := range strings.Split(c.Value, ",") {
if slbId != "" {
lbIds = append(lbIds, slbId)
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
}
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &nlbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
nlbHealthConfig: nlbHealthConfig,
}, nil
}
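parseNlbConfig splits the PortProtocols value on "," and then "/", defaulting to TCP when no protocol is given and skipping malformed entries. The same parsing in isolation, with plain protocol strings standing in for corev1.Protocol:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePortProtocols splits a config value like "81/UDP,82,83/TCP" into
// parallel port and protocol slices, defaulting to TCP when the protocol
// is omitted, as parseNlbConfig does above.
func parsePortProtocols(value string) ([]int, []string) {
	var ports []int
	var protocols []string
	for _, pp := range strings.Split(value, ",") {
		ppSlice := strings.Split(pp, "/")
		port, err := strconv.Atoi(ppSlice[0])
		if err != nil {
			continue // skip malformed entries, e.g. a trailing empty string
		}
		ports = append(ports, port)
		if len(ppSlice) == 2 {
			protocols = append(protocols, ppSlice[1])
		} else {
			protocols = append(protocols, "TCP")
		}
	}
	return ports, protocols
}

func main() {
	ports, protocols := parsePortProtocols("81/UDP,82,83/TCP")
	fmt.Println(ports)     // [81 82 83]
	fmt.Println(protocols) // [UDP TCP TCP]
}
```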
func parseNlbHealthConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nlbHealthConfig, error) {
lBHealthCheckFlag := "on"
lBHealthCheckType := "tcp"
lBHealthCheckConnectPort := "0"
lBHealthCheckConnectTimeout := "5"
lBHealthCheckInterval := "10"
lBUnhealthyThreshold := "2"
lBHealthyThreshold := "2"
lBHealthCheckUri := ""
lBHealthCheckDomain := ""
lBHealthCheckMethod := ""
for _, c := range conf {
switch c.Name {
case LBHealthCheckFlagConfigName:
flag := strings.ToLower(c.Value)
if flag != "on" && flag != "off" {
return nil, fmt.Errorf("invalid lb health check flag value: %s", c.Value)
}
lBHealthCheckFlag = flag
case LBHealthCheckTypeConfigName:
checkType := strings.ToLower(c.Value)
if checkType != "tcp" && checkType != "http" {
return nil, fmt.Errorf("invalid lb health check type: %s", c.Value)
}
lBHealthCheckType = checkType
case LBHealthCheckConnectPortConfigName:
portInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect port: %s", c.Value)
}
if portInt < 0 || portInt > 65535 {
return nil, fmt.Errorf("invalid lb health check connect port: %d", portInt)
}
lBHealthCheckConnectPort = c.Value
case LBHealthCheckConnectTimeoutConfigName:
timeoutInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect timeout: %s", c.Value)
}
if timeoutInt < 1 || timeoutInt > 300 {
return nil, fmt.Errorf("invalid lb health check connect timeout: %d", timeoutInt)
}
lBHealthCheckConnectTimeout = c.Value
case LBHealthCheckIntervalConfigName:
intervalInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check interval: %s", c.Value)
}
if intervalInt < 1 || intervalInt > 50 {
return nil, fmt.Errorf("invalid lb health check interval: %d", intervalInt)
}
lBHealthCheckInterval = c.Value
case LBHealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb healthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb healthy threshold: %d", thresholdInt)
}
lBHealthyThreshold = c.Value
case LBUnhealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %d", thresholdInt)
}
lBUnhealthyThreshold = c.Value
case LBHealthCheckUriConfigName:
if validateUri(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check uri: %s", c.Value)
}
lBHealthCheckUri = c.Value
case LBHealthCheckDomainConfigName:
if validateDomain(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check domain: %s", c.Value)
}
lBHealthCheckDomain = c.Value
case LBHealthCheckMethodConfigName:
method := strings.ToLower(c.Value)
if method != "get" && method != "head" {
return nil, fmt.Errorf("invalid lb health check method: %s", c.Value)
}
lBHealthCheckMethod = method
}
}
return &nlbHealthConfig{
lBHealthCheckFlag: lBHealthCheckFlag,
lBHealthCheckType: lBHealthCheckType,
lBHealthCheckConnectPort: lBHealthCheckConnectPort,
lBHealthCheckConnectTimeout: lBHealthCheckConnectTimeout,
lBHealthCheckInterval: lBHealthCheckInterval,
lBHealthCheckUri: lBHealthCheckUri,
lBHealthCheckDomain: lBHealthCheckDomain,
lBHealthCheckMethod: lBHealthCheckMethod,
lBHealthyThreshold: lBHealthyThreshold,
lBUnhealthyThreshold: lBUnhealthyThreshold,
}, nil
}
func validateDomain(domain string) error {
if len(domain) < 1 || len(domain) > 80 {
return fmt.Errorf("the domain length must be between 1 and 80 characters")
}
// Regular expression matches lowercase letters, numbers, hyphens and periods
domainRegex := regexp.MustCompile(`^[a-z0-9-.]+$`)
if !domainRegex.MatchString(domain) {
return fmt.Errorf("the domain must only contain lowercase letters, numbers, hyphens, and periods")
}
// make sure the domain name does not start or end with a hyphen or period
if domain[0] == '-' || domain[0] == '.' || domain[len(domain)-1] == '-' || domain[len(domain)-1] == '.' {
return fmt.Errorf("the domain must not start or end with a hyphen or period")
}
// make sure the domain name does not contain consecutive hyphens or periods
if regexp.MustCompile(`(--|\.\.)`).MatchString(domain) {
return fmt.Errorf("the domain must not contain consecutive hyphens or periods")
}
return nil
}
func validateUri(uri string) error {
if len(uri) < 1 || len(uri) > 80 {
return fmt.Errorf("string length must be between 1 and 80 characters")
}
regexPattern := `^/[0-9a-zA-Z.!$%&'*+/=?^_` + "`" + `{|}~-]*$`
matched, err := regexp.MatchString(regexPattern, uri)
if err != nil {
return fmt.Errorf("regex error: %v", err)
}
if !matched {
return fmt.Errorf("string does not match the required pattern")
}
return nil
}
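validateDomain above enforces three rules: an allowed-character regex, no leading or trailing hyphen or period, and no consecutive hyphens or periods. A compact demonstration using the same regexes; `checkDomain` is an illustrative boolean wrapper, not the plugin's function:

```go
package main

import (
	"fmt"
	"regexp"
)

// checkDomain applies the same three rules as validateDomain above:
// allowed characters, no leading/trailing '-' or '.', no "--" or "..".
func checkDomain(domain string) bool {
	if len(domain) < 1 || len(domain) > 80 {
		return false
	}
	if !regexp.MustCompile(`^[a-z0-9-.]+$`).MatchString(domain) {
		return false
	}
	if domain[0] == '-' || domain[0] == '.' || domain[len(domain)-1] == '-' || domain[len(domain)-1] == '.' {
		return false
	}
	if regexp.MustCompile(`(--|\.\.)`).MatchString(domain) {
		return false
	}
	return true
}

func main() {
	for _, d := range []string{"www.test.com", "-bad.com", "a..b", "UPPER.com"} {
		fmt.Printf("%s => %v\n", d, checkDomain(d))
	}
	// www.test.com => true; the other three => false
}
```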


@@ -0,0 +1,239 @@
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
)
const (
NlbSPNetwork = "AlibabaCloud-NLB-SharedPort"
NlbIdsConfigName = "NlbIds"
)
func init() {
alibabaCloudProvider.registerPlugin(&NlbSpPlugin{})
}
type NlbSpPlugin struct {
}
func (N *NlbSpPlugin) Name() string {
return NlbSPNetwork
}
func (N *NlbSpPlugin) Alias() string {
return ""
}
func (N *NlbSpPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (N *NlbSpPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
podNetConfig := parseNLbSpConfig(networkManager.GetNetworkConfig())
pod.Labels[SlbIdLabelKey] = podNetConfig.lbId
// Get Svc
svc := &corev1.Service{}
err := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: podNetConfig.lbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(c.Create(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
return pod, nil
}
func (N *NlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
podNetConfig := parseNLbSpConfig(networkConfig)
// Get Svc
svc := &corev1.Service{}
err := c.Get(context.Background(), types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: podNetConfig.lbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(c.Create(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(podNetConfig) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
_, hasLabel := pod.Labels[SlbIdLabelKey]
// disable network
if networkManager.GetNetworkDisabled() && hasLabel {
newLabels := pod.GetLabels()
delete(newLabels, SlbIdLabelKey)
pod.Labels = newLabels
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// enable network
if !networkManager.GetNetworkDisabled() && !hasLabel {
pod.Labels[SlbIdLabelKey] = podNetConfig.lbId
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
return pod, nil
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkConfig) {
toUpdateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, true)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: svc.Status.LoadBalancer.Ingress[0].Hostname,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (N *NlbSpPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
type nlbSpConfig struct {
lbId string
ports []int
protocols []corev1.Protocol
}
func parseNLbSpConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbSpConfig {
var lbIds string
var ports []int
var protocols []corev1.Protocol
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
lbIds = c.Value
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
}
}
return &nlbSpConfig{
lbId: lbIds,
ports: ports,
protocols: protocols,
}
}
func consNlbSvc(nc *nlbSpConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(nc.ports); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(nc.ports[i]),
Port: int32(nc.ports[i]),
Protocol: nc.protocols[i],
TargetPort: intstr.FromInt(nc.ports[i]),
})
}
loadBalancerClass := "alibabacloud.com/nlb"
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: nc.lbId,
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: nc.lbId,
SlbConfigHashKey: util.GetHash(nc),
},
OwnerReferences: getSvcOwnerReference(c, ctx, pod, true),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SlbIdLabelKey: nc.lbId,
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}
return svc
}


@@ -0,0 +1,58 @@
package alibabacloud
import (
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
"reflect"
"testing"
)
func TestParseNLbSpConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
nc *nlbSpConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "nlb-xxx",
},
{
Name: PortProtocolsConfigName,
Value: "80/UDP",
},
},
nc: &nlbSpConfig{
protocols: []corev1.Protocol{corev1.ProtocolUDP},
ports: []int{80},
lbId: "nlb-xxx",
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "nlb-xxx",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
nc: &nlbSpConfig{
protocols: []corev1.Protocol{corev1.ProtocolTCP},
ports: []int{80},
lbId: "nlb-xxx",
},
},
}
for i, test := range tests {
expect := test.nc
actual := parseNLbSpConfig(test.conf)
if !reflect.DeepEqual(expect, actual) {
t.Errorf("case %d: expect nlbSpConfig is %v, but actually is %v", i, expect, actual)
}
}
}


@@ -0,0 +1,330 @@
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
"reflect"
"sigs.k8s.io/controller-runtime/pkg/client"
"sync"
"testing"
)
func TestNLBAllocateDeAllocate(t *testing.T) {
test := struct {
lbIds []string
nlb *NlbPlugin
num int
podKey string
}{
lbIds: []string{"xxx-A"},
nlb: &NlbPlugin{
maxPort: int32(712),
minPort: int32(512),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]string),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
}
lbId, ports := test.nlb.allocate(test.lbIds, test.num, test.podKey)
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if test.nlb.cache[lbId][port] == false {
t.Errorf("Allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range ports {
if test.nlb.cache[lbId][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
func TestParseNlbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
nlbConfig *nlbConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
{
Name: LBHealthCheckFlagConfigName,
Value: "On",
},
{
Name: LBHealthCheckTypeConfigName,
Value: "HTTP",
},
{
Name: LBHealthCheckConnectPortConfigName,
Value: "6000",
},
{
Name: LBHealthCheckConnectTimeoutConfigName,
Value: "100",
},
{
Name: LBHealthCheckIntervalConfigName,
Value: "30",
},
{
Name: LBHealthCheckUriConfigName,
Value: "/another?valid",
},
{
Name: LBHealthCheckDomainConfigName,
Value: "www.test.com",
},
{
Name: LBHealthCheckMethodConfigName,
Value: "HEAD",
},
{
Name: LBHealthyThresholdConfigName,
Value: "5",
},
{
Name: LBUnhealthyThresholdConfigName,
Value: "5",
},
},
nlbConfig: &nlbConfig{
lbIds: []string{"xxx-A"},
targetPorts: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "http",
lBHealthCheckConnectPort: "6000",
lBHealthCheckConnectTimeout: "100",
lBHealthCheckInterval: "30",
lBHealthCheckUri: "/another?valid",
lBHealthCheckDomain: "www.test.com",
lBHealthCheckMethod: "head",
lBHealthyThreshold: "5",
lBUnhealthyThreshold: "5",
},
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
nlbConfig: &nlbConfig{
lbIds: []string{"xxx-A", "xxx-B"},
targetPorts: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
isFixed: true,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
},
},
}
for i, test := range tests {
sc, err := parseNlbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(test.nlbConfig, sc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.nlbConfig, sc)
}
}
}
func TestNlbPlugin_consSvc(t *testing.T) {
loadBalancerClass := "alibabacloud.com/nlb"
type fields struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
}
type args struct {
config *nlbConfig
pod *corev1.Pod
client client.Client
ctx context.Context
}
tests := []struct {
name string
fields fields
args args
want *corev1.Service
}{
{
name: "convert svc cache exist",
fields: fields{
maxPort: 3000,
minPort: 1,
cache: map[string]portAllocated{
"default/test-pod": map[int32]bool{},
},
podAllocate: map[string]string{
"default/test-pod": "clb-xxx:80,81",
},
},
args: args{
config: &nlbConfig{
lbIds: []string{"clb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
},
pod: &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
UID: "32fqwfqfew",
},
},
client: nil,
ctx: context.Background(),
},
want: &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: "clb-xxx",
SlbConfigHashKey: util.GetHash(&nlbConfig{
lbIds: []string{"clb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
}),
LBHealthCheckFlagAnnotationKey: "on",
LBHealthCheckTypeAnnotationKey: "tcp",
LBHealthCheckConnectPortAnnotationKey: "0",
LBHealthCheckConnectTimeoutAnnotationKey: "5",
LBHealthCheckIntervalAnnotationKey: "10",
LBUnhealthyThresholdAnnotationKey: "2",
LBHealthyThresholdAnnotationKey: "2",
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "pod",
Name: "test-pod",
UID: "32fqwfqfew",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
LoadBalancerClass: &loadBalancerClass,
Selector: map[string]string{
SvcSelectorKey: "test-pod",
},
Ports: []corev1.ServicePort{{
Name: "82",
Port: 80,
Protocol: "TCP",
TargetPort: intstr.IntOrString{
Type: 0,
IntVal: 82,
},
},
},
},
},
},
}
for _, tt := range tests {
c := &NlbPlugin{
maxPort: tt.fields.maxPort,
minPort: tt.fields.minPort,
cache: tt.fields.cache,
podAllocate: tt.fields.podAllocate,
}
got, err := c.consSvc(tt.args.config, tt.args.pod, tt.args.client, tt.args.ctx)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("consSvc() = %v, want %v", got, tt.want)
}
}
}


@@ -0,0 +1,691 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
SlbNetwork = "AlibabaCloud-SLB"
AliasSLB = "LB-Network"
SlbIdsConfigName = "SlbIds"
PortProtocolsConfigName = "PortProtocols"
ExternalTrafficPolicyTypeConfigName = "ExternalTrafficPolicyType"
SlbListenerOverrideKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners"
SlbIdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id"
SlbIdLabelKey = "service.k8s.alibaba/loadbalancer-id"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
SlbConfigHashKey = "game.kruise.io/network-config-hash"
)
const (
// annotations provided by AlibabaCloud Cloud Controller Manager
LBHealthCheckSwitchAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-switch"
LBHealthCheckProtocolPortAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port"
// ConfigNames defined by OKG
LBHealthCheckSwitchConfigName = "LBHealthCheckSwitch"
LBHealthCheckProtocolPortConfigName = "LBHealthCheckProtocolPort"
)
type portAllocated map[int32]bool
type SlbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type slbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
externalTrafficPolicyType corev1.ServiceExternalTrafficPolicyType
lBHealthCheckSwitch string
lBHealthCheckProtocolPort string
lBHealthCheckFlag string
lBHealthCheckType string
lBHealthCheckConnectTimeout string
lBHealthCheckInterval string
lBHealthCheckUri string
lBHealthCheckDomain string
lBHealthCheckMethod string
lBHealthyThreshold string
lBUnhealthyThreshold string
}
func (s *SlbPlugin) Name() string {
return SlbNetwork
}
func (s *SlbPlugin) Alias() string {
return AliasSLB
}
func (s *SlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
s.mutex.Lock()
defer s.mutex.Unlock()
slbOptions := options.(provideroptions.AlibabaCloudOptions).SLBOptions
s.minPort = slbOptions.MinPort
s.maxPort = slbOptions.MaxPort
s.blockPorts = slbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
if err != nil {
return err
}
s.cache, s.podAllocate = initLbCache(svcList.Items, s.minPort, s.maxPort, s.blockPorts)
log.Infof("[%s] podAllocate cache initialization complete: %v", SlbNetwork, s.podAllocate)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[SlbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
// init cache for that lb
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort+1)
for i := minPort; i <= maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
// fill in cache for that lb
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
value, ok := newCache[lbId][port]
if !ok || !value {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
}
}
}
return newCache, newPodAllocate
}
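Both plugins persist each allocation as a single "lbId:port1,port2" string, built in allocate and initLbCache via util.Int32SliceToString and split again with util.StringToInt32Slice. A dependency-free sketch of that round trip; `encodeAllocation` and `decodeAllocation` are hypothetical helper names, not project functions:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// encodeAllocation packs an LB id and its ports into the "lbId:p1,p2"
// form the plugins store in podAllocate.
func encodeAllocation(lbId string, ports []int32) string {
	strs := make([]string, len(ports))
	for i, p := range ports {
		strs[i] = strconv.Itoa(int(p))
	}
	return lbId + ":" + strings.Join(strs, ",")
}

// decodeAllocation reverses encodeAllocation, skipping unparsable ports.
func decodeAllocation(s string) (string, []int32) {
	parts := strings.SplitN(s, ":", 2)
	var ports []int32
	for _, ps := range strings.Split(parts[1], ",") {
		n, err := strconv.Atoi(ps)
		if err != nil {
			continue
		}
		ports = append(ports, int32(n))
	}
	return parts[0], ports
}

func main() {
	s := encodeAllocation("nlb-xxx", []int32{512, 513})
	fmt.Println(s) // nlb-xxx:512,513
	lbId, ports := decodeAllocation(s)
	fmt.Println(lbId, ports) // nlb-xxx [512 513]
}
```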
func (s *SlbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (s *SlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waiting for old svc %s/%s to be deleted. old owner pod uid is %s, but now is %s", SlbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpdateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (s *SlbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range s.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
s.deAllocate(podKey)
}
return nil
}
func (s *SlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
s.mutex.Lock()
defer s.mutex.Unlock()
var ports []int32
var lbId string
// find the first lb with enough free ports
for _, slbId := range lbIds {
sum := 0
for i := s.minPort; i <= s.maxPort; i++ {
if !s.cache[slbId][i] {
sum++
}
if sum >= num {
lbId = slbId
break
}
}
// stop at the first lb that fits instead of overwriting with later candidates
if lbId != "" {
break
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if s.cache[lbId] == nil {
// init cache for new lb
s.cache[lbId] = make(portAllocated, s.maxPort-s.minPort+1)
for i := s.minPort; i <= s.maxPort; i++ {
s.cache[lbId][i] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
}
for p, allocated := range s.cache[lbId] {
if !allocated {
port = p
break
}
}
s.cache[lbId][port] = true
ports = append(ports, port)
}
s.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocated slb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (s *SlbPlugin) deAllocate(nsName string) {
s.mutex.Lock()
defer s.mutex.Unlock()
allocatedPorts, exist := s.podAllocate[nsName]
if !exist {
return
}
slbPorts := strings.Split(allocatedPorts, ":")
lbId := slbPorts[0]
ports := util.StringToInt32Slice(slbPorts[1], ",")
for _, port := range ports {
s.cache[lbId][port] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
delete(s.podAllocate, nsName)
log.Infof("pod %s deallocated slb %s ports %v", nsName, lbId, ports)
}
func init() {
slbPlugin := SlbPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&slbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*slbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeCluster
lBHealthCheckSwitch := "on"
lBHealthCheckProtocolPort := ""
lBHealthCheckFlag := "off"
lBHealthCheckType := "tcp"
lBHealthCheckConnectTimeout := "5"
lBHealthCheckInterval := "10"
lBUnhealthyThreshold := "2"
lBHealthyThreshold := "2"
lBHealthCheckUri := ""
lBHealthCheckDomain := ""
lBHealthCheckMethod := ""
for _, c := range conf {
switch c.Name {
case SlbIdsConfigName:
for _, slbId := range strings.Split(c.Value, ",") {
if slbId != "" {
lbIds = append(lbIds, slbId)
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeLocal)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeLocal
}
case LBHealthCheckSwitchConfigName:
checkSwitch := strings.ToLower(c.Value)
if checkSwitch != "on" && checkSwitch != "off" {
return nil, fmt.Errorf("invalid lb health check switch value: %s", c.Value)
}
lBHealthCheckSwitch = checkSwitch
case LBHealthCheckFlagConfigName:
flag := strings.ToLower(c.Value)
if flag != "on" && flag != "off" {
return nil, fmt.Errorf("invalid lb health check flag value: %s", c.Value)
}
lBHealthCheckFlag = flag
case LBHealthCheckTypeConfigName:
checkType := strings.ToLower(c.Value)
if checkType != "tcp" && checkType != "http" {
return nil, fmt.Errorf("invalid lb health check type: %s", c.Value)
}
lBHealthCheckType = checkType
case LBHealthCheckProtocolPortConfigName:
if validateHttpProtocolPort(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check protocol port: %s", c.Value)
}
lBHealthCheckProtocolPort = c.Value
case LBHealthCheckConnectTimeoutConfigName:
timeoutInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect timeout: %s", c.Value)
}
if timeoutInt < 1 || timeoutInt > 300 {
return nil, fmt.Errorf("invalid lb health check connect timeout: %d", timeoutInt)
}
lBHealthCheckConnectTimeout = c.Value
case LBHealthCheckIntervalConfigName:
intervalInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check interval: %s", c.Value)
}
if intervalInt < 1 || intervalInt > 50 {
return nil, fmt.Errorf("invalid lb health check interval: %d", intervalInt)
}
lBHealthCheckInterval = c.Value
case LBHealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb healthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb healthy threshold: %d", thresholdInt)
}
lBHealthyThreshold = c.Value
case LBUnhealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %d", thresholdInt)
}
lBUnhealthyThreshold = c.Value
case LBHealthCheckUriConfigName:
if validateUri(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check uri: %s", c.Value)
}
lBHealthCheckUri = c.Value
case LBHealthCheckDomainConfigName:
if validateDomain(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check domain: %s", c.Value)
}
lBHealthCheckDomain = c.Value
case LBHealthCheckMethodConfigName:
method := strings.ToLower(c.Value)
if method != "get" && method != "head" {
return nil, fmt.Errorf("invalid lb health check method: %s", c.Value)
}
lBHealthCheckMethod = method
}
}
return &slbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
externalTrafficPolicyType: externalTrafficPolicy,
lBHealthCheckSwitch: lBHealthCheckSwitch,
lBHealthCheckFlag: lBHealthCheckFlag,
lBHealthCheckType: lBHealthCheckType,
lBHealthCheckProtocolPort: lBHealthCheckProtocolPort,
lBHealthCheckConnectTimeout: lBHealthCheckConnectTimeout,
lBHealthCheckInterval: lBHealthCheckInterval,
lBHealthCheckUri: lBHealthCheckUri,
lBHealthCheckDomain: lBHealthCheckDomain,
lBHealthCheckMethod: lBHealthCheckMethod,
lBHealthyThreshold: lBHealthyThreshold,
lBUnhealthyThreshold: lBUnhealthyThreshold,
}, nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (s *SlbPlugin) consSvc(sc *slbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := s.podAllocate[podKey]
if exist {
slbPorts := strings.Split(allocatedPorts, ":")
lbId = slbPorts[0]
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = s.allocate(sc.lbIds, len(sc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no available ports for %v", sc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(sc.targetPorts); i++ {
if sc.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), corev1.ProtocolTCP),
Port: ports[i],
Protocol: corev1.ProtocolTCP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), corev1.ProtocolUDP),
Port: ports[i],
Protocol: corev1.ProtocolUDP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), sc.protocols[i]),
Port: ports[i],
Protocol: sc.protocols[i],
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
}
}
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: lbId,
SlbConfigHashKey: util.GetHash(sc),
LBHealthCheckFlagAnnotationKey: sc.lBHealthCheckFlag,
LBHealthCheckSwitchAnnotationKey: sc.lBHealthCheckSwitch,
}
if sc.lBHealthCheckSwitch == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = sc.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = sc.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = sc.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = sc.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = sc.lBUnhealthyThreshold
if sc.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckProtocolPortAnnotationKey] = sc.lBHealthCheckProtocolPort
svcAnnotations[LBHealthCheckDomainAnnotationKey] = sc.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = sc.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = sc.lBHealthCheckMethod
}
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, sc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: sc.externalTrafficPolicyType,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}
func validateHttpProtocolPort(protocolPort string) error {
for _, pp := range strings.Split(protocolPort, ",") {
// guard against entries without a colon to avoid an index-out-of-range panic
ppSlice := strings.Split(pp, ":")
if len(ppSlice) != 2 {
return fmt.Errorf("invalid http protocol port: %s", pp)
}
if ppSlice[0] != "http" && ppSlice[0] != "https" {
return fmt.Errorf("invalid http protocol: %s", ppSlice[0])
}
if _, err := strconv.Atoi(ppSlice[1]); err != nil {
return fmt.Errorf("invalid http port: %s", ppSlice[1])
}
}
return nil
}

@@ -0,0 +1,388 @@
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
SlbSPNetwork = "AlibabaCloud-SLB-SharedPort"
SvcSLBSPLabel = "game.kruise.io/AlibabaCloud-SLB-SharedPort"
ManagedServiceNamesConfigName = "ManagedServiceNames"
ManagedServiceSelectorConfigName = "ManagedServiceSelector"
)
const (
ErrorUpperLimit = "the number of backends supported by slb reaches the upper limit"
)
func init() {
slbSpPlugin := SlbSpPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&slbSpPlugin)
}
type SlbSpPlugin struct {
numBackends map[string]int
podSlbId map[string]string
mutex sync.RWMutex
}
type lbSpConfig struct {
lbIds []string
ports []int
protocols []corev1.Protocol
managedServiceNames []string
managedServiceSelectorKey string
managedServiceSelectorValue string
}
func (s *SlbSpPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
podNetConfig := parseLbSpConfig(networkManager.GetNetworkConfig())
lbId, err := s.getOrAllocate(podNetConfig, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
// Get Svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: lbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(s.createSvc(c, ctx, pod, podNetConfig, lbId), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
pod, err = networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (s *SlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
podNetConfig := parseLbSpConfig(networkManager.GetNetworkConfig())
podSlbId, err := s.getOrAllocate(podNetConfig, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
// Get Svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: podSlbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(s.createSvc(c, ctx, pod, podNetConfig, podSlbId), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
_, hasLabel := pod.Labels[SlbIdLabelKey]
// disable network
if networkManager.GetNetworkDisabled() && hasLabel {
newLabels := pod.GetLabels()
delete(newLabels, SlbIdLabelKey)
delete(newLabels, podNetConfig.managedServiceSelectorKey)
pod.Labels = newLabels
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// enable network
if !networkManager.GetNetworkDisabled() && !hasLabel {
pod.Labels[SlbIdLabelKey] = podSlbId
pod.Labels[podNetConfig.managedServiceSelectorKey] = podNetConfig.managedServiceSelectorValue
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
return pod, nil
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpdateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, true)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
for _, svcName := range podNetConfig.managedServiceNames {
managedSvc := &corev1.Service{}
getErr := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: svcName,
}, managedSvc)
if getErr != nil {
// return the Get error itself rather than the stale outer err
return pod, cperrors.ToPluginError(getErr, cperrors.ApiCallError)
}
toUpdateManagedSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, managedSvc, true)
if err != nil {
return pod, err
}
if toUpdateManagedSvc {
err := c.Update(ctx, managedSvc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (s *SlbSpPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
s.deAllocate(pod.GetNamespace() + "/" + pod.GetName())
return nil
}
func (s *SlbSpPlugin) Name() string {
return SlbSPNetwork
}
func (s *SlbSpPlugin) Alias() string {
return ""
}
func (s *SlbSpPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
s.mutex.Lock()
defer s.mutex.Unlock()
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList, &client.ListOptions{
LabelSelector: labels.SelectorFromSet(map[string]string{
SvcSLBSPLabel: "true",
})})
if err != nil {
return err
}
numBackends := make(map[string]int)
podSlbId := make(map[string]string)
for _, svc := range svcList.Items {
slbId := svc.Labels[SlbIdLabelKey]
podList := &corev1.PodList{}
err := c.List(ctx, podList, &client.ListOptions{
Namespace: svc.GetNamespace(),
LabelSelector: labels.SelectorFromSet(map[string]string{
SlbIdLabelKey: slbId,
})})
if err != nil {
return err
}
num := len(podList.Items)
numBackends[slbId] += num
for _, pod := range podList.Items {
podSlbId[pod.GetNamespace()+"/"+pod.GetName()] = slbId
}
}
s.numBackends = numBackends
s.podSlbId = podSlbId
return nil
}
func (s *SlbSpPlugin) createSvc(c client.Client, ctx context.Context, pod *corev1.Pod, podConfig *lbSpConfig, lbId string) error {
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(podConfig.ports); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podConfig.ports[i]),
Port: int32(podConfig.ports[i]),
Protocol: podConfig.protocols[i],
TargetPort: intstr.FromInt(podConfig.ports[i]),
})
}
return c.Create(ctx, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: lbId,
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
SlbIdAnnotationKey: lbId,
SlbListenerOverrideKey: "true",
},
Labels: map[string]string{
SvcSLBSPLabel: "true",
},
OwnerReferences: getSvcOwnerReference(c, ctx, pod, true),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SlbIdLabelKey: lbId,
},
Ports: svcPorts,
},
})
}
func (s *SlbSpPlugin) getOrAllocate(podNetConfig *lbSpConfig, pod *corev1.Pod) (string, error) {
s.mutex.Lock()
defer s.mutex.Unlock()
if slbId, ok := s.podSlbId[pod.GetNamespace()+"/"+pod.GetName()]; ok {
return slbId, nil
}
// pick the lb with the fewest backends; an slb instance supports at most 200 backends,
// so starting minValue at 200 also enforces the upper limit
minValue := 200
selectId := ""
for _, id := range podNetConfig.lbIds {
numBackends := s.numBackends[id]
if numBackends < minValue {
minValue = numBackends
selectId = id
}
}
if selectId == "" {
return "", fmt.Errorf(ErrorUpperLimit)
}
s.numBackends[selectId]++
s.podSlbId[pod.GetNamespace()+"/"+pod.GetName()] = selectId
pod.Labels[SlbIdLabelKey] = selectId
return selectId, nil
}
func (s *SlbSpPlugin) deAllocate(nsName string) {
s.mutex.Lock()
defer s.mutex.Unlock()
slbId, ok := s.podSlbId[nsName]
if !ok {
return
}
s.numBackends[slbId]--
delete(s.podSlbId, nsName)
}
func parseLbSpConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *lbSpConfig {
var lbIds []string
var ports []int
var protocols []corev1.Protocol
var managedServiceNames []string
var managedServiceSelectorKey string
var managedServiceSelectorValue string
for _, c := range conf {
switch c.Name {
case SlbIdsConfigName:
lbIds = parseLbIds(c.Value)
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
case ManagedServiceNamesConfigName:
managedServiceNames = strings.Split(c.Value, ",")
case ManagedServiceSelectorConfigName:
// expect "key=value"; skip malformed values instead of panicking
kv := strings.Split(c.Value, "=")
if len(kv) != 2 {
continue
}
managedServiceSelectorKey = kv[0]
managedServiceSelectorValue = kv[1]
}
}
return &lbSpConfig{
lbIds: lbIds,
ports: ports,
protocols: protocols,
managedServiceNames: managedServiceNames,
managedServiceSelectorKey: managedServiceSelectorKey,
managedServiceSelectorValue: managedServiceSelectorValue,
}
}
func parsePortProtocols(value string) ([]int, []corev1.Protocol) {
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
for _, pp := range strings.Split(value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
return ports, protocols
}
func parseLbIds(value string) []string {
return strings.Split(value, ",")
}

@@ -0,0 +1,164 @@
package alibabacloud
import (
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"reflect"
"sync"
"testing"
)
func TestSlpSpAllocate(t *testing.T) {
tests := []struct {
slbsp *SlbSpPlugin
pod *corev1.Pod
podNetConfig *lbSpConfig
numBackends map[string]int
podSlbId map[string]string
expErr error
}{
{
slbsp: &SlbSpPlugin{
numBackends: make(map[string]int),
podSlbId: make(map[string]string),
mutex: sync.RWMutex{},
},
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-name",
Namespace: "pod-ns",
Labels: map[string]string{
"xxx": "xxx",
},
},
},
podNetConfig: &lbSpConfig{
lbIds: []string{"lb-xxa"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
numBackends: map[string]int{"lb-xxa": 1},
podSlbId: map[string]string{"pod-ns/pod-name": "lb-xxa"},
expErr: nil,
},
{
slbsp: &SlbSpPlugin{
numBackends: map[string]int{"lb-xxa": 200},
podSlbId: map[string]string{"a-ns/a-name": "lb-xxa"},
mutex: sync.RWMutex{},
},
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-name",
Namespace: "pod-ns",
Labels: map[string]string{
"xxx": "xxx",
},
},
},
podNetConfig: &lbSpConfig{
lbIds: []string{"lb-xxa"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
numBackends: map[string]int{"lb-xxa": 200},
podSlbId: map[string]string{"a-ns/a-name": "lb-xxa"},
expErr: fmt.Errorf(ErrorUpperLimit),
},
}
for _, test := range tests {
slbId, err := test.slbsp.getOrAllocate(test.podNetConfig, test.pod)
if (err == nil) != (test.expErr == nil) {
t.Errorf("expect err: %v, but actual err: %v", test.expErr, err)
}
if test.pod.GetLabels()[SlbIdLabelKey] != slbId {
t.Errorf("expect pod have slblabel value: %s, but actual value: %s", slbId, test.pod.GetLabels()[SlbIdLabelKey])
}
if !reflect.DeepEqual(test.numBackends, test.slbsp.numBackends) {
t.Errorf("expect numBackends: %v, but actual: %v", test.numBackends, test.slbsp.numBackends)
}
if !reflect.DeepEqual(test.podSlbId, test.slbsp.podSlbId) {
t.Errorf("expect podSlbId: %v, but actual: %v", test.podSlbId, test.slbsp.podSlbId)
}
}
}
func TestParseLbSpConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
podNetConfig *lbSpConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PortProtocolsConfigName,
Value: "80",
},
{
Name: SlbIdsConfigName,
Value: "lb-xxa",
},
{
Name: ManagedServiceNamesConfigName,
Value: "service-clusterIp",
},
{
Name: ManagedServiceSelectorConfigName,
Value: "game=v1",
},
},
podNetConfig: &lbSpConfig{
lbIds: []string{"lb-xxa"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
managedServiceNames: []string{"service-clusterIp"},
managedServiceSelectorKey: "game",
managedServiceSelectorValue: "v1",
},
},
}
for _, test := range tests {
podNetConfig := parseLbSpConfig(test.conf)
if !reflect.DeepEqual(podNetConfig, test.podNetConfig) {
t.Errorf("expect podNetConfig: %v, but actual: %v", test.podNetConfig, podNetConfig)
}
}
}
func TestParsePortProtocols(t *testing.T) {
tests := []struct {
value string
ports []int
protocols []corev1.Protocol
}{
{
value: "80",
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
{
value: "8080/UDP,80/TCP",
ports: []int{8080, 80},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP},
},
}
for i, test := range tests {
actualPorts, actualProtocols := parsePortProtocols(test.value)
if !util.IsSliceEqual(actualPorts, test.ports) {
t.Errorf("case %d: expect ports is %v, but actually is %v", i, test.ports, actualPorts)
}
if !reflect.DeepEqual(actualProtocols, test.protocols) {
t.Errorf("case %d: expect protocols is %v, but actually is %v", i, test.protocols, actualProtocols)
}
}
}

@@ -0,0 +1,291 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"reflect"
"sync"
"testing"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
func TestAllocateDeAllocate(t *testing.T) {
test := struct {
lbIds []string
slb *SlbPlugin
num int
podKey string
}{
lbIds: []string{"xxx-A"},
slb: &SlbPlugin{
maxPort: int32(712),
minPort: int32(512),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]string),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
}
lbId, ports := test.slb.allocate(test.lbIds, test.num, test.podKey)
if _, exist := test.slb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range ports {
if port > test.slb.maxPort || port < test.slb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if test.slb.cache[lbId][port] == false {
t.Errorf("Allocate port %d failed", port)
}
}
test.slb.deAllocate(test.podKey)
for _, port := range ports {
if test.slb.cache[lbId][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.slb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
slbConfig *slbConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: SlbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
{
Name: LBHealthCheckSwitchConfigName,
Value: "off",
},
{
Name: LBHealthCheckFlagConfigName,
Value: "off",
},
{
Name: LBHealthCheckTypeConfigName,
Value: "HTTP",
},
{
Name: LBHealthCheckConnectPortConfigName,
Value: "6000",
},
{
Name: LBHealthCheckConnectTimeoutConfigName,
Value: "100",
},
{
Name: LBHealthCheckIntervalConfigName,
Value: "30",
},
{
Name: LBHealthCheckUriConfigName,
Value: "/another?valid",
},
{
Name: LBHealthCheckDomainConfigName,
Value: "www.test.com",
},
{
Name: LBHealthCheckMethodConfigName,
Value: "HEAD",
},
{
Name: LBHealthyThresholdConfigName,
Value: "5",
},
{
Name: LBUnhealthyThresholdConfigName,
Value: "5",
},
{
Name: LBHealthCheckProtocolPortConfigName,
Value: "http:80",
},
},
slbConfig: &slbConfig{
lbIds: []string{"xxx-A"},
targetPorts: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
externalTrafficPolicyType: corev1.ServiceExternalTrafficPolicyTypeCluster,
isFixed: false,
lBHealthCheckSwitch: "off",
lBHealthCheckFlag: "off",
lBHealthCheckType: "http",
lBHealthCheckConnectTimeout: "100",
lBHealthCheckInterval: "30",
lBHealthCheckUri: "/another?valid",
lBHealthCheckDomain: "www.test.com",
lBHealthCheckMethod: "head",
lBHealthyThreshold: "5",
lBUnhealthyThreshold: "5",
lBHealthCheckProtocolPort: "http:80",
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: SlbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
{
Name: ExternalTrafficPolicyTypeConfigName,
Value: "Local",
},
},
slbConfig: &slbConfig{
lbIds: []string{"xxx-A", "xxx-B"},
targetPorts: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
externalTrafficPolicyType: corev1.ServiceExternalTrafficPolicyTypeLocal,
isFixed: true,
lBHealthCheckSwitch: "on",
lBHealthCheckFlag: "off",
lBHealthCheckType: "tcp",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
lBHealthCheckProtocolPort: "",
},
},
}
for i, test := range tests {
sc, err := parseLbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(test.slbConfig, sc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.slbConfig, sc)
}
}
}
func TestInitLbCache(t *testing.T) {
test := struct {
svcList []corev1.Service
minPort int32
maxPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
}{
minPort: 512,
maxPort: 712,
blockPorts: []int32{593},
cache: map[string]portAllocated{
"xxx-A": map[int32]bool{
666: true,
593: true,
},
"xxx-B": map[int32]bool{
555: true,
593: true,
},
},
podAllocate: map[string]string{
"ns-0/name-0": "xxx-A:666",
"ns-1/name-1": "xxx-B:555",
},
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
SlbIdLabelKey: "xxx-A",
},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
SlbIdLabelKey: "xxx-B",
},
Namespace: "ns-1",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-B",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(8080),
Port: 555,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
}
actualCache, actualPodAllocate := initLbCache(test.svcList, test.minPort, test.maxPort, test.blockPorts)
for lb, pa := range test.cache {
for port, isAllocated := range pa {
if actualCache[lb][port] != isAllocated {
t.Errorf("lb %s port %d isAllocated, expect: %t, actual: %t", lb, port, isAllocated, actualCache[lb][port])
}
}
}
if !reflect.DeepEqual(actualPodAllocate, test.podAllocate) {
t.Errorf("podAllocate expect %v, but actually got %v", test.podAllocate, actualPodAllocate)
}
}

@@ -0,0 +1,185 @@
English | [中文](./README.zh_CN.md)
For game businesses running OKG in AWS EKS clusters, routing traffic directly to Pod ports through a network load balancer is the foundation for high-performance, real-time service discovery. Using an NLB with dynamic port mapping shortens the forwarding chain and avoids the performance loss introduced by Kubernetes kube-proxy load balancing; these properties are particularly important for session-based game servers. For a GameServerSet whose network type is AmazonWebServices-NLB, the AmazonWebServices-NLB network plugin schedules an NLB, automatically allocates ports, creates listeners and target groups, and associates each target group with a Kubernetes Service through the TargetGroupBinding CRD. If the cluster is configured with the VPC CNI, traffic is forwarded directly to the Pod's IP address; otherwise it is forwarded through the ClusterIP. The process is considered successful once the GameServer's network reaches the Ready state.
![image](./../../docs/images/aws-nlb.png)
## AmazonWebServices-NLB Configuration
### Plugin Configuration
```toml
[aws]
enable = true
[aws.nlb]
# Specify the range of free ports that NLB can use to allocate external access ports for pods, with a maximum range of 50 (closed interval)
# The limit of 50 comes from AWS's limit on the number of listeners, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
max_port = 32050
min_port = 32001
```
### Preparation
Due to differences in AWS's design, mapping NLB ports to Pod ports requires creating three kinds of CRD resources: Listener, TargetGroup, and TargetGroupBinding.
#### Deploy elbv2-controller
Definition and controller for the Listener/TargetGroup CRDs: https://github.com/aws-controllers-k8s/elbv2-controller. This project links Kubernetes resources with AWS cloud resources. Download the chart from https://gallery.ecr.aws/aws-controllers-k8s/elbv2-chart; an example `values.yaml`:
```yaml
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxxxxxx:role/test"
aws:
region: "us-east-1"
endpoint_url: "https://elasticloadbalancing.us-east-1.amazonaws.com"
```
The key to deploying this project is authorizing the Kubernetes ServiceAccount to access the NLB SDK, which is best done through an IAM role:
##### Step 1: Enable the OIDC provider for the EKS cluster
1. Sign in to the AWS Management Console.
2. Navigate to the EKS console: https://console.aws.amazon.com/eks/
3. Select your cluster.
4. On the cluster details page, ensure that the OIDC provider is enabled. Obtain the OIDC provider URL for the EKS cluster: in the "Configuration" section of the cluster details page, find the "OpenID Connect provider URL".
##### Step 2: Configure the IAM role trust policy
1. In the IAM console, create a new identity provider and select "OpenID Connect".
- For the Provider URL, enter the OIDC provider URL of your EKS cluster.
- For Audience, enter: `sts.amazonaws.com`
2. In the IAM console, create a new IAM role and select "Custom trust policy".
- Use the following trust policy to allow EKS to use this role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:ack-elbv2-controller",
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
}
}
}
]
}
```
- Replace `<AWS_ACCOUNT_ID>`, `<REGION>`, `<OIDC_ID>`, and `<NAMESPACE>` with your actual values (the service account name in this policy is fixed to `ack-elbv2-controller`; change it if your `<SERVICE_ACCOUNT_NAME>` differs).
- Attach the `ElasticLoadBalancingFullAccess` permission policy to the role.
#### Deploy AWS Load Balancer Controller
The TargetGroupBinding CRD and its controller: https://github.com/kubernetes-sigs/aws-load-balancer-controller/
Official deployment documentation: https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html. It likewise works by granting an IAM role to a Kubernetes ServiceAccount.
### Parameters
#### NlbARNs
- Meaning: The ARN(s) of the NLB(s) to use; multiple values may be provided. The NLBs must be created in AWS beforehand.
- Format: Separate each nlbARN with a comma. For example: arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e
- Support for change: Yes
#### NlbVPCId
- Meaning: The ID of the VPC where the NLB resides; required for creating AWS target groups.
- Format: String. For example: vpc-0bbc9f9f0ffexxxxx
- Support for change: Yes
#### NlbHealthCheck
- Meaning: Health check parameters for the NLB target group; may be left empty to use the default values.
- Format: Separate each configuration with a comma. For example: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2"
- Support for change: Yes
- Parameter explanation:
- **healthCheckEnabled**: Indicates whether health checks are enabled. If the target type is lambda, health checks are disabled by default but can be enabled. If the target type is instance, ip, or alb, health checks are always enabled and cannot be disabled.
- **healthCheckIntervalSeconds**: The approximate amount of time, in seconds, between health checks of an individual target. The range is 5-300. If the target group protocol is TCP, TLS, UDP, TCP_UDP, HTTP, or HTTPS, the default is 30 seconds. If the target group protocol is GENEVE, the default is 10 seconds. If the target type is lambda, the default is 35 seconds.
- **healthCheckPath**: The destination for health checks on the targets. For HTTP/HTTPS health checks, this is the ping path; the default is /. For the GRPC protocol version, this is the path of a custom health check method with the format /package.service/method; the default is /AWS.ALB/healthcheck.
- **healthCheckPort**: The port the load balancer uses when performing health checks on targets. The default is traffic-port, which is the port on which each target receives traffic from the load balancer. If the protocol is GENEVE, the default is port 80.
- **healthCheckProtocol**: The protocol the load balancer uses when performing health checks on targets. For Application Load Balancers, the default is HTTP. For Network Load Balancers and Gateway Load Balancers, the default is TCP. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
- **healthCheckTimeoutSeconds**: The amount of time, in seconds, during which no response from a target means a failed health check. The range is 2-120 seconds. For target groups with a protocol of HTTP, the default is 6 seconds. For target groups with a protocol of TCP, TLS, or HTTPS, the default is 10 seconds. For target groups with a protocol of GENEVE, the default is 5 seconds. If the target type is lambda, the default is 30 seconds.
- **healthyThresholdCount**: The number of consecutive health check successes required before considering a target healthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 5. For target groups with a protocol of GENEVE, the default is 5. If the target type is lambda, the default is 5.
- **unhealthyThresholdCount**: The number of consecutive health check failures required before considering a target unhealthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 2. For target groups with a protocol of GENEVE, the default is 2. If the target type is lambda, the default is 5.
#### PortProtocols
- Meaning: Ports and protocols exposed by the pod, supports specifying multiple ports/protocols.
- Format: port1/protocol1,port2/protocol2,... (protocol should be uppercase)
- Support for change: Yes
#### Fixed
- Meaning: Whether the access port is fixed. If yes, even if the pod is deleted and rebuilt, the mapping between the internal and external networks will not change.
- Format: false / true
- Support for change: Yes
#### AllowNotReadyContainers
- Meaning: The names of containers that should continue to receive traffic during in-place upgrades.
- Format: {containerName_0},{containerName_1},... For example: sidecar
- Support for change: Not changeable during in-place upgrades
#### Annotations
- Meaning: Annotations added to the service, supports specifying multiple annotations.
- Format: key1:value1,key2:value2...
- Support for change: Yes
### Usage Example
```shell
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gs-demo
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AmazonWebServices-NLB
networkConf:
- name: NlbARNs
value: "arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/okg-test/yyyyyyyyyyyyyyyy"
- name: NlbVPCId
value: "vpc-0bbc9f9f0ffexxxxx"
- name: PortProtocols
value: "80/TCP"
- name: NlbHealthCheck
value: "healthCheckIntervalSeconds:15"
gameServerTemplate:
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/gs-demo/gameserver:network
name: gameserver
EOF
```
Check the network status of the GameServer:
```yaml
networkStatus:
createTime: "2024-05-30T03:34:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- endPoint: okg-test-yyyyyyyyyyyyyyyy.elb.us-east-1.amazonaws.com
ip: ""
ports:
- name: "80"
port: 32034
protocol: TCP
internalAddresses:
- ip: 10.10.7.154
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-05-30T03:34:14Z"
networkType: AmazonWebServices-NLB
```


@ -0,0 +1,183 @@
Chinese | [English](./README.md)
For game businesses using OKG in AWS EKS clusters, routing traffic directly to Pod ports via a network load balancer is the foundation for high-performance, real-time service discovery. Using an NLB for dynamic port mapping simplifies the forwarding chain and avoids the performance loss introduced by kube-proxy load balancing in Kubernetes. These properties are particularly important for instanced battle-type game servers. For GameServerSets with the network type AmazonWebServices-NLB, the AmazonWebServices-NLB network plugin will schedule an NLB, automatically allocate ports, create listeners and target groups, and associate the target groups with Kubernetes Services through the TargetGroupBinding CRD. If the cluster is configured with VPC-CNI, traffic is forwarded directly to the Pod's IP address; otherwise, it is forwarded through the ClusterIP. The process is considered successful once the network of the GameServer reaches the Ready state.
![image](./../../docs/images/aws-nlb.png)
## AmazonWebServices-NLB Configuration
### Plugin Configuration
```toml
[aws]
enable = true
[aws.nlb]
# Specify the range of free ports that the NLB can use to allocate external access ports for pods; the range size is at most 50 (closed interval)
# The limit of 50 comes from AWS's limit on the number of listeners; see: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
max_port = 32050
min_port = 32001
```
### Preparation
Due to differences in AWS's design, mapping NLB ports to Pod ports requires creating three kinds of CRD resources: Listener, TargetGroup, and TargetGroupBinding.
#### Deploy elbv2-controller
Definition and controller for the Listener/TargetGroup CRDs: https://github.com/aws-controllers-k8s/elbv2-controller. This project links Kubernetes resources with AWS cloud resources. Download the chart from https://gallery.ecr.aws/aws-controllers-k8s/elbv2-chart; an example `values.yaml`:
```yaml
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxxxxxx:role/test"
aws:
region: "us-east-1"
endpoint_url: "https://elasticloadbalancing.us-east-1.amazonaws.com"
```
The key to deploying this project is authorizing the Kubernetes ServiceAccount to access the NLB SDK, which is best done through an IAM role:
##### Step 1: Enable the OIDC provider for the EKS cluster
1. Sign in to the AWS Management Console.
2. Navigate to the EKS console: https://console.aws.amazon.com/eks/
3. Select your cluster.
4. On the cluster details page, ensure that the OIDC provider is enabled. Obtain the OIDC provider URL for the EKS cluster: in the "Configuration" section of the cluster details page, find the "OpenID Connect provider URL".
##### Step 2: Configure the IAM role trust policy
1. In the IAM console, create a new identity provider and select "OpenID Connect".
- For the Provider URL, enter the OIDC provider URL of your EKS cluster.
- For Audience, enter: `sts.amazonaws.com`
2. In the IAM console, create a new IAM role and select "Custom trust policy".
- Use the following trust policy to allow EKS to assume this role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:ack-elbv2-controller",
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
}
}
}
]
}
```
- Replace `<AWS_ACCOUNT_ID>`, `<REGION>`, `<OIDC_ID>`, and `<NAMESPACE>` with your actual values (the service account name in this policy is fixed to `ack-elbv2-controller`; change it if your `<SERVICE_ACCOUNT_NAME>` differs).
- Attach the `ElasticLoadBalancingFullAccess` permission policy to the role.
#### Deploy AWS Load Balancer Controller
The TargetGroupBinding CRD and its controller: https://github.com/kubernetes-sigs/aws-load-balancer-controller/
Official deployment documentation: https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html. It likewise works by granting an IAM role to a Kubernetes ServiceAccount.
### Parameters
#### NlbARNs
- Meaning: The ARN(s) of the NLB(s) to use; multiple values may be provided. The NLBs must be created in AWS beforehand.
- Format: Separate multiple NLB ARNs with commas. For example: arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e
- Support for change: Yes
#### NlbVPCId
- Meaning: The ID of the VPC where the NLB resides; required for creating AWS target groups.
- Format: String. For example: vpc-0bbc9f9f0ffexxxxx
- Support for change: Yes
#### NlbHealthCheck
- Meaning: Health check parameters for the NLB target group; may be left empty to use the default values.
- Format: Separate configurations with commas. For example: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2"
- Support for change: Yes
- Parameter explanation:
- **healthCheckEnabled**: Indicates whether health checks are enabled. If the target type is lambda, health checks are disabled by default but can be enabled. If the target type is instance, ip, or alb, health checks are always enabled and cannot be disabled.
- **healthCheckIntervalSeconds**: The approximate amount of time, in seconds, between health checks of an individual target. The range is 5-300 seconds. If the target group protocol is TCP, TLS, UDP, TCP_UDP, HTTP, or HTTPS, the default is 30 seconds. If the target group protocol is GENEVE, the default is 10 seconds. If the target type is lambda, the default is 35 seconds.
- **healthCheckPath**: [HTTP/HTTPS health checks] The destination for health checks on the targets. [HTTP1/HTTP2 protocol versions] The ping path; the default is /. [GRPC protocol version] The path of a custom health check method with the format /package.service/method; the default is /AWS.ALB/healthcheck.
- **healthCheckPort**: The port the load balancer uses when performing health checks on targets. If the protocol is HTTP, HTTPS, TCP, TLS, UDP, or TCP_UDP, the default is traffic-port, which is the port on which each target receives traffic from the load balancer. If the protocol is GENEVE, the default is port 80.
- **healthCheckProtocol**: The protocol the load balancer uses when performing health checks on targets. For Application Load Balancers, the default is HTTP. For Network Load Balancers and Gateway Load Balancers, the default is TCP. The TCP protocol is not supported for health checks if the target group protocol is HTTP or HTTPS. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
- **healthCheckTimeoutSeconds**: The amount of time, in seconds, during which no response from a target means a failed health check. The range is 2-120 seconds. For target groups with a protocol of HTTP, the default is 6 seconds. For target groups with a protocol of TCP, TLS, or HTTPS, the default is 10 seconds. For target groups with a protocol of GENEVE, the default is 5 seconds. If the target type is lambda, the default is 30 seconds.
- **healthyThresholdCount**: The number of consecutive health check successes required before considering a target healthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 5. For target groups with a protocol of GENEVE, the default is 5. If the target type is lambda, the default is 5.
- **unhealthyThresholdCount**: The number of consecutive health check failures required before considering a target unhealthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 2. If the target group protocol is GENEVE, the default is 2. If the target type is lambda, the default is 5.
#### PortProtocols
- Meaning: Ports and protocols exposed by the pod; multiple port/protocol pairs may be specified.
- Format: port1/protocol1,port2/protocol2,... (the protocol must be uppercase)
- Support for change: Yes
#### Fixed
- Meaning: Whether the access port is fixed. If true, the internal/external network mapping does not change even if the pod is deleted and rebuilt.
- Format: false / true
- Support for change: Yes
#### AllowNotReadyContainers
- Meaning: The names of containers that should continue to receive traffic during in-place upgrades; multiple names may be specified.
- Format: {containerName_0},{containerName_1},... For example: sidecar
- Support for change: Not changeable during in-place upgrades
#### Annotations
- Meaning: Annotations added to the Service; multiple annotations may be specified.
- Format: key1:value1,key2:value2...
- Support for change: Yes
### Usage Example
```shell
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gs-demo
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AmazonWebServices-NLB
networkConf:
- name: NlbARNs
value: "arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/okg-test/yyyyyyyyyyyyyyyy"
- name: NlbVPCId
value: "vpc-0bbc9f9f0ffexxxxx"
- name: PortProtocols
value: "80/TCP"
- name: NlbHealthCheck
value: "healthCheckIntervalSeconds:15"
gameServerTemplate:
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/gs-demo/gameserver:network
name: gameserver
EOF
```
Check the network status of the GameServer:
```yaml
networkStatus:
createTime: "2024-05-30T03:34:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- endPoint: okg-test-yyyyyyyyyyyyyyyy.elb.us-east-1.amazonaws.com
ip: ""
ports:
- name: "80"
port: 32034
protocol: TCP
internalAddresses:
- ip: 10.10.7.154
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-05-30T03:34:14Z"
networkType: AmazonWebServices-NLB
```


@ -0,0 +1,62 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
log "k8s.io/klog/v2"
"github.com/openkruise/kruise-game/cloudprovider"
)
const (
AmazonsWebServices = "AmazonsWebServices"
)
var (
amazonsWebServicesProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return AmazonsWebServices
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin registers a network plugin with the AmazonWebServices cloud provider.
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
log.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewAmazonsWebServicesProvider() (cloudprovider.CloudProvider, error) {
return amazonsWebServicesProvider, nil
}


@ -0,0 +1,835 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
ackv1alpha1 "github.com/aws-controllers-k8s/elbv2-controller/apis/v1alpha1"
"github.com/kr/pretty"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/tools/cache"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
elbv2api "sigs.k8s.io/aws-load-balancer-controller/apis/elbv2/v1beta1"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
)
const (
NlbNetwork = "AmazonWebServices-NLB"
AliasNlb = "NLB-Network"
NlbARNsConfigName = "NlbARNs"
NlbVPCIdConfigName = "NlbVPCId"
NlbHealthCheckConfigName = "NlbHealthCheck"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
NlbAnnotations = "Annotations"
NlbARNAnnoKey = "service.beta.kubernetes.io/aws-load-balancer-nlb-arn"
NlbPortAnnoKey = "service.beta.kubernetes.io/aws-load-balancer-nlb-port"
AWSTargetGroupSyncStatus = "aws-load-balancer-nlb-target-group-synced"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
NlbConfigHashKey = "game.kruise.io/network-config-hash"
ResourceTagKey = "managed-by"
ResourceTagValue = "game.kruise.io"
)
const (
healthCheckEnabled = "healthCheckEnabled"
healthCheckIntervalSeconds = "healthCheckIntervalSeconds"
healthCheckPath = "healthCheckPath"
healthCheckPort = "healthCheckPort"
healthCheckProtocol = "healthCheckProtocol"
healthCheckTimeoutSeconds = "healthCheckTimeoutSeconds"
healthyThresholdCount = "healthyThresholdCount"
unhealthyThresholdCount = "unhealthyThresholdCount"
listenerActionType = "forward"
)
type portAllocated map[int32]bool
type nlbPorts struct {
arn string
ports []int32
}
type NlbPlugin struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]*nlbPorts
mutex sync.RWMutex
}
type backend struct {
targetPort int
protocol corev1.Protocol
}
type healthCheck struct {
healthCheckEnabled *bool
healthCheckIntervalSeconds *int64
healthCheckPath *string
healthCheckPort *string
healthCheckProtocol *string
healthCheckTimeoutSeconds *int64
healthyThresholdCount *int64
unhealthyThresholdCount *int64
}
type nlbConfig struct {
loadBalancerARNs []string
healthCheck *healthCheck
vpcID string
backends []*backend
isFixed bool
annotations map[string]string
}
func startWatchTargetGroup(ctx context.Context) error {
// Launch the watcher in the background; errors from the goroutine are logged
// rather than returned, since reading a variable written by the goroutine
// here would be racy and almost always observe nil.
go func() {
if err := watchTargetGroup(ctx); err != nil {
log.Errorf("failed to watch TargetGroups: %v", err)
}
}()
return nil
}
func watchTargetGroup(ctx context.Context) error {
scheme := runtime.NewScheme()
utilruntime.Must(ackv1alpha1.AddToScheme(scheme))
utilruntime.Must(elbv2api.AddToScheme(scheme))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Metrics: metricsserver.Options{
BindAddress: "0",
},
Scheme: scheme,
})
if err != nil {
return err
}
informer, err := mgr.GetCache().GetInformer(ctx, &ackv1alpha1.TargetGroup{})
if err != nil {
return fmt.Errorf("failed to get informer: %v", err)
}
if _, err := informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
handleTargetGroupEvent(ctx, mgr.GetClient(), obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
handleTargetGroupEvent(ctx, mgr.GetClient(), newObj)
},
}); err != nil {
return fmt.Errorf("failed to add event handler: %v", err)
}
log.Info("Start to watch TargetGroups successfully")
return mgr.Start(ctx)
}
func handleTargetGroupEvent(ctx context.Context, c client.Client, obj interface{}) {
targetGroup, ok := obj.(*ackv1alpha1.TargetGroup)
if !ok {
log.Warning("Failed to convert event.Object to TargetGroup")
return
}
if targetGroup.Labels[AWSTargetGroupSyncStatus] == "false" {
targetGroupARN, err := getACKTargetGroupARN(targetGroup)
if err != nil {
return
}
log.Infof("targetGroup sync request watched, start to sync %s/%s, ARN: %s",
targetGroup.GetNamespace(), targetGroup.GetName(), targetGroupARN)
err = syncListenerAndTargetGroupBinding(ctx, c, targetGroup, &targetGroupARN)
if err != nil {
log.Errorf("syncListenerAndTargetGroupBinding by targetGroup %s error %v",
pretty.Sprint(targetGroup), err)
return
}
patch := client.RawPatch(types.MergePatchType,
[]byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"true"}}}`, AWSTargetGroupSyncStatus)))
err = c.Patch(ctx, targetGroup, patch)
if err != nil {
log.Warningf("patch targetGroup %s %s error %v",
pretty.Sprint(targetGroup), AWSTargetGroupSyncStatus, err)
}
}
}
func (n *NlbPlugin) Name() string {
return NlbNetwork
}
func (n *NlbPlugin) Alias() string {
return AliasNlb
}
func (n *NlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
n.mutex.Lock()
defer n.mutex.Unlock()
err := startWatchTargetGroup(ctx)
if err != nil {
return err
}
nlbOptions, ok := options.(provideroptions.AmazonsWebServicesOptions)
if !ok {
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to nlbOptions"), cperrors.InternalError)
}
n.minPort = nlbOptions.NLBOptions.MinPort
n.maxPort = nlbOptions.NLBOptions.MaxPort
svcList := &corev1.ServiceList{}
err = c.List(ctx, svcList, client.MatchingLabels{ResourceTagKey: ResourceTagValue})
if err != nil {
return err
}
n.initLbCache(svcList.Items)
log.Infof("[%s] podAllocate cache complete initialization: %s", NlbNetwork, pretty.Sprint(n.podAllocate))
return nil
}
func (n *NlbPlugin) initCache(nlbARN string) {
if n.cache[nlbARN] == nil {
n.cache[nlbARN] = make(portAllocated, n.maxPort-n.minPort+1)
for j := n.minPort; j <= n.maxPort; j++ {
n.cache[nlbARN][j] = false
}
}
}
func (n *NlbPlugin) initLbCache(svcList []corev1.Service) {
if n.cache == nil {
n.cache = make(map[string]portAllocated)
}
if n.podAllocate == nil {
n.podAllocate = make(map[string]*nlbPorts)
}
for _, svc := range svcList {
lbARN := svc.Annotations[NlbARNAnnoKey]
if lbARN != "" {
n.initCache(lbARN)
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= n.maxPort && port >= n.minPort {
n.cache[lbARN][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
n.podAllocate[svc.GetNamespace()+"/"+svc.GetName()] = &nlbPorts{arn: lbARN, ports: ports}
}
}
}
}
func (n *NlbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
lbConfig := parseLbConfig(networkConfig)
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(n.syncTargetGroupAndService(lbConfig, pod, c, ctx), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(lbConfig) != svc.GetAnnotations()[NlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(n.syncTargetGroupAndService(lbConfig, pod, c, ctx), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() {
return pod, cperrors.ToPluginError(c.DeleteAllOf(ctx, &elbv2api.TargetGroupBinding{},
client.InNamespace(pod.GetNamespace()),
client.MatchingLabels(map[string]string{ResourceTagKey: ResourceTagValue, SvcSelectorKey: pod.GetName()})),
cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() {
selector := client.MatchingLabels{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
}
var tgbList elbv2api.TargetGroupBindingList
err = c.List(ctx, &tgbList, selector)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
if len(tgbList.Items) != len(svc.Spec.Ports) {
var tgList ackv1alpha1.TargetGroupList
err = c.List(ctx, &tgList, selector)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
patch := client.RawPatch(types.MergePatchType,
[]byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"false"}}}`, AWSTargetGroupSyncStatus)))
for i := range tgList.Items {
err = c.Patch(ctx, &tgList.Items[i], patch)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: generateNlbEndpoint(svc.Annotations[NlbARNAnnoKey]),
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func generateNlbEndpoint(nlbARN string) string {
const arnPartsCount = 6
const loadBalancerPrefix = "loadbalancer/net/"
parts := strings.Split(nlbARN, ":")
if len(parts) != arnPartsCount {
return ""
}
region := parts[3]
loadBalancerName := strings.ReplaceAll(strings.TrimPrefix(parts[5], loadBalancerPrefix), "/", "-")
return fmt.Sprintf("%s.elb.%s.amazonaws.com", loadBalancerName, region)
}
func (n *NlbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range n.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
n.deAllocate(podKey)
}
return nil
}
func (n *NlbPlugin) allocate(lbARNs []string, num int, nsName string) *nlbPorts {
n.mutex.Lock()
defer n.mutex.Unlock()
// Initialize cache for each lbARN if not already done
for _, nlbARN := range lbARNs {
n.initCache(nlbARN)
}
// Find lbARN with enough free ports
selectedARN := n.findLbWithFreePorts(lbARNs, num)
if selectedARN == "" {
return nil
}
// Allocate ports
ports := n.allocatePorts(selectedARN, num)
n.podAllocate[nsName] = &nlbPorts{arn: selectedARN, ports: ports}
log.Infof("pod %s allocate nlb %s ports %v", nsName, selectedARN, ports)
return &nlbPorts{arn: selectedARN, ports: ports}
}
func (n *NlbPlugin) findLbWithFreePorts(lbARNs []string, num int) string {
for _, nlbARN := range lbARNs {
freePorts := 0
for i := n.minPort; i <= n.maxPort && freePorts < num; i++ {
if !n.cache[nlbARN][i] {
freePorts++
}
}
if freePorts >= num {
return nlbARN
}
}
return ""
}
func (n *NlbPlugin) allocatePorts(lbARN string, num int) []int32 {
var ports []int32
for i := 0; i < num; i++ {
for p := n.minPort; p <= n.maxPort; p++ {
if !n.cache[lbARN][p] {
n.cache[lbARN][p] = true
ports = append(ports, p)
break
}
}
}
return ports
}
func (n *NlbPlugin) deAllocate(nsName string) {
n.mutex.Lock()
defer n.mutex.Unlock()
allocatedPorts, exist := n.podAllocate[nsName]
if !exist {
return
}
lbARN := allocatedPorts.arn
ports := allocatedPorts.ports
for _, port := range ports {
n.cache[lbARN][port] = false
}
delete(n.podAllocate, nsName)
log.Infof("pod %s deallocate nlb %s ports %v", nsName, lbARN, ports)
}
func init() {
nlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
amazonsWebServicesProvider.registerPlugin(&nlbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbConfig {
var lbARNs []string
var hc healthCheck
var vpcId string
backends := make([]*backend, 0)
isFixed := false
annotations := map[string]string{}
for _, c := range conf {
switch c.Name {
case NlbARNsConfigName:
for _, nlbARN := range strings.Split(c.Value, ",") {
if nlbARN != "" {
lbARNs = append(lbARNs, nlbARN)
}
}
case NlbHealthCheckConfigName:
for _, healthCheckConf := range strings.Split(c.Value, ",") {
confKV := strings.Split(healthCheckConf, ":")
if len(confKV) == 2 {
switch confKV[0] {
case healthCheckEnabled:
v, err := strconv.ParseBool(confKV[1])
if err != nil {
continue
}
hc.healthCheckEnabled = &v
case healthCheckIntervalSeconds:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthCheckIntervalSeconds = &v
case healthCheckPath:
hc.healthCheckPath = &confKV[1]
case healthCheckPort:
hc.healthCheckPort = &confKV[1]
case healthCheckProtocol:
hc.healthCheckProtocol = &confKV[1]
case healthCheckTimeoutSeconds:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthCheckTimeoutSeconds = &v
case healthyThresholdCount:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthyThresholdCount = &v
case unhealthyThresholdCount:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.unhealthyThresholdCount = &v
}
} else {
log.Warningf("nlb %s %s is invalid", NlbHealthCheckConfigName, healthCheckConf)
}
}
case NlbVPCIdConfigName:
vpcId = c.Value
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
var protocol corev1.Protocol
if len(ppSlice) != 2 {
protocol = corev1.ProtocolTCP
} else {
protocol = corev1.Protocol(ppSlice[1])
}
backends = append(backends, &backend{
targetPort: port,
protocol: protocol,
})
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case NlbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("nlb %s %s is invalid", NlbAnnotations, c.Value)
}
}
}
}
return &nlbConfig{
loadBalancerARNs: lbARNs,
healthCheck: &hc,
vpcID: vpcId,
backends: backends,
isFixed: isFixed,
annotations: annotations,
}
}
func getACKTargetGroupARN(tg *ackv1alpha1.TargetGroup) (string, error) {
if len(tg.Status.Conditions) == 0 {
return "", fmt.Errorf("targetGroup status not ready")
}
if tg.Status.Conditions[0].Status != "True" {
return "", fmt.Errorf("targetGroup status error: %s %s",
*tg.Status.Conditions[0].Message, *tg.Status.Conditions[0].Reason)
}
if tg.Status.ACKResourceMetadata != nil && tg.Status.ACKResourceMetadata.ARN != nil {
return string(*tg.Status.ACKResourceMetadata.ARN), nil
} else {
return "", fmt.Errorf("targetGroup status not ready")
}
}
func (n *NlbPlugin) syncTargetGroupAndService(config *nlbConfig,
pod *corev1.Pod, client client.Client, ctx context.Context) error {
var ports []int32
var lbARN string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := n.podAllocate[podKey]
if !exist {
allocatedPorts = n.allocate(config.loadBalancerARNs, len(config.backends), podKey)
if allocatedPorts == nil {
return fmt.Errorf("no NLB has enough available ports (%d needed) for %s", len(config.backends), podKey)
}
}
lbARN = allocatedPorts.arn
ports = allocatedPorts.ports
ownerReference := getOwnerReference(client, ctx, pod, config.isFixed)
for i := range ports {
targetGroupName := fmt.Sprintf("%s-%d", pod.GetName(), ports[i])
protocol := string(config.backends[i].protocol)
targetPort := int64(config.backends[i].targetPort)
var targetTypeIP = string(ackv1alpha1.TargetTypeEnum_ip)
_, err := controllerutil.CreateOrUpdate(ctx, client, &ackv1alpha1.TargetGroup{
ObjectMeta: metav1.ObjectMeta{
Name: targetGroupName,
Namespace: pod.GetNamespace(),
OwnerReferences: ownerReference,
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
AWSTargetGroupSyncStatus: "false",
},
Annotations: map[string]string{
NlbARNAnnoKey: lbARN,
NlbPortAnnoKey: fmt.Sprintf("%d", ports[i]),
},
},
Spec: ackv1alpha1.TargetGroupSpec{
HealthCheckEnabled: config.healthCheck.healthCheckEnabled,
HealthCheckIntervalSeconds: config.healthCheck.healthCheckIntervalSeconds,
HealthCheckPath: config.healthCheck.healthCheckPath,
HealthCheckPort: config.healthCheck.healthCheckPort,
HealthCheckProtocol: config.healthCheck.healthCheckProtocol,
HealthCheckTimeoutSeconds: config.healthCheck.healthCheckTimeoutSeconds,
HealthyThresholdCount: config.healthCheck.healthyThresholdCount,
UnhealthyThresholdCount: config.healthCheck.unhealthyThresholdCount,
Name: &targetGroupName,
Protocol: &protocol,
Port: &targetPort,
VPCID: &config.vpcID,
TargetType: &targetTypeIP,
Tags: []*ackv1alpha1.Tag{{Key: ptr.To[string](ResourceTagKey),
Value: ptr.To[string](ResourceTagValue)}},
},
}, func() error { return nil })
if err != nil {
return err
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.backends); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(config.backends[i].targetPort),
Port: ports[i],
Protocol: config.backends[i].protocol,
TargetPort: intstr.FromInt(config.backends[i].targetPort),
})
}
annotations := map[string]string{
NlbARNAnnoKey: lbARN,
NlbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
_, err := controllerutil.CreateOrUpdate(ctx, client, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: ownerReference,
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}, func() error { return nil })
if err != nil {
return err
}
return nil
}
func syncListenerAndTargetGroupBinding(ctx context.Context, client client.Client,
tg *ackv1alpha1.TargetGroup, targetGroupARN *string) error {
actionType := listenerActionType
port, err := strconv.ParseInt(tg.Annotations[NlbPortAnnoKey], 10, 64)
if err != nil {
return err
}
lbARN := tg.Annotations[NlbARNAnnoKey]
podName := tg.Labels[SvcSelectorKey]
_, err = controllerutil.CreateOrUpdate(ctx, client, &ackv1alpha1.Listener{
ObjectMeta: metav1.ObjectMeta{
Name: tg.GetName(),
Namespace: tg.GetNamespace(),
OwnerReferences: tg.GetOwnerReferences(),
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: podName,
},
},
Spec: ackv1alpha1.ListenerSpec{
Protocol: tg.Spec.Protocol,
Port: &port,
LoadBalancerARN: &lbARN,
DefaultActions: []*ackv1alpha1.Action{
{
TargetGroupARN: targetGroupARN,
Type: &actionType,
},
},
Tags: []*ackv1alpha1.Tag{{Key: ptr.To[string](ResourceTagKey),
Value: ptr.To[string](ResourceTagValue)}},
},
}, func() error { return nil })
if err != nil {
return err
}
var targetTypeIP = elbv2api.TargetTypeIP
_, err = controllerutil.CreateOrUpdate(ctx, client, &elbv2api.TargetGroupBinding{
ObjectMeta: metav1.ObjectMeta{
Name: tg.GetName(),
Namespace: tg.GetNamespace(),
OwnerReferences: tg.GetOwnerReferences(),
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: podName,
},
},
Spec: elbv2api.TargetGroupBindingSpec{
TargetGroupARN: *targetGroupARN,
TargetType: &targetTypeIP,
ServiceRef: elbv2api.ServiceReference{
Name: podName,
Port: intstr.FromInt(int(port)),
},
},
}, func() error { return nil })
if err != nil {
return err
}
return nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func getOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
"reflect"
"sync"
"testing"
"github.com/kr/pretty"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
)
func TestAllocateDeAllocate(t *testing.T) {
tests := []struct {
loadBalancerARNs []string
nlb *NlbPlugin
num int
podKey string
}{
{
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
nlb: &NlbPlugin{
maxPort: int32(1000),
minPort: int32(951),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]*nlbPorts),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
},
{
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
nlb: &NlbPlugin{
maxPort: int32(955),
minPort: int32(951),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]*nlbPorts),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 6,
},
}
for _, test := range tests {
allocatedPorts := test.nlb.allocate(test.loadBalancerARNs, test.num, test.podKey)
if int(test.nlb.maxPort-test.nlb.minPort+1) < test.num && allocatedPorts != nil {
t.Errorf("insufficient available ports but NLB was still allocated: %s",
pretty.Sprint(allocatedPorts))
}
if allocatedPorts == nil {
continue
}
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocation", test.podKey)
}
for _, port := range allocatedPorts.ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocated port %d is out of range", port)
}
if test.nlb.cache[allocatedPorts.arn][port] == false {
t.Errorf("allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range allocatedPorts.ports {
if test.nlb.cache[allocatedPorts.arn][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocation", test.podKey)
}
}
}
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
loadBalancerARNs []string
healthCheck *healthCheck
backends []*backend
isFixed bool
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbARNsConfigName,
Value: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870"},
healthCheck: &healthCheck{},
backends: []*backend{
{
targetPort: 80,
protocol: corev1.ProtocolTCP,
},
},
isFixed: false,
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbARNsConfigName,
Value: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
},
{
Name: NlbHealthCheckConfigName,
Value: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2",
},
{
Name: PortProtocolsConfigName,
Value: "10000/UDP,10001,10002/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870", "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
healthCheck: &healthCheck{
healthCheckEnabled: ptr.To[bool](true),
healthCheckIntervalSeconds: ptr.To[int64](30),
healthCheckPath: ptr.To[string]("/health"),
healthCheckPort: ptr.To[string]("8081"),
healthCheckProtocol: ptr.To[string]("HTTP"),
healthCheckTimeoutSeconds: ptr.To[int64](10),
healthyThresholdCount: ptr.To[int64](5),
unhealthyThresholdCount: ptr.To[int64](2),
},
backends: []*backend{
{
targetPort: 10000,
protocol: corev1.ProtocolUDP,
},
{
targetPort: 10001,
protocol: corev1.ProtocolTCP,
},
{
targetPort: 10002,
protocol: corev1.ProtocolTCP,
},
},
isFixed: true,
},
}
for _, test := range tests {
sc := parseLbConfig(test.conf)
if !reflect.DeepEqual(test.loadBalancerARNs, sc.loadBalancerARNs) {
t.Errorf("loadBalancerARNs expect: %v, actual: %v", test.loadBalancerARNs, sc.loadBalancerARNs)
}
if !reflect.DeepEqual(test.healthCheck, sc.healthCheck) {
t.Errorf("healthCheck expect: %s, actual: %s", pretty.Sprint(test.healthCheck), pretty.Sprint(sc.healthCheck))
}
if !reflect.DeepEqual(test.backends, sc.backends) {
t.Errorf("ports expect: %s, actual: %s", pretty.Sprint(test.backends), pretty.Sprint(sc.backends))
}
if test.isFixed != sc.isFixed {
t.Errorf("isFixed expect: %v, actual: %v", test.isFixed, sc.isFixed)
}
}
}
func TestInitLbCache(t *testing.T) {
test := struct {
n *NlbPlugin
svcList []corev1.Service
cache map[string]portAllocated
podAllocate map[string]*nlbPorts
}{
n: &NlbPlugin{
minPort: 951,
maxPort: 1000,
},
cache: map[string]portAllocated{
"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870": map[int32]bool{
988: true,
},
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e": map[int32]bool{
951: true,
999: true,
},
},
podAllocate: map[string]*nlbPorts{
"ns-0/name-0": {
arn: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
ports: []int32{988},
},
"ns-1/name-1": {
arn: "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
ports: []int32{951, 999},
},
},
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
NlbARNAnnoKey: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
},
Labels: map[string]string{ResourceTagKey: ResourceTagValue},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 988,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
NlbARNAnnoKey: "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
},
Labels: map[string]string{ResourceTagKey: ResourceTagValue},
Namespace: "ns-1",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-B",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(8080),
Port: 951,
Protocol: corev1.ProtocolTCP,
},
{
TargetPort: intstr.FromInt(8081),
Port: 999,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
}
test.n.initLbCache(test.svcList)
for arn, pa := range test.cache {
for port, isAllocated := range pa {
if test.n.cache[arn][port] != isAllocated {
t.Errorf("nlb arn %s port %d isAllocated, expect: %t, actual: %t", arn, port, isAllocated, test.n.cache[arn][port])
}
}
}
if !reflect.DeepEqual(test.n.podAllocate, test.podAllocate) {
t.Errorf("podAllocate expect %v, but actually got %v", test.podAllocate, test.n.podAllocate)
}
}

/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"context"
"github.com/openkruise/kruise-game/cloudprovider/errors"
corev1 "k8s.io/api/core/v1"
client "sigs.k8s.io/controller-runtime/pkg/client"
)
/*
|-Cloud Provider
|------ Kubernetes
|------ plugins
|------ AlibabaCloud
|------- plugins
|------ others
*/
type Plugin interface {
Name() string
// Alias defines an alias for plugins that provide similar functionality across multiple cloud providers
Alias() string
Init(client client.Client, options CloudProviderOptions, ctx context.Context) error
// Pod Event handler
OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError)
OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError)
OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError
}
type CloudProvider interface {
Name() string
ListPlugins() (map[string]Plugin, error)
}
type CloudProviderOptions interface {
Enabled() bool
Valid() bool
}

cloudprovider/config.go
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"flag"
"github.com/BurntSushi/toml"
"github.com/openkruise/kruise-game/cloudprovider/options"
"k8s.io/klog/v2"
)
var Opt *Options
type Options struct {
CloudProviderConfigFile string
}
func init() {
Opt = &Options{}
}
func InitCloudProviderFlags() {
flag.StringVar(&Opt.CloudProviderConfigFile, "provider-config", "/etc/kruise-game/config.toml", "Cloud Provider Config File Path.")
}
type ConfigFile struct {
Path string
}
type CloudProviderConfig struct {
KubernetesOptions CloudProviderOptions
AlibabaCloudOptions CloudProviderOptions
VolcengineOptions CloudProviderOptions
AmazonsWebServicesOptions CloudProviderOptions
TencentCloudOptions CloudProviderOptions
JdCloudOptions CloudProviderOptions
HwCloudOptions CloudProviderOptions
}
type tomlConfigs struct {
Kubernetes options.KubernetesOptions `toml:"kubernetes"`
AlibabaCloud options.AlibabaCloudOptions `toml:"alibabacloud"`
Volcengine options.VolcengineOptions `toml:"volcengine"`
AmazonsWebServices options.AmazonsWebServicesOptions `toml:"aws"`
TencentCloud options.TencentCloudOptions `toml:"tencentcloud"`
JdCloud options.JdCloudOptions `toml:"jdcloud"`
HwCloud options.HwCloudOptions `toml:"hwcloud"`
}
func (cf *ConfigFile) Parse() *CloudProviderConfig {
var config tomlConfigs
if _, err := toml.DecodeFile(cf.Path, &config); err != nil {
klog.Fatal(err)
}
return &CloudProviderConfig{
KubernetesOptions: config.Kubernetes,
AlibabaCloudOptions: config.AlibabaCloud,
VolcengineOptions: config.Volcengine,
AmazonsWebServicesOptions: config.AmazonsWebServices,
TencentCloudOptions: config.TencentCloud,
JdCloudOptions: config.JdCloud,
HwCloudOptions: config.HwCloud,
}
}
func NewConfigFile(path string) *ConfigFile {
return &ConfigFile{
Path: path,
}
}

package cloudprovider
import (
"github.com/openkruise/kruise-game/cloudprovider/options"
"io"
"os"
"reflect"
"testing"
)
func TestParse(t *testing.T) {
tests := []struct {
fileString string
kubernetes options.KubernetesOptions
alibabacloud options.AlibabaCloudOptions
}{
{
fileString: `
[kubernetes]
enable = true
[kubernetes.hostPort]
max_port = 9000
min_port = 8000
[alibabacloud]
enable = true
`,
kubernetes: options.KubernetesOptions{
Enable: true,
HostPort: options.HostPortOptions{
MaxPort: 9000,
MinPort: 8000,
},
},
alibabacloud: options.AlibabaCloudOptions{
Enable: true,
},
},
}
for _, test := range tests {
tempFile := "config"
file, err := os.Create(tempFile)
if err != nil {
t.Errorf("create file failed, because of %s", err.Error())
}
_, err = io.WriteString(file, test.fileString)
if err != nil {
t.Errorf("write file failed, because of %s", err.Error())
}
err = file.Close()
if err != nil {
t.Errorf("close file failed, because of %s", err.Error())
}
cf := ConfigFile{
Path: tempFile,
}
cloudProviderConfig := cf.Parse()
if !reflect.DeepEqual(cloudProviderConfig.AlibabaCloudOptions, test.alibabacloud) {
t.Errorf("expect AlibabaCloudOptions: %v, but got %v", test.alibabacloud, cloudProviderConfig.AlibabaCloudOptions)
}
if !reflect.DeepEqual(cloudProviderConfig.KubernetesOptions, test.kubernetes) {
t.Errorf("expect KubernetesOptions: %v, but got %v", test.kubernetes, cloudProviderConfig.KubernetesOptions)
}
os.Remove(tempFile)
}
}

/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package errors
import "fmt"
// PluginErrorType describes a high-level category of a given error
type PluginErrorType string
const (
// ApiCallError is an error related to communication with k8s API server
ApiCallError PluginErrorType = "apiCallError"
// InternalError is an error inside plugin
InternalError PluginErrorType = "internalError"
// ParameterError is an error related to bad parameters provided by a user
ParameterError PluginErrorType = "parameterError"
// NotImplementedError is an error returned when the functionality is not implemented by developers
NotImplementedError PluginErrorType = "notImplementedError"
)
type PluginError interface {
// Error implements golang error interface
Error() string
// Type returns the type of the PluginError
Type() PluginErrorType
}
type pluginErrorImpl struct {
	errorType PluginErrorType
	msg       string
}
func (c pluginErrorImpl) Error() string {
	return c.msg
}
func (c pluginErrorImpl) Type() PluginErrorType {
	return c.errorType
}
// NewPluginError returns a new plugin error with a message constructed from a format string
func NewPluginError(errorType PluginErrorType, msg string, args ...interface{}) PluginError {
	return pluginErrorImpl{
		errorType: errorType,
		msg:       fmt.Sprintf(msg, args...),
	}
}
// ToPluginError wraps a plain error with the given error type; a nil error yields a nil PluginError
func ToPluginError(err error, errorType PluginErrorType) PluginError {
	if err == nil {
		return nil
	}
	return pluginErrorImpl{
		errorType: errorType,
		msg:       err.Error(),
	}
}

/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hwcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
PortProtocolsConfigName = "PortProtocols"
ExternalTrafficPolicyTypeConfigName = "ExternalTrafficPolicyType"
PublishNotReadyAddressesConfigName = "PublishNotReadyAddresses"
ElbIdAnnotationKey = "kubernetes.io/elb.id"
ElbIdsConfigName = "ElbIds"
ElbClassAnnotationKey = "kubernetes.io/elb.class"
ElbClassConfigName = "ElbClass"
ElbAvailableZoneAnnotationKey = "kubernetes.io/elb.availability-zones"
ElbAvailableZoneAnnotationConfigName = "ElbAvailableZone"
ElbConnLimitAnnotationKey = "kubernetes.io/elb.connection-limit"
ElbConnLimitConfigName = "ElbConnLimit"
ElbSubnetAnnotationKey = "kubernetes.io/elb.subnet-id"
ElbSubnetConfigName = "ElbSubnetId"
ElbEipAnnotationKey = "kubernetes.io/elb.eip-id"
ElbEipConfigName = "ElbEipId"
ElbEipKeepAnnotationKey = "kubernetes.io/elb.keep-eip"
ElbEipKeepConfigName = "ElbKeepd"
ElbEipAutoCreateOptionAnnotationKey = "kubernetes.io/elb.eip-auto-create-option"
ElbEipAutoCreateOptionConfigName = "ElbEipAutoCreateOption"
ElbLbAlgorithmAnnotationKey = "kubernetes.io/elb.lb-algorithm"
ElbLbAlgorithmConfigName = "ElbLbAlgorithm"
ElbSessionAffinityFlagAnnotationKey = "kubernetes.io/elb.session-affinity-flag"
ElbSessionAffinityFlagConfigName = "ElbSessionAffinityFlag"
ElbSessionAffinityOptionAnnotationKey = "kubernetes.io/elb.session-affinity-option"
ElbSessionAffinityOptionConfigName = "ElbSessionAffinityOption"
ElbTransparentClientIPAnnotationKey = "kubernetes.io/elb.enable-transparent-client-ip"
ElbTransparentClientIPConfigName = "ElbTransparentClientIP"
ElbXForwardedHostAnnotationKey = "kubernetes.io/elb.x-forwarded-host"
ElbXForwardedHostConfigName = "ElbXForwardedHost"
ElbTlsRefAnnotationKey = "kubernetes.io/elb.default-tls-container-ref"
ElbTlsRefConfigName = "ElbTlsRef"
ElbIdleTimeoutAnnotationKey = "kubernetes.io/elb.idle-timeout"
ElbIdleTimeoutConfigName = "ElbIdleTimeout"
ElbRequestTimeoutAnnotationKey = "kubernetes.io/elb.request-timeout"
ElbRequestTimeoutConfigName = "ElbRequestTimeout"
ElbResponseTimeoutAnnotationKey = "kubernetes.io/elb.response-timeout"
ElbResponseTimeoutConfigName = "ElbResponseTimeout"
ElbEnableCrossVPCAnnotationKey = "kubernetes.io/elb.enable-cross-vpc"
ElbEnableCrossVPCConfigName = "ElbEnableCrossVPC"
ElbL4FlavorIDAnnotationKey = "kubernetes.io/elb.l4-flavor-id"
ElbL4FlavorIDConfigName = "ElbL4FlavorID"
ElbL7FlavorIDAnnotationKey = "kubernetes.io/elb.l7-flavor-id"
ElbL7FlavorIDConfigName = "ElbL7FlavorID"
LBHealthCheckSwitchAnnotationKey = "kubernetes.io/elb.health-check-flag"
LBHealthCheckSwitchConfigName = "LBHealthCheckFlag"
LBHealthCheckOptionAnnotationKey = "kubernetes.io/elb.health-check-option"
LBHealthCHeckOptionConfigName = "LBHealthCheckOption"
)
const (
ElbConfigHashKey = "game.kruise.io/network-config-hash"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
ProtocolTCPUDP corev1.Protocol = "TCPUDP"
FixedConfigName = "Fixed"
ElbNetwork = "HwCloud-ELB"
AliasELB = "ELB-Network"
ElbClassDedicated = "dedicated"
ElbClassShared = "shared"
ElbLbAlgorithmRoundRobin = "ROUND_ROBIN"
ElbLbAlgorithmLeastConn = "LEAST_CONNECTIONS"
ElbLbAlgorithmSourceIP = "SOURCE_IP"
)
type portAllocated map[int32]bool
type ElbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type elbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
elbClass string
elbConnLimit int32
elbLbAlgorithm string
elbSessionAffinityFlag string
elbSessionAffinityOption string
elbTransparentClientIP bool
elbXForwardedHost bool
elbIdleTimeout int32
elbRequestTimeout int32
elbResponseTimeout int32
externalTrafficPolicyType corev1.ServiceExternalTrafficPolicyType
publishNotReadyAddresses bool
lBHealthCheckSwitch string
lBHealtchCheckOption string
}
func (s *ElbPlugin) Name() string {
return ElbNetwork
}
func (s *ElbPlugin) Alias() string {
return AliasELB
}
func (s *ElbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
s.mutex.Lock()
defer s.mutex.Unlock()
elbOptions := options.(provideroptions.HwCloudOptions).ELBOptions
s.minPort = elbOptions.MinPort
s.maxPort = elbOptions.MaxPort
s.blockPorts = elbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
if err != nil {
return err
}
s.cache, s.podAllocate = initLbCache(svcList.Items, s.minPort, s.maxPort, s.blockPorts)
log.Infof("[%s] podAllocate cache completed initialization: %v", ElbNetwork, s.podAllocate)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Annotations[ElbIdAnnotationKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
// init cache for that lb
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort+1)
for i := minPort; i <= maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
// fill in cache for that lb
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
value, ok := newCache[lbId][port]
if !ok || !value {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("svc %s/%s allocate elb %s ports %v", svc.Namespace, svc.Name, lbId, ports)
}
}
}
return newCache, newPodAllocate
}
func (s *ElbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (s *ElbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if len(svc.OwnerReferences) > 0 && svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
	log.Infof("[%s] waiting for old svc %s/%s to be deleted. old owner pod uid is %s, but now is %s", ElbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[ElbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpdateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (s *ElbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range s.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
s.deAllocate(podKey)
}
return nil
}
func (s *ElbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
s.mutex.Lock()
defer s.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, slbId := range lbIds {
sum := 0
for i := s.minPort; i <= s.maxPort; i++ {
if !s.cache[slbId][i] {
sum++
}
if sum >= num {
lbId = slbId
break
}
}
// stop at the first lb with enough free ports
if lbId != "" {
break
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if s.cache[lbId] == nil {
// init cache for new lb
s.cache[lbId] = make(portAllocated, s.maxPort-s.minPort+1)
for i := s.minPort; i <= s.maxPort; i++ {
s.cache[lbId][i] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
}
for p, allocated := range s.cache[lbId] {
if !allocated {
port = p
break
}
}
s.cache[lbId][port] = true
ports = append(ports, port)
}
s.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocate elb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (s *ElbPlugin) deAllocate(nsName string) {
s.mutex.Lock()
defer s.mutex.Unlock()
allocatedPorts, exist := s.podAllocate[nsName]
if !exist {
return
}
slbPorts := strings.Split(allocatedPorts, ":")
lbId := slbPorts[0]
ports := util.StringToInt32Slice(slbPorts[1], ",")
for _, port := range ports {
s.cache[lbId][port] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
delete(s.podAllocate, nsName)
log.Infof("pod %s deallocate elb %s ports %v", nsName, lbId, ports)
}
func init() {
elbPlugin := ElbPlugin{
mutex: sync.RWMutex{},
}
hwCloudProvider.registerPlugin(&elbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*elbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeCluster
publishNotReadyAddresses := false
elbClass := ElbClassDedicated
elbConnLimit := int32(-1)
elbLbAlgorithm := ElbLbAlgorithmRoundRobin
elbSessionAffinityFlag := "off"
elbSessionAffinityOption := ""
elbTransparentClientIP := false
elbXForwardedHost := false
elbIdleTimeout := int32(-1)
elbRequestTimeout := int32(-1)
elbResponseTimeout := int32(-1)
lBHealthCheckSwitch := "on"
LBHealthCHeckOptionConfig := ""
for _, c := range conf {
switch c.Name {
case ElbIdsConfigName:
for _, slbId := range strings.Split(c.Value, ",") {
if slbId != "" {
lbIds = append(lbIds, slbId)
}
}
if len(lbIds) == 0 {
return nil, fmt.Errorf("no elb id found, must specify at least one elb id")
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeLocal)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeLocal
}
case PublishNotReadyAddressesConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
publishNotReadyAddresses = v
case ElbClassConfigName:
if strings.EqualFold(c.Value, string(ElbClassShared)) {
elbClass = ElbClassShared
}
case ElbConnLimitConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil {
log.Warningf("ignore invalid elb connection limit value: %s", c.Value)
continue
}
elbConnLimit = int32(v)
case ElbLbAlgorithmConfigName:
if strings.EqualFold(c.Value, ElbLbAlgorithmRoundRobin) {
elbLbAlgorithm = ElbLbAlgorithmRoundRobin
}
if strings.EqualFold(c.Value, ElbLbAlgorithmLeastConn) {
elbLbAlgorithm = ElbLbAlgorithmLeastConn
}
if strings.EqualFold(c.Value, ElbLbAlgorithmSourceIP) {
elbLbAlgorithm = ElbLbAlgorithmSourceIP
}
case ElbSessionAffinityFlagConfigName:
if strings.EqualFold(c.Value, "on") {
elbSessionAffinityFlag = "on"
}
case ElbSessionAffinityOptionConfigName:
if !json.Valid([]byte(c.Value)) {
return nil, fmt.Errorf("invalid elb session affinity option value: %s", c.Value)
}
elbSessionAffinityOption = c.Value
case ElbTransparentClientIPConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
log.Warningf("ignore invalid elb transparent client ip value: %s", c.Value)
continue
}
elbTransparentClientIP = v
case ElbXForwardedHostConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
log.Warningf("ignore invalid elb x forwarded host value: %s", c.Value)
continue
}
elbXForwardedHost = v
case ElbIdleTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil || v < 0 || v > 4000 {
log.Warningf("ignore invalid elb idle timeout value: %s", c.Value)
continue
}
elbIdleTimeout = int32(v)
case ElbRequestTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil || v < 1 || v > 300 {
log.Warningf("ignore invalid elb request timeout value: %s", c.Value)
continue
}
elbRequestTimeout = int32(v)
case ElbResponseTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil || v < 1 || v > 300 {
log.Warningf("ignore invalid elb response timeout value: %s", c.Value)
continue
}
elbResponseTimeout = int32(v)
case LBHealthCheckSwitchConfigName:
checkSwitch := strings.ToLower(c.Value)
if checkSwitch != "on" && checkSwitch != "off" {
return nil, fmt.Errorf("invalid lb health check switch value: %s", c.Value)
}
lBHealthCheckSwitch = checkSwitch
case LBHealthCHeckOptionConfigName:
if json.Valid([]byte(c.Value)) {
LBHealthCHeckOptionConfig = c.Value
} else {
return nil, fmt.Errorf("invalid lb health check option value: %s", c.Value)
}
}
}
return &elbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
externalTrafficPolicyType: externalTrafficPolicy,
publishNotReadyAddresses: publishNotReadyAddresses,
elbClass: elbClass,
elbConnLimit: elbConnLimit,
elbLbAlgorithm: elbLbAlgorithm,
elbSessionAffinityFlag: elbSessionAffinityFlag,
elbSessionAffinityOption: elbSessionAffinityOption,
elbTransparentClientIP: elbTransparentClientIP,
elbXForwardedHost: elbXForwardedHost,
elbIdleTimeout: elbIdleTimeout,
elbRequestTimeout: elbRequestTimeout,
elbResponseTimeout: elbResponseTimeout,
lBHealthCheckSwitch: lBHealthCheckSwitch,
lBHealtchCheckOption: LBHealthCHeckOptionConfig,
}, nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (s *ElbPlugin) consSvc(sc *elbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := s.podAllocate[podKey]
if exist {
slbPorts := strings.Split(allocatedPorts, ":")
lbId = slbPorts[0]
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = s.allocate(sc.lbIds, len(sc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no available ports for %v", sc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(sc.targetPorts); i++ {
if sc.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(corev1.ProtocolTCP))),
Port: ports[i],
Protocol: corev1.ProtocolTCP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(corev1.ProtocolUDP))),
Port: ports[i],
Protocol: corev1.ProtocolUDP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(sc.protocols[i]))),
Port: ports[i],
Protocol: sc.protocols[i],
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
}
}
svcAnnotations := map[string]string{
ElbIdAnnotationKey: lbId,
ElbConfigHashKey: util.GetHash(sc),
ElbClassAnnotationKey: sc.elbClass,
ElbLbAlgorithmAnnotationKey: sc.elbLbAlgorithm,
ElbSessionAffinityFlagAnnotationKey: sc.elbSessionAffinityFlag,
ElbSessionAffinityOptionAnnotationKey: sc.elbSessionAffinityOption,
ElbTransparentClientIPAnnotationKey: strconv.FormatBool(sc.elbTransparentClientIP),
ElbXForwardedHostAnnotationKey: strconv.FormatBool(sc.elbXForwardedHost),
LBHealthCheckSwitchAnnotationKey: sc.lBHealthCheckSwitch,
}
if sc.elbClass != ElbClassDedicated {
svcAnnotations[ElbConnLimitAnnotationKey] = strconv.Itoa(int(sc.elbConnLimit))
}
if sc.elbIdleTimeout != -1 {
svcAnnotations[ElbIdleTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbIdleTimeout))
}
if sc.elbRequestTimeout != -1 {
svcAnnotations[ElbRequestTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbRequestTimeout))
}
if sc.elbResponseTimeout != -1 {
svcAnnotations[ElbResponseTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbResponseTimeout))
}
if sc.lBHealthCheckSwitch == "on" {
svcAnnotations[LBHealthCheckOptionAnnotationKey] = sc.lBHealtchCheckOption
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, sc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: sc.externalTrafficPolicyType,
PublishNotReadyAddresses: sc.publishNotReadyAddresses,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

package hwcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
HwCloud = "HwCloud"
)
var (
hwCloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return HwCloud
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin registers a network plugin to this cloud provider
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewHwCloudProvider() (cloudprovider.CloudProvider, error) {
return hwCloudProvider, nil
}

English | [中文](./README.md)
Based on JdCloud Container Service, combined with OKG, a variety of network model plugins are provided for game scenarios.
## JdCloud-NLB configuration
JdCloud Container Service supports reusing an NLB (Network Load Balancer) in Kubernetes: different services (svcs) can use different ports of the same NLB. The JdCloud-NLB network plugin therefore records the port allocation of each NLB. For pods that specify the network type as JdCloud-NLB, the plugin automatically allocates a port and creates a service object; once it detects that the public IP of the svc has been created successfully, the GameServer's network transitions to the Ready state.
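The allocation record the plugin keeps for each pod can be pictured as a `lbId:port1,port2,...` string. A minimal Go sketch of that bookkeeping format (the helper names are illustrative, not part of the plugin API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// encodeAllocation renders a pod's allocation as "lbId:port1,port2,...",
// the shape the plugin stores in its podAllocate map.
func encodeAllocation(lbId string, ports []int32) string {
	strs := make([]string, 0, len(ports))
	for _, p := range ports {
		strs = append(strs, strconv.Itoa(int(p)))
	}
	return lbId + ":" + strings.Join(strs, ",")
}

// decodeAllocation parses the record back into the lb id and its ports.
func decodeAllocation(record string) (string, []int32) {
	parts := strings.SplitN(record, ":", 2)
	if len(parts) != 2 {
		return "", nil
	}
	var ports []int32
	for _, s := range strings.Split(parts[1], ",") {
		if p, err := strconv.Atoi(s); err == nil {
			ports = append(ports, int32(p))
		}
	}
	return parts[0], ports
}

func main() {
	fmt.Println(decodeAllocation(encodeAllocation("netlb-aaa", []int32{531, 532}))) // netlb-aaa [531 532]
}
```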
### plugin configuration
```toml
[jdcloud]
enable = true
[jdcloud.nlb]
#To allocate external access ports for Pods, you need to define the idle port ranges that the NLB (Network Load Balancer) can use. The maximum range for each port segment is 200 ports.
max_port = 700
min_port = 500
```
### Parameter
#### NlbIds
- Meaning: the ids of the NLBs. You can fill in more than one. The NLBs must be created in JdCloud beforehand.
- Value: each nlbId is separated by `,`. For example: `netlb-aaa,netlb-bbb,...`
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: `port1/protocol1,port2/protocol2,...` The protocol names must be in uppercase letters.
- Configurable: Y
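As a rough illustration of how a `PortProtocols` value is interpreted (TCP is assumed when the protocol part is omitted; the helper name is hypothetical, not the plugin's API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePortProtocols splits a PortProtocols config value such as
// "80/TCP,8211" into ports and protocols, defaulting to TCP when the
// protocol part is omitted and skipping malformed entries.
func parsePortProtocols(value string) ([]int, []string) {
	var ports []int
	var protocols []string
	for _, pp := range strings.Split(value, ",") {
		ppSlice := strings.Split(pp, "/")
		port, err := strconv.Atoi(ppSlice[0])
		if err != nil {
			continue // not a number, skip this entry
		}
		ports = append(ports, port)
		if len(ppSlice) != 2 {
			protocols = append(protocols, "TCP")
		} else {
			protocols = append(protocols, ppSlice[1])
		}
	}
	return ports, protocols
}

func main() {
	ports, protocols := parsePortProtocols("80/TCP,8211")
	fmt.Println(ports, protocols) // [80 8211] [TCP TCP]
}
```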
#### Fixed
- Meaning: whether the mapping relationship is fixed. If so, the mapping relationship remains unchanged even if the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated service is assigned a nodePort. Can be set to false only in NLB passthrough mode.
- Value: false / true
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the container names that are allowed to be not ready during in-place updating, while traffic is not cut off.
- Value: `{containerName_0},{containerName_1},...` For example: `sidecar`
- Configurable: it cannot be changed during the in-place updating process.
#### Annotations
- Meaning: the annotations added to the service.
- Value: `key1:value1,key2:value2,...`
- Configurable: Y
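A minimal sketch of how an `Annotations` value of this shape can be turned into a map to merge into the generated Service's metadata (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAnnotations turns an Annotations config value such as
// "key1:value1,key2:value2" into a map. Entries without a ':' or with
// an empty key are skipped.
func parseAnnotations(value string) map[string]string {
	annotations := map[string]string{}
	for _, kv := range strings.Split(value, ",") {
		parts := strings.SplitN(kv, ":", 2)
		if len(parts) != 2 || parts[0] == "" {
			continue
		}
		annotations[parts[0]] = parts[1]
	}
	return annotations
}

func main() {
	fmt.Println(parseAnnotations("key1:value1,key2:value2"))
}
```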
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: nlb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: JdCloud-NLB
networkConf:
- name: NlbIds
#Fill in Jdcloud Cloud LoadBalancer Id here
value: netlb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the annotations to be added to the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T08:00:20Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
ports:
- name: "8211"
port: 531
protocol: UDP
internalAddresses:
- ip: 10.0.0.95
ports:
- name: "8211"
port: 8211
protocol: UDP
lastTransitionTime: "2024-11-04T08:00:20Z"
networkType: JdCloud-NLB
```
## JdCloud-EIP configuration
JdCloud Container Service supports binding an Elastic Public IP directly to a pod in Kubernetes, allowing the pod to communicate directly with the external network.
- The cluster's network plugin must be Yunjian-CNI; clusters created with Flannel cannot be used.
- For the usage restrictions of Elastic Public IPs, please refer to the JdCloud Elastic Public IP product documentation.
- The EIP-Controller component must be installed.
- The Elastic Public IP is not deleted when the pod is destroyed.
### Parameter
#### BandwidthConfigName
- Meaning: the bandwidth of the Elastic Public IP in Mbps; the value range is [1, 1024].
- Value: must be an integer
- Configurable: Y
#### ChargeTypeConfigName
- Meaning: the billing method of the Elastic Public IP.
- Value: string, `postpaid_by_usage` / `postpaid_by_duration`
- Configurable: Y
#### FixedEIPConfigName
- Meaning: whether to fix the Elastic Public IP. If so, the EIP will not change even if the pod is deleted and recreated.
- Value: string, "false" / "true"
- Configurable: Y
#### AssignEIPConfigName
- Meaning: whether to designate a specific Elastic Public IP. If true, provide the ID of the Elastic Public IP; otherwise, an EIP is automatically allocated.
- Value: string, "false" / "true"
#### EIPIdConfigName
- Meaning: if a specific Elastic Public IP is designated, its ID must be provided; the component automatically performs the lookup and binding.
- Value: string, for example: `fip-xxxxxxxx`
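Under the hood, these parameters are applied by stamping `jdos.jd.com/*` annotations onto the pod for the EIP controller to act on. A simplified Go sketch of that mapping (the struct and helper are illustrative; the annotation keys and the `Bandwidth`/`ChargeType`/`Fixed` config names follow the plugin source):

```go
package main

import "fmt"

// NetworkConfParam mirrors the name/value pairs under networkConf.
type NetworkConfParam struct {
	Name  string
	Value string
}

// eipAnnotations maps JdCloud-EIP networkConf parameters onto the
// jdos.jd.com pod annotations the EIP controller watches, roughly as
// the plugin's OnPodAdded does.
func eipAnnotations(namespace, name string, conf []NetworkConfParam) map[string]string {
	annotations := map[string]string{
		"jdos.jd.com/eip.enable": "true",
		"jdos.jd.com/eip-name":   namespace + "/" + name,
	}
	for _, c := range conf {
		switch c.Name {
		case "Bandwidth":
			annotations["jdos.jd.com/eip.bandwith"] = c.Value
		case "ChargeType":
			annotations["jdos.jd.com/eip.chargeMode"] = c.Value
		case "Fixed":
			annotations["jdos.jd.com/eip.static"] = c.Value
		}
	}
	return annotations
}

func main() {
	fmt.Println(eipAnnotations("default", "eip-0", []NetworkConfParam{
		{Name: "Bandwidth", Value: "10"},
		{Name: "ChargeType", Value: "postpaid_by_usage"},
	}))
}
```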
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip
namespace: default
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
network:
networkType: JdCloud-EIP
networkConf:
- name: "BandWidth"
value: "10"
- name: "ChargeType"
value: postpaid_by_usage
- name: "Fixed"
value: "false"
replicas: 3
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T10:53:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
internalAddresses:
- ip: 10.0.0.95
lastTransitionTime: "2024-11-04T10:53:14Z"
networkType: JdCloud-EIP
```

中文 | [English](./README.md)
Based on JdCloud Container Service, combined with OKG, a variety of network model plugins are provided for game scenarios.
## JdCloud-NLB configuration
JdCloud Container Service supports reusing an NLB in Kubernetes: different svcs can use different ports of the same NLB. The JdCloud-NLB network plugin therefore records the port allocation of each NLB. For pods that specify the network type as JdCloud-NLB, the plugin automatically allocates a port and creates a service object; once it detects that the public IP of the svc has been created successfully, the GameServer's network becomes Ready.
### Plugin configuration
```toml
[jdcloud]
enable = true
[jdcloud.nlb]
#Define the idle port range that the NLB can use to allocate external access ports for pods. The maximum range of each port segment is 200 ports.
max_port = 700
min_port = 500
```
### Parameter
#### NlbIds
- Meaning: the ids of the NLBs. You can fill in more than one. The NLBs must be created in JdCloud beforehand.
- Value: each nlbId is separated by `,`. For example: `netlb-aaa,netlb-bbb,...`
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: `port1/protocol1,port2/protocol2,...` The protocol names must be in uppercase letters.
- Configurable: Y
#### Fixed
- Meaning: whether the access IP/ports are fixed. If so, the network mapping remains unchanged even if the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated service is assigned a nodePort. Can be set to false only in NLB passthrough mode.
- Value: true / false
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the container names that are allowed to be not ready during in-place updating, while traffic is not cut off.
- Value: `{containerName_0},{containerName_1},...` For example: `sidecar`
- Configurable: it cannot be changed during the in-place updating process.
#### Annotations
- Meaning: the annotations added to the service.
- Value: `key1:value1,key2:value2,...`
- Configurable: Y
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: nlb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: JdCloud-NLB
networkConf:
- name: NlbIds
#Fill in Jdcloud Cloud LoadBalancer Id here
value: netlb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the annotations to be added to the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T08:00:20Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
ports:
- name: "8211"
port: 531
protocol: UDP
internalAddresses:
- ip: 10.0.0.95
ports:
- name: "8211"
port: 8211
protocol: UDP
lastTransitionTime: "2024-11-04T08:00:20Z"
networkType: JdCloud-NLB
```
## JdCloud-EIP configuration
JdCloud Container Service supports binding an Elastic Public IP directly to a pod in Kubernetes, allowing the pod to communicate directly with the external network.
- The cluster's network plugin must be Yunjian-CNI; clusters created with Flannel cannot be used.
- For the usage restrictions of Elastic Public IPs, please refer to the JdCloud Elastic Public IP product documentation.
- The EIP-Controller component must be installed.
- The Elastic Public IP is not deleted when the pod is destroyed.
### Parameter
#### BandwidthConfigName
- Meaning: the bandwidth of the Elastic Public IP in Mbps; the value range is [1, 1024].
- Value: must be an integer, without unit
- Configurable: Y
#### ChargeTypeConfigName
- Meaning: the billing method of the Elastic Public IP: pay-by-usage (postpaid_by_usage) or subscription (postpaid_by_duration).
- Value: string
- Configurable: Y
#### FixedEIPConfigName
- Meaning: whether to fix the Elastic Public IP. If so, the EIP will not change even if the pod is deleted and recreated.
- Value: string, "false" / "true"
- Configurable: Y
#### AssignEIPConfigName
- Meaning: whether to designate a specific Elastic Public IP. If true, provide the ID of the EIP; otherwise, an EIP is automatically allocated.
- Value: string, "false" / "true"
#### EIPIdConfigName
- Meaning: if a specific Elastic Public IP is designated, its ID must be provided; the component automatically performs the lookup and binding.
- Value: string, for example: `fip-xxxxxxxx`
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip
namespace: default
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
network:
networkType: JdCloud-EIP
networkConf:
- name: "BandWidth"
value: "10"
- name: "ChargeType"
value: postpaid_by_usage
- name: "Fixed"
value: "false"
replicas: 3
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T10:53:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
internalAddresses:
- ip: 10.0.0.95
lastTransitionTime: "2024-11-04T10:53:14Z"
networkType: JdCloud-EIP
```

package jdcloud
import (
"context"
cerr "errors"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "JdCloud-EIP"
AliasSEIP = "EIP-Network"
EIPIdConfigName = "EIPId"
EIPIdAnnotationKey = "jdos.jd.com/eip.id"
EIPIfaceAnnotationKey = "jdos.jd.com/eip.iface"
EIPAnnotationKey = "jdos.jd.com/eip.ip"
BandwidthConfigName = "Bandwidth"
BandwidthAnnotationkey = "jdos.jd.com/eip.bandwith"
ChargeTypeConfigName = "ChargeType"
ChargeTypeAnnotationkey = "jdos.jd.com/eip.chargeMode"
EnableEIPAnnotationKey = "jdos.jd.com/eip.enable"
FixedEIPConfigName = "Fixed"
FixedEIPAnnotationKey = "jdos.jd.com/eip.static"
EIPNameAnnotationKey = "jdos.jd.com/eip-name"
AssignEIPConfigName = "AssignEIP"
AssignEIPAnnotationKey = "jdos.jd.com/eip.userAssign"
)
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
conf := networkManager.GetNetworkConfig()
if pod.Annotations == nil {
pod.Annotations = map[string]string{}
}
pod.Annotations[EnableEIPAnnotationKey] = "true"
pod.Annotations[EIPNameAnnotationKey] = pod.GetNamespace() + "/" + pod.GetName()
//parse network configuration
for _, c := range conf {
switch c.Name {
case BandwidthConfigName:
pod.Annotations[BandwidthAnnotationkey] = c.Value
case ChargeTypeConfigName:
pod.Annotations[ChargeTypeAnnotationkey] = c.Value
case FixedEIPConfigName:
pod.Annotations[FixedEIPAnnotationKey] = c.Value
}
}
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
if enable, ok := pod.Annotations[EnableEIPAnnotationKey]; !ok || enable != "true" {
return pod, errors.ToPluginError(cerr.New("eip plugin is not enabled"), errors.InternalError)
}
if _, ok := pod.Annotations[EIPIdAnnotationKey]; !ok {
return pod, nil
}
if _, ok := pod.Annotations[EIPAnnotationKey]; !ok {
return pod, nil
}
networkStatus.ExternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Annotations[EIPAnnotationKey],
},
}
networkStatus.InternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Status.PodIP,
},
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err := networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
return nil
}
func init() {
jdcloudProvider.registerPlugin(&EipPlugin{})
}

package jdcloud

/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
Jdcloud = "Jdcloud"
)
var (
jdcloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (jp *Provider) Name() string {
return Jdcloud
}
func (jp *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if jp.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return jp.plugins, nil
}
// registerPlugin registers a network plugin to this cloud provider
func (jp *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
jp.plugins[name] = plugin
}
func NewJdcloudProvider() (cloudprovider.CloudProvider, error) {
return jdcloudProvider, nil
}

/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
LbType_NLB = "nlb"
)
type JdNLBElasticIp struct {
ElasticIpId string `json:"elasticIpId"`
}
type JdNLBAlgorithm string
const (
JdNLBDefaultConnIdleTime int = 600
JdNLBAlgorithmRoundRobin JdNLBAlgorithm = "RoundRobin"
JdNLBAlgorithmLeastConn JdNLBAlgorithm = "LeastConn"
JdNLBAlgorithmIpHash JdNLBAlgorithm = "IpHash"
)
type JdNLBListenerBackend struct {
ProxyProtocol bool `json:"proxyProtocol"`
Algorithm JdNLBAlgorithm `json:"algorithm"`
}
type JdNLBListener struct {
Protocol string `json:"protocol"`
ConnectionIdleTimeSeconds int `json:"connectionIdleTimeSeconds"`
Backend *JdNLBListenerBackend `json:"backend"`
}
type JdNLB struct {
Version string `json:"version"`
LoadBalancerId string `json:"loadBalancerId"`
LoadBalancerType string `json:"loadBalancerType"`
Internal bool `json:"internal"`
Listeners []*JdNLBListener `json:"listeners"`
}
const (
JdNLBVersion = "v1"
NlbNetwork = "JdCloud-NLB"
AliasNLB = "NLB-Network"
NlbIdLabelKey = "service.beta.kubernetes.io/jdcloud-loadbalancer-id"
NlbIdsConfigName = "NlbIds"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
AllocateLoadBalancerNodePorts = "AllocateLoadBalancerNodePorts"
NlbAnnotations = "Annotations"
NlbConfigHashKey = "game.kruise.io/network-config-hash"
NlbSpecAnnotationKey = "service.beta.kubernetes.io/jdcloud-load-balancer-spec"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
NlbAlgorithm = "service.beta.kubernetes.io/jdcloud-lb-algorithm"
NlbConnectionIdleTime = "service.beta.kubernetes.io/jdcloud-lb-idle-time"
)
type portAllocated map[int32]bool
type NlbPlugin struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type nlbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
annotations map[string]string
allocateLoadBalancerNodePorts bool
algorithm string
connIdleTimeSeconds int
}
func (c *NlbPlugin) Name() string {
return NlbNetwork
}
func (c *NlbPlugin) Alias() string {
return AliasNLB
}
func (c *NlbPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
c.mutex.Lock()
defer c.mutex.Unlock()
nlbOptions, ok := options.(provideroptions.JdCloudOptions)
if !ok {
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to nlbOptions"), cperrors.InternalError)
}
c.minPort = nlbOptions.NLBOptions.MinPort
c.maxPort = nlbOptions.NLBOptions.MaxPort
svcList := &corev1.ServiceList{}
err := client.List(ctx, svcList)
if err != nil {
return err
}
c.cache, c.podAllocate = initLbCache(svcList.Items, c.minPort, c.maxPort)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[NlbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort)
for i := minPort; i < maxPort; i++ {
newCache[lbId][i] = false
}
}
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
}
}
}
log.Infof("[%s] podAllocate cache initialized: %v", NlbNetwork, newPodAllocate)
return newCache, newPodAllocate
}
func (c *NlbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (c *NlbPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
config := parseLbConfig(networkConfig)
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(client.Create(ctx, c.consSvc(config, pod, client, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(config) != svc.GetAnnotations()[NlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(client.Update(ctx, c.consSvc(config, pod, client, ctx)), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if len(svc.Status.LoadBalancer.Ingress) == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpdateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (c *NlbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss still exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss no longer exists in cluster, deAllocate all the ports related to it.
for key := range c.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
c.deAllocate(podKey)
}
return nil
}
func (c *NlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
c.mutex.Lock()
defer c.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, nlbId := range lbIds {
sum := 0
for i := c.minPort; i < c.maxPort; i++ {
if !c.cache[nlbId][i] {
sum++
}
if sum >= num {
lbId = nlbId
break
}
}
}
// select ports
for i := 0; i < num; i++ {
var port int32
if c.cache[lbId] == nil {
c.cache[lbId] = make(portAllocated, c.maxPort-c.minPort)
for i := c.minPort; i < c.maxPort; i++ {
c.cache[lbId][i] = false
}
}
for p, allocated := range c.cache[lbId] {
if !allocated {
port = p
break
}
}
c.cache[lbId][port] = true
ports = append(ports, port)
}
c.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocated nlb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (c *NlbPlugin) deAllocate(nsName string) {
c.mutex.Lock()
defer c.mutex.Unlock()
allocatedPorts, exist := c.podAllocate[nsName]
if !exist {
return
}
nlbPorts := strings.Split(allocatedPorts, ":")
lbId := nlbPorts[0]
ports := util.StringToInt32Slice(nlbPorts[1], ",")
for _, port := range ports {
c.cache[lbId][port] = false
}
delete(c.podAllocate, nsName)
log.Infof("pod %s deallocate nlb %s ports %v", nsName, lbId, ports)
}
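The allocate/deAllocate pair above records each pod's assignment in `podAllocate` as a single string of the form `"<lbId>:<port1>,<port2>,..."`. A minimal standalone sketch of that encoding (the helper names `encodeAllocation`/`decodeAllocation` are illustrative, not part of the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// encodeAllocation mirrors how the plugin stores an allocation:
// load balancer id, a colon, then comma-joined ports.
func encodeAllocation(lbId string, ports []int32) string {
	strs := make([]string, 0, len(ports))
	for _, p := range ports {
		strs = append(strs, strconv.Itoa(int(p)))
	}
	return lbId + ":" + strings.Join(strs, ",")
}

// decodeAllocation reverses the encoding, as deAllocate does when it
// splits the cached value to release ports.
func decodeAllocation(s string) (string, []int32) {
	parts := strings.SplitN(s, ":", 2)
	var ports []int32
	for _, ps := range strings.Split(parts[1], ",") {
		n, err := strconv.Atoi(ps)
		if err != nil {
			continue
		}
		ports = append(ports, int32(n))
	}
	return parts[0], ports
}

func main() {
	enc := encodeAllocation("nlb-xxx", []int32{80, 81})
	fmt.Println(enc) // nlb-xxx:80,81
	lb, ports := decodeAllocation(enc)
	fmt.Println(lb, ports)
}
```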
func init() {
JdNlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
jdcloudProvider.registerPlugin(&JdNlbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbConfig {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
allocateLoadBalancerNodePorts := true
annotations := map[string]string{}
algo := string(JdNLBAlgorithmRoundRobin)
idleTime := JdNLBDefaultConnIdleTime
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
for _, clbId := range strings.Split(c.Value, ",") {
if clbId != "" {
lbIds = append(lbIds, clbId)
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case NlbAlgorithm:
algo = c.Value
case NlbConnectionIdleTime:
t, err := strconv.Atoi(c.Value)
if err == nil {
idleTime = t
}
case AllocateLoadBalancerNodePorts:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
allocateLoadBalancerNodePorts = v
case NlbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("nlb annotation %s is invalid", annoKV[0])
}
}
}
}
return &nlbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
annotations: annotations,
allocateLoadBalancerNodePorts: allocateLoadBalancerNodePorts,
algorithm: algo,
connIdleTimeSeconds: idleTime,
}
}
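The `PortProtocolsConfigName` branch of parseLbConfig accepts comma-separated entries of the form `port` or `port/PROTOCOL`, defaulting to TCP and silently skipping malformed ports. A standalone sketch of just that rule (`parsePortProtocols` is an illustrative helper name, not part of the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePortProtocols splits "81/UDP,82,83/TCP"-style values into parallel
// port and protocol slices, defaulting the protocol to TCP.
func parsePortProtocols(value string) ([]int, []string) {
	var ports []int
	var protocols []string
	for _, pp := range strings.Split(value, ",") {
		ppSlice := strings.Split(pp, "/")
		port, err := strconv.Atoi(ppSlice[0])
		if err != nil {
			continue // malformed entries are skipped, as in parseLbConfig
		}
		ports = append(ports, port)
		if len(ppSlice) == 2 {
			protocols = append(protocols, ppSlice[1])
		} else {
			protocols = append(protocols, "TCP")
		}
	}
	return ports, protocols
}

func main() {
	ports, protocols := parsePortProtocols("81/UDP,82,83/TCP")
	fmt.Println(ports, protocols)
}
```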
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (c *NlbPlugin) consSvc(config *nlbConfig, pod *corev1.Pod, client client.Client, ctx context.Context) *corev1.Service {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := c.podAllocate[podKey]
if exist {
nlbPorts := strings.Split(allocatedPorts, ":")
lbId = nlbPorts[0]
ports = util.StringToInt32Slice(nlbPorts[1], ",")
} else {
lbId, ports = c.allocate(config.lbIds, len(config.targetPorts), podKey)
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.targetPorts); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(config.targetPorts[i]),
Port: ports[i],
Protocol: config.protocols[i],
TargetPort: intstr.FromInt(config.targetPorts[i]),
})
}
annotations := map[string]string{
NlbIdLabelKey: lbId,
NlbSpecAnnotationKey: getNLBSpec(svcPorts, lbId, config.algorithm, config.connIdleTimeSeconds),
NlbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: getSvcOwnerReference(client, ctx, pod, config.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
AllocateLoadBalancerNodePorts: ptr.To[bool](config.allocateLoadBalancerNodePorts),
},
}
return svc
}
func getNLBSpec(ports []corev1.ServicePort, lbId, algorithm string, connIdleTimeSeconds int) string {
jdNlb := _getNLBSpec(ports, lbId, algorithm, connIdleTimeSeconds)
bytes, err := json.Marshal(jdNlb)
if err != nil {
return ""
}
return string(bytes)
}
func _getNLBSpec(ports []corev1.ServicePort, lbId, algorithm string, connIdleTimeSeconds int) *JdNLB {
var listeners = make([]*JdNLBListener, len(ports))
for k, v := range ports {
listeners[k] = &JdNLBListener{
Protocol: strings.ToUpper(string(v.Protocol)),
ConnectionIdleTimeSeconds: connIdleTimeSeconds,
Backend: &JdNLBListenerBackend{
Algorithm: JdNLBAlgorithm(algorithm),
},
}
}
return &JdNLB{
Version: JdNLBVersion,
LoadBalancerId: lbId,
LoadBalancerType: LbType_NLB,
Internal: false,
Listeners: listeners,
}
}
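The struct built by `_getNLBSpec` is marshaled into the `service.beta.kubernetes.io/jdcloud-load-balancer-spec` annotation. A self-contained sketch of the resulting JSON, with the types re-declared locally so it compiles on its own (field values here are just the defaults used above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of JdNLBListenerBackend, JdNLBListener, and JdNLB,
// re-declared so this sketch is self-contained.
type backend struct {
	ProxyProtocol bool   `json:"proxyProtocol"`
	Algorithm     string `json:"algorithm"`
}
type listener struct {
	Protocol                  string   `json:"protocol"`
	ConnectionIdleTimeSeconds int      `json:"connectionIdleTimeSeconds"`
	Backend                   *backend `json:"backend"`
}
type jdNLB struct {
	Version          string      `json:"version"`
	LoadBalancerId   string      `json:"loadBalancerId"`
	LoadBalancerType string      `json:"loadBalancerType"`
	Internal         bool        `json:"internal"`
	Listeners        []*listener `json:"listeners"`
}

// specJSON builds the annotation payload for a single TCP listener with the
// default RoundRobin algorithm and 600s idle time.
func specJSON(lbId string) string {
	spec := jdNLB{
		Version:          "v1",
		LoadBalancerId:   lbId,
		LoadBalancerType: "nlb",
		Listeners: []*listener{{
			Protocol:                  "TCP",
			ConnectionIdleTimeSeconds: 600,
			Backend:                   &backend{Algorithm: "RoundRobin"},
		}},
	}
	b, _ := json.Marshal(spec)
	return string(b)
}

func main() {
	fmt.Println(specJSON("nlb-xxx"))
}
```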
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"context"
"k8s.io/utils/ptr"
"reflect"
"sync"
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestAllocateDeAllocate(t *testing.T) {
test := struct {
lbIds []string
nlb *NlbPlugin
num int
podKey string
}{
lbIds: []string{"xxx-A"},
nlb: &NlbPlugin{
maxPort: int32(700),
minPort: int32(500),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]string),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
}
lbId, ports := test.nlb.allocate(test.lbIds, test.num, test.podKey)
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if !test.nlb.cache[lbId][port] {
t.Errorf("Allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range ports {
if test.nlb.cache[lbId][port] {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
lbIds []string
ports []int
protocols []corev1.Protocol
isFixed bool
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
lbIds: []string{"xxx-A"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
isFixed: false,
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
lbIds: []string{"xxx-A", "xxx-B"},
ports: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
isFixed: true,
},
}
for _, test := range tests {
sc := parseLbConfig(test.conf)
if !reflect.DeepEqual(test.lbIds, sc.lbIds) {
t.Errorf("lbId expect: %v, actual: %v", test.lbIds, sc.lbIds)
}
if !util.IsSliceEqual(test.ports, sc.targetPorts) {
t.Errorf("ports expect: %v, actual: %v", test.ports, sc.targetPorts)
}
if !reflect.DeepEqual(test.protocols, sc.protocols) {
t.Errorf("protocols expect: %v, actual: %v", test.protocols, sc.protocols)
}
if test.isFixed != sc.isFixed {
t.Errorf("isFixed expect: %v, actual: %v", test.isFixed, sc.isFixed)
}
}
}
func TestInitLbCache(t *testing.T) {
test := struct {
svcList []corev1.Service
minPort int32
maxPort int32
cache map[string]portAllocated
podAllocate map[string]string
}{
minPort: 500,
maxPort: 700,
cache: map[string]portAllocated{
"xxx-A": map[int32]bool{
666: true,
},
"xxx-B": map[int32]bool{
555: true,
},
},
podAllocate: map[string]string{
"ns-0/name-0": "xxx-A:666",
"ns-1/name-1": "xxx-B:555",
},
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
NlbIdLabelKey: "xxx-A",
},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
NlbIdLabelKey: "xxx-B",
},
Namespace: "ns-1",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-B",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(8080),
Port: 555,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
}
actualCache, actualPodAllocate := initLbCache(test.svcList, test.minPort, test.maxPort)
for lb, pa := range test.cache {
for port, isAllocated := range pa {
if actualCache[lb][port] != isAllocated {
t.Errorf("lb %s port %d isAllocated, expect: %t, actual: %t", lb, port, isAllocated, actualCache[lb][port])
}
}
}
if !reflect.DeepEqual(actualPodAllocate, test.podAllocate) {
t.Errorf("podAllocate expect %v, but actually got %v", test.podAllocate, actualPodAllocate)
}
}
func TestNlbPlugin_consSvc(t *testing.T) {
type fields struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
}
type args struct {
config *nlbConfig
pod *corev1.Pod
client client.Client
ctx context.Context
}
tests := []struct {
name string
fields fields
args args
want *corev1.Service
}{
{
name: "convert svc cache exist",
fields: fields{
maxPort: 3000,
minPort: 1,
cache: map[string]portAllocated{
"default/test-pod": map[int32]bool{},
},
podAllocate: map[string]string{
"default/test-pod": "nlb-xxx:80,81",
},
},
args: args{
config: &nlbConfig{
lbIds: []string{"nlb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
annotations: map[string]string{
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
},
allocateLoadBalancerNodePorts: true,
},
pod: &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
UID: "32fqwfqfew",
},
},
client: nil,
ctx: context.Background(),
},
want: &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
NlbConfigHashKey: util.GetHash(&nlbConfig{
lbIds: []string{"nlb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
annotations: map[string]string{
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
},
allocateLoadBalancerNodePorts: true,
}),
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
"service.beta.kubernetes.io/jdcloud-loadbalancer-id": "nlb-xxx",
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "pod",
Name: "test-pod",
UID: "32fqwfqfew",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "test-pod",
},
Ports: []corev1.ServicePort{{
Name: "82",
Port: 80,
Protocol: "TCP",
TargetPort: intstr.IntOrString{
Type: 0,
IntVal: 82,
},
},
},
AllocateLoadBalancerNodePorts: ptr.To[bool](true),
},
},
},
}
for _, tt := range tests {
c := &NlbPlugin{
maxPort: tt.fields.maxPort,
minPort: tt.fields.minPort,
cache: tt.fields.cache,
podAllocate: tt.fields.podAllocate,
}
if got := c.consSvc(tt.args.config, tt.args.pod, tt.args.client, tt.args.ctx); !reflect.DeepEqual(got, tt.want) {
t.Errorf("consSvc() = %v, want %v", got, tt.want)
}
}
}

/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"context"
"net"
"strconv"
"strings"
"sync"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
HostPortNetwork = "Kubernetes-HostPort"
// ContainerPortsKey represents the configuration key when using hostPort.
// Its corresponding value format is: containerName:port1/protocol1,port2/protocol2,... e.g. game-server:25565/TCP.
// When no protocol is specified, TCP is used by default.
ContainerPortsKey = "ContainerPorts"
PortSameAsHost = "SameAsHost"
ProtocolTCPUDP = "TCPUDP"
)
type HostPortPlugin struct {
maxPort int32
minPort int32
podAllocated map[string]string
portAmount map[int32]int
amountStat []int
mutex sync.RWMutex
}
func init() {
hostPortPlugin := HostPortPlugin{
mutex: sync.RWMutex{},
podAllocated: make(map[string]string),
}
kubernetesProvider.registerPlugin(&hostPortPlugin)
}
func (hpp *HostPortPlugin) Name() string {
return HostPortNetwork
}
func (hpp *HostPortPlugin) Alias() string {
return ""
}
func (hpp *HostPortPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("Receiving pod %s/%s ADD Operation", pod.GetNamespace(), pod.GetName())
podNow := &corev1.Pod{}
err := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: pod.GetName(),
}, podNow)
if err == nil {
log.Infof("A pod with the same ns/name (%s/%s) already exists in the cluster, do not allocate", pod.GetNamespace(), pod.GetName())
return pod, errors.NewPluginError(errors.InternalError, "a pod with the same ns/name already exists in the cluster")
}
if !k8serrors.IsNotFound(err) {
return pod, errors.NewPluginError(errors.ApiCallError, err.Error())
}
networkManager := utils.NewNetworkManager(pod, c)
conf := networkManager.GetNetworkConfig()
containerPortsMap, containerProtocolsMap, numToAlloc := parseConfig(conf, pod)
var hostPorts []int32
if str, ok := hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()]; ok {
hostPorts = util.StringToInt32Slice(str, ",")
log.Infof("pod %s/%s use hostPorts %v , which are allocated before", pod.GetNamespace(), pod.GetName(), hostPorts)
} else {
hostPorts = hpp.allocate(numToAlloc, pod.GetNamespace()+"/"+pod.GetName())
log.Infof("pod %s/%s allocated hostPorts %v", pod.GetNamespace(), pod.GetName(), hostPorts)
}
// patch pod container ports
containers := pod.Spec.Containers
for cIndex, container := range pod.Spec.Containers {
if ports, ok := containerPortsMap[container.Name]; ok {
containerPorts := container.Ports
for i, port := range ports {
// -1 means same as host
if port == -1 {
port = hostPorts[numToAlloc-1]
}
protocol := containerProtocolsMap[container.Name][i]
hostPort := hostPorts[numToAlloc-1]
if protocol == ProtocolTCPUDP {
containerPorts = append(containerPorts,
corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: corev1.ProtocolTCP,
}, corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: corev1.ProtocolUDP,
})
} else {
containerPorts = append(containerPorts, corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: protocol,
})
}
numToAlloc--
}
containers[cIndex].Ports = containerPorts
}
}
pod.Spec.Containers = containers
return pod, nil
}
func (hpp *HostPortPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("Receiving pod %s/%s UPDATE Operation", pod.GetNamespace(), pod.GetName())
node := &corev1.Node{}
err := c.Get(ctx, types.NamespacedName{
Name: pod.Spec.NodeName,
}, node)
if err != nil {
if k8serrors.IsNotFound(err) {
return pod, nil
}
return pod, errors.NewPluginError(errors.ApiCallError, err.Error())
}
nodeIp := getAddress(node)
networkManager := utils.NewNetworkManager(pod, c)
iNetworkPorts := make([]gamekruiseiov1alpha1.NetworkPort, 0)
eNetworkPorts := make([]gamekruiseiov1alpha1.NetworkPort, 0)
for _, container := range pod.Spec.Containers {
for _, port := range container.Ports {
if port.HostPort >= hpp.minPort && port.HostPort <= hpp.maxPort {
containerPortIs := intstr.FromInt(int(port.ContainerPort))
hostPortIs := intstr.FromInt(int(port.HostPort))
iNetworkPorts = append(iNetworkPorts, gamekruiseiov1alpha1.NetworkPort{
Name: container.Name + "-" + containerPortIs.String(),
Port: &containerPortIs,
Protocol: port.Protocol,
})
eNetworkPorts = append(eNetworkPorts, gamekruiseiov1alpha1.NetworkPort{
Name: container.Name + "-" + containerPortIs.String(),
Port: &hostPortIs,
Protocol: port.Protocol,
})
}
}
}
// network not ready
if len(iNetworkPorts) == 0 || len(eNetworkPorts) == 0 || pod.Status.PodIP == "" {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
networkStatus := gamekruiseiov1alpha1.NetworkStatus{
InternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Status.PodIP,
Ports: iNetworkPorts,
},
},
ExternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: nodeIp,
Ports: eNetworkPorts,
},
},
CurrentNetworkState: gamekruiseiov1alpha1.NetworkReady,
}
pod, err = networkManager.UpdateNetworkStatus(networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (hpp *HostPortPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
log.Infof("Receiving pod %s/%s DELETE Operation", pod.GetNamespace(), pod.GetName())
if _, ok := hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()]; !ok {
return nil
}
hostPorts := make([]int32, 0)
for _, container := range pod.Spec.Containers {
for _, port := range container.Ports {
if port.HostPort >= hpp.minPort && port.HostPort <= hpp.maxPort {
hostPorts = append(hostPorts, port.HostPort)
}
}
}
hpp.deAllocate(hostPorts, pod.GetNamespace()+"/"+pod.GetName())
log.Infof("pod %s/%s deallocated hostPorts %v", pod.GetNamespace(), pod.GetName(), hostPorts)
return nil
}
func (hpp *HostPortPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
hpp.mutex.Lock()
defer hpp.mutex.Unlock()
hostPortOptions := options.(provideroptions.KubernetesOptions).HostPort
hpp.maxPort = hostPortOptions.MaxPort
hpp.minPort = hostPortOptions.MinPort
newPortAmount := make(map[int32]int, hpp.maxPort-hpp.minPort+1)
for i := hpp.minPort; i <= hpp.maxPort; i++ {
newPortAmount[i] = 0
}
podList := &corev1.PodList{}
err := c.List(ctx, podList)
if err != nil {
return err
}
for _, pod := range podList.Items {
var hostPorts []int32
if pod.GetAnnotations()[gamekruiseiov1alpha1.GameServerNetworkType] == HostPortNetwork {
for _, container := range pod.Spec.Containers {
for _, port := range container.Ports {
if port.HostPort >= hpp.minPort && port.HostPort <= hpp.maxPort {
newPortAmount[port.HostPort]++
hostPorts = append(hostPorts, port.HostPort)
}
}
}
}
if len(hostPorts) != 0 {
hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()] = util.Int32SliceToString(hostPorts, ",")
}
}
size := 0
for _, amount := range newPortAmount {
if amount > size {
size = amount
}
}
newAmountStat := make([]int, size+1)
for _, amount := range newPortAmount {
newAmountStat[amount]++
}
hpp.portAmount = newPortAmount
hpp.amountStat = newAmountStat
log.Infof("[Kubernetes-HostPort] podAllocated init: %v", hpp.podAllocated)
return nil
}
func (hpp *HostPortPlugin) allocate(num int, nsname string) []int32 {
hpp.mutex.Lock()
defer hpp.mutex.Unlock()
hostPorts, index := selectPorts(hpp.amountStat, hpp.portAmount, num)
for _, hostPort := range hostPorts {
hpp.portAmount[hostPort]++
hpp.amountStat[index]--
if index+1 >= len(hpp.amountStat) {
hpp.amountStat = append(hpp.amountStat, 0)
}
hpp.amountStat[index+1]++
}
hpp.podAllocated[nsname] = util.Int32SliceToString(hostPorts, ",")
return hostPorts
}
func (hpp *HostPortPlugin) deAllocate(hostPorts []int32, nsname string) {
hpp.mutex.Lock()
defer hpp.mutex.Unlock()
for _, hostPort := range hostPorts {
amount := hpp.portAmount[hostPort]
hpp.portAmount[hostPort]--
hpp.amountStat[amount]--
hpp.amountStat[amount-1]++
}
delete(hpp.podAllocated, nsname)
}
func verifyContainerName(containerName string, pod *corev1.Pod) bool {
for _, container := range pod.Spec.Containers {
if container.Name == containerName {
return true
}
}
return false
}
func getAddress(node *corev1.Node) string {
nodeIp := ""
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeExternalIP && net.ParseIP(a.Address) != nil {
nodeIp = a.Address
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeExternalDNS {
nodeIp = a.Address
}
}
if nodeIp == "" {
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalIP && net.ParseIP(a.Address) != nil {
nodeIp = a.Address
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalDNS {
nodeIp = a.Address
}
}
}
return nodeIp
}
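getAddress prefers external addresses over internal ones, and within each class a DNS name overrides an IP. A standalone sketch of that preference order (the `addr`/`pick` names are illustrative, not part of the plugin):

```go
package main

import (
	"fmt"
	"net"
)

// addr stands in for corev1.NodeAddress in this sketch.
type addr struct {
	Type    string // "ExternalIP", "ExternalDNS", "InternalIP", "InternalDNS"
	Address string
}

// pick mirrors getAddress: ExternalDNS wins over ExternalIP; internal
// addresses are consulted only when no external address was found.
func pick(addrs []addr) string {
	ip := ""
	for _, a := range addrs {
		if a.Type == "ExternalIP" && net.ParseIP(a.Address) != nil {
			ip = a.Address
		}
	}
	for _, a := range addrs {
		if a.Type == "ExternalDNS" {
			ip = a.Address
		}
	}
	if ip == "" {
		for _, a := range addrs {
			if a.Type == "InternalIP" && net.ParseIP(a.Address) != nil {
				ip = a.Address
			}
		}
		for _, a := range addrs {
			if a.Type == "InternalDNS" {
				ip = a.Address
			}
		}
	}
	return ip
}

func main() {
	fmt.Println(pick([]addr{{"InternalIP", "10.0.0.1"}, {"ExternalIP", "1.2.3.4"}}))
}
```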
func parseConfig(conf []gamekruiseiov1alpha1.NetworkConfParams, pod *corev1.Pod) (map[string][]int32, map[string][]corev1.Protocol, int) {
numToAlloc := 0
containerPortsMap := make(map[string][]int32)
containerProtocolsMap := make(map[string][]corev1.Protocol)
for _, c := range conf {
if c.Name == ContainerPortsKey {
cpSlice := strings.Split(c.Value, ":")
containerName := cpSlice[0]
if verifyContainerName(containerName, pod) && len(cpSlice) == 2 {
ports := make([]int32, 0)
protocols := make([]corev1.Protocol, 0)
for _, portString := range strings.Split(cpSlice[1], ",") {
ppSlice := strings.Split(portString, "/")
// handle port
var port int64
var err error
if ppSlice[0] == PortSameAsHost {
port = -1
} else {
port, err = strconv.ParseInt(ppSlice[0], 10, 32)
if err != nil {
continue
}
}
numToAlloc++
ports = append(ports, int32(port))
// handle protocol
if len(ppSlice) == 2 {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
} else {
protocols = append(protocols, corev1.ProtocolTCP)
}
}
containerPortsMap[containerName] = ports
containerProtocolsMap[containerName] = protocols
}
}
}
return containerPortsMap, containerProtocolsMap, numToAlloc
}
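The `ContainerPorts` value handled by parseConfig follows the format `containerName:port1/protocol1,port2,...`, where `SameAsHost` maps to the sentinel -1 and a missing protocol defaults to TCP. A standalone sketch of that parsing (`parseContainerPorts` is an illustrative helper name, not part of the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseContainerPorts splits one ContainerPorts value into the container
// name plus parallel port and protocol slices; "SameAsHost" becomes -1.
func parseContainerPorts(value string) (string, []int32, []string) {
	cpSlice := strings.SplitN(value, ":", 2)
	name := cpSlice[0]
	var ports []int32
	var protocols []string
	for _, portString := range strings.Split(cpSlice[1], ",") {
		ppSlice := strings.Split(portString, "/")
		var port int64 = -1 // sentinel meaning "same as host"
		if ppSlice[0] != "SameAsHost" {
			var err error
			port, err = strconv.ParseInt(ppSlice[0], 10, 32)
			if err != nil {
				continue // malformed entries are skipped, as in parseConfig
			}
		}
		ports = append(ports, int32(port))
		if len(ppSlice) == 2 {
			protocols = append(protocols, ppSlice[1])
		} else {
			protocols = append(protocols, "TCP")
		}
	}
	return name, ports, protocols
}

func main() {
	name, ports, protocols := parseContainerPorts("game-server:25565/TCP,SameAsHost/UDP")
	fmt.Println(name, ports, protocols)
}
```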
func selectPorts(amountStat []int, portAmount map[int32]int, num int) ([]int32, int) {
var index int
for i, total := range amountStat {
if total >= num {
index = i
break
}
}
hostPorts := make([]int32, 0)
for hostPort, amount := range portAmount {
if amount == index {
hostPorts = append(hostPorts, hostPort)
num--
}
if num == 0 {
break
}
}
return hostPorts, index
}

/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"testing"
)
func TestSelectPorts(t *testing.T) {
tests := []struct {
amountStat []int
portAmount map[int32]int
num int
shouldIn []int32
index int
}{
{
amountStat: []int{8, 3},
portAmount: map[int32]int{800: 0, 801: 0, 802: 0, 803: 1, 804: 0, 805: 1, 806: 0, 807: 0, 808: 1, 809: 0, 810: 0},
num: 2,
shouldIn: []int32{800, 801, 802, 804, 806, 807, 809, 810},
index: 0,
},
}
for _, test := range tests {
hostPorts, index := selectPorts(test.amountStat, test.portAmount, test.num)
if index != test.index {
t.Errorf("expect index %v but got %v", test.index, index)
}
for _, hostPort := range hostPorts {
isIn := false
for _, si := range test.shouldIn {
if si == hostPort {
isIn = true
break
}
}
if !isIn {
t.Errorf("hostPort %d not in expect slice: %v", hostPort, test.shouldIn)
}
}
}
}


@@ -0,0 +1,381 @@
package kubernetes
import (
"context"
"encoding/json"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/networking/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
)
const (
IngressNetwork = "Kubernetes-Ingress"
// PathTypeKey determines the interpretation of the Path matching, the same as PathType in HTTPIngressPath.
PathTypeKey = "PathType"
// PathKey is matched against the path of an incoming request; its meaning is the same as Path in HTTPIngressPath.
// Users can place <id> anywhere in the path, and the network plugin will generate the corresponding path for each game server.
// e.g. with /game<id>(/|$)(.*), the ingress path of GameServer 0 is /game0(/|$)(.*), that of GameServer 1 is /game1(/|$)(.*), and so on.
PathKey = "Path"
// PortKey indicates the exposed port value of the game server.
PortKey = "Port"
// IngressClassNameKey indicates the name of the IngressClass cluster resource, the same as IngressClassName in IngressSpec.
IngressClassNameKey = "IngressClassName"
// HostKey indicates the domain name, the same as Host in IngressRule.
HostKey = "Host"
// TlsHostsKey indicates the hosts included in the TLS certificate; its meaning is the same as Hosts in IngressTLS.
// The value is a comma-separated list: host1,host2,... e.g. xxx.xx.com
TlsHostsKey = "TlsHosts"
// TlsSecretNameKey indicates the name of the secret used to terminate TLS traffic on port 443, the same as SecretName in IngressTLS.
TlsSecretNameKey = "TlsSecretName"
// AnnotationKey is the key of an annotation to set on the ingress.
// The value format is "key: value" (note the space after the colon), e.g. nginx.ingress.kubernetes.io/rewrite-target: /$2
// To set multiple annotations, add multiple AnnotationKey entries to the network config.
AnnotationKey = "Annotation"
// FixedKey indicates whether the ingress object is retained when the pod is deleted.
// If true, the ingress is not deleted when the pod is deleted.
// If false, the ingress is deleted along with the pod.
FixedKey = "Fixed"
)
const (
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
IngressHashKey = "game.kruise.io/ingress-hash"
ServiceHashKey = "game.kruise.io/svc-hash"
)
const paramsError = "Network Config Params Error"
func init() {
kubernetesProvider.registerPlugin(&IngressPlugin{})
}
type IngressPlugin struct {
}
func (i IngressPlugin) Name() string {
return IngressNetwork
}
func (i IngressPlugin) Alias() string {
return ""
}
func (i IngressPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (i IngressPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (i IngressPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
conf := networkManager.GetNetworkConfig()
ic, err := parseIngConfig(conf, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(c.Create(ctx, consSvc(ic, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(ic.ports) != svc.GetAnnotations()[ServiceHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
newSvc := consSvc(ic, pod, c, ctx)
patchSvc := map[string]interface{}{"metadata": map[string]map[string]string{"annotations": newSvc.Annotations}, "spec": newSvc.Spec}
patchSvcBytes, err := json.Marshal(patchSvc)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(c.Patch(ctx, svc, client.RawPatch(types.MergePatchType, patchSvcBytes)), cperrors.ApiCallError)
}
// get ingress
ing := &v1.Ingress{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, ing)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(c.Create(ctx, consIngress(ic, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update ingress
if util.GetHash(ic) != ing.GetAnnotations()[IngressHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, consIngress(ic, pod, c, ctx)), cperrors.ApiCallError)
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
networkPorts := make([]gamekruiseiov1alpha1.NetworkPort, 0)
for _, p := range ing.Spec.Rules[0].HTTP.Paths {
instrIPort := intstr.FromInt(int(p.Backend.Service.Port.Number))
networkPort := gamekruiseiov1alpha1.NetworkPort{
Name: p.Path,
Port: &instrIPort,
}
networkPorts = append(networkPorts, networkPort)
}
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: networkPorts,
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: ing.Spec.Rules[0].Host,
Ports: networkPorts,
}
networkStatus.InternalAddresses = append(internalAddresses, internalAddress)
networkStatus.ExternalAddresses = append(externalAddresses, externalAddress)
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
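The update path above relies on annotation hashes: the desired config is hashed with util.GetHash, recorded on the live object under game.kruise.io/ingress-hash (and game.kruise.io/svc-hash for the Service), and compared on every reconcile; a mismatch marks the network not ready and rebuilds the object. A minimal sketch of that drift check, with a hypothetical FNV-based specHash standing in for util.GetHash:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const hashKey = "game.kruise.io/ingress-hash"

// specHash stands in for util.GetHash (assumption): any stable hash of the
// desired config works, as long as equal inputs yield equal strings.
func specHash(spec string) string {
	h := fnv.New32a()
	h.Write([]byte(spec))
	return fmt.Sprint(h.Sum32())
}

// needsRebuild mirrors the drift check in OnPodUpdated: compare the hash of
// the desired config against the hash recorded on the live object.
func needsRebuild(annotations map[string]string, desiredSpec string) bool {
	return specHash(desiredSpec) != annotations[hashKey]
}

func main() {
	live := map[string]string{hashKey: specHash("port=80")}
	fmt.Println(needsRebuild(live, "port=80"), needsRebuild(live, "port=8080")) // false true
}
```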
func (i IngressPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
type ingConfig struct {
paths []string
pathTypes []*v1.PathType
ports []int32
host string
ingressClassName *string
tlsHosts []string
tlsSecretName string
annotations map[string]string
fixed bool
}
func parseIngConfig(conf []gamekruiseiov1alpha1.NetworkConfParams, pod *corev1.Pod) (ingConfig, error) {
var ic ingConfig
ic.annotations = make(map[string]string)
ic.paths = make([]string, 0)
ic.pathTypes = make([]*v1.PathType, 0)
ic.ports = make([]int32, 0)
id := util.GetIndexFromGsName(pod.GetName())
for _, c := range conf {
switch c.Name {
case PathTypeKey:
pathType := v1.PathType(c.Value)
ic.pathTypes = append(ic.pathTypes, &pathType)
case PortKey:
port, _ := strconv.ParseInt(c.Value, 10, 32)
ic.ports = append(ic.ports, int32(port))
case HostKey:
strs := strings.Split(c.Value, "<id>")
switch len(strs) {
case 2:
ic.host = strs[0] + strconv.Itoa(id) + strs[1]
case 1:
ic.host = strs[0]
default:
return ingConfig{}, fmt.Errorf("%s", paramsError)
}
case IngressClassNameKey:
ic.ingressClassName = ptr.To[string](c.Value)
case TlsSecretNameKey:
ic.tlsSecretName = c.Value
case TlsHostsKey:
ic.tlsHosts = strings.Split(c.Value, ",")
case PathKey:
strs := strings.Split(c.Value, "<id>")
switch len(strs) {
case 2:
ic.paths = append(ic.paths, strs[0]+strconv.Itoa(id)+strs[1])
case 1:
ic.paths = append(ic.paths, strs[0])
default:
return ingConfig{}, fmt.Errorf("%s", paramsError)
}
case AnnotationKey:
kv := strings.Split(c.Value, ": ")
if len(kv) != 2 {
return ingConfig{}, fmt.Errorf("%s", paramsError)
}
ic.annotations[kv[0]] = kv[1]
case FixedKey:
fixed, _ := strconv.ParseBool(c.Value)
ic.fixed = fixed
}
}
if len(ic.paths) == 0 || len(ic.pathTypes) == 0 || len(ic.ports) == 0 {
return ingConfig{}, fmt.Errorf("%s", paramsError)
}
return ic, nil
}
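Path and Host values may embed a single <id> placeholder, which parseIngConfig replaces with the ordinal taken from the pod name (pod-3 becomes 3). A minimal sketch of that substitution (`expandID` is a hypothetical helper extracted for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandID substitutes the pod ordinal for a single <id> placeholder,
// matching the Path/Host handling in parseIngConfig above. More than one
// placeholder is rejected, just as parseIngConfig returns a params error.
func expandID(value string, id int) (string, error) {
	strs := strings.Split(value, "<id>")
	switch len(strs) {
	case 2:
		return strs[0] + strconv.Itoa(id) + strs[1], nil
	case 1:
		return strs[0], nil
	default:
		return "", fmt.Errorf("at most one <id> placeholder is allowed")
	}
}

func main() {
	path, _ := expandID("/game<id>(/|$)(.*)", 3)
	host, _ := expandID("instance<id>.example.com", 3)
	fmt.Println(path, host) // /game3(/|$)(.*) instance3.example.com
}
```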
func consIngress(ic ingConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *v1.Ingress {
pathSlice := ic.paths
pathTypeSlice := ic.pathTypes
pathPortSlice := ic.ports
lenPathTypeSlice := len(pathTypeSlice)
lenPathPortSlice := len(pathPortSlice)
for i := 0; i < len(pathSlice)-lenPathTypeSlice; i++ {
pathTypeSlice = append(pathTypeSlice, pathTypeSlice[0])
}
for i := 0; i < len(pathSlice)-lenPathPortSlice; i++ {
pathPortSlice = append(pathPortSlice, pathPortSlice[0])
}
ingAnnotations := ic.annotations
if ingAnnotations == nil {
ingAnnotations = make(map[string]string)
}
ingAnnotations[IngressHashKey] = util.GetHash(ic)
ingPaths := make([]v1.HTTPIngressPath, 0)
for i := 0; i < len(pathSlice); i++ {
ingPath := v1.HTTPIngressPath{
Path: pathSlice[i],
PathType: pathTypeSlice[i],
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: pod.GetName(),
Port: v1.ServiceBackendPort{
Number: pathPortSlice[i],
},
},
},
}
ingPaths = append(ingPaths, ingPath)
}
ing := &v1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: ingAnnotations,
OwnerReferences: consOwnerReference(c, ctx, pod, ic.fixed),
},
Spec: v1.IngressSpec{
IngressClassName: ic.ingressClassName,
Rules: []v1.IngressRule{
{
Host: ic.host,
IngressRuleValue: v1.IngressRuleValue{
HTTP: &v1.HTTPIngressRuleValue{
Paths: ingPaths,
},
},
},
},
},
}
if ic.tlsHosts != nil || ic.tlsSecretName != "" {
ing.Spec.TLS = []v1.IngressTLS{
{
SecretName: ic.tlsSecretName,
Hosts: ic.tlsHosts,
},
}
}
return ing
}
func consSvc(ic ingConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
annotations := make(map[string]string)
annotations[ServiceHashKey] = util.GetHash(ic.ports)
ports := make([]corev1.ServicePort, 0)
for _, p := range ic.ports {
port := corev1.ServicePort{
Port: p,
Name: strconv.Itoa(int(p)),
}
ports = append(ports, port)
}
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
OwnerReferences: consOwnerReference(c, ctx, pod, ic.fixed),
Annotations: annotations,
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: ports,
},
}
}
func consOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}


@@ -0,0 +1,436 @@
package kubernetes
import (
"fmt"
"reflect"
"testing"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestParseIngConfig(t *testing.T) {
pathTypePrefix := v1.PathTypePrefix
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
pod *corev1.Pod
ic ingConfig
err error
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PathKey,
Value: "/game<id>(/|$)(.*)",
},
{
Name: AnnotationKey,
Value: "nginx.ingress.kubernetes.io/rewrite-target: /$2",
},
{
Name: AnnotationKey,
Value: "alb.ingress.kubernetes.io/server-snippets: |\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";",
},
{
Name: TlsHostsKey,
Value: "xxx-xxx.com,xxx-xx.com",
},
{
Name: PortKey,
Value: "8080",
},
{
Name: PathTypeKey,
Value: string(v1.PathTypePrefix),
},
},
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
},
},
ic: ingConfig{
paths: []string{"/game3(/|$)(.*)"},
tlsHosts: []string{
"xxx-xxx.com",
"xxx-xx.com",
},
annotations: map[string]string{
"nginx.ingress.kubernetes.io/rewrite-target": "/$2",
"alb.ingress.kubernetes.io/server-snippets": "|\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";",
},
ports: []int32{8080},
pathTypes: []*v1.PathType{&pathTypePrefix},
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PathKey,
Value: "/game<id>",
},
{
Name: AnnotationKey,
Value: "nginx.ingress.kubernetes.io/rewrite-target: /$2",
},
{
Name: TlsHostsKey,
Value: "xxx-xxx.com,xxx-xx.com",
},
{
Name: PathTypeKey,
Value: string(v1.PathTypePrefix),
},
},
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
},
},
err: fmt.Errorf("%s", paramsError),
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PathKey,
Value: "/game",
},
{
Name: PortKey,
Value: "8080",
},
{
Name: PathTypeKey,
Value: string(v1.PathTypePrefix),
},
{
Name: HostKey,
Value: "instance<id>.xxx.xxx.com",
},
},
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-2",
},
},
ic: ingConfig{
paths: []string{"/game"},
ports: []int32{8080},
pathTypes: []*v1.PathType{&pathTypePrefix},
host: "instance2.xxx.xxx.com",
annotations: map[string]string{},
},
},
}
for i, test := range tests {
expect := test.ic
actual, err := parseIngConfig(test.conf, test.pod)
if !reflect.DeepEqual(err, test.err) {
t.Errorf("case %d: expect err: %v , but actual: %v", i, test.err, err)
}
if !reflect.DeepEqual(actual, expect) {
if !reflect.DeepEqual(expect.paths, actual.paths) {
t.Errorf("case %d: expect paths: %v , but actual: %v", i, expect.paths, actual.paths)
}
if !reflect.DeepEqual(expect.ports, actual.ports) {
t.Errorf("case %d: expect ports: %v , but actual: %v", i, expect.ports, actual.ports)
}
if !reflect.DeepEqual(expect.pathTypes, actual.pathTypes) {
t.Errorf("case %d: expect pathTypes: %v , but actual: %v", i, expect.pathTypes, actual.pathTypes)
}
if !reflect.DeepEqual(expect.host, actual.host) {
t.Errorf("case %d: expect host: %v , but actual: %v", i, expect.host, actual.host)
}
if !reflect.DeepEqual(expect.tlsHosts, actual.tlsHosts) {
t.Errorf("case %d: expect tlsHosts: %v , but actual: %v", i, expect.tlsHosts, actual.tlsHosts)
}
if !reflect.DeepEqual(expect.tlsSecretName, actual.tlsSecretName) {
t.Errorf("case %d: expect tlsSecretName: %v , but actual: %v", i, expect.tlsSecretName, actual.tlsSecretName)
}
if !reflect.DeepEqual(expect.ingressClassName, actual.ingressClassName) {
t.Errorf("case %d: expect ingressClassName: %v , but actual: %v", i, expect.ingressClassName, actual.ingressClassName)
}
if !reflect.DeepEqual(expect.annotations, actual.annotations) {
t.Errorf("case %d: expect annotations: %v , but actual: %v", i, expect.annotations, actual.annotations)
}
}
}
}
func TestConsIngress(t *testing.T) {
pathTypePrefix := v1.PathTypePrefix
pathTypeImplementationSpecific := v1.PathTypeImplementationSpecific
ingressClassNameNginx := "nginx"
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Pod",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
},
}
baseIngObjectMeta := metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
}
// case 0
icCase0 := ingConfig{
ports: []int32{
int32(80),
},
pathTypes: []*v1.PathType{
&pathTypePrefix,
},
paths: []string{
"/path1-3",
"/path2-3",
},
host: "xxx.xx.com",
ingressClassName: &ingressClassNameNginx,
annotations: map[string]string{
"nginx.ingress.kubernetes.io/rewrite-target": "/$2",
},
}
ingObjectMetaCase0 := baseIngObjectMeta
ingObjectMetaCase0.Annotations = map[string]string{
"nginx.ingress.kubernetes.io/rewrite-target": "/$2",
IngressHashKey: util.GetHash(icCase0),
}
ingCase0 := &v1.Ingress{
ObjectMeta: ingObjectMetaCase0,
Spec: v1.IngressSpec{
IngressClassName: &ingressClassNameNginx,
Rules: []v1.IngressRule{
{
Host: "xxx.xx.com",
IngressRuleValue: v1.IngressRuleValue{
HTTP: &v1.HTTPIngressRuleValue{
Paths: []v1.HTTPIngressPath{
{
Path: "/path1-3",
PathType: &pathTypePrefix,
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: "pod-3",
Port: v1.ServiceBackendPort{
Number: int32(80),
},
},
},
},
{
Path: "/path2-3",
PathType: &pathTypePrefix,
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: "pod-3",
Port: v1.ServiceBackendPort{
Number: int32(80),
},
},
},
},
},
},
},
},
},
},
}
// case 1
icCase1 := ingConfig{
ports: []int32{
int32(80),
int32(8080),
},
pathTypes: []*v1.PathType{
&pathTypePrefix,
&pathTypeImplementationSpecific,
},
paths: []string{
"/path1-3",
"/path2-3",
"/path3-3",
},
host: "xxx.xx.com",
ingressClassName: &ingressClassNameNginx,
}
ingObjectMetaCase1 := baseIngObjectMeta
ingObjectMetaCase1.Annotations = map[string]string{
IngressHashKey: util.GetHash(icCase1),
}
ingCase1 := &v1.Ingress{
ObjectMeta: ingObjectMetaCase1,
Spec: v1.IngressSpec{
IngressClassName: &ingressClassNameNginx,
Rules: []v1.IngressRule{
{
Host: "xxx.xx.com",
IngressRuleValue: v1.IngressRuleValue{
HTTP: &v1.HTTPIngressRuleValue{
Paths: []v1.HTTPIngressPath{
{
Path: "/path1-3",
PathType: &pathTypePrefix,
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: "pod-3",
Port: v1.ServiceBackendPort{
Number: int32(80),
},
},
},
},
{
Path: "/path2-3",
PathType: &pathTypeImplementationSpecific,
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: "pod-3",
Port: v1.ServiceBackendPort{
Number: int32(8080),
},
},
},
},
{
Path: "/path3-3",
PathType: &pathTypePrefix,
Backend: v1.IngressBackend{
Service: &v1.IngressServiceBackend{
Name: "pod-3",
Port: v1.ServiceBackendPort{
Number: int32(80),
},
},
},
},
},
},
},
},
},
},
}
tests := []struct {
ic ingConfig
ing *v1.Ingress
}{
{
ic: icCase0,
ing: ingCase0,
},
{
ic: icCase1,
ing: ingCase1,
},
}
for i, test := range tests {
actual := consIngress(test.ic, pod, nil, nil)
if !reflect.DeepEqual(actual, test.ing) {
t.Errorf("case %d: expect ingress: %v , but actual: %v", i, test.ing, actual)
}
}
}
func TestConsSvc(t *testing.T) {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Pod",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
},
}
baseSvcObjectMeta := metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
}
// case 0
icCase0 := ingConfig{
ports: []int32{
int32(80),
int32(8080),
},
}
svcObjectMetaCase0 := baseSvcObjectMeta
svcObjectMetaCase0.Annotations = map[string]string{
ServiceHashKey: util.GetHash(icCase0.ports),
}
svcCase0 := &corev1.Service{
ObjectMeta: svcObjectMetaCase0,
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
SvcSelectorKey: "pod-3",
},
Ports: []corev1.ServicePort{
{
Name: "80",
Port: int32(80),
},
{
Name: "8080",
Port: int32(8080),
},
},
},
}
tests := []struct {
ic ingConfig
svc *corev1.Service
}{
{
ic: icCase0,
svc: svcCase0,
},
}
for i, test := range tests {
actual := consSvc(test.ic, pod, nil, nil)
if !reflect.DeepEqual(actual, test.svc) {
t.Errorf("case %d: expect service: %v , but actual: %v", i, test.svc, actual)
}
}
}


@@ -0,0 +1,57 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
var (
kubernetesProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (kp *Provider) Name() string {
return "Kubernetes"
}
func (kp *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if kp.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return kp.plugins, nil
}
// registerPlugin registers a network plugin to the Kubernetes cloud provider.
func (kp *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
kp.plugins[name] = plugin
}
func NewKubernetesProvider() (cloudprovider.CloudProvider, error) {
return kubernetesProvider, nil
}


@@ -0,0 +1,279 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
)
const (
NodePortNetwork = "Kubernetes-NodePort"
PortProtocolsConfigName = "PortProtocols"
SvcSelectorDisabledKey = "game.kruise.io/svc-selector-disabled"
)
type NodePortPlugin struct {
}
func (n *NodePortPlugin) Name() string {
return NodePortNetwork
}
func (n *NodePortPlugin) Alias() string {
return ""
}
func (n *NodePortPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (n *NodePortPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NodePortPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
npc, err := parseNodePortConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(client.Create(ctx, consNodePortSvc(npc, pod, client, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(npc) != svc.GetAnnotations()[ServiceHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(client.Update(ctx, consNodePortSvc(npc, pod, client, ctx)), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Selector[SvcSelectorKey] == pod.GetName() {
newSelector := svc.Spec.Selector
newSelector[SvcSelectorDisabledKey] = pod.GetName()
delete(svc.Spec.Selector, SvcSelectorKey)
svc.Spec.Selector = newSelector
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Selector[SvcSelectorDisabledKey] == pod.GetName() {
newSelector := svc.Spec.Selector
newSelector[SvcSelectorKey] = pod.GetName()
delete(svc.Spec.Selector, SvcSelectorDisabledKey)
svc.Spec.Selector = newSelector
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpdateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpdateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
node := &corev1.Node{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.Spec.NodeName,
}, node)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if pod.Status.PodIP == "" {
// pod IP does not exist yet; keep the network not ready
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
if port.NodePort == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
instrEPort := intstr.FromInt(int(port.NodePort))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: getAddress(node),
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
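The disable/enable branches above never delete the Service; they move the pod name between the active selector key and a non-matching placeholder key, so the allocated NodePorts are preserved while traffic is cut off. A standalone sketch of that selector swap (`toggleSelector` is a hypothetical helper):

```go
package main

import "fmt"

const (
	selectorKey         = "statefulset.kubernetes.io/pod-name"
	selectorDisabledKey = "game.kruise.io/svc-selector-disabled"
)

// toggleSelector mirrors the disable/enable branches in OnPodUpdated:
// disabling moves the pod name from the active selector key to a
// non-matching key, so the Service keeps its NodePorts but routes nowhere;
// enabling moves it back.
func toggleSelector(selector map[string]string, podName string, disable bool) {
	if disable {
		selector[selectorDisabledKey] = podName
		delete(selector, selectorKey)
	} else {
		selector[selectorKey] = podName
		delete(selector, selectorDisabledKey)
	}
}

func main() {
	sel := map[string]string{selectorKey: "pod-3"}
	toggleSelector(sel, "pod-3", true)
	fmt.Println(sel) // map[game.kruise.io/svc-selector-disabled:pod-3]
	toggleSelector(sel, "pod-3", false)
	fmt.Println(sel) // map[statefulset.kubernetes.io/pod-name:pod-3]
}
```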
func (n *NodePortPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
kubernetesProvider.registerPlugin(&NodePortPlugin{})
}
type nodePortConfig struct {
ports []int
protocols []corev1.Protocol
isFixed bool
}
func parseNodePortConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nodePortConfig, error) {
var ports []int
var protocols []corev1.Protocol
isFixed := false
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
case FixedKey:
var err error
isFixed, err = strconv.ParseBool(c.Value)
if err != nil {
return nil, err
}
}
}
return &nodePortConfig{
ports: ports,
protocols: protocols,
isFixed: isFixed,
}, nil
}
func parsePortProtocols(value string) ([]int, []corev1.Protocol) {
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
for _, pp := range strings.Split(value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
return ports, protocols
}
func consNodePortSvc(npc *nodePortConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(npc.ports); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(npc.ports[i]),
Port: int32(npc.ports[i]),
Protocol: npc.protocols[i],
TargetPort: intstr.FromInt(npc.ports[i]),
})
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npc),
},
OwnerReferences: consOwnerReference(c, ctx, pod, npc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc
}


@@ -0,0 +1,185 @@
package kubernetes
import (
"reflect"
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestParseNPConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
podNetConfig *nodePortConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
podNetConfig: &nodePortConfig{
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PortProtocolsConfigName,
Value: "8021/UDP",
},
},
podNetConfig: &nodePortConfig{
ports: []int{8021},
protocols: []corev1.Protocol{corev1.ProtocolUDP},
},
},
}
for _, test := range tests {
podNetConfig, _ := parseNodePortConfig(test.conf)
if !reflect.DeepEqual(podNetConfig, test.podNetConfig) {
t.Errorf("expect podNetConfig: %v, but actual: %v", test.podNetConfig, podNetConfig)
}
}
}
func TestConsNPSvc(t *testing.T) {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Pod",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
},
}
// case 0
npcCase0 := &nodePortConfig{
ports: []int{
80,
8080,
},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
},
isFixed: false,
}
svcCase0 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npcCase0),
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: "pod-3",
},
Ports: []corev1.ServicePort{
{
Name: "80",
Port: int32(80),
TargetPort: intstr.FromInt(80),
Protocol: corev1.ProtocolTCP,
},
{
Name: "8080",
Port: int32(8080),
TargetPort: intstr.FromInt(8080),
Protocol: corev1.ProtocolTCP,
},
},
},
}
// case 1
npcCase1 := &nodePortConfig{
ports: []int{
8021,
},
protocols: []corev1.Protocol{
corev1.ProtocolUDP,
},
isFixed: false,
}
svcCase1 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npcCase1),
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: "pod-3",
},
Ports: []corev1.ServicePort{
{
Name: "8021",
Port: int32(8021),
TargetPort: intstr.FromInt(8021),
Protocol: corev1.ProtocolUDP,
},
},
},
}
tests := []struct {
npc *nodePortConfig
svc *corev1.Service
}{
{
npc: npcCase0,
svc: svcCase0,
},
{
npc: npcCase1,
svc: svcCase1,
},
}
for i, test := range tests {
actual := consNodePortSvc(test.npc, pod, nil, nil)
if !reflect.DeepEqual(actual, test.svc) {
t.Errorf("case %d: expect service: %v , but actual: %v", i, test.svc, actual)
}
}
}


@ -0,0 +1,179 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package manager
import (
"context"
"github.com/openkruise/kruise-game/cloudprovider/hwcloud"
"github.com/openkruise/kruise-game/cloudprovider/jdcloud"
"github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud"
aws "github.com/openkruise/kruise-game/cloudprovider/amazonswebservices"
"github.com/openkruise/kruise-game/cloudprovider/kubernetes"
"github.com/openkruise/kruise-game/cloudprovider/tencentcloud"
volcengine "github.com/openkruise/kruise-game/cloudprovider/volcengine"
corev1 "k8s.io/api/core/v1"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type ProviderManager struct {
CloudProviders map[string]cloudprovider.CloudProvider
CPOptions map[string]cloudprovider.CloudProviderOptions
}
func (pm *ProviderManager) FindConfigs(cpName string) cloudprovider.CloudProviderOptions {
return pm.CPOptions[cpName]
}
func (pm *ProviderManager) RegisterCloudProvider(provider cloudprovider.CloudProvider, options cloudprovider.CloudProviderOptions) {
if provider.Name() == "" {
log.Fatal("EmptyCloudProviderName")
}
pm.CloudProviders[provider.Name()] = provider
pm.CPOptions[provider.Name()] = options
}
func (pm *ProviderManager) FindAvailablePlugins(pod *corev1.Pod) (cloudprovider.Plugin, bool) {
pluginType, ok := pod.Annotations[v1alpha1.GameServerNetworkType]
if !ok {
log.V(5).Infof("Pod %s has no network plugin configured, skipping", pod.Name)
return nil, false
}
for _, cp := range pm.CloudProviders {
plugins, err := cp.ListPlugins()
if err != nil {
log.Warningf("Cloud provider %s cannot list plugins: %s", cp.Name(), err.Error())
continue
}
for _, p := range plugins {
// TODO add multi plugins supported
if p.Name() == pluginType {
return p, true
}
}
}
return nil, false
}
func (pm *ProviderManager) Init(client client.Client) {
for _, cp := range pm.CloudProviders {
name := cp.Name()
plugins, err := cp.ListPlugins()
if err != nil {
continue
}
log.Infof("Cloud Provider [%s] has been registered with %d plugins", name, len(plugins))
for _, p := range plugins {
err := p.Init(client, pm.FindConfigs(cp.Name()), context.Background())
if err != nil {
continue
}
log.Infof("plugin [%s] has been registered", p.Name())
}
}
}
// NewProviderManager returns a new cloud provider manager instance
func NewProviderManager() (*ProviderManager, error) {
configFile := cloudprovider.NewConfigFile(cloudprovider.Opt.CloudProviderConfigFile)
configs := configFile.Parse()
pm := &ProviderManager{
CloudProviders: make(map[string]cloudprovider.CloudProvider),
CPOptions: make(map[string]cloudprovider.CloudProviderOptions),
}
if configs.KubernetesOptions.Valid() && configs.KubernetesOptions.Enabled() {
// Register default kubernetes network provider
kp, err := kubernetes.NewKubernetesProvider()
if err != nil {
log.Errorf("Failed to initialize kubernetes provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(kp, configs.KubernetesOptions)
}
}
if configs.AlibabaCloudOptions.Valid() && configs.AlibabaCloudOptions.Enabled() {
// build and register alibaba cloud provider
acp, err := alibabacloud.NewAlibabaCloudProvider()
if err != nil {
log.Errorf("Failed to initialize alibabacloud provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(acp, configs.AlibabaCloudOptions)
}
}
if configs.VolcengineOptions.Valid() && configs.VolcengineOptions.Enabled() {
// build and register volcengine cloud provider
vcp, err := volcengine.NewVolcengineProvider()
if err != nil {
log.Errorf("Failed to initialize volcengine provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(vcp, configs.VolcengineOptions)
}
}
if configs.AmazonsWebServicesOptions.Valid() && configs.AmazonsWebServicesOptions.Enabled() {
// build and register amazon web services provider
awsp, err := aws.NewAmazonsWebServicesProvider()
if err != nil {
log.Errorf("Failed to initialize amazons web services provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(awsp, configs.AmazonsWebServicesOptions)
}
}
if configs.TencentCloudOptions.Valid() && configs.TencentCloudOptions.Enabled() {
// build and register tencent cloud provider
tcp, err := tencentcloud.NewTencentCloudProvider()
if err != nil {
log.Errorf("Failed to initialize tencentcloud provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(tcp, configs.TencentCloudOptions)
}
}
if configs.JdCloudOptions.Valid() && configs.JdCloudOptions.Enabled() {
// build and register jd cloud provider
jcp, err := jdcloud.NewJdcloudProvider()
if err != nil {
log.Errorf("Failed to initialize jdcloud provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(jcp, configs.JdCloudOptions)
}
}
if configs.HwCloudOptions.Valid() && configs.HwCloudOptions.Enabled() {
// build and register hw cloud provider
hp, err := hwcloud.NewHwCloudProvider()
if err != nil {
log.Errorf("Failed to initialize hwcloud provider: %s", err.Error())
} else {
pm.RegisterCloudProvider(hp, configs.HwCloudOptions)
}
} else {
log.Warningf("HwCloudProvider is not enabled, enable flag is %v, config valid flag is %v", configs.HwCloudOptions.Enabled(), configs.HwCloudOptions.Valid())
}
return pm, nil
}
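FindAvailablePlugins resolves a plugin by matching the pod's network-type annotation against the names of all registered plugins. A minimal sketch of that registry-and-lookup pattern, with stand-in types (the `Plugin` interface and annotation key here are illustrative stand-ins for `cloudprovider.Plugin` and `v1alpha1.GameServerNetworkType`):

```go
package main

import "fmt"

// Plugin is a minimal stand-in for cloudprovider.Plugin: plugins are keyed by
// the name that pods reference via the network-type annotation.
type Plugin interface{ Name() string }

type hostPort struct{}

func (hostPort) Name() string { return "Kubernetes-HostPort" }

// findPlugin mirrors ProviderManager.FindAvailablePlugins: the annotation
// value selects the registered plugin with a matching name, or reports false.
func findPlugin(registry map[string]Plugin, annotations map[string]string) (Plugin, bool) {
	networkType, ok := annotations["game.kruise.io/network-type"]
	if !ok {
		return nil, false // pod has no network plugin configured
	}
	p, ok := registry[networkType]
	return p, ok
}

func main() {
	registry := map[string]Plugin{"Kubernetes-HostPort": hostPort{}}
	ann := map[string]string{"game.kruise.io/network-type": "Kubernetes-HostPort"}
	if p, ok := findPlugin(registry, ann); ok {
		fmt.Println("selected:", p.Name())
	}
}
```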


@ -0,0 +1,47 @@
package options
type AlibabaCloudOptions struct {
Enable bool `toml:"enable"`
SLBOptions SLBOptions `toml:"slb"`
NLBOptions NLBOptions `toml:"nlb"`
}
type SLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
type NLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (o AlibabaCloudOptions) Valid() bool {
// SLB valid
slbOptions := o.SLBOptions
for _, blockPort := range slbOptions.BlockPorts {
if blockPort >= slbOptions.MaxPort || blockPort <= slbOptions.MinPort {
return false
}
}
if int(slbOptions.MaxPort-slbOptions.MinPort)-len(slbOptions.BlockPorts) >= 200 {
return false
}
if slbOptions.MinPort <= 0 {
return false
}
// NLB valid
nlbOptions := o.NLBOptions
for _, blockPort := range nlbOptions.BlockPorts {
if blockPort >= nlbOptions.MaxPort || blockPort <= nlbOptions.MinPort {
return false
}
}
return nlbOptions.MinPort > 0
}
func (o AlibabaCloudOptions) Enabled() bool {
return o.Enable
}


@ -0,0 +1,31 @@
package options
// https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
// Listeners per Network Load Balancer is 50
const maxPortRange = 50
type AmazonsWebServicesOptions struct {
Enable bool `toml:"enable"`
NLBOptions AWSNLBOptions `toml:"nlb"`
}
type AWSNLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
}
func (ao AmazonsWebServicesOptions) Valid() bool {
nlbOptions := ao.NLBOptions
if nlbOptions.MaxPort-nlbOptions.MinPort+1 > maxPortRange {
return false
}
if nlbOptions.MinPort < 1 || nlbOptions.MaxPort > 65535 {
return false
}
return true
}
func (ao AmazonsWebServicesOptions) Enabled() bool {
return ao.Enable
}


@ -0,0 +1,32 @@
package options
type HwCloudOptions struct {
Enable bool `toml:"enable"`
ELBOptions ELBOptions `toml:"elb"`
}
type ELBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (o HwCloudOptions) Valid() bool {
elbOptions := o.ELBOptions
for _, blockPort := range elbOptions.BlockPorts {
if blockPort >= elbOptions.MaxPort || blockPort <= elbOptions.MinPort {
return false
}
}
if int(elbOptions.MaxPort-elbOptions.MinPort)-len(elbOptions.BlockPorts) > 200 {
return false
}
if elbOptions.MinPort <= 0 {
return false
}
return true
}
func (o HwCloudOptions) Enabled() bool {
return o.Enable
}


@ -0,0 +1,28 @@
package options
type JdCloudOptions struct {
Enable bool `toml:"enable"`
NLBOptions JdNLBOptions `toml:"nlb"`
}
type JdNLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
}
func (v JdCloudOptions) Valid() bool {
nlbOptions := v.NLBOptions
if nlbOptions.MaxPort > 65535 {
return false
}
if nlbOptions.MinPort < 1 {
return false
}
return true
}
func (v JdCloudOptions) Enabled() bool {
return v.Enable
}


@ -0,0 +1,27 @@
package options
type KubernetesOptions struct {
Enable bool `toml:"enable"`
HostPort HostPortOptions `toml:"hostPort"`
}
type HostPortOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
}
func (o KubernetesOptions) Valid() bool {
// HostPort valid
hostPortOptions := o.HostPort
if hostPortOptions.MaxPort <= hostPortOptions.MinPort {
return false
}
if hostPortOptions.MinPort <= 0 {
return false
}
return true
}
func (o KubernetesOptions) Enabled() bool {
return o.Enable
}


@ -0,0 +1,13 @@
package options
type TencentCloudOptions struct {
Enable bool `toml:"enable"`
}
func (o TencentCloudOptions) Enabled() bool {
return o.Enable
}
func (o TencentCloudOptions) Valid() bool {
return true
}


@ -0,0 +1,35 @@
package options
type VolcengineOptions struct {
Enable bool `toml:"enable"`
CLBOptions CLBOptions `toml:"clb"`
}
type CLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (v VolcengineOptions) Valid() bool {
clbOptions := v.CLBOptions
for _, blockPort := range clbOptions.BlockPorts {
if blockPort >= clbOptions.MaxPort || blockPort < clbOptions.MinPort {
return false
}
}
if clbOptions.MaxPort > 65535 {
return false
}
if clbOptions.MinPort < 1 {
return false
}
return true
}
func (v VolcengineOptions) Enabled() bool {
return v.Enable
}


@ -0,0 +1,208 @@
package tencentcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"github.com/openkruise/kruise-game/pkg/util"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/util/intstr"
kruisev1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
ClbNetwork = "TencentCloud-CLB"
AliasCLB = "CLB-Network"
ClbIdsConfigName = "ClbIds"
PortProtocolsConfigName = "PortProtocols"
CLBPortMappingAnnotation = "networking.cloud.tencent.com/clb-port-mapping"
EnableCLBPortMappingAnnotation = "networking.cloud.tencent.com/enable-clb-port-mapping"
CLBPortMappingResultAnnotation = "networking.cloud.tencent.com/clb-port-mapping-result"
CLBPortMappingStatuslAnnotation = "networking.cloud.tencent.com/clb-port-mapping-status"
)
type ClbPlugin struct{}
type portProtocol struct {
port int
protocol string
}
type clbConfig struct {
targetPorts []portProtocol
}
type portMapping struct {
Port int `json:"port"`
Protocol string `json:"protocol"`
Address string `json:"address"`
}
func (p *ClbPlugin) Name() string {
return ClbNetwork
}
func (p *ClbPlugin) Alias() string {
return AliasCLB
}
func (p *ClbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (p *ClbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return p.reconcile(c, pod, ctx)
}
func (p *ClbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
if pod.DeletionTimestamp != nil {
return pod, nil
}
return p.reconcile(c, pod, ctx)
}
// reconcile ensures the pod's CLB port-mapping annotations and network status are correct.
func (p *ClbPlugin) reconcile(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(kruisev1alpha1.NetworkStatus{
CurrentNetworkState: kruisev1alpha1.NetworkWaiting,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
clbConf, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !apierrors.IsNotFound(err) {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
if pod.Annotations == nil {
pod.Annotations = make(map[string]string)
}
pod.Annotations[CLBPortMappingAnnotation] = getClbPortMappingAnnotation(clbConf, gss)
enableCLBPortMapping := "true"
if networkManager.GetNetworkDisabled() {
enableCLBPortMapping = "false"
}
pod.Annotations[EnableCLBPortMappingAnnotation] = enableCLBPortMapping
if pod.Annotations[CLBPortMappingStatuslAnnotation] == "Ready" {
if result := pod.Annotations[CLBPortMappingResultAnnotation]; result != "" {
mappings := []portMapping{}
if err := json.Unmarshal([]byte(result), &mappings); err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
if len(mappings) != 0 {
internalAddresses := make([]kruisev1alpha1.NetworkAddress, 0)
externalAddresses := make([]kruisev1alpha1.NetworkAddress, 0)
for _, mapping := range mappings {
ss := strings.Split(mapping.Address, ":")
if len(ss) != 2 {
continue
}
lbIP := ss[0]
lbPort, err := strconv.Atoi(ss[1])
if err != nil {
continue
}
port := mapping.Port
instrIPort := intstr.FromInt(port)
instrEPort := intstr.FromInt(lbPort)
portName := instrIPort.String()
protocol := corev1.Protocol(mapping.Protocol)
internalAddresses = append(internalAddresses, kruisev1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []kruisev1alpha1.NetworkPort{
{
Name: portName,
Port: &instrIPort,
Protocol: protocol,
},
},
})
externalAddresses = append(externalAddresses, kruisev1alpha1.NetworkAddress{
IP: lbIP,
Ports: []kruisev1alpha1.NetworkPort{
{
Name: portName,
Port: &instrEPort,
Protocol: protocol,
},
},
})
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = kruisev1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
}
}
}
}
return pod, nil
}
func (p *ClbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
clbPlugin := ClbPlugin{}
tencentCloudProvider.registerPlugin(&clbPlugin)
}
func getClbPortMappingAnnotation(clbConf *clbConfig, gss *kruisev1alpha1.GameServerSet) string {
// reconcile tolerates a NotFound GameServerSet, so guard against a nil gss here
if gss == nil {
return ""
}
poolName := fmt.Sprintf("%s-%s", gss.Namespace, gss.Name)
var buf strings.Builder
for _, pp := range clbConf.targetPorts {
buf.WriteString(fmt.Sprintf("%d %s %s\n", pp.port, pp.protocol, poolName))
}
return buf.String()
}
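getClbPortMappingAnnotation emits one line per target port in the form `<port> <protocol> <namespace>-<gssName>`. A standalone sketch of that format (the `targetPort` type and function name here are illustrative stand-ins for the plugin's unexported types):

```go
package main

import (
	"fmt"
	"strings"
)

// targetPort stands in for the plugin's portProtocol pair.
type targetPort struct {
	port     int
	protocol string
}

// clbPortMappingValue mirrors getClbPortMappingAnnotation: each configured
// port becomes one "<port> <protocol> <pool>" line, where the pool name is
// derived from the owning GameServerSet's namespace and name.
func clbPortMappingValue(namespace, gssName string, ports []targetPort) string {
	poolName := fmt.Sprintf("%s-%s", namespace, gssName)
	var buf strings.Builder
	for _, pp := range ports {
		buf.WriteString(fmt.Sprintf("%d %s %s\n", pp.port, pp.protocol, poolName))
	}
	return buf.String()
}

func main() {
	fmt.Print(clbPortMappingValue("ns", "gss", []targetPort{
		{80, "TCP"},
		{8021, "UDP"},
	}))
}
```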
var ErrMissingPortProtocolsConfig = fmt.Errorf("missing %s config", PortProtocolsConfigName)
func parseLbConfig(conf []kruisev1alpha1.NetworkConfParams) (*clbConfig, error) {
ports := []portProtocol{}
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
protocol := "TCP"
if len(ppSlice) == 2 {
protocol = ppSlice[1]
}
ports = append(ports, portProtocol{
port: port,
protocol: protocol,
})
}
}
}
if len(ports) == 0 {
return nil, ErrMissingPortProtocolsConfig
}
return &clbConfig{
targetPorts: ports,
}, nil
}


@ -0,0 +1,74 @@
package tencentcloud
import (
"reflect"
"testing"
kruisev1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
)
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []kruisev1alpha1.NetworkConfParams
clbConfig *clbConfig
}{
{
conf: []kruisev1alpha1.NetworkConfParams{
{
Name: ClbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
clbConfig: &clbConfig{
targetPorts: []portProtocol{
{
port: 80,
protocol: "TCP",
},
},
},
},
{
conf: []kruisev1alpha1.NetworkConfParams{
{
Name: ClbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
},
clbConfig: &clbConfig{
targetPorts: []portProtocol{
{
port: 81,
protocol: "UDP",
},
{
port: 82,
protocol: "TCP",
},
{
port: 83,
protocol: "TCP",
},
},
},
},
}
for i, test := range tests {
lc, err := parseLbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(test.clbConfig, lc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.clbConfig, lc)
}
}
}


@ -0,0 +1,59 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package tencentcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
TencentCloud = "TencentCloud"
)
var tencentCloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return TencentCloud
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin registers a network plugin to the TencentCloud provider
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewTencentCloudProvider() (cloudprovider.CloudProvider, error) {
return tencentCloudProvider, nil
}


@ -0,0 +1,144 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package utils
import (
"context"
"errors"
"github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/json"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
)
type NetworkManager struct {
pod *corev1.Pod
networkType string
networkConf []v1alpha1.NetworkConfParams
networkStatus *v1alpha1.NetworkStatus
networkDisabled bool
client client.Client
}
func (nm *NetworkManager) GetNetworkDisabled() bool {
return nm.networkDisabled
}
func (nm *NetworkManager) SetNetworkState(disabled bool) error {
patchPod := nm.pod.DeepCopy()
if patchPod == nil {
return errors.New("EmptyPodError")
}
// Initialize labels if necessary
if patchPod.Labels == nil {
patchPod.Labels = make(map[string]string)
}
patchPod.Labels[v1alpha1.GameServerNetworkDisabled] = strconv.FormatBool(disabled)
// Diff from the unmodified pod to the labeled copy, then patch
patch := client.MergeFrom(nm.pod)
return nm.client.Patch(context.Background(), patchPod, patch)
}
func (nm *NetworkManager) GetNetworkStatus() (*v1alpha1.NetworkStatus, error) {
p := nm.pod.DeepCopy()
if p == nil || p.Annotations == nil {
return nil, errors.New("EmptyPodError")
}
networkStatusStr := p.Annotations[v1alpha1.GameServerNetworkStatus]
if networkStatusStr == "" {
return nil, nil
}
networkStatus := &v1alpha1.NetworkStatus{}
err := json.Unmarshal([]byte(networkStatusStr), networkStatus)
if err != nil {
log.Errorf("Failed to unmarshal pod %s networkStatus: %s", p.Name, err.Error())
return nil, err
}
return networkStatus, nil
}
func (nm *NetworkManager) UpdateNetworkStatus(networkStatus v1alpha1.NetworkStatus, pod *corev1.Pod) (*corev1.Pod, error) {
networkStatusBytes, err := json.Marshal(networkStatus)
if err != nil {
log.Errorf("Pod %s cannot update networkStatus: %s", nm.pod.Name, err.Error())
return pod, err
}
pod.Annotations[v1alpha1.GameServerNetworkStatus] = string(networkStatusBytes)
return pod, nil
}
func (nm *NetworkManager) GetNetworkConfig() []v1alpha1.NetworkConfParams {
return nm.networkConf
}
func (nm *NetworkManager) GetNetworkType() string {
return nm.networkType
}
func NewNetworkManager(pod *corev1.Pod, client client.Client) *NetworkManager {
var ok bool
var err error
var networkType string
if networkType, ok = pod.Annotations[v1alpha1.GameServerNetworkType]; !ok {
log.V(5).Infof("Pod %s has no network conf", pod.Name)
return nil
}
var networkConfStr string
var networkConf []v1alpha1.NetworkConfParams
if networkConfStr, ok = pod.Annotations[v1alpha1.GameServerNetworkConf]; ok {
err = json.Unmarshal([]byte(networkConfStr), &networkConf)
if err != nil {
log.Warningf("Pod %s has invalid network conf, err: %s", pod.Name, err.Error())
return nil
}
}
// Parse the existing network status annotation, if present, to use as the default
var networkStatusStr string
networkStatus := &v1alpha1.NetworkStatus{}
if networkStatusStr, ok = pod.Annotations[v1alpha1.GameServerNetworkStatus]; ok {
err = json.Unmarshal([]byte(networkStatusStr), networkStatus)
if err != nil {
log.Warningf("Pod %s has invalid network status, err: %s", pod.Name, err.Error())
}
}
var networkDisabled bool
if networkDisabledStr, ok := pod.Labels[v1alpha1.GameServerNetworkDisabled]; ok {
networkDisabled, err = strconv.ParseBool(networkDisabledStr)
if err != nil {
log.Warningf("Pod %s has invalid network disabled option, err: %s", pod.Name, err.Error())
}
}
return &NetworkManager{
pod: pod,
networkType: networkType,
networkConf: networkConf,
networkStatus: networkStatus,
networkDisabled: networkDisabled,
client: client,
}
}
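NetworkManager persists the network status as JSON inside a pod annotation: UpdateNetworkStatus marshals and writes it, GetNetworkStatus reads it back (returning nil when no status has been recorded). A minimal round-trip sketch with stand-in types; the real status struct lives in `apis/v1alpha1`, and the annotation key shown here is illustrative of `v1alpha1.GameServerNetworkStatus`, not a guaranteed value:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// networkStatus stands in for the v1alpha1.NetworkStatus fields the manager
// reads and writes; only one field is kept for the sketch.
type networkStatus struct {
	CurrentNetworkState string `json:"currentNetworkState,omitempty"`
}

const networkStatusKey = "game.kruise.io/network-status" // illustrative key

// writeStatus mirrors UpdateNetworkStatus: marshal and store in annotations.
func writeStatus(annotations map[string]string, st networkStatus) error {
	b, err := json.Marshal(st)
	if err != nil {
		return err
	}
	annotations[networkStatusKey] = string(b)
	return nil
}

// readStatus mirrors GetNetworkStatus: nil status (no error) when unset.
func readStatus(annotations map[string]string) (*networkStatus, error) {
	raw, ok := annotations[networkStatusKey]
	if !ok || raw == "" {
		return nil, nil
	}
	st := &networkStatus{}
	if err := json.Unmarshal([]byte(raw), st); err != nil {
		return nil, err
	}
	return st, nil
}

func main() {
	ann := map[string]string{}
	if err := writeStatus(ann, networkStatus{CurrentNetworkState: "Ready"}); err != nil {
		panic(err)
	}
	st, _ := readStatus(ann)
	fmt.Println(st.CurrentNetworkState)
}
```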


@ -0,0 +1,85 @@
package utils
import (
"context"
kruisePub "github.com/openkruise/kruise-api/apps/pub"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/client"
"strings"
)
func AllowNotReadyContainers(c client.Client, ctx context.Context, pod *corev1.Pod, svc *corev1.Service, isSvcShared bool) (bool, cperrors.PluginError) {
// get lifecycleState
lifecycleState, exist := pod.GetLabels()[kruisePub.LifecycleStateKey]
// get gss
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil {
return false, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// get allowNotReadyContainers
var allowNotReadyContainers []string
for _, kv := range gss.Spec.Network.NetworkConf {
if kv.Name == gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName {
for _, allowNotReadyContainer := range strings.Split(kv.Value, ",") {
if allowNotReadyContainer != "" {
allowNotReadyContainers = append(allowNotReadyContainers, allowNotReadyContainer)
}
}
}
}
// PreInplaceUpdating
if exist && lifecycleState == string(kruisePub.LifecycleStatePreparingUpdate) {
// ensure PublishNotReadyAddresses is true when containers pre-updating
if !svc.Spec.PublishNotReadyAddresses && util.IsContainersPreInplaceUpdating(pod, gss, allowNotReadyContainers) {
svc.Spec.PublishNotReadyAddresses = true
return true, nil
}
// ensure remove finalizer
if svc.Spec.PublishNotReadyAddresses || !util.IsContainersPreInplaceUpdating(pod, gss, allowNotReadyContainers) {
pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] = "false"
}
} else {
pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] = "true"
if !svc.Spec.PublishNotReadyAddresses {
return false, nil
}
if isSvcShared {
// ensure PublishNotReadyAddresses is false when all pods are updated
if gss.Status.UpdatedReplicas == gss.Status.Replicas {
podList := &corev1.PodList{}
err := c.List(ctx, podList, &client.ListOptions{
Namespace: gss.GetNamespace(),
LabelSelector: labels.SelectorFromSet(map[string]string{
gamekruiseiov1alpha1.GameServerOwnerGssKey: gss.GetName(),
})})
if err != nil {
return false, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
for _, p := range podList.Items {
_, condition := util.GetPodConditionFromList(p.Status.Conditions, corev1.PodReady)
if condition == nil || condition.Status != corev1.ConditionTrue {
return false, nil
}
}
svc.Spec.PublishNotReadyAddresses = false
return true, nil
}
} else {
_, condition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if condition == nil || condition.Status != corev1.ConditionTrue {
return false, nil
}
svc.Spec.PublishNotReadyAddresses = false
return true, nil
}
}
return false, nil
}


@ -0,0 +1,289 @@
package utils
import (
"context"
kruisePub "github.com/openkruise/kruise-api/apps/pub"
kruiseV1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
kruiseV1beta1 "github.com/openkruise/kruise-api/apps/v1beta1"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"testing"
)
var (
scheme = runtime.NewScheme()
)
func init() {
utilruntime.Must(gamekruiseiov1alpha1.AddToScheme(scheme))
utilruntime.Must(kruiseV1beta1.AddToScheme(scheme))
utilruntime.Must(kruiseV1alpha1.AddToScheme(scheme))
utilruntime.Must(corev1.AddToScheme(scheme))
}
func TestAllowNotReadyContainers(t *testing.T) {
tests := []struct {
// input
pod *corev1.Pod
svc *corev1.Service
gss *gamekruiseiov1alpha1.GameServerSet
isSvcShared bool
podElse []*corev1.Pod
// output
inplaceUpdateNotReadyBlocker string
isSvcUpdated bool
}{
// When svc is not shared, pod updated, svc should not publish NotReadyAddresses
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case0-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStateUpdating),
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case0",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: true,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case0",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "true",
isSvcUpdated: true,
},
// When svc is not shared & pod is pre-updating & svc PublishNotReadyAddresses is false, svc should publish NotReadyAddresses
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case1-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStatePreparingUpdate),
gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker: "true",
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case1",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: false,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case1",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v2.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "true",
isSvcUpdated: true,
},
// When svc is not shared & pod is pre-updating & svc PublishNotReadyAddresses is true, finalizer of pod should be removed to enter next stage
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case2-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStatePreparingUpdate),
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case2",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: true,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case2",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "false",
isSvcUpdated: false,
},
}
for i, test := range tests {
objs := []client.Object{test.gss, test.pod, test.svc}
c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(objs...).Build()
actual, err := AllowNotReadyContainers(c, context.TODO(), test.pod, test.svc, test.isSvcShared)
if err != nil {
t.Errorf("case %d: %s", i, err.Error())
}
if actual != test.isSvcUpdated {
t.Errorf("case %d: expect isSvcUpdated is %v but actually got %v", i, test.isSvcUpdated, actual)
}
if test.pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] != test.inplaceUpdateNotReadyBlocker {
t.Errorf("case %d: expect inplaceUpdateNotReadyBlocker is %v but actually got %v", i, test.inplaceUpdateNotReadyBlocker, test.pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker])
}
}
}


@ -0,0 +1,124 @@
English | [中文](./README.zh_CN.md)
Volcengine Kubernetes Engine (VKE) supports CLB reuse in Kubernetes: different Services can use different ports of the same CLB. The Volcengine-CLB network plugin therefore records the port allocation of each CLB. For GameServers whose network type is Volcengine-CLB, the plugin automatically allocates a port and creates a Service object. Once the public IP in the Service's ingress field is successfully created, the GameServer network becomes Ready and the process is complete.
![image](https://github.com/lizhipeng629/kruise-game/assets/110802158/209de309-b9b7-4ba8-b2fb-da0d299e2edb)
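The per-CLB port bookkeeping described above can be sketched as a small allocator (a simplified illustration with hypothetical names, not the plugin's actual implementation, which also persists allocations in Service objects and honors blocked ports):

```go
package main

import "fmt"

// portAllocator mirrors the bookkeeping the Volcengine-CLB plugin performs:
// for every CLB it remembers which external ports are already taken.
type portAllocator struct {
	minPort, maxPort int32
	used             map[string]map[int32]bool // clbId -> port -> allocated
}

func newPortAllocator(min, max int32) *portAllocator {
	return &portAllocator{minPort: min, maxPort: max, used: map[string]map[int32]bool{}}
}

// allocate returns the first free port on the given CLB, or an error
// when the configured range is exhausted.
func (a *portAllocator) allocate(clbId string) (int32, error) {
	if a.used[clbId] == nil {
		a.used[clbId] = map[int32]bool{}
	}
	for p := a.minPort; p < a.maxPort; p++ {
		if !a.used[clbId][p] {
			a.used[clbId][p] = true
			return p, nil
		}
	}
	return 0, fmt.Errorf("no free port on %s", clbId)
}

func main() {
	a := newPortAllocator(500, 700) // same range as the sample plugin config
	p1, _ := a.allocate("clb-xxxxx")
	p2, _ := a.allocate("clb-xxxxx")
	fmt.Println(p1, p2) // 500 501
}
```

The real plugin releases ports when a pod (or, for Fixed networks, its GameServerSet) is deleted, so the mapping survives pod recreation only when Fixed is true.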
## Volcengine-CLB configuration
### Plugin configuration
```toml
[volcengine]
enable = true
[volcengine.clb]
# Fill in the free port range that the CLB can use to allocate external access ports to pods. The maximum range size is 200.
max_port = 700
min_port = 500
```
### Parameters
#### ClbIds
- Meaning: the CLB IDs. Multiple IDs can be filled in. The CLBs must already be created in Volcengine.
- Value: CLB IDs separated by `,`. For example: `clb-9zeo7prq1m25ctpfrw1m7,clb-bp1qz7h50yd3w58h2f8je,...`
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: `port1/protocol1,port2/protocol2,...` The protocol names must be in uppercase letters.
- Configurable: Y
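A minimal sketch of how such a `port/protocol` list is parsed (illustrative helper name; as in the plugin, a missing protocol defaults to TCP and malformed entries are skipped):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePortProtocols splits a PortProtocols value such as "80/TCP,8080/UDP"
// into parallel slices of ports and protocols.
func parsePortProtocols(v string) ([]int, []string) {
	var ports []int
	var protocols []string
	for _, pp := range strings.Split(v, ",") {
		parts := strings.Split(pp, "/")
		port, err := strconv.Atoi(parts[0])
		if err != nil {
			continue // skip entries whose port is not a number
		}
		ports = append(ports, port)
		if len(parts) == 2 {
			protocols = append(protocols, parts[1])
		} else {
			protocols = append(protocols, "TCP") // default when protocol is omitted
		}
	}
	return ports, protocols
}

func main() {
	ports, protos := parsePortProtocols("80/TCP,8080/UDP,9000")
	fmt.Println(ports, protos) // [80 8080 9000] [TCP UDP TCP]
}
```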
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated Service is assigned NodePorts. This can be set to false only in CLB passthrough mode.
- Value: false / true
- Configurable: Y
#### Fixed
- Meaning: whether the mapping relationship is fixed. If so, the mapping remains unchanged even when the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the names of containers that are allowed to be not ready during in-place updates; traffic to them is not cut off while the update is in progress.
- Value: `{containerName_0},{containerName_1},...` e.g. `sidecar`
- Configurable: It cannot be changed during the in-place update process.
#### Annotations
- Meaning: the annotations added to the Service. Multiple entries are supported.
- Value: `key1:value1,key2:value2,...`
- Configurable: Y
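A minimal sketch of how such an `Annotations` value becomes a map (illustrative helper name; as in the plugin, pairs that do not split into exactly a key and a value are skipped):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAnnotations turns an Annotations value like "key1:value1,key2:value2"
// into a map to be applied to the generated Service.
func parseAnnotations(v string) map[string]string {
	annotations := map[string]string{}
	for _, anno := range strings.Split(v, ",") {
		kv := strings.Split(anno, ":")
		if len(kv) == 2 {
			annotations[kv[0]] = kv[1]
		} // otherwise the pair is malformed and ignored
	}
	return annotations
}

func main() {
	fmt.Println(parseAnnotations("key1:value1,key2:value2")) // map[key1:value1 key2:value2]
}
```

Because the value is split on `:`, annotation values that themselves contain a colon cannot be expressed in this format.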
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gss-2048-clb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: Volcengine-CLB
networkConf:
- name: ClbIds
#Fill in Volcengine Cloud LoadBalancer Id here
value: clb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048:v1.0
name: app-2048
volumeMounts:
- name: shared-dir
mountPath: /var/www/html/js
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048-sidecar:v1.0
name: sidecar
args:
- bash
- -c
- rsync -aP /app/js/* /app/scripts/ && while true; do echo 11;sleep 2; done
volumeMounts:
- name: shared-dir
mountPath: /app/scripts
volumes:
- name: shared-dir
emptyDir: {}
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-01-19T08:19:49Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xx.xxx
ports:
- name: "80"
port: 6611
protocol: TCP
internalAddresses:
- ip: 172.16.200.60
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-01-19T08:19:49Z"
networkType: Volcengine-CLB
```


@ -0,0 +1,125 @@
中文 | [English](./README.md)
Volcengine Kubernetes Engine (VKE) supports CLB reuse in Kubernetes: different Services can use different ports of the same CLB. The Volcengine-CLB network plugin therefore records the port allocation of each CLB. For GameServers whose network type is Volcengine-CLB, the plugin automatically allocates a port and creates a Service object. Once the public IP in the Service's ingress field is successfully created, the GameServer network becomes Ready and the process is complete.
![image](https://github.com/lizhipeng629/kruise-game/assets/110802158/209de309-b9b7-4ba8-b2fb-da0d299e2edb)
## Volcengine-CLB configuration
### Plugin configuration
```toml
[volcengine]
enable = true
[volcengine.clb]
# Fill in the free port range that the CLB can use to allocate external access ports to pods. The maximum range size is 200.
max_port = 700
min_port = 500
```
### Parameters
#### ClbIds
- Meaning: the CLB IDs. Multiple IDs can be filled in. The CLBs must already be created in Volcengine.
- Value: CLB IDs separated by `,`. For example: clb-9zeo7prq1m25ctpfrw1m7,clb-bp1qz7h50yd3w58h2f8je,...
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: port1/protocol1,port2/protocol2,... (protocols must be uppercase)
- Configurable: Y
#### Fixed
- Meaning: whether the access IP/ports are fixed. If so, the internal/external mapping remains unchanged even when the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated Service is assigned NodePorts. This can be set to false only in CLB passthrough mode.
- Value: true / false
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the names of containers that are allowed to be not ready during in-place updates; traffic to them is not cut off. Multiple names can be filled in.
- Value: {containerName_0},{containerName_1},... e.g. sidecar
- Configurable: It cannot be changed during the in-place update process.
#### Annotations
- Meaning: the annotations added to the Service. Multiple entries are supported.
- Value: key1:value1,key2:value2,...
- Configurable: Y
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gss-2048-clb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: Volcengine-CLB
networkConf:
- name: ClbIds
#Fill in Volcengine Cloud LoadBalancer Id here
value: clb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048:v1.0
name: app-2048
volumeMounts:
- name: shared-dir
mountPath: /var/www/html/js
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048-sidecar:v1.0
name: sidecar
args:
- bash
- -c
- rsync -aP /app/js/* /app/scripts/ && while true; do echo 11;sleep 2; done
volumeMounts:
- name: shared-dir
mountPath: /app/scripts
volumes:
- name: shared-dir
emptyDir: {}
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-01-19T08:19:49Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xx.xxx
ports:
- name: "80"
port: 6611
protocol: TCP
internalAddresses:
- ip: 172.16.200.60
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-01-19T08:19:49Z"
networkType: Volcengine-CLB
```


@ -0,0 +1,688 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volcengine
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
ClbNetwork = "Volcengine-CLB"
AliasCLB = "CLB-Network"
ClbIdLabelKey = "service.beta.kubernetes.io/volcengine-loadbalancer-id"
ClbIdsConfigName = "ClbIds"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
AllocateLoadBalancerNodePorts = "AllocateLoadBalancerNodePorts"
ClbAnnotations = "Annotations"
ClbConfigHashKey = "game.kruise.io/network-config-hash"
ClbIdAnnotationKey = "service.beta.kubernetes.io/volcengine-loadbalancer-id"
ClbAddressTypeKey = "service.beta.kubernetes.io/volcengine-loadbalancer-address-type"
ClbAddressTypePublic = "PUBLIC"
ClbSchedulerKey = "service.beta.kubernetes.io/volcengine-loadbalancer-scheduler"
ClbSchedulerWRR = "wrr"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
EnableClbScatterConfigName = "EnableClbScatter"
EnableMultiIngressConfigName = "EnableMultiIngress"
)
type portAllocated map[int32]bool
type ClbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
lastScatterIdx int // round-robin start index for scatter allocation
}
type clbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
annotations map[string]string
allocateLoadBalancerNodePorts bool
enableClbScatter bool // whether to scatter allocations across multiple CLBs
enableMultiIngress bool // whether to expose multiple ingress IPs
}
func (c *ClbPlugin) Name() string {
return ClbNetwork
}
func (c *ClbPlugin) Alias() string {
return AliasCLB
}
func (c *ClbPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
log.Infof("[CLB] Init called, options: %+v", options)
c.mutex.Lock()
defer c.mutex.Unlock()
clbOptions, ok := options.(provideroptions.VolcengineOptions)
if !ok {
log.Errorf("[CLB] failed to convert options to clbOptions: %+v", options)
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to clbOptions"), cperrors.InternalError)
}
c.minPort = clbOptions.CLBOptions.MinPort
c.maxPort = clbOptions.CLBOptions.MaxPort
c.blockPorts = clbOptions.CLBOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := client.List(ctx, svcList)
if err != nil {
log.Errorf("[CLB] client.List failed: %v", err)
return err
}
c.cache, c.podAllocate = initLbCache(svcList.Items, c.minPort, c.maxPort, c.blockPorts)
log.Infof("[CLB] Init finished, minPort=%d, maxPort=%d, blockPorts=%v, svcCount=%d", c.minPort, c.maxPort, c.blockPorts, len(svcList.Items))
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[ClbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort)
for i := minPort; i < maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
}
}
}
log.Infof("[%s] podAllocate cache complete initialization: %v", ClbNetwork, newPodAllocate)
return newCache, newPodAllocate
}
func (c *ClbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
log.Infof("[CLB] OnPodAdded called for pod %s/%s", pod.GetNamespace(), pod.GetName())
return pod, nil
}
func (c *ClbPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
log.Infof("[CLB] OnPodUpdated called for pod %s/%s", pod.GetNamespace(), pod.GetName())
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
log.Errorf("[CLB] GetNetworkStatus failed: %v", err)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
log.V(4).Infof("[CLB] NetworkConfig: %+v", networkConfig)
config := parseLbConfig(networkConfig)
log.V(4).Infof("[CLB] Parsed clbConfig: %+v", config)
if networkStatus == nil {
log.Infof("[CLB] networkStatus is nil, set NetworkNotReady for pod %s/%s", pod.GetNamespace(), pod.GetName())
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkStatus = &gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
log.Infof("[CLB] Service not found for pod %s/%s, will create new svc", pod.GetNamespace(), pod.GetName())
svc, err := c.consSvc(config, pod, client, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
return pod, cperrors.ToPluginError(client.Create(ctx, svc), cperrors.ApiCallError)
}
log.Errorf("[CLB] client.Get svc failed: %v", err)
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if len(svc.OwnerReferences) > 0 && svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[CLB] waiting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(config) != svc.GetAnnotations()[ClbConfigHashKey] {
log.Infof("[CLB] config hash changed for pod %s/%s, updating svc", pod.GetNamespace(), pod.GetName())
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
newSvc, err := c.consSvc(config, pod, client, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
return pod, cperrors.ToPluginError(client.Update(ctx, newSvc), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
log.V(4).Infof("[CLB] Network disabled, set svc type to ClusterIP for pod %s/%s", pod.GetNamespace(), pod.GetName())
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
log.V(4).Infof("[CLB] Network enabled, set svc type to LoadBalancer for pod %s/%s", pod.GetNamespace(), pod.GetName())
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if len(svc.Status.LoadBalancer.Ingress) == 0 {
log.Infof("[CLB] svc %s/%s has no ingress, network not ready", svc.Namespace, svc.Name)
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
log.V(4).Infof("[CLB] AllowNotReadyContainers enabled for pod %s/%s", pod.GetNamespace(), pod.GetName())
toUpDateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
networkReady(svc, pod, networkStatus, config)
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func networkReady(svc *corev1.Service, pod *corev1.Pod, networkStatus *gamekruiseiov1alpha1.NetworkStatus, config *clbConfig) {
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
// check whether multi-ingress-IP support is enabled
if config.enableMultiIngress && len(svc.Status.LoadBalancer.Ingress) > 1 {
// multi-ingress mode: create a separate external address for each ingress IP
for _, ingress := range svc.Status.LoadBalancer.Ingress {
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
// each ingress IP gets its own external address
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: ingress.IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
externalAddresses = append(externalAddresses, externalAddress)
}
}
} else {
// single-ingress mode (original logic)
if len(svc.Status.LoadBalancer.Ingress) > 0 {
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
externalAddresses = append(externalAddresses, externalAddress)
}
}
}
// the internal-address logic stays unchanged
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
}
func (c *ClbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
log.Infof("[CLB] OnPodDeleted called for pod %s/%s", pod.GetNamespace(), pod.GetName())
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
log.Infof("[CLB] isFixed=true, check gss for pod %s/%s", pod.GetNamespace(), pod.GetName())
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
log.Infof("[CLB] gss exists, skip deAllocate for pod %s/%s", pod.GetNamespace(), pod.GetName())
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range c.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
log.Infof("[CLB] deAllocate for podKey %s", podKey)
c.deAllocate(podKey)
}
return nil
}
func (c *ClbPlugin) allocate(lbIds []string, num int, nsName string, enableClbScatter ...bool) (string, []int32, error) {
c.mutex.Lock()
defer c.mutex.Unlock()
log.Infof("[CLB] allocate called, lbIds=%v, num=%d, nsName=%s, scatter=%v", lbIds, num, nsName, enableClbScatter)
if len(lbIds) == 0 {
return "", nil, fmt.Errorf("no load balancer IDs provided")
}
var ports []int32
var lbId string
useScatter := false
if len(enableClbScatter) > 0 {
useScatter = enableClbScatter[0]
}
if useScatter && len(lbIds) > 0 {
log.V(4).Infof("[CLB] scatter enabled, round robin from idx %d", c.lastScatterIdx)
// round-robin allocation
startIdx := c.lastScatterIdx % len(lbIds)
for i := 0; i < len(lbIds); i++ {
idx := (startIdx + i) % len(lbIds)
clbId := lbIds[idx]
if c.cache[clbId] == nil {
// we assume that an empty cache is always allocatable
c.newCacheForSingleLb(clbId)
lbId = clbId
c.lastScatterIdx = idx + 1 // start from the next CLB next time
break
}
sum := 0
for p := c.minPort; p < c.maxPort; p++ {
if !c.cache[clbId][p] {
sum++
}
if sum >= num {
lbId = clbId
c.lastScatterIdx = idx + 1 // start from the next CLB next time
break
}
}
if lbId != "" {
break
}
}
} else {
log.V(4).Infof("[CLB] scatter disabled, use default order")
// original logic
for _, clbId := range lbIds {
if c.cache[clbId] == nil {
c.newCacheForSingleLb(clbId)
lbId = clbId
break
}
sum := 0
for i := c.minPort; i < c.maxPort; i++ {
if !c.cache[clbId][i] {
sum++
}
if sum >= num {
lbId = clbId
break
}
}
if lbId != "" {
break
}
}
}
if lbId == "" {
return "", nil, fmt.Errorf("unable to find load balancer with %d available ports", num)
}
// Find available ports sequentially
portCount := 0
for port := c.minPort; port < c.maxPort && portCount < num; port++ {
if !c.cache[lbId][port] {
c.cache[lbId][port] = true
ports = append(ports, port)
portCount++
}
}
// Check if we found enough ports
if len(ports) < num {
// Rollback: release allocated ports
for _, port := range ports {
c.cache[lbId][port] = false
}
return "", nil, fmt.Errorf("insufficient available ports on load balancer %s: found %d, need %d", lbId, len(ports), num)
}
c.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("[CLB] pod %s allocate clb %s ports %v", nsName, lbId, ports)
return lbId, ports, nil
}
// newCacheForSingleLb initializes the port allocation cache for a single load balancer. MUST BE CALLED IN LOCK STATE
func (c *ClbPlugin) newCacheForSingleLb(lbId string) {
if c.cache[lbId] == nil {
c.cache[lbId] = make(portAllocated, c.maxPort-c.minPort+1)
for i := c.minPort; i <= c.maxPort; i++ {
c.cache[lbId][i] = false
}
// block ports
for _, blockPort := range c.blockPorts {
c.cache[lbId][blockPort] = true
}
}
}
func (c *ClbPlugin) deAllocate(nsName string) {
c.mutex.Lock()
defer c.mutex.Unlock()
log.Infof("[CLB] deAllocate called for nsName=%s", nsName)
allocatedPorts, exist := c.podAllocate[nsName]
if !exist {
log.Warningf("[CLB] deAllocate: nsName=%s not found in podAllocate", nsName)
return
}
clbPorts := strings.Split(allocatedPorts, ":")
lbId := clbPorts[0]
ports := util.StringToInt32Slice(clbPorts[1], ",")
for _, port := range ports {
c.cache[lbId][port] = false
}
// block ports
for _, blockPort := range c.blockPorts {
c.cache[lbId][blockPort] = true
}
delete(c.podAllocate, nsName)
log.Infof("pod %s deallocate clb %s ports %v", nsName, lbId, ports)
}
func init() {
clbPlugin := ClbPlugin{
mutex: sync.RWMutex{},
}
volcengineProvider.registerPlugin(&clbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *clbConfig {
log.Infof("[CLB] parseLbConfig called, conf=%+v", conf)
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
allocateLoadBalancerNodePorts := true
annotations := map[string]string{}
enableClbScatter := false
enableMultiIngress := false
for _, c := range conf {
switch c.Name {
case ClbIdsConfigName:
seenIds := make(map[string]struct{})
for _, clbId := range strings.Split(c.Value, ",") {
if clbId != "" {
if _, exists := seenIds[clbId]; !exists {
lbIds = append(lbIds, clbId)
seenIds[clbId] = struct{}{}
}
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case AllocateLoadBalancerNodePorts:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
allocateLoadBalancerNodePorts = v
case ClbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("clb annotation %s is invalid", annoKV[0])
}
}
case EnableClbScatterConfigName:
v, err := strconv.ParseBool(c.Value)
if err == nil {
enableClbScatter = v
}
case EnableMultiIngressConfigName:
v, err := strconv.ParseBool(c.Value)
if err == nil {
enableMultiIngress = v
}
}
}
return &clbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
annotations: annotations,
allocateLoadBalancerNodePorts: allocateLoadBalancerNodePorts,
enableClbScatter: enableClbScatter,
enableMultiIngress: enableMultiIngress,
}
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (c *ClbPlugin) consSvc(config *clbConfig, pod *corev1.Pod, client client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := c.podAllocate[podKey]
if exist {
clbPorts := strings.Split(allocatedPorts, ":")
lbId = clbPorts[0]
ports = util.StringToInt32Slice(clbPorts[1], ",")
} else {
var err error
lbId, ports, err = c.allocate(config.lbIds, len(config.targetPorts), podKey, config.enableClbScatter)
if err != nil {
log.Errorf("[CLB] pod %s allocate clb failed: %v", podKey, err)
return nil, err
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.targetPorts); i++ {
portName := fmt.Sprintf("%d-%s", config.targetPorts[i], strings.ToLower(string(config.protocols[i])))
svcPorts = append(svcPorts, corev1.ServicePort{
Name: portName,
Port: ports[i],
Protocol: config.protocols[i],
TargetPort: intstr.FromInt(config.targetPorts[i]),
})
}
annotations := map[string]string{
ClbSchedulerKey: ClbSchedulerWRR,
ClbAddressTypeKey: ClbAddressTypePublic,
ClbIdAnnotationKey: lbId,
ClbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: getSvcOwnerReference(client, ctx, pod, config.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
AllocateLoadBalancerNodePorts: ptr.To[bool](config.allocateLoadBalancerNodePorts),
},
}
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

File diff suppressed because it is too large


@ -0,0 +1,206 @@
package volcengine
import (
"context"
"encoding/json"
"fmt"
"strconv"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "Volcengine-EIP"
AliasSEIP = "EIP-Network"
ReleaseStrategyConfigName = "ReleaseStrategy"
PoolIdConfigName = "PoolId"
ResourceGroupIdConfigName = "ResourceGroupId"
BandwidthConfigName = "Bandwidth"
BandwidthPackageIdConfigName = "BandwidthPackageId"
ChargeTypeConfigName = "ChargeType"
DescriptionConfigName = "Description"
VkeAnnotationPrefix = "vke.volcengine.com"
UseExistEIPAnnotationKey = "vke.volcengine.com/primary-eip-id"
WithEIPAnnotationKey = "vke.volcengine.com/primary-eip-allocate"
EipAttributeAnnotationKey = "vke.volcengine.com/primary-eip-attributes"
EipStatusKey = "vke.volcengine.com/allocated-eips"
DefaultEipConfig = "{\"type\": \"Elastic\"}"
)
type eipStatus struct {
EipId string `json:"EipId,omitempty"` // EIP instance ID
EipAddress string `json:"EipAddress,omitempty"` // public address of the EIP instance
EniId string `json:"EniId,omitempty"` // ID of the pod's elastic network interface
EniIp string `json:"EniIp,omitempty"` // private IPv4 address of the pod's elastic network interface
}
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
log.Infof("Initializing Volcengine EIP plugin")
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("begin to handle PodAdded for pod name %s, namespace %s", pod.Name, pod.Namespace)
networkManager := utils.NewNetworkManager(pod, client)
// fetch the network configuration parameters
networkConfs := networkManager.GetNetworkConfig()
log.Infof("pod %s/%s network configs: %+v", pod.Namespace, pod.Name, networkConfs)
if networkManager.GetNetworkType() != EIPNetwork {
log.Infof("pod %s/%s network type is not %s, skipping", pod.Namespace, pod.Name, EIPNetwork)
return pod, nil
}
log.Infof("processing pod %s/%s with Volcengine EIP network", pod.Namespace, pod.Name)
// check whether an existing EIP is configured via UseExistEIPAnnotationKey
eipID := ""
if pod.Annotations == nil {
log.Infof("pod %s/%s has no annotations, initializing", pod.Namespace, pod.Name)
pod.Annotations = make(map[string]string)
}
eipConfig := make(map[string]interface{})
// extract parameters from the network config
for _, conf := range networkConfs {
log.Infof("processing network config for pod %s/%s: %s=%s", pod.Namespace, pod.Name, conf.Name, conf.Value)
switch conf.Name {
case UseExistEIPAnnotationKey:
pod.Annotations[UseExistEIPAnnotationKey] = conf.Value
eipID = conf.Value
log.Infof("pod %s/%s using existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
case "billingType":
var err error
eipConfig[conf.Name], err = strconv.ParseInt(conf.Value, 10, 64)
if err != nil {
log.Infof("failed to parse billingType for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("pod %s/%s billingType set to: %v", pod.Namespace, pod.Name, eipConfig[conf.Name])
case "bandwidth":
var err error
eipConfig[conf.Name], err = strconv.ParseInt(conf.Value, 10, 64)
if err != nil {
log.Infof("failed to parse bandwidth for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("pod %s/%s bandwidth set to: %v", pod.Namespace, pod.Name, eipConfig[conf.Name])
default:
eipConfig[conf.Name] = conf.Value
log.Infof("pod %s/%s setting %s to: %v", pod.Namespace, pod.Name, conf.Name, conf.Value)
}
}
// update the pod annotations
if eipID != "" {
// reuse the existing EIP
log.Infof("pod %s/%s using existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
pod.Annotations[UseExistEIPAnnotationKey] = eipID
} else {
// allocate a new EIP; seed a default description if no attributes were configured
if len(eipConfig) == 0 {
eipConfig["description"] = "Created by the OKG Volcengine EIP plugin. Do not delete or modify."
}
configs, _ := json.Marshal(eipConfig)
log.Infof("pod %s/%s allocating new EIP with config: %s", pod.Namespace, pod.Name, string(configs))
pod.Annotations[WithEIPAnnotationKey] = DefaultEipConfig
pod.Annotations[EipAttributeAnnotationKey] = string(configs)
}
log.Infof("completed OnPodAdded for pod %s/%s", pod.Namespace, pod.Name)
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("begin to handle PodUpdated for pod name %s, namespace %s", pod.Name, pod.Namespace)
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
log.Infof("network status is nil for pod %s/%s, updating to waiting state", pod.Namespace, pod.Name)
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
if err != nil {
log.Infof("failed to update network status for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
return pod, nil
}
podEipStatus := []eipStatus{}
if str, ok := pod.Annotations[EipStatusKey]; ok {
log.Infof("found EIP status annotation for pod %s/%s: %s", pod.Namespace, pod.Name, str)
err := json.Unmarshal([]byte(str), &podEipStatus)
if err != nil {
log.Infof("failed to unmarshal EipStatusKey for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(fmt.Errorf("failed to unmarshal EipStatusKey, err: %w", err), errors.ParameterError)
}
if len(podEipStatus) == 0 {
log.Infof("EIP status annotation for pod %s/%s is empty, waiting for allocation", pod.Namespace, pod.Name)
return pod, nil
}
log.Infof("updating network status for pod %s/%s, internal IP: %s, external IP: %s",
pod.Namespace, pod.Name, podEipStatus[0].EniIp, podEipStatus[0].EipAddress)
var internalAddresses []gamekruiseiov1alpha1.NetworkAddress
var externalAddresses []gamekruiseiov1alpha1.NetworkAddress
for _, eipStatus := range podEipStatus {
internalAddresses = append(internalAddresses, gamekruiseiov1alpha1.NetworkAddress{
IP: eipStatus.EniIp,
})
externalAddresses = append(externalAddresses, gamekruiseiov1alpha1.NetworkAddress{
IP: eipStatus.EipAddress,
})
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
log.Infof("network for pod %s/%s is ready, EIP: %s", pod.Namespace, pod.Name, podEipStatus[0].EipAddress)
pod, err := networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
log.Infof("failed to update network status for pod %s/%s: %v", pod.Namespace, pod.Name, err)
}
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("no EIP status found for pod %s/%s, waiting for allocation", pod.Namespace, pod.Name)
return pod, nil
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
log.Infof("handling pod deletion for pod %s/%s", pod.Namespace, pod.Name)
// log EIP info for the deleted pod; no extra cleanup is needed here
if pod.Annotations != nil {
if eipID, ok := pod.Annotations[UseExistEIPAnnotationKey]; ok {
log.Infof("pod %s/%s being deleted had existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
}
if _, ok := pod.Annotations[WithEIPAnnotationKey]; ok {
log.Infof("pod %s/%s being deleted had allocated EIP", pod.Namespace, pod.Name)
}
}
log.Infof("completed deletion handling for pod %s/%s", pod.Namespace, pod.Name)
return nil
}
func init() {
volcengineProvider.registerPlugin(&EipPlugin{})
}


@ -0,0 +1,182 @@
package volcengine
import (
"context"
"encoding/json"
"testing"
"github.com/openkruise/kruise-game/apis/v1alpha1"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud/apis/v1beta1"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestEipPlugin_Init(t *testing.T) {
plugin := EipPlugin{}
assert.Equal(t, EIPNetwork, plugin.Name())
assert.Equal(t, AliasSEIP, plugin.Alias())
err := plugin.Init(nil, nil, context.Background())
assert.NoError(t, err)
}
func TestEipPlugin_OnPodAdded_UseExistingEIP(t *testing.T) {
// build a test pod
networkConf := []v1alpha1.NetworkConfParams{}
networkConf = append(networkConf, v1alpha1.NetworkConfParams{
Name: UseExistEIPAnnotationKey,
Value: "eip-12345",
})
jsonStr, _ := json.Marshal(networkConf)
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
v1alpha1.GameServerNetworkConf: string(jsonStr),
},
},
}
// build a fake client
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
// run the plugin hook
plugin := EipPlugin{}
updatedPod, err := plugin.OnPodAdded(fakeClient, pod, context.Background())
// verify the result
assert.NoError(t, err)
assert.Equal(t, "eip-12345", updatedPod.Annotations[UseExistEIPAnnotationKey])
assert.Equal(t, EIPNetwork, updatedPod.Annotations[v1alpha1.GameServerNetworkType])
jErr := json.Unmarshal([]byte(updatedPod.Annotations[v1alpha1.GameServerNetworkConf]), &networkConf)
assert.NoError(t, jErr)
}
func addKvToParams(networkConf []v1alpha1.NetworkConfParams, keys []string, values []string) []v1alpha1.NetworkConfParams {
// append each key/value pair to the network conf params slice
for i := 0; i < len(keys); i++ {
networkConf = append(networkConf, v1alpha1.NetworkConfParams{
Name: keys[i],
Value: values[i],
})
}
return networkConf
}
func TestEipPlugin_OnPodAdded_NewEIP(t *testing.T) {
networkConf := []v1alpha1.NetworkConfParams{}
networkConf = addKvToParams(networkConf, []string{"name", "isp", "bandwidth", "description", "billingType"},
[]string{"eip-demo", "BGP", "100", "demo for pods eip", "2"})
jsonStr, _ := json.Marshal(networkConf)
// build a test pod with network annotations
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
v1alpha1.GameServerNetworkConf: string(jsonStr),
},
},
}
// build a fake client
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
// run the plugin hook
plugin := EipPlugin{}
updatedPod, err := plugin.OnPodAdded(fakeClient, pod, context.Background())
// verify the result
assert.NoError(t, err)
assert.Equal(t, DefaultEipConfig, updatedPod.Annotations[WithEIPAnnotationKey])
assert.Equal(t, EIPNetwork, updatedPod.Annotations[v1alpha1.GameServerNetworkType])
attributeStr, ok := pod.Annotations[EipAttributeAnnotationKey]
assert.True(t, ok)
attributes := make(map[string]interface{})
jErr := json.Unmarshal([]byte(attributeStr), &attributes)
assert.NoError(t, jErr)
assert.Equal(t, "eip-demo", attributes["name"])
assert.Equal(t, "BGP", attributes["isp"])
assert.Equal(t, float64(100), attributes["bandwidth"])
assert.Equal(t, "demo for pods eip", attributes["description"])
assert.Equal(t, float64(2), attributes["billingType"])
}
func TestEipPlugin_OnPodUpdated_WithNetworkStatus(t *testing.T) {
// build a test pod with a network status annotation
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
"cloud.kruise.io/network-status": `{"currentNetworkState":"Waiting"}`,
},
},
Status: corev1.PodStatus{},
}
// build a fake client whose scheme includes PodEIP
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
_ = v1beta1.AddToScheme(scheme)
_ = gamekruiseiov1alpha1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(pod).
Build()
// run the plugin hook
plugin := EipPlugin{}
// Ensure network status includes EIP information
networkStatus := &v1alpha1.NetworkStatus{}
networkStatus.ExternalAddresses = []v1alpha1.NetworkAddress{{IP: "203.0.113.1"}}
networkStatus.InternalAddresses = []v1alpha1.NetworkAddress{{IP: "10.0.0.1"}}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
networkStatusBytes, jErr := json.Marshal(networkStatus)
assert.NoError(t, jErr)
pod.Annotations[v1alpha1.GameServerNetworkStatus] = string(networkStatusBytes)
updatedPod, err := plugin.OnPodUpdated(fakeClient, pod, context.Background())
assert.NoError(t, err)
// read back the network status annotation written by OnPodUpdated
jErr = json.Unmarshal([]byte(updatedPod.Annotations[v1alpha1.GameServerNetworkStatus]), &networkStatus)
assert.NoError(t, jErr)
// verify the result
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "Ready")
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "203.0.113.1")
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "10.0.0.1")
}
func TestEipPlugin_OnPodDeleted(t *testing.T) {
plugin := EipPlugin{}
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
"cloud.kruise.io/network-status": `{"currentNetworkState":"Waiting"}`,
},
},
Status: corev1.PodStatus{},
}
err := plugin.OnPodDeleted(nil, pod, context.Background())
assert.Nil(t, err)
}


@ -0,0 +1,61 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volcengine
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
Volcengine = "Volcengine"
)
var (
volcengineProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (vp *Provider) Name() string {
return Volcengine
}
func (vp *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if vp.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return vp.plugins, nil
}
// registerPlugin registers a network plugin with the Volcengine cloud provider
func (vp *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
vp.plugins[name] = plugin
}
func NewVolcengineProvider() (cloudprovider.CloudProvider, error) {
return volcengineProvider, nil
}


@ -0,0 +1,31 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: cert
namespace: system
spec:
commonName: kruise-game-controller-manager
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE)
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
secretName: kruise-game-certs
usages:
- server auth
- client auth
privateKey:
algorithm: RSA
size: 2048
rotationPolicy: Never
issuerRef:
name: selfsigned-issuer
kind: Issuer
group: cert-manager.io


@ -0,0 +1,5 @@
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml


@ -0,0 +1,16 @@
# This configuration teaches kustomize how to update name references and perform variable substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames


@ -0,0 +1,102 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
name: poddnats.alibabacloud.com
spec:
group: alibabacloud.com
names:
kind: PodDNAT
listKind: PodDNATList
plural: poddnats
singular: poddnat
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: PodDNAT is the Schema for the poddnats API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: PodDNATSpec defines the desired state of PodDNAT
properties:
eni:
type: string
entryId:
type: string
externalIP:
type: string
externalPort:
type: string
internalIP:
type: string
internalPort:
type: string
portMapping:
items:
properties:
externalPort:
type: string
internalPort:
type: string
type: object
type: array
protocol:
type: string
tableId:
type: string
vswitch:
type: string
zoneID:
type: string
type: object
status:
description: PodDNATStatus defines the observed state of PodDNAT
properties:
created:
description: created create status
type: string
entries:
description: entries
items:
description: Entry record for forwardEntry
properties:
externalIP:
type: string
externalPort:
type: string
forwardEntryId:
type: string
internalIP:
type: string
internalPort:
type: string
ipProtocol:
type: string
type: object
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}


@ -0,0 +1,113 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
name: podeips.alibabacloud.com
spec:
group: alibabacloud.com
names:
kind: PodEIP
listKind: PodEIPList
plural: podeips
singular: podeip
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: PodEIP is the Schema for the podeips API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: PodEIPSpec defines the desired state of PodEIP
properties:
allocationID:
type: string
allocationType:
description: AllocationType ip type and release strategy
properties:
releaseAfter:
type: string
releaseStrategy:
allOf:
- enum:
- Follow
- TTL
- Never
- enum:
- Follow
- TTL
- Never
description: ReleaseStrategy is the type for eip release strategy
type: string
type:
default: Auto
description: IPAllocType is the type for eip alloc strategy
enum:
- Auto
- Static
type: string
required:
- releaseStrategy
- type
type: object
bandwidthPackageID:
type: string
required:
- allocationID
- allocationType
type: object
status:
description: PodEIPStatus defines the observed state of PodEIP
properties:
bandwidthPackageID:
description: BandwidthPackageID
type: string
eipAddress:
description: eip
type: string
internetChargeType:
type: string
isp:
type: string
name:
type: string
networkInterfaceID:
description: eni
type: string
podLastSeen:
description: PodLastSeen is the timestamp when pod resource last seen
format: date-time
type: string
privateIPAddress:
type: string
publicIpAddressPoolID:
type: string
resourceGroupID:
type: string
status:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,125 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
name: dedicatedclblisteners.networking.cloud.tencent.com
spec:
group: networking.cloud.tencent.com
names:
kind: DedicatedCLBListener
listKind: DedicatedCLBListenerList
plural: dedicatedclblisteners
singular: dedicatedclblistener
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: CLB ID
jsonPath: .spec.lbId
name: LbId
type: string
- description: Port of CLB Listener
jsonPath: .spec.lbPort
name: LbPort
type: integer
- description: Pod name of target pod
jsonPath: .spec.targetPod.podName
name: Pod
type: string
- description: State of the dedicated clb listener
jsonPath: .status.state
name: State
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: DedicatedCLBListener is the Schema for the dedicatedclblisteners
API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: DedicatedCLBListenerSpec defines the desired state of DedicatedCLBListener
properties:
extensiveParameters:
type: string
lbId:
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
lbPort:
format: int64
type: integer
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
lbRegion:
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
protocol:
enum:
- TCP
- UDP
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
targetPod:
properties:
podName:
type: string
targetPort:
format: int64
type: integer
required:
- podName
- targetPort
type: object
required:
- lbId
- lbPort
- protocol
type: object
status:
description: DedicatedCLBListenerStatus defines the observed state of
DedicatedCLBListener
properties:
address:
type: string
listenerId:
type: string
message:
type: string
state:
enum:
- Bound
- Available
- Pending
- Failed
- Deleting
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}


@ -20,10 +20,12 @@ bases:
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
# - ../prometheus
- ../scaler
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
@ -36,39 +38,39 @@ patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- manager_webhook_patch.yaml
- manager_webhook_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
#- webhookcainjection_patch.yaml
- webhookcainjection_patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
# fieldref:
# fieldpath: metadata.namespace
#- name: CERTIFICATE_NAME
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
#- name: SERVICE_NAMESPACE # namespace of the service
# objref:
# kind: Service
# version: v1
# name: webhook-service
# fieldref:
# fieldpath: metadata.namespace
#- name: SERVICE_NAME
# objref:
# kind: Service
# version: v1
# name: webhook-service
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service


@ -9,15 +9,16 @@ spec:
containers:
- name: manager
ports:
- containerPort: 9443
- containerPort: 9876
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
- mountPath: /tmp/webhook-certs/
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert
secretName: kruise-game-certs
optional: false


@ -1,8 +1,15 @@
# This patch add annotation to admission webhook config and
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1beta1
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: mutating-webhook-configuration
name: mutating-webhook
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)


@ -0,0 +1,48 @@
[kubernetes]
enable = true
[kubernetes.hostPort]
max_port = 9000
min_port = 8000
[alibabacloud]
enable = true
[alibabacloud.slb]
max_port = 700
min_port = 500
block_ports = [593]
[alibabacloud.nlb]
max_port = 1502
min_port = 1000
block_ports = [1025, 1434, 1068]
[hwcloud]
enable = true
[hwcloud.elb]
max_port = 700
min_port = 500
block_ports = []
[volcengine]
enable = true
[volcengine.clb]
max_port = 600
min_port = 550
block_ports = [593]
[aws]
enable = false
[aws.nlb]
max_port = 30050
min_port = 30001
[jdcloud]
enable = false
[jdcloud.nlb]
max_port = 700
min_port = 500
[tencentcloud]
enable = true
[tencentcloud.clb]
min_port = 700
max_port = 750


@ -7,6 +7,7 @@ generatorOptions:
configMapGenerator:
- files:
- controller_manager_config.yaml
- config.toml
name: manager-config
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization


@ -38,8 +38,20 @@ spec:
- /manager
args:
- --leader-elect=false
- --provider-config=/etc/kruise-game/config.toml
- --api-server-qps=5
- --api-server-qps-burst=10
- --enable-cert-generation=false
image: controller:latest
name: manager
env:
- name: "NETWORK_TOTAL_WAIT_TIME"
value: "60"
- name: "NETWORK_PROBE_INTERVAL_TIME"
value: "5"
ports:
- name: https
containerPort: 8080
securityContext:
allowPrivilegeEscalation: false
capabilities:
@ -66,5 +78,17 @@ spec:
requests:
cpu: 10m
memory: 64Mi
volumeMounts:
- mountPath: /etc/kruise-game
name: provider-config
serviceAccountName: controller-manager
terminationGracePeriodSeconds: 10
volumes:
- configMap:
defaultMode: 420
items:
- key: config.toml
path: config.toml
name: kruise-game-manager-config
name: provider-config

File diff suppressed because it is too large


@ -11,10 +11,6 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager


@ -2,23 +2,59 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
- admissionregistration.k8s.io
- ""
resources:
- mutatingwebhookconfigurations
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- nodes
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
- persistentvolumeclaims/status
- persistentvolumes/status
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/status
- services/status
verbs:
- get
- patch
- update
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- create
@ -27,6 +63,22 @@ rules:
- patch
- update
- watch
- apiGroups:
- alibabacloud.com
resources:
- poddnats
- podeips
verbs:
- get
- list
- watch
- apiGroups:
- alibabacloud.com
resources:
- poddnats/status
- podeips/status
verbs:
- get
- apiGroups:
- apiextensions.k8s.io
resources:
@ -41,17 +93,6 @@ rules:
- apps.kruise.io
resources:
- podprobemarkers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps.kruise.io
resources:
- statefulsets
verbs:
- create
@ -70,54 +111,32 @@ rules:
- patch
- update
- apiGroups:
- ""
- elbv2.k8s.aws
resources:
- pods
- targetgroupbindings
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
- elbv2.services.k8s.aws
resources:
- pods/status
- listeners
- targetgroups
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- game.kruise.io
resources:
- gameservers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- game.kruise.io
resources:
- gameservers/finalizers
verbs:
- update
- apiGroups:
- game.kruise.io
resources:
- gameservers/status
verbs:
- get
- patch
- update
- apiGroups:
- game.kruise.io
resources:
- gameserversets
verbs:
- create
@ -130,14 +149,54 @@ rules:
- apiGroups:
- game.kruise.io
resources:
- gameservers/finalizers
- gameserversets/finalizers
verbs:
- update
- apiGroups:
- game.kruise.io
resources:
- gameservers/status
- gameserversets/status
verbs:
- get
- patch
- update
- apiGroups:
- networking.cloud.tencent.com
resources:
- dedicatedclblisteners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- networking.cloud.tencent.com
resources:
- dedicatedclblisteners/status
verbs:
- get
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- get
- patch
- update


@ -0,0 +1,2 @@
resources:
- service.yaml


@ -0,0 +1,12 @@
---
apiVersion: v1
kind: Service
metadata:
name: external-scaler
namespace: kruise-game-system
spec:
ports:
- port: 6000
targetPort: 6000
selector:
control-plane: controller-manager


@ -0,0 +1,297 @@
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: index-offset-scheduler
namespace: kruise-game-system
---
# clusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: 'true'
name: index-offset-scheduler
rules:
- apiGroups:
- ''
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resourceNames:
- kube-scheduler
- index-offset-scheduler
resources:
- leases
verbs:
- get
- list
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leasecandidates
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- pods
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ''
resources:
- bindings
- pods/binding
verbs:
- create
- apiGroups:
- ''
resources:
- pods/status
verbs:
- patch
- update
- apiGroups:
- ''
resources:
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups:
- apps
- extensions
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- storage.k8s.io
resources:
- csinodes
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- csidrivers
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- csistoragecapacities
verbs:
- get
- list
- watch
- apiGroups:
- ""
resourceNames:
- kube-scheduler
- index-offset-scheduler
resources:
- endpoints
verbs:
- delete
- get
- patch
- update
---
# ClusterRoleBinding: index-offset-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: index-offset-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
name: index-offset-scheduler
namespace: kruise-game-system
roleRef:
kind: ClusterRole
name: index-offset-scheduler
apiGroup: rbac.authorization.k8s.io
---
# ClusterRoleBinding: system:volume-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: index-offset-scheduler-as-volume-scheduler
subjects:
- kind: ServiceAccount
  name: index-offset-scheduler
  namespace: kruise-game-system
roleRef:
  kind: ClusterRole
  name: system:volume-scheduler
  apiGroup: rbac.authorization.k8s.io
---
# RoleBinding: extension-apiserver-authentication-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: index-offset-scheduler-extension-apiserver-authentication-reader
  namespace: kube-system
roleRef:
  kind: Role
  name: extension-apiserver-authentication-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: index-offset-scheduler
  namespace: kruise-game-system
---
# ConfigMap: scheduler configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: index-offset-scheduler-config
  namespace: kruise-game-system
data:
  scheduler-config.yaml: |
    # the v1 API is stable since Kubernetes 1.25
    apiVersion: kubescheduler.config.k8s.io/v1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: false
      resourceNamespace: kruise-game-system
      resourceName: index-offset-scheduler
    profiles:
    - schedulerName: index-offset-scheduler
      plugins:
        score:
          enabled:
          - name: index-offset-scheduler
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: index-offset-scheduler
  namespace: kruise-game-system
  labels:
    app: index-offset-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: index-offset-scheduler
  template:
    metadata:
      labels:
        app: index-offset-scheduler
    spec:
      serviceAccountName: index-offset-scheduler
      containers:
      - name: scheduler
        # change to your own image if needed
        image: openkruise/kruise-game-scheduler-index-offset:v1.0
        imagePullPolicy: Always
        command:
        - /app/index-offset-scheduler
        - --config=/etc/kubernetes/scheduler-config.yaml
        - --v=5
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
          limits:
            cpu: 500m
            memory: 512Mi
        volumeMounts:
        - name: config
          mountPath: /etc/kubernetes
      # imagePullSecrets:
      # - name: <your image pull secret>
      volumes:
      - name: config
        configMap:
          name: index-offset-scheduler-config
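Once this manifest is applied, a workload opts into the scheduler by setting `schedulerName` in its pod spec to match the profile name from the ConfigMap above. A minimal sketch (the pod name and image are illustrative, not part of this change):

```yaml
# Hypothetical pod scheduled by index-offset-scheduler.
# Only schedulerName is significant here; name and image are examples.
apiVersion: v1
kind: Pod
metadata:
  name: demo-game-server
  namespace: kruise-game-system
spec:
  schedulerName: index-offset-scheduler
  containers:
  - name: game
    image: nginx:alpine
```

Pods that omit `schedulerName` continue to be placed by the default kube-scheduler.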
@@ -0,0 +1,2 @@
resources:
- index-offset-scheduler.yaml
@@ -1,2 +1,6 @@
resources:
- manifests.yaml
- service.yaml
configurations:
- kustomizeconfig.yaml