Compare commits

..

116 Commits

Author SHA1 Message Date
ChrisLiu 293619796d
bugfix(Kubernetes-HostPort): allow pod update when node not found (#260)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-11 20:37:35 +08:00
ChrisLiu cac4ba793e
enhance: network trigger time adapts to different time zones (#259)
* enhance: network trigger time adapts to different time zones

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* update config of kruise-game manager for using cert-manager

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-10 16:39:53 +08:00
Kagaya 52d9a14c13
feat: add enable-cert-generation option (#245)
* add enable-cert-generation option

Signed-off-by: Kagaya <kagaya85@outlook.com>

* update webhook manifests config

Signed-off-by: Kagaya <kagaya85@outlook.com>

* e2e: install cert manager

Signed-off-by: Kagaya <kagaya85@outlook.com>

---------

Signed-off-by: Kagaya <kagaya85@outlook.com>
2025-07-08 21:33:54 +08:00
ChrisLiu f6d679cc75
bugfix: consider preDelete pods when scaling (#257)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-08 14:09:36 +08:00
Xuetao Song 90c0b68350
feat: support EnableMultiIngress for vke(#251) 2025-07-07 17:22:38 +08:00
ChrisLiu b82d7e34f7
feat: add PreDeleteReplicas for GameServerSet status (#254)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 22:13:17 +08:00
ChrisLiu 1a1c256460
fix the meaning of CURRENT printcolumn when using kubectl (#253)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 21:51:54 +08:00
ChrisLiu 19d8ce0b2c
bugfix: gs state should be changed from PreDelete to Deleting (#252)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-07-03 21:19:23 +08:00
ChrisLiu 5095740248
AlibabaCloud-AutoNLBs support multi intranet type eip (#248)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-30 10:55:28 +08:00
ChrisLiu 7dfe07097b
feat: support new plugin named AlibabaCloud-AutoNLBs (#246)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-26 17:27:03 +08:00
ChrisLiu 0ff70733c6
feat: support user-defined number of controller workers (#247)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-23 19:17:37 +08:00
roc fbcb3953c0
Add PersistentVolumeClaimRetentionPolicy support to GameServerSet (#243)
Signed-off-by: roc <roc@imroc.cc>
2025-06-20 18:09:20 +08:00
ChrisLiu 94a15fdb38
feat: add annotation of state-last-changed-time (#238)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-16 23:48:09 +08:00
roc a6ed1d95c4
feat(Kubernetes-HostPort): support TCPUDP protocol (#244)
Signed-off-by: roc <roc@imroc.cc>
2025-06-16 23:47:42 +08:00
Xuetao Song 9a04f87f5e
feat: volcengine-clb plugin support EnableClbScatter 2025-06-16 14:22:35 +08:00
Xuetao Song f175e0d73c
fix duplicated port for Volcengine-CLB plugin (#240) 2025-06-13 14:04:32 +08:00
roc 1414654f46
Upgrade the TencentCloud-CLB plugin (#239)
* upgrade tencentcloud clb plugin

* deprecate DedicatedCLBListener CRD
* use CLBPortPool's pod annotation

Signed-off-by: roc <roc@imroc.cc>

* add comments

Signed-off-by: roc <roc@imroc.cc>

---------

Signed-off-by: roc <roc@imroc.cc>
2025-06-10 21:34:58 +08:00
lizhipeng629 7136738627
fix old svc remain after pod recreate when using Volcengine-CLB (#233)
feat(*): check pod uid in svc

fix: add pod create time in svc

Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2025-06-06 17:00:09 +08:00
ChrisLiu 6dbab6be15
fix go-lint err (#237)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 13:32:19 +08:00
ChrisLiu 1ca95a5c36
cancel the limit of Ali NLB port range (#235)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 12:08:42 +08:00
ChrisLiu 40c7bba35e
enhance: AlibabaCloud-SLB-SharedPort plugin support managed services (#224)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 11:55:53 +08:00
ChrisLiu 51a82bd107
enhance: Kubernetes-HostPort support container port same as host (#230)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-06-06 11:49:42 +08:00
ChrisLiu a64b21eab5
enhance: activity of externalscaler relate to minAvailable (#228)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 18:08:11 +08:00
ChrisLiu 4e6ae2e2d0
fix the external scaler error when minAvailable is 0 (#227)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 17:37:40 +08:00
ChrisLiu f2044b8f1a
fix: update ppmHash when ServiceQualities changed (#226)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 14:04:51 +08:00
ChrisLiu 9c4ce841c3
fix: support auto-scaling when replicas is 0 (#225)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-05-19 11:51:35 +08:00
Kagaya 5180743458
feat: support minAvailable percentage type (#222)
Signed-off-by: Kagaya <kagaya85@outlook.com>
2025-05-12 20:04:27 +08:00
陈欣宇 7c51b24e6e
feat(metrics): improve observability for GameServersOpsStateCount metrics (#221)
* feat(metrics): improve observability

add gssName and namespace labels for the metric okg_gameservers_opsState_count to improve observability

* fix: remove gssName Compare

---------

Co-authored-by: 陈欣宇 <chenxinyu@YJ-IT-02836.local>
2025-05-06 20:49:41 +08:00
Xuetao Song fc88742857
add doc of Volcengine-EIP (#219) 2025-04-30 17:26:11 +08:00
Xuetao Song d04f8d0a7a
feat(*): add eip provider of VKE (#218) 2025-04-27 15:33:36 +08:00
ChrisLiu 5a272eaec3
enhance: add network ready condition for AlibabaCloud-Multi-NLBs plugin (#214)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-24 15:32:54 +08:00
ChrisLiu 6d5f041afc
enhance: support svc external traffic policy for AlibabaCloud-Multi-NLBs (#216)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-24 15:32:09 +08:00
berg 3da984ff96
ServiceQualities support serverless pod (#212) 2025-04-22 20:21:32 +08:00
ChrisLiu 897e706a85
update ci workflow to ubuntu-24.04 (#215)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-21 16:22:27 +08:00
Kagaya 624d17ff11
feat: support range type for ReserveIDs (#209) 2025-04-21 15:31:16 +08:00
ChrisLiu d038737580
feat: support multi groups for nlbs (#213)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-04-14 16:40:38 +08:00
Kagaya f2d02a6ab2
deps: update to k8s 0.30.10 (#210) 2025-04-14 15:04:10 +08:00
ChrisLiu 0bfc500fec
enhance: create service of ali-multi-nlbs in parallel (#207)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-03-24 10:51:48 +08:00
ChrisLiu 6133bab818
update workflow ci go cache to v4 (#206)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-03-12 20:33:59 +08:00
LHB6540 a2a0864f27
Add index-offset-scheduler (#205)
Co-authored-by: 李海彬 <lihaibin@goatgames.com>
2025-03-12 18:22:50 +08:00
ChrisLiu 0b3575947b
Increase the upper limit of ali-nlb ports (#204)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-02-27 17:44:08 +08:00
Gao PeiLiang aaa63740a4
Add hwcloud provider and elb plugin (#201)
* add hwcloud ELB Network Plugin

* add hwcloud cloud provider register

* fix register error

* fix error

* add log

* fix hwcloud provider register error

* fix health check error

* only support using an existing ELB

* add docs

* add hwcloud elb config

* fix docs
2025-02-12 17:46:54 +08:00
Durgin 2ea11c1cb3
feat: add annotation of opsState-last-changed-time (#200)
- add annotation `game.kruise.io/opsState-last-changed-time`

#199
2025-02-08 18:01:06 +08:00
Gao PeiLiang 8079c29c22
alibabacloud slb support mapping the same TCP and UDP port, e.g. 8000/TCPUDP (#197)
* create svc using port + protocol as the name, to fix the case of using the same port with different protocols

* alibabacloud slb support TCP/UDP

* add log info

* fix alibabacloud slb init same port svc

* add doc

* clean up log printing to avoid excessive output
2025-02-06 16:58:17 +08:00
Gao PeiLiang f0c82f1b1f
add support svc external traffic policy for alibabacloud slb (#194)
* add test log

* add support svc external traffic policy for alibabacloud slb

* fix error

* add e2e test timeout

* add aliyun slb param ExternalTrafficPolicyType doc
2025-01-16 10:44:24 +08:00
roc 8c229c1191
add rbac role for tencentcloud provider (#193)
Signed-off-by: roc <roc@imroc.cc>
2025-01-08 19:07:39 +08:00
roc 65d230658e
add tencentcloud in config.yaml (#192)
Signed-off-by: roc <roc@imroc.cc>
2025-01-08 15:51:18 +08:00
ChrisLiu 41c76a0d7a
Update CHANGELOG.md for v0.10.0 2025-01-08 15:21:50 +08:00
ChrisLiu be2b9065d8
feat: Add new networkType named AlibabaCloud-Multi-NLBs (#187)
* feat: Add new networkType named AlibabaCloud-Multi-NLBs

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* support same port of tcp&udp

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-01-07 17:36:44 +08:00
ChrisLiu ea98123211
feat: add maxAvailable param for external scaler (#190)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2025-01-07 17:36:10 +08:00
roc 7976e9002e
enhance: support network isolation for tencentcloud clb plugin (#183)
Signed-off-by: roc <roc@imroc.cc>
2024-11-12 23:04:23 +08:00
lizhipeng629 b841f0d313
fix:add block port in volc engine (#182)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-11-12 11:11:31 +08:00
ChrisLiu 51aad5b0a0
Semantic fixes for network port ranges (#181)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-06 19:59:25 +08:00
hhr 6bba287858
feat: add jdcloud provider and the nlb&eip plugin (#180) 2024-11-05 17:11:36 +08:00
ChrisLiu 468b2c77fb
enhance: add block ports config for AlibabaCloud LB network models (#175)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:11:00 +08:00
ChrisLiu c114781c7e
reconstruct the logic of GameServers scaling (#171)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:10:50 +08:00
ChrisLiu 41d902a8f2
update kruise-api to v1.7.1 (#173)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-11-01 17:10:37 +08:00
roc c680b411b7
feat(*): add tencent cloud provider and clb plugin (#179)
* feat(*): add tencent cloud provider and clb plugin

Signed-off-by: rockerchen <rockerchen@tencent.com>
2024-10-29 14:13:11 +08:00
ChrisLiu e121bcc109
add user logo of jjworld (#178)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-10-22 20:20:45 +08:00
ChrisLiu ecce453d7f
Update CHANGELOG.md for v0.9.0 2024-08-20 18:06:16 +08:00
ChrisLiu a1d0065e0c
enhance: service quality support patch labels & annotations (#159)
* enhance: service quality support patch labels & annotations

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* remove ci markdownlint check

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:45:03 +08:00
ChrisLiu 92475c1451
enhance: labels from gs can be synced to pod (#160)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:44:48 +08:00
ChrisLiu 26fcaf7889
feat: add lifecycle field for gameserverset (#162)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:44:26 +08:00
ChrisLiu 14e281dfa9
fix old svc remain after pod recreate when using ali-lb models (#165)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-16 10:43:50 +08:00
ChrisLiu ba65115f08
add users (#167)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-08-08 11:33:03 +08:00
clarklee b3991f24d5
fix: AmazonWebServices-NLB controller parameter modification and doc update (#164)
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-07-19 19:49:41 +08:00
ChrisLiu f9467003b5
add user yongshi (#158)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-07-03 14:23:46 +08:00
clarklee92 08ce8f6fcd Upgrade Golang to version 1.21
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
clarklee92 ad0744df7a Fix go-lint error
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
clarklee92 782a250dd7 feat: add AmazonWebServices-NLB network plugin
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
2024-06-21 22:54:32 +08:00
ChrisLiu 3c8ddbdd4e
Fix the allocation error when Ali loadbalancers reach the limit of port numbers (#149)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:19:32 +08:00
ChrisLiu adff8bdd54
Enhance: support custom health checks for AlibabaCloud-SLB (#154)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:17:39 +08:00
ChrisLiu d911eb3cd8
enhance: Kubernetes-NodePort supports network disabled (#156)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:13:58 +08:00
ChrisLiu 7d8e169d0c
enhance: check networkType when create GameServerSet (#157)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-06-18 15:12:40 +08:00
ChrisLiu b6fdc2353e
Enhance: support custom health checks for AlibabaCloud-NLB (#147)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-05-22 19:20:35 +08:00
ChrisLiu cafaab3216
Add users of OKG (#145)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-05-10 12:43:29 +08:00
李志朋 88dc6f5f97 add changelog in v0.8.0 2024-04-26 19:01:59 +08:00
lizhipeng629 eb547228b6
add v0.8.0 changelog (#143)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-26 17:21:19 +08:00
lizhipeng629 ffccbc5023
enhance: add AllocateLoadBalancerNodePorts in clb plugin (#141)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-26 11:24:00 +08:00
ChrisLiu d67e058e0f
feat: sync annotations from gs to pod (#140)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:23:13 +08:00
ChrisLiu d547f61323
feat: add Kubernetes-NodePort network plugin (#138)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:45 +08:00
ChrisLiu 56d9071c01
enhance: Kubernetes-HostPort plugin support to wait for network ready (#136)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:29 +08:00
ChrisLiu 4829414955
feat: add AlibabaCloud-NLB network plugin (#135)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-04-26 11:22:02 +08:00
lizhipeng629 53b69204e3
enhance: add annotations config in clb plugin (#137)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-04-18 11:15:04 +08:00
ChrisLiu 69babe66fe
replace patch asts with update (#131)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-03-27 10:25:07 +08:00
ChrisLiu 2dd97c2567
FailurePolicy of PodMutatingWebhook turn to Fail (#129)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-02-22 15:48:20 +08:00
lizhipeng629 9c203d01c9
feat(*): add volcengine provider and clb plugin (#127)
Co-authored-by: 李志朋 <lizhipeng.629@bytedance.com>
2024-01-29 16:36:23 +08:00
ChrisLiu 2b6ce6fcfc
fix: avoid patching gameserver continuously (#124)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2024-01-18 17:00:31 +08:00
Whislly 02c00091d2
add "apiVersion: game.kruise.io/v1alpha1" (#122) 2024-01-03 15:08:40 +08:00
ChrisLiu 4eab53302e
Update CHANGELOG.md for v0.7.0 2023-12-29 11:42:50 +08:00
ChrisLiu b399cd104b
bugfix: patch pod image fail when gs image is nil (#121)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-28 19:49:54 +08:00
ChrisLiu 5a15888804
feat: add ReclaimPolicy for GameServer (#115)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-28 10:40:02 +08:00
ChrisLiu ecf81d9259
feat: differentiated updates to GameServers (#120)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-27 15:28:10 +08:00
ChrisLiu 4013d3e597
enhance: ServiceQuality supports multiple results returned by a single probe (#117)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-27 15:22:07 +08:00
ChrisLiu 1181b22e4e
add users wanglong&lbdj (#118)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-12-18 15:32:22 +08:00
ChrisLiu 250bd86ffb
Update CHANGELOG.md for v0.6.0 2023-10-27 11:09:04 +08:00
ChrisLiu 4ea376f9bc
feat: add GameServerConditions (#95)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:11:32 +08:00
ChrisLiu 58cd242780
feat: add network plugin AlibabaCloud-NLB-SharedPort & support AllowNotReadyContainers (#98)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:09:03 +08:00
ChrisLiu 758eb33911
hostport network not ready when no ports exist (#100)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-26 11:07:14 +08:00
hongdou 6513d8ef2a
add qps and burst settings (#108) 2023-10-26 11:06:06 +08:00
ChrisLiu bd41239718
update go version to 1.19 & e2e dependency version (#104)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-17 11:20:59 +08:00
ChrisLiu ddcfeb759f
add OKG users: wuduan & juren (#99)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-10-03 15:00:46 +08:00
ChrisLiu 4d003d6ee9
add gameserverset controller unit tests (#97)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-09-13 15:38:09 +08:00
ChrisLiu 72ee4ca111
feat: support auto scaling-up based on minAvailable (#88)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-09-06 20:38:33 +08:00
ChrisLiu 4ec1a65e3a
fix AlibabaCloud-NATGW network ready condition when multi-ports (#94)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-30 10:48:41 +08:00
ChrisLiu 3d523f9611
Update CHANGELOG.md for v0.5.0 2023-08-11 11:16:36 +08:00
ChrisLiu 649ab11773
Feat/svc name (#92)
* feat: add new field ServiceName for GameServerSet

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

* fix log for kill gs

Signed-off-by: ChrisLiu <chrisliu1995@163.com>

---------

Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-09 10:00:27 +08:00
ChrisLiu 3f5a1ea13f
feat: AlibabaCloud-EIP support to define EIP name & description (#91)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-04 15:25:46 +08:00
ChrisLiu acc12c12f7
feat: add new opsState type named Kill (#90)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-08-04 15:25:33 +08:00
ChrisLiu 1b05801484
feat: add new opsState named Allocated (#89)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-28 14:49:20 +08:00
ChrisLiu 6f89f6e637
add network plugin AlibabaCloud-EIP (#86)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-24 14:49:34 +08:00
ChrisLiu defcb15f02
refactor NetworkPortRange into a pointer (#87)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-18 16:28:16 +08:00
ChrisLiu dcc3d8260c
support to sync gs metadata from gsTemplate (#85)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-17 17:02:33 +08:00
ChrisLiu c4e4197b12
correct gs network status when pod network status is nil (#80)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:25:33 +08:00
ChrisLiu 0e6708dce9
enhance pod scaling efficiency (#81)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:24:39 +08:00
ChrisLiu 2be1717984
improve hostport cache to record allocated ports of pod (#82)
Signed-off-by: ChrisLiu <chrisliu1995@163.com>
2023-07-12 10:20:20 +08:00
163 changed files with 23270 additions and 2565 deletions

View File

@ -10,8 +10,8 @@ on:
env:
# Common versions
GO_VERSION: '1.18'
GOLANGCI_VERSION: 'v1.47'
GO_VERSION: '1.22'
GOLANGCI_VERSION: 'v1.58'
DOCKER_BUILDX_VERSION: 'v0.4.2'
# Common users. We can't run a step 'if secrets.AWS_USR != ""' but we can run
@ -23,7 +23,7 @@ env:
jobs:
golangci-lint:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v3
@ -34,7 +34,7 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@ -43,27 +43,14 @@ jobs:
run: |
make generate
- name: Lint golang code
uses: golangci/golangci-lint-action@v3.2.0
uses: golangci/golangci-lint-action@v3.5.0
with:
version: ${{ env.GOLANGCI_VERSION }}
args: --verbose --timeout=10m
skip-pkg-cache: true
markdownlint-misspell-shellcheck:
runs-on: ubuntu-20.04
# this image is build from Dockerfile
# https://github.com/pouchcontainer/pouchlinter/blob/master/Dockerfile
container: pouchcontainer/pouchlinter:v0.1.2
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Run misspell
run: find ./* -name "*" | grep -v vendor | xargs misspell -error
- name: Run shellcheck
run: find ./ -name "*.sh" | grep -v vendor | xargs shellcheck
unit-tests:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:
@ -75,7 +62,7 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}

View File

@ -1,4 +1,4 @@
name: E2E-1.24
name: E2E-1.26
on:
push:
@ -10,16 +10,16 @@ on:
env:
# Common versions
GO_VERSION: '1.18'
KIND_ACTION_VERSION: 'v1.3.0'
KIND_VERSION: 'v0.14.0'
KIND_IMAGE: 'kindest/node:v1.24.2'
GO_VERSION: '1.22'
KIND_VERSION: 'v0.18.0'
KIND_IMAGE: 'kindest/node:v1.26.4'
KIND_CLUSTER_NAME: 'ci-testing'
CERT_MANAGER_VERSION: 'v1.18.2'
jobs:
game-kruise:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:
@ -40,6 +40,10 @@ jobs:
export IMAGE="openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Cert-Manager
run: |
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${{ env.CERT_MANAGER_VERSION }}/cert-manager.yaml
kubectl -n cert-manager rollout status deploy/cert-manager-webhook --timeout=180s
- name: Install Kruise
run: |
set -ex
@ -47,7 +51,7 @@ jobs:
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.3.0 --set featureGates="PodProbeMarkerGate=true"
helm install kruise openkruise/kruise --version 1.5.0
for ((i=1;i<10;i++));
do
set +e

117
.github/workflows/e2e-1.30.yaml vendored Normal file
View File

@ -0,0 +1,117 @@
name: E2E-1.30
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.22'
KIND_VERSION: 'v0.22.0'
KIND_IMAGE: 'kindest/node:v1.30.8'
KIND_CLUSTER_NAME: 'ci-testing'
CERT_MANAGER_VERSION: 'v1.18.2'
jobs:
game-kruise:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.12.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
version: ${{ env.KIND_VERSION }}
- name: Build image
run: |
export IMAGE="openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Cert-Manager
run: |
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${{ env.CERT_MANAGER_VERSION }}/cert-manager.yaml
kubectl -n cert-manager rollout status deploy/cert-manager-webhook --timeout=180s
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.3
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Game
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-game-manager:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-game-system | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-game-system | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-game-system -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-game ready successfully"
else
echo "Timeout to wait for kruise-game ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v test/e2e
retVal=$?
# kubectl get pod -n kruise-game-system --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-game-system
restartCount=$(kubectl get pod -n kruise-game-system --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-game has not restarted"
else
kubectl get pod -n kruise-game-system --no-headers
echo "Kruise-game has restarted, abort!!!"
kubectl get pod -n kruise-game-system --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-game-system
exit 1
fi
exit $retVal

1
.gitignore vendored
View File

@ -23,3 +23,4 @@ testbin/*
*.swp
*.swo
*~
.vscode

View File

@ -1,5 +1,98 @@
# Change Log
## v0.10.0
> Change log since v0.9.0
### Features & Enhancements
- Feat: add tencent cloud provider and clb plugin. https://github.com/openkruise/kruise-game/pull/179
- Enhance: update kruise-api to v1.7.1 https://github.com/openkruise/kruise-game/pull/173
- Enhance: add block ports config for AlibabaCloud LB network models. https://github.com/openkruise/kruise-game/pull/175
- Enhance: add block port in volc engine. https://github.com/openkruise/kruise-game/pull/182
- Feat: add jdcloud provider and the nlb&eip plugin. https://github.com/openkruise/kruise-game/pull/180
- Enhance: support network isolation for tencentcloud clb plugin. https://github.com/openkruise/kruise-game/pull/183
- Feat: add maxAvailable param for external scaler. https://github.com/openkruise/kruise-game/pull/190
- Feat: add new networkType named AlibabaCloud-Multi-NLBs. https://github.com/openkruise/kruise-game/pull/187
### Bug Fixes
- Reconstruct the logic of GameServers scaling. https://github.com/openkruise/kruise-game/pull/171
- Semantic fixes for network port ranges. https://github.com/openkruise/kruise-game/pull/181
## v0.9.0
> Change log since v0.8.0
### Features & Enhancements
- Enhance: support custom health checks for AlibabaCloud-NLB. https://github.com/openkruise/kruise-game/pull/147
- Feat: add AmazonWebServices-NLB network plugin. https://github.com/openkruise/kruise-game/pull/150
- Enhance: support custom health checks for AlibabaCloud-SLB. https://github.com/openkruise/kruise-game/pull/154
- Enhance: Kubernetes-NodePort supports network disabled. https://github.com/openkruise/kruise-game/pull/156
- Enhance: check networkType when create GameServerSet. https://github.com/openkruise/kruise-game/pull/157
- Enhance: service quality support patch labels & annotations. https://github.com/openkruise/kruise-game/pull/159
- Enhance: labels from gs can be synced to pod. https://github.com/openkruise/kruise-game/pull/160
- Feat: add lifecycle field for gameserverset. https://github.com/openkruise/kruise-game/pull/162
### Bug Fixes
- Fix the allocation error when Ali loadbalancers reach the limit of port numbers. https://github.com/openkruise/kruise-game/pull/149
- Fix: AmazonWebServices-NLB controller parameter modification. https://github.com/openkruise/kruise-game/pull/164
- Fix old svc remaining after pod recreation when using ali-lb models. https://github.com/openkruise/kruise-game/pull/165
## v0.8.0
> Change log since v0.7.0
### Features & Enhancements
- Add AlibabaCloud-NLB network plugin. https://github.com/openkruise/kruise-game/pull/135
- Add Volcengine-CLB network plugin. https://github.com/openkruise/kruise-game/pull/127
- Add Kubernetes-NodePort network plugin. https://github.com/openkruise/kruise-game/pull/138
- Sync annotations from gs to pod. https://github.com/openkruise/kruise-game/pull/140
- FailurePolicy of PodMutatingWebhook turn to Fail. https://github.com/openkruise/kruise-game/pull/129
- Replace patch asts with update. https://github.com/openkruise/kruise-game/pull/131
- Kubernetes-HostPort plugin supports waiting for network ready. https://github.com/openkruise/kruise-game/pull/136
- Add AllocateLoadBalancerNodePorts in clb plugin. https://github.com/openkruise/kruise-game/pull/141
### Bug Fixes
- Avoid patching gameserver continuously. https://github.com/openkruise/kruise-game/pull/124
## v0.7.0
> Change log since v0.6.0
### Features & Enhancements
- Add ReclaimPolicy for GameServer. https://github.com/openkruise/kruise-game/pull/115
- ServiceQuality supports multiple results returned by one probe. https://github.com/openkruise/kruise-game/pull/117
- Support differentiated updates to GameServers. https://github.com/openkruise/kruise-game/pull/120
### Bug Fixes
- Fix the failure to patch the pod image when gs image is nil. https://github.com/openkruise/kruise-game/pull/121
## v0.6.0
> Change log since v0.5.0
### Features & Enhancements
- Support auto scaling-up based on minAvailable. https://github.com/openkruise/kruise-game/pull/88
- Update go version to 1.19. https://github.com/openkruise/kruise-game/pull/104
- Add GameServerConditions. https://github.com/openkruise/kruise-game/pull/95
- Add network plugin AlibabaCloud-NLB-SharedPort. https://github.com/openkruise/kruise-game/pull/98
- Support AllowNotReadyContainers for network plugin. https://github.com/openkruise/kruise-game/pull/98
- Add qps and burst settings of controller-manager. https://github.com/openkruise/kruise-game/pull/108
### Bug Fixes
- Fix AlibabaCloud-NATGW network ready condition when multi-ports. https://github.com/openkruise/kruise-game/pull/94
- Hostport network should not be ready when no ports exist. https://github.com/openkruise/kruise-game/pull/100
## v0.5.0
> Change log since v0.4.0
### Features & Enhancements
- Improve hostport cache to record allocated ports of pod. https://github.com/openkruise/kruise-game/pull/82
- Enhance pod scaling efficiency. https://github.com/openkruise/kruise-game/pull/81
- Support to sync gs metadata from gsTemplate. https://github.com/openkruise/kruise-game/pull/85
- Refactor NetworkPortRange into a pointer. https://github.com/openkruise/kruise-game/pull/87
- Add network plugin AlibabaCloud-EIP. https://github.com/openkruise/kruise-game/pull/86
- Add new opsState type named Allocated. https://github.com/openkruise/kruise-game/pull/89
- Add new opsState type named Kill. https://github.com/openkruise/kruise-game/pull/90
- AlibabaCloud-EIP support to define EIP name & description. https://github.com/openkruise/kruise-game/pull/91
- Support customized serviceName. https://github.com/openkruise/kruise-game/pull/92
### Bug Fixes
- correct gs network status when pod network status is nil. https://github.com/openkruise/kruise-game/pull/80
## v0.4.0
> Change log since v0.3.0
@ -42,8 +135,6 @@
- Avoid GameServerSet status sync failed when template metadata is not null. https://github.com/openkruise/kruise-game/pull/46
- Add marginal conditions to avoid fatal errors when scaling. https://github.com/openkruise/kruise-game/pull/49
---
## v0.2.0
> Change log since v0.1.0
@ -60,8 +151,6 @@
- AlibabaCloud-SLB
- AlibabaCloud-SLB-SharedPort
---
## v0.1.0
### Features

View File

@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.18 as builder
FROM golang:1.22.12 AS builder
WORKDIR /workspace
# Copy the Go Modules manifests

View File

@ -2,7 +2,7 @@
# Image URL to use all building/pushing images targets
IMG ?= kruise-game-manager:test
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.24.1
ENVTEST_K8S_VERSION = 1.30.0
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@ -114,7 +114,7 @@ ENVTEST ?= $(LOCALBIN)/setup-envtest
## Tool Versions
KUSTOMIZE_VERSION ?= v4.5.5
CONTROLLER_TOOLS_VERSION ?= v0.9.0
CONTROLLER_TOOLS_VERSION ?= v0.16.5
KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
@ -130,7 +130,7 @@ $(CONTROLLER_GEN): $(LOCALBIN)
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@c7e1dc9b5302d649d5531e19168dd7ea0013736d
HELM = $(shell pwd)/bin/helm
helm: ## Download helm locally if necessary.

View File

@ -58,12 +58,31 @@ OpenKruiseGame has the following core features:
<table style="border-collapse: collapse;">
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/bilibili-logo.png" width="120"> </center></td>
<td style="border: none;"><center><img src="./docs/images/hypergryph-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/shangyou-logo.jpeg" width="120" ></center></td>
<td style="border: none;"><center><img src="./docs/images/guanying-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/booming-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/xingzhe-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/lilith-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/hypergryph-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/jjworld-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/bilibili-logo.png" width="80"> </center></td>
<td style="border: none;"><center><img src="./docs/images/shangyou-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/yahaha-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/xingzhe-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/juren-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/baibian-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/chillyroom-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="./docs/images/wuduan-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/yostar-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/bekko-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/xingchao-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="./docs/images/wanglong-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/guanying-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/booming-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/gsshosting-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/yongshi-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/360-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="./docs/images/vma-logo.png" width="80" ></center> </td>
</tr>
</table>

View File

@ -23,16 +23,18 @@ import (
)
const (
GameServerStateKey = "game.kruise.io/gs-state"
GameServerOpsStateKey = "game.kruise.io/gs-opsState"
GameServerUpdatePriorityKey = "game.kruise.io/gs-update-priority"
GameServerDeletePriorityKey = "game.kruise.io/gs-delete-priority"
GameServerDeletingKey = "game.kruise.io/gs-deleting"
GameServerNetworkType = "game.kruise.io/network-type"
GameServerNetworkConf = "game.kruise.io/network-conf"
GameServerNetworkDisabled = "game.kruise.io/network-disabled"
GameServerNetworkStatus = "game.kruise.io/network-status"
GameServerNetworkTriggerTime = "game.kruise.io/network-trigger-time"
GameServerStateKey = "game.kruise.io/gs-state"
GameServerOpsStateKey = "game.kruise.io/gs-opsState"
GameServerUpdatePriorityKey = "game.kruise.io/gs-update-priority"
GameServerDeletePriorityKey = "game.kruise.io/gs-delete-priority"
GameServerDeletingKey = "game.kruise.io/gs-deleting"
GameServerNetworkType = "game.kruise.io/network-type"
GameServerNetworkConf = "game.kruise.io/network-conf"
GameServerNetworkDisabled = "game.kruise.io/network-disabled"
GameServerNetworkStatus = "game.kruise.io/network-status"
GameServerNetworkTriggerTime = "game.kruise.io/network-trigger-time"
GameServerOpsStateLastChangedTime = "game.kruise.io/opsState-last-changed-time"
GameServerStateLastChangedTime = "game.kruise.io/state-last-changed-time"
)
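// Illustrative sketch, not part of this diff: reading the new last-changed-time
// annotations defined above. Placement in package v1alpha1, the assumption that the
// controller surfaces these keys on the pod's annotations, and the RFC3339 time
// layout are all mine, not taken from this change.
package v1alpha1

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// lastOpsStateChange returns the time recorded in the opsState-last-changed-time
// annotation, if present and parseable.
func lastOpsStateChange(pod *corev1.Pod) (time.Time, bool) {
	raw, ok := pod.Annotations[GameServerOpsStateLastChangedTime]
	if !ok {
		return time.Time{}, false
	}
	t, err := time.Parse(time.RFC3339, raw) // time layout is an assumption
	if err != nil {
		return time.Time{}, false
	}
	return t, true
}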
// GameServerSpec defines the desired state of GameServer
@ -41,18 +43,35 @@ type GameServerSpec struct {
UpdatePriority *intstr.IntOrString `json:"updatePriority,omitempty"`
DeletionPriority *intstr.IntOrString `json:"deletionPriority,omitempty"`
NetworkDisabled bool `json:"networkDisabled,omitempty"`
// Containers can be used to make the corresponding GameServer container fields
// different from the fields defined by GameServerTemplate in GameServerSetSpec.
Containers []GameServerContainer `json:"containers,omitempty"`
}
type GameServerContainer struct {
// Name indicates the name of the container to update.
Name string `json:"name"`
// Image indicates the image of the container to update.
// When Image updated, pod.spec.containers[*].image will be updated immediately.
Image string `json:"image,omitempty"`
// Resources indicates the resources of the container to update.
// When Resources is updated, pod.spec.containers[*].Resources will not be updated immediately;
// it will be updated when the pod is recreated.
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
}
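// Illustrative sketch, not part of this diff: overriding a single container's image
// through the new Containers field. The container and image names are placeholders.
package v1alpha1

// exampleImageOverride builds a GameServerSpec that hot-updates the "game" container's
// image; per the comments above, a Resources change would only apply on pod recreation.
func exampleImageOverride() GameServerSpec {
	return GameServerSpec{
		Containers: []GameServerContainer{
			{
				Name:  "game",                                  // placeholder container name
				Image: "registry.example.com/game-server:v1.1", // placeholder image
			},
		},
	}
}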
type GameServerState string
const (
Unknown GameServerState = "Unknown"
Creating GameServerState = "Creating"
Ready GameServerState = "Ready"
NotReady GameServerState = "NotReady"
Crash GameServerState = "Crash"
Updating GameServerState = "Updating"
Deleting GameServerState = "Deleting"
Unknown GameServerState = "Unknown"
Creating GameServerState = "Creating"
Ready GameServerState = "Ready"
NotReady GameServerState = "NotReady"
Crash GameServerState = "Crash"
Updating GameServerState = "Updating"
Deleting GameServerState = "Deleting"
PreDelete GameServerState = "PreDelete"
PreUpdate GameServerState = "PreUpdate"
)
type OpsState string
@ -61,6 +80,8 @@ const (
Maintaining OpsState = "Maintaining"
WaitToDelete OpsState = "WaitToBeDeleted"
None OpsState = "None"
Allocated OpsState = "Allocated"
Kill OpsState = "Kill"
)
type ServiceQuality struct {
@ -75,16 +96,23 @@ type ServiceQuality struct {
}
type ServiceQualityCondition struct {
Name string `json:"name"`
Status string `json:"status,omitempty"`
Name string `json:"name"`
Status string `json:"status,omitempty"`
// Result indicates the probe message returned by the script
Result string `json:"result,omitempty"`
LastProbeTime metav1.Time `json:"lastProbeTime,omitempty"`
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
LastActionTransitionTime metav1.Time `json:"lastActionTransitionTime,omitempty"`
}
type ServiceQualityAction struct {
State bool `json:"state"`
State bool `json:"state"`
// Result indicates the probe message returned by the script.
// When Result is defined, the action is executed only when the probe actually returns the corresponding Result.
Result string `json:"result,omitempty"`
GameServerSpec `json:",inline"`
Annotations map[string]string `json:"annotations,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
}
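// Illustrative sketch, not part of this diff: a ServiceQualityAction that fires only
// when the probe script returns "idle" (a placeholder result), marking the GameServer
// WaitToBeDeleted and patching a placeholder label. The OpsState field of
// GameServerSpec is not shown in this hunk and is assumed here.
package v1alpha1

var exampleIdleAction = ServiceQualityAction{
	State:  true,   // act when the probe condition evaluates to true
	Result: "idle", // placeholder probe result
	GameServerSpec: GameServerSpec{
		OpsState: WaitToDelete,
	},
	Labels: map[string]string{
		"game.example.com/idle": "true", // placeholder label key
	},
}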
// GameServerStatus defines the observed state of GameServer
@ -100,8 +128,39 @@ type GameServerStatus struct {
UpdatePriority *intstr.IntOrString `json:"updatePriority,omitempty"`
DeletionPriority *intstr.IntOrString `json:"deletionPriority,omitempty"`
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
// Conditions is an array of current observed GameServer conditions.
// +optional
Conditions []GameServerCondition `json:"conditions,omitempty" `
}
type GameServerCondition struct {
// Type is the type of the condition.
Type GameServerConditionType `json:"type"`
// Status is the status of the condition.
// Can be True, False, Unknown.
Status corev1.ConditionStatus `json:"status"`
// Last time we probed the condition.
// +optional
LastProbeTime metav1.Time `json:"lastProbeTime,omitempty"`
// Last time the condition transitioned from one status to another.
// +optional
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
// Unique, one-word, CamelCase reason for the condition's last transition.
// +optional
Reason string `json:"reason,omitempty"`
// Human-readable message indicating details about last transition.
// +optional
Message string `json:"message,omitempty"`
}
type GameServerConditionType string
const (
NodeNormal GameServerConditionType = "NodeNormal"
PersistentVolumeNormal GameServerConditionType = "PersistentVolumeNormal"
PodNormal GameServerConditionType = "PodNormal"
)
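// Illustrative sketch, not part of this diff: a small helper that consumers of the new
// Conditions field might use to check a single condition type.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
)

// isConditionTrue reports whether the condition of the given type exists and is True.
func isConditionTrue(status GameServerStatus, t GameServerConditionType) bool {
	for _, c := range status.Conditions {
		if c.Type == t && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}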
type NetworkStatus struct {
NetworkType string `json:"networkType,omitempty"`
InternalAddresses []NetworkAddress `json:"internalAddresses,omitempty"`
@ -123,9 +182,9 @@ const (
type NetworkAddress struct {
IP string `json:"ip"`
// TODO add IPv6
Ports []NetworkPort `json:"ports,omitempty"`
PortRange NetworkPortRange `json:"portRange,omitempty"`
EndPoint string `json:"endPoint,omitempty"`
Ports []NetworkPort `json:"ports,omitempty"`
PortRange *NetworkPortRange `json:"portRange,omitempty"`
EndPoint string `json:"endPoint,omitempty"`
}
type NetworkPort struct {

View File

@ -33,6 +33,11 @@ const (
GameServerSetReserveIdsKey = "game.kruise.io/reserve-ids"
AstsHashKey = "game.kruise.io/asts-hash"
PpmHashKey = "game.kruise.io/ppm-hash"
GsTemplateMetadataHashKey = "game.kruise.io/gsTemplate-metadata-hash"
)
const (
InplaceUpdateNotReadyBlocker = "game.kruise.io/inplace-update-not-ready-blocker"
)
// GameServerSetSpec defines the desired state of GameServerSet
@ -45,12 +50,19 @@ type GameServerSetSpec struct {
Replicas *int32 `json:"replicas"`
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
GameServerTemplate GameServerTemplate `json:"gameServerTemplate,omitempty"`
ReserveGameServerIds []int `json:"reserveGameServerIds,omitempty"`
ServiceQualities []ServiceQuality `json:"serviceQualities,omitempty"`
UpdateStrategy UpdateStrategy `json:"updateStrategy,omitempty"`
ScaleStrategy ScaleStrategy `json:"scaleStrategy,omitempty"`
Network *Network `json:"network,omitempty"`
GameServerTemplate GameServerTemplate `json:"gameServerTemplate,omitempty"`
ServiceName string `json:"serviceName,omitempty"`
ReserveGameServerIds []intstr.IntOrString `json:"reserveGameServerIds,omitempty"`
ServiceQualities []ServiceQuality `json:"serviceQualities,omitempty"`
UpdateStrategy UpdateStrategy `json:"updateStrategy,omitempty"`
ScaleStrategy ScaleStrategy `json:"scaleStrategy,omitempty"`
Network *Network `json:"network,omitempty"`
Lifecycle *appspub.Lifecycle `json:"lifecycle,omitempty"`
// PersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from
// the StatefulSet VolumeClaimTemplates. This requires the
// StatefulSetAutoDeletePVC feature gate to be enabled, which is alpha.
// +optional
PersistentVolumeClaimRetentionPolicy *kruiseV1beta1.StatefulSetPersistentVolumeClaimRetentionPolicy `json:"persistentVolumeClaimRetentionPolicy,omitempty"`
}
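// Illustrative sketch, not part of this diff: ReserveGameServerIds now takes
// intstr.IntOrString values, so single IDs and ranges can be mixed. The "10-15"
// range syntax is an assumption based on the "support range type for ReserveIDs"
// commit (#209), not something confirmed by this hunk.
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/util/intstr"
)

var exampleReserveIDs = []intstr.IntOrString{
	intstr.FromInt(3),          // reserve a single ID
	intstr.FromString("10-15"), // assumed range syntax
}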
type GameServerTemplate struct {
@ -58,8 +70,22 @@ type GameServerTemplate struct {
// +kubebuilder:validation:Schemaless
corev1.PodTemplateSpec `json:",inline"`
VolumeClaimTemplates []corev1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"`
// ReclaimPolicy indicates the reclaim policy for GameServer.
// Default is Cascade.
ReclaimPolicy GameServerReclaimPolicy `json:"reclaimPolicy,omitempty"`
}
type GameServerReclaimPolicy string
const (
// CascadeGameServerReclaimPolicy indicates that GameServer is deleted when the pod is deleted.
// The age of GameServer is exactly the same as that of the pod.
CascadeGameServerReclaimPolicy GameServerReclaimPolicy = "Cascade"
// DeleteGameServerReclaimPolicy indicates that GameServers will be deleted when replicas of GameServerSet decreases.
// The GameServer will not be deleted when the corresponding pod is deleted due to manual deletion, update, eviction, etc.
DeleteGameServerReclaimPolicy GameServerReclaimPolicy = "Delete"
)
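// Illustrative sketch, not part of this diff: a GameServerTemplate that keeps its
// GameServer across pod deletion, update, or eviction and reclaims it only on
// scale-in, per the Delete policy documented above. Pod template omitted for brevity.
package v1alpha1

var exampleTemplate = GameServerTemplate{
	ReclaimPolicy: DeleteGameServerReclaimPolicy,
}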
type Network struct {
NetworkType string `json:"networkType,omitempty"`
NetworkConf []NetworkConfParams `json:"networkConf,omitempty"`
@ -67,6 +93,10 @@ type Network struct {
type NetworkConfParams KVParams
const (
AllowNotReadyContainersNetworkConfName = "AllowNotReadyContainers"
)
type KVParams struct {
Name string `json:"name,omitempty"`
Value string `json:"value,omitempty"`
@ -161,6 +191,7 @@ type GameServerSetStatus struct {
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
MaintainingReplicas *int32 `json:"maintainingReplicas,omitempty"`
WaitToBeDeletedReplicas *int32 `json:"waitToBeDeletedReplicas,omitempty"`
PreDeleteReplicas *int32 `json:"preDeleteReplicas,omitempty"`
// LabelSelector is label selectors for query over pods that should match the replica count used by HPA.
LabelSelector string `json:"labelSelector,omitempty"`
}
@ -168,11 +199,12 @@ type GameServerSetStatus struct {
//+genclient
//+kubebuilder:object:root=true
//+kubebuilder:printcolumn:name="DESIRED",type="integer",JSONPath=".spec.replicas",description="The desired number of GameServers."
//+kubebuilder:printcolumn:name="CURRENT",type="integer",JSONPath=".status.replicas",description="The number of currently all GameServers."
//+kubebuilder:printcolumn:name="CURRENT",type="integer",JSONPath=".status.currentReplicas",description="The number of currently all GameServers."
//+kubebuilder:printcolumn:name="UPDATED",type="integer",JSONPath=".status.updatedReplicas",description="The number of GameServers updated."
//+kubebuilder:printcolumn:name="READY",type="integer",JSONPath=".status.readyReplicas",description="The number of GameServers ready."
//+kubebuilder:printcolumn:name="Maintaining",type="integer",JSONPath=".status.maintainingReplicas",description="The number of GameServers Maintaining."
//+kubebuilder:printcolumn:name="WaitToBeDeleted",type="integer",JSONPath=".status.waitToBeDeletedReplicas",description="The number of GameServers WaitToBeDeleted."
//+kubebuilder:printcolumn:name="PreDelete",type="integer",JSONPath=".status.preDeleteReplicas",description="The number of GameServers PreDelete."
//+kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp",description="The age of GameServerSet."
//+kubebuilder:subresource:status
//+kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.labelSelector

View File

@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2022 The Kruise Authors.
@ -23,6 +22,7 @@ package v1alpha1
import (
"github.com/openkruise/kruise-api/apps/pub"
"github.com/openkruise/kruise-api/apps/v1beta1"
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
@ -55,6 +55,39 @@ func (in *GameServer) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerCondition) DeepCopyInto(out *GameServerCondition) {
*out = *in
in.LastProbeTime.DeepCopyInto(&out.LastProbeTime)
in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerCondition.
func (in *GameServerCondition) DeepCopy() *GameServerCondition {
if in == nil {
return nil
}
out := new(GameServerCondition)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerContainer) DeepCopyInto(out *GameServerContainer) {
*out = *in
in.Resources.DeepCopyInto(&out.Resources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerContainer.
func (in *GameServerContainer) DeepCopy() *GameServerContainer {
if in == nil {
return nil
}
out := new(GameServerContainer)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GameServerList) DeepCopyInto(out *GameServerList) {
*out = *in
@ -157,7 +190,7 @@ func (in *GameServerSetSpec) DeepCopyInto(out *GameServerSetSpec) {
in.GameServerTemplate.DeepCopyInto(&out.GameServerTemplate)
if in.ReserveGameServerIds != nil {
in, out := &in.ReserveGameServerIds, &out.ReserveGameServerIds
*out = make([]int, len(*in))
*out = make([]intstr.IntOrString, len(*in))
copy(*out, *in)
}
if in.ServiceQualities != nil {
@ -174,6 +207,16 @@ func (in *GameServerSetSpec) DeepCopyInto(out *GameServerSetSpec) {
*out = new(Network)
(*in).DeepCopyInto(*out)
}
if in.Lifecycle != nil {
in, out := &in.Lifecycle, &out.Lifecycle
*out = new(pub.Lifecycle)
(*in).DeepCopyInto(*out)
}
if in.PersistentVolumeClaimRetentionPolicy != nil {
in, out := &in.PersistentVolumeClaimRetentionPolicy, &out.PersistentVolumeClaimRetentionPolicy
*out = new(v1beta1.StatefulSetPersistentVolumeClaimRetentionPolicy)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSetSpec.
@ -199,6 +242,11 @@ func (in *GameServerSetStatus) DeepCopyInto(out *GameServerSetStatus) {
*out = new(int32)
**out = **in
}
if in.PreDeleteReplicas != nil {
in, out := &in.PreDeleteReplicas, &out.PreDeleteReplicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSetStatus.
@ -224,6 +272,13 @@ func (in *GameServerSpec) DeepCopyInto(out *GameServerSpec) {
*out = new(intstr.IntOrString)
**out = **in
}
if in.Containers != nil {
in, out := &in.Containers, &out.Containers
*out = make([]GameServerContainer, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerSpec.
@ -259,6 +314,13 @@ func (in *GameServerStatus) DeepCopyInto(out *GameServerStatus) {
**out = **in
}
in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]GameServerCondition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GameServerStatus.
@ -339,7 +401,11 @@ func (in *NetworkAddress) DeepCopyInto(out *NetworkAddress) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.PortRange = in.PortRange
if in.PortRange != nil {
in, out := &in.PortRange, &out.PortRange
*out = new(NetworkPortRange)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkAddress.
@ -511,6 +577,20 @@ func (in *ServiceQuality) DeepCopy() *ServiceQuality {
func (in *ServiceQualityAction) DeepCopyInto(out *ServiceQualityAction) {
*out = *in
in.GameServerSpec.DeepCopyInto(&out.GameServerSpec)
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceQualityAction.

View File

@ -0,0 +1,96 @@
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// PodEIPSpec defines the desired state of PodEIP
type PodEIPSpec struct {
// +kubebuilder:validation:Required
AllocationID string `json:"allocationID"`
BandwidthPackageID string `json:"bandwidthPackageID,omitempty"`
// +kubebuilder:validation:Required
AllocationType AllocationType `json:"allocationType"`
}
// AllocationType ip type and release strategy
type AllocationType struct {
// +kubebuilder:default:=Auto
// +kubebuilder:validation:Required
Type IPAllocType `json:"type"`
// +kubebuilder:validation:Required
// +kubebuilder:validation:Enum=Follow;TTL;Never
ReleaseStrategy ReleaseStrategy `json:"releaseStrategy"`
ReleaseAfter string `json:"releaseAfter,omitempty"` // go type 5m0s
}
// +kubebuilder:validation:Enum=Auto;Static
// IPAllocType is the type for eip alloc strategy
type IPAllocType string
// IPAllocType
const (
IPAllocTypeAuto IPAllocType = "Auto"
IPAllocTypeStatic IPAllocType = "Static"
)
// +kubebuilder:validation:Enum=Follow;TTL;Never
// ReleaseStrategy is the type for eip release strategy
type ReleaseStrategy string
// ReleaseStrategy
const (
ReleaseStrategyFollow ReleaseStrategy = "Follow" // default policy
ReleaseStrategyTTL ReleaseStrategy = "TTL"
ReleaseStrategyNever ReleaseStrategy = "Never"
)
// PodEIPStatus defines the observed state of PodEIP
type PodEIPStatus struct {
// eni
NetworkInterfaceID string `json:"networkInterfaceID,omitempty"`
PrivateIPAddress string `json:"privateIPAddress,omitempty"`
// eip
EipAddress string `json:"eipAddress,omitempty"`
ISP string `json:"isp,omitempty"`
InternetChargeType string `json:"internetChargeType,omitempty"`
ResourceGroupID string `json:"resourceGroupID,omitempty"`
Name string `json:"name,omitempty"`
PublicIpAddressPoolID string `json:"publicIpAddressPoolID,omitempty"`
Status string `json:"status,omitempty"`
// BandwidthPackageID
BandwidthPackageID string `json:"bandwidthPackageID,omitempty"`
// PodLastSeen is the timestamp when the pod resource was last seen
PodLastSeen metav1.Time `json:"podLastSeen,omitempty"`
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// PodEIP is the Schema for the podeips API
type PodEIP struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec PodEIPSpec `json:"spec,omitempty"`
Status PodEIPStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// PodEIPList contains a list of PodEIP
type PodEIPList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []PodEIP `json:"items"`
}
func init() {
SchemeBuilder.Register(&PodEIP{}, &PodEIPList{})
}
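// Illustrative sketch, not part of this diff: a PodEIP that auto-allocates an address
// and releases it together with the pod. The object name and allocation ID are placeholders.
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func examplePodEIP() *PodEIP {
	return &PodEIP{
		ObjectMeta: metav1.ObjectMeta{Name: "game-0"}, // placeholder object name
		Spec: PodEIPSpec{
			AllocationID: "eip-xxxxxxxx", // placeholder; the field is marked required above
			AllocationType: AllocationType{
				Type:            IPAllocTypeAuto,
				ReleaseStrategy: ReleaseStrategyFollow,
			},
		},
	}
}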

View File

@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2022 The Kruise Authors.
@ -25,6 +24,21 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AllocationType) DeepCopyInto(out *AllocationType) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AllocationType.
func (in *AllocationType) DeepCopy() *AllocationType {
if in == nil {
return nil
}
out := new(AllocationType)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Entry) DeepCopyInto(out *Entry) {
*out = *in
@ -194,6 +208,97 @@ func (in *PodDNATStatus) DeepCopy() *PodDNATStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIP) DeepCopyInto(out *PodEIP) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIP.
func (in *PodEIP) DeepCopy() *PodEIP {
if in == nil {
return nil
}
out := new(PodEIP)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodEIP) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPList) DeepCopyInto(out *PodEIPList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]PodEIP, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPList.
func (in *PodEIPList) DeepCopy() *PodEIPList {
if in == nil {
return nil
}
out := new(PodEIPList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PodEIPList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPSpec) DeepCopyInto(out *PodEIPSpec) {
*out = *in
out.AllocationType = in.AllocationType
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPSpec.
func (in *PodEIPSpec) DeepCopy() *PodEIPSpec {
if in == nil {
return nil
}
out := new(PodEIPSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodEIPStatus) DeepCopyInto(out *PodEIPStatus) {
*out = *in
in.PodLastSeen.DeepCopyInto(&out.PodLastSeen)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodEIPStatus.
func (in *PodEIPStatus) DeepCopy() *PodEIPStatus {
if in == nil {
return nil
}
out := new(PodEIPStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PortMapping) DeepCopyInto(out *PortMapping) {
*out = *in

View File

@ -0,0 +1,544 @@
/*
Copyright 2025 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
AutoNLBsNetwork = "AlibabaCloud-AutoNLBs"
AliasAutoNLBs = "Auto-NLBs-Network"
ReserveNlbNumConfigName = "ReserveNlbNum"
EipTypesConfigName = "EipTypes"
ZoneMapsConfigName = "ZoneMaps"
MinPortConfigName = "MinPort"
MaxPortConfigName = "MaxPort"
BlockPortsConfigName = "BlockPorts"
NLBZoneMapsServiceAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-zone-maps"
NLBAddressTypeAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type"
IntranetEIPType = "intranet"
DefaultEIPType = "default"
)
type AutoNLBsPlugin struct {
gssMaxPodIndex map[string]int
mutex sync.RWMutex
}
type autoNLBsConfig struct {
minPort int32
maxPort int32
blockPorts []int32
zoneMaps string
reserveNlbNum int
targetPorts []int
protocols []corev1.Protocol
eipTypes []string
externalTrafficPolicy corev1.ServiceExternalTrafficPolicyType
*nlbHealthConfig
}
func (a *AutoNLBsPlugin) Name() string {
return AutoNLBsNetwork
}
func (a *AutoNLBsPlugin) Alias() string {
return AliasAutoNLBs
}
func (a *AutoNLBsPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
gssList := &gamekruiseiov1alpha1.GameServerSetList{}
err := c.List(ctx, gssList, &client.ListOptions{})
if err != nil {
log.Errorf("cannot list gameserverset in cluster because %s", err.Error())
return err
}
for _, gss := range gssList.Items {
if gss.Spec.Network != nil && gss.Spec.Network.NetworkType == AutoNLBsNetwork {
a.gssMaxPodIndex[gss.GetNamespace()+"/"+gss.GetName()] = int(*gss.Spec.Replicas)
nc, err := parseAutoNLBsConfig(gss.Spec.Network.NetworkConf)
if err != nil {
log.Errorf("pasrse config wronge because %s", err.Error())
return err
}
err = a.ensureServices(ctx, c, gss.GetNamespace(), gss.GetName(), nc)
if err != nil {
log.Errorf("ensure services error because %s", err.Error())
return err
}
}
}
return nil
}
func (a *AutoNLBsPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseAutoNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
a.ensureMaxPodIndex(pod)
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if err := a.ensureServices(ctx, c, pod.GetNamespace(), gssName, conf); err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
containerPorts := make([]corev1.ContainerPort, 0)
podIndex := util.GetIndexFromGsName(pod.GetName())
for i, port := range conf.targetPorts {
if conf.protocols[i] == ProtocolTCPUDP {
containerPortTCP := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: corev1.ProtocolTCP,
Name: "tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPortUDP := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: corev1.ProtocolUDP,
Name: "udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPorts = append(containerPorts, containerPortTCP, containerPortUDP)
} else {
containerPort := corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: conf.protocols[i],
Name: strings.ToLower(string(conf.protocols[i])) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port),
}
containerPorts = append(containerPorts, containerPort)
}
}
pod.Spec.Containers[0].Ports = containerPorts
lenRange := int(conf.maxPort) - int(conf.minPort) - len(conf.blockPorts) + 1
svcIndex := podIndex / (lenRange / len(conf.targetPorts))
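// Illustrative worked example: with the default minPort=1000, maxPort=1499 and no
// blockPorts, lenRange is 500; for two target ports each service covers 500/2 = 250
// pod indexes, so pods 0-249 map to service index 0, pods 250-499 to index 1, and so on.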
for _, eipType := range conf.eipTypes {
svcName := gssName + "-" + eipType + "-" + strconv.Itoa(svcIndex)
pod.Spec.ReadinessGates = append(pod.Spec.ReadinessGates, corev1.PodReadinessGate{
ConditionType: corev1.PodConditionType(PrefixReadyReadinessGate + svcName),
})
}
return pod, nil
}
func (a *AutoNLBsPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseAutoNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
_, readyCondition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if readyCondition == nil || readyCondition.Status == corev1.ConditionFalse {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
var internalPorts []gamekruiseiov1alpha1.NetworkPort
var externalPorts []gamekruiseiov1alpha1.NetworkPort
endPoints := ""
podIndex := util.GetIndexFromGsName(pod.GetName())
lenRange := int(conf.maxPort) - int(conf.minPort) - len(conf.blockPorts) + 1
svcIndex := podIndex / (lenRange / len(conf.targetPorts))
for i, eipType := range conf.eipTypes {
svcName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey] + "-" + eipType + "-" + strconv.Itoa(svcIndex)
svc := &corev1.Service{}
err := c.Get(ctx, types.NamespacedName{
Name: svcName,
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if len(svc.Status.LoadBalancer.Ingress) == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
endPoints = endPoints + svc.Status.LoadBalancer.Ingress[0].Hostname + "/" + eipType
if i == len(conf.eipTypes)-1 {
for i, port := range conf.targetPorts {
if conf.protocols[i] == ProtocolTCPUDP {
portNameTCP := "tcp-" + strconv.Itoa(podIndex) + strconv.Itoa(port)
portNameUDP := "udp-" + strconv.Itoa(podIndex) + strconv.Itoa(port)
iPort := intstr.FromInt(port)
internalPorts = append(internalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portNameTCP,
Protocol: corev1.ProtocolTCP,
Port: &iPort,
}, gamekruiseiov1alpha1.NetworkPort{
Name: portNameUDP,
Protocol: corev1.ProtocolUDP,
Port: &iPort,
})
for _, svcPort := range svc.Spec.Ports {
if svcPort.Name == portNameTCP || svcPort.Name == portNameUDP {
ePort := intstr.FromInt32(svcPort.Port)
externalPorts = append(externalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portNameTCP,
Protocol: corev1.ProtocolTCP,
Port: &ePort,
}, gamekruiseiov1alpha1.NetworkPort{
Name: portNameUDP,
Protocol: corev1.ProtocolUDP,
Port: &ePort,
})
break
}
}
} else {
portName := strings.ToLower(string(conf.protocols[i])) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(port)
iPort := intstr.FromInt(port)
internalPorts = append(internalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portName,
Protocol: conf.protocols[i],
Port: &iPort,
})
for _, svcPort := range svc.Spec.Ports {
if svcPort.Name == portName {
ePort := intstr.FromInt32(svcPort.Port)
externalPorts = append(externalPorts, gamekruiseiov1alpha1.NetworkPort{
Name: portName,
Protocol: conf.protocols[i],
Port: &ePort,
})
break
}
}
}
}
} else {
endPoints = endPoints + ","
}
}
networkStatus = &gamekruiseiov1alpha1.NetworkStatus{
InternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Status.PodIP,
Ports: internalPorts,
},
},
ExternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
EndPoint: endPoints,
Ports: externalPorts,
},
},
CurrentNetworkState: gamekruiseiov1alpha1.NetworkReady,
}
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (a *AutoNLBsPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
autoNLBsPlugin := AutoNLBsPlugin{
mutex: sync.RWMutex{},
gssMaxPodIndex: make(map[string]int),
}
alibabaCloudProvider.registerPlugin(&autoNLBsPlugin)
}
func (a *AutoNLBsPlugin) ensureMaxPodIndex(pod *corev1.Pod) {
a.mutex.Lock()
defer a.mutex.Unlock()
podIndex := util.GetIndexFromGsName(pod.GetName())
gssNsName := pod.GetNamespace() + "/" + pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if podIndex > a.gssMaxPodIndex[gssNsName] {
a.gssMaxPodIndex[gssNsName] = podIndex
}
}
func (a *AutoNLBsPlugin) checkSvcNumToCreate(namespace, gssName string, config *autoNLBsConfig) int {
a.mutex.RLock()
defer a.mutex.RUnlock()
lenRange := int(config.maxPort) - int(config.minPort) - len(config.blockPorts) + 1
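// Illustrative worked example: with maxPort=2500, minPort=1000, no blockPorts and two
// target ports, lenRange is 1501 and each service covers 1501/2 = 750 pod indexes;
// a max pod index of 1499 with reserveNlbNum=2 then yields 1499/750 + 2 + 1 = 4 services.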
expectSvcNum := a.gssMaxPodIndex[namespace+"/"+gssName]/(lenRange/len(config.targetPorts)) + config.reserveNlbNum + 1
return expectSvcNum
}
func (a *AutoNLBsPlugin) ensureServices(ctx context.Context, client client.Client, namespace, gssName string, config *autoNLBsConfig) error {
expectSvcNum := a.checkSvcNumToCreate(namespace, gssName, config)
for _, eipType := range config.eipTypes {
for j := 0; j < expectSvcNum; j++ {
// get svc
svcName := gssName + "-" + eipType + "-" + strconv.Itoa(j)
svc := &corev1.Service{}
err := client.Get(ctx, types.NamespacedName{
Name: svcName,
Namespace: namespace,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// create svc
toAddSvc := a.consSvc(namespace, gssName, eipType, j, config)
if err := setSvcOwner(client, ctx, toAddSvc, namespace, gssName); err != nil {
return err
} else {
if err := client.Create(ctx, toAddSvc); err != nil {
return err
}
}
} else {
return err
}
}
}
}
return nil
}
func (a *AutoNLBsPlugin) consSvcPorts(svcIndex int, config *autoNLBsConfig) []corev1.ServicePort {
lenRange := int(config.maxPort) - int(config.minPort) - len(config.blockPorts) + 1
ports := make([]corev1.ServicePort, 0)
toAllocatedPort := config.minPort
portNumPerPod := lenRange / len(config.targetPorts)
for podIndex := svcIndex * portNumPerPod; podIndex < (svcIndex+1)*portNumPerPod; podIndex++ {
for i, protocol := range config.protocols {
if protocol == ProtocolTCPUDP {
svcPortTCP := corev1.ServicePort{
Name: "tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString("tcp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: corev1.ProtocolTCP,
}
svcPortUDP := corev1.ServicePort{
Name: "udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString("udp-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: corev1.ProtocolUDP,
}
ports = append(ports, svcPortTCP, svcPortUDP)
} else {
svcPort := corev1.ServicePort{
Name: strings.ToLower(string(protocol)) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i]),
TargetPort: intstr.FromString(strings.ToLower(string(protocol)) + "-" + strconv.Itoa(podIndex) + "-" + strconv.Itoa(config.targetPorts[i])),
Port: toAllocatedPort,
Protocol: protocol,
}
ports = append(ports, svcPort)
}
toAllocatedPort++
for util.IsNumInListInt32(toAllocatedPort, config.blockPorts) {
toAllocatedPort++
}
}
}
return ports
}
func (a *AutoNLBsPlugin) consSvc(namespace, gssName, eipType string, svcIndex int, conf *autoNLBsConfig) *corev1.Service {
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
//SlbConfigHashKey: util.GetHash(conf),
NLBZoneMapsServiceAnnotationKey: conf.zoneMaps,
LBHealthCheckFlagAnnotationKey: conf.lBHealthCheckFlag,
}
if conf.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = conf.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = conf.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = conf.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = conf.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = conf.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = conf.lBUnhealthyThreshold
if conf.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = conf.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = conf.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = conf.lBHealthCheckMethod
}
}
if strings.Contains(eipType, IntranetEIPType) {
svcAnnotations[NLBAddressTypeAnnotationKey] = IntranetEIPType
}
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: gssName + "-" + eipType + "-" + strconv.Itoa(svcIndex),
Namespace: namespace,
Annotations: svcAnnotations,
},
Spec: corev1.ServiceSpec{
Ports: a.consSvcPorts(svcIndex, conf),
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
gamekruiseiov1alpha1.GameServerOwnerGssKey: gssName,
},
LoadBalancerClass: &loadBalancerClass,
AllocateLoadBalancerNodePorts: ptr.To[bool](false),
ExternalTrafficPolicy: conf.externalTrafficPolicy,
},
}
}
func setSvcOwner(c client.Client, ctx context.Context, svc *corev1.Service, namespace, gssName string) error {
gss := &gamekruiseiov1alpha1.GameServerSet{}
err := c.Get(ctx, types.NamespacedName{
Namespace: namespace,
Name: gssName,
}, gss)
if err != nil {
return err
}
ownerRef := []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
svc.OwnerReferences = ownerRef
return nil
}
func parseAutoNLBsConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*autoNLBsConfig, error) {
reserveNlbNum := 1
eipTypes := []string{"default"}
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeLocal
zoneMaps := ""
blockPorts := make([]int32, 0)
minPort := int32(1000)
maxPort := int32(1499)
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
return nil, fmt.Errorf("invalid PortProtocols %s", c.Value)
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeCluster)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeCluster
}
case ReserveNlbNumConfigName:
reserveNlbNum, _ = strconv.Atoi(c.Value)
case EipTypesConfigName:
eipTypes = strings.Split(c.Value, ",")
case ZoneMapsConfigName:
zoneMaps = c.Value
case BlockPortsConfigName:
blockPorts = util.StringToInt32Slice(c.Value, ",")
case MinPortConfigName:
val, err := strconv.ParseInt(c.Value, 10, 32)
if err != nil {
return nil, fmt.Errorf("invalid MinPort %s", c.Value)
} else {
minPort = int32(val)
}
case MaxPortConfigName:
val, err := strconv.ParseInt(c.Value, 10, 32)
if err != nil {
return nil, fmt.Errorf("invalid MaxPort %s", c.Value)
} else {
maxPort = int32(val)
}
}
}
if minPort > maxPort {
return nil, fmt.Errorf("invalid MinPort %d and MaxPort %d", minPort, maxPort)
}
if zoneMaps == "" {
return nil, fmt.Errorf("invalid ZoneMaps, which can not be empty")
}
// check ports & protocols
if len(ports) == 0 || len(protocols) == 0 {
return nil, fmt.Errorf("invalid PortProtocols, which can not be empty")
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &autoNLBsConfig{
blockPorts: blockPorts,
minPort: minPort,
maxPort: maxPort,
nlbHealthConfig: nlbHealthConfig,
reserveNlbNum: reserveNlbNum,
eipTypes: eipTypes,
protocols: protocols,
targetPorts: ports,
zoneMaps: zoneMaps,
externalTrafficPolicy: externalTrafficPolicy,
}, nil
}
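// Illustrative sketch of driving the parser above (the values, and especially the
// ZoneMaps string, are hypothetical placeholders):
//
//	conf := []gamekruiseiov1alpha1.NetworkConfParams{
//	    {Name: PortProtocolsConfigName, Value: "6666/TCP,8888/UDP"},
//	    {Name: ZoneMapsConfigName, Value: "<zone-maps-string>"},
//	    {Name: EipTypesConfigName, Value: "default,intranet"},
//	}
//	c, err := parseAutoNLBsConfig(conf)
//	// on success: c.targetPorts = [6666 8888], c.protocols = [TCP UDP], and the
//	// defaults c.minPort = 1000, c.maxPort = 1499, c.reserveNlbNum = 1 apply.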

View File

@ -0,0 +1,272 @@
/*
Copyright 2025 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"reflect"
"sync"
"testing"
)
func TestIsNeedToCreateService(t *testing.T) {
tests := []struct {
ns string
gssName string
config *autoNLBsConfig
a *AutoNLBsPlugin
expectSvcNum int
}{
// case 0
{
ns: "default",
gssName: "pod",
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
reserveNlbNum: 2,
targetPorts: []int{
6666,
8888,
},
maxPort: 2500,
minPort: 1000,
blockPorts: []int32{},
},
a: &AutoNLBsPlugin{
gssMaxPodIndex: map[string]int{
"default/pod": 1499,
},
mutex: sync.RWMutex{},
},
expectSvcNum: 4,
},
// case 1
{
ns: "default",
gssName: "pod",
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
reserveNlbNum: 2,
targetPorts: []int{
6666,
7777,
8888,
},
maxPort: 1005,
minPort: 1000,
blockPorts: []int32{},
},
a: &AutoNLBsPlugin{
gssMaxPodIndex: map[string]int{
"default/pod": 1,
},
mutex: sync.RWMutex{},
},
expectSvcNum: 3,
},
}
for i, test := range tests {
a := test.a
expectSvcNum := a.checkSvcNumToCreate(test.ns, test.gssName, test.config)
if expectSvcNum != test.expectSvcNum {
t.Errorf("case %d: expect toAddSvcNum: %d, but got toAddSvcNum: %d", i, test.expectSvcNum, expectSvcNum)
}
}
}
func TestConsSvcPorts(t *testing.T) {
tests := []struct {
a *AutoNLBsPlugin
svcIndex int
config *autoNLBsConfig
expectSvcPorts []corev1.ServicePort
}{
// case 0
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 0,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
targetPorts: []int{
6666,
8888,
},
maxPort: 1003,
minPort: 1000,
blockPorts: []int32{},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-0-6666",
TargetPort: intstr.FromString("tcp-0-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-0-8888",
TargetPort: intstr.FromString("udp-0-8888"),
Port: 1001,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-1-6666",
TargetPort: intstr.FromString("tcp-1-6666"),
Port: 1002,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-1-8888",
TargetPort: intstr.FromString("udp-1-8888"),
Port: 1003,
Protocol: corev1.ProtocolUDP,
},
},
},
// case 1
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 1,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
corev1.ProtocolUDP,
},
targetPorts: []int{
6666,
7777,
8888,
},
maxPort: 1004,
minPort: 1000,
blockPorts: []int32{},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-1-6666",
TargetPort: intstr.FromString("tcp-1-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "tcp-1-7777",
TargetPort: intstr.FromString("tcp-1-7777"),
Port: 1001,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-1-8888",
TargetPort: intstr.FromString("udp-1-8888"),
Port: 1002,
Protocol: corev1.ProtocolUDP,
},
},
},
// case 2
{
a: &AutoNLBsPlugin{
mutex: sync.RWMutex{},
},
svcIndex: 3,
config: &autoNLBsConfig{
protocols: []corev1.Protocol{
ProtocolTCPUDP,
},
targetPorts: []int{
6666,
},
maxPort: 1004,
minPort: 1000,
blockPorts: []int32{1002},
},
expectSvcPorts: []corev1.ServicePort{
{
Name: "tcp-12-6666",
TargetPort: intstr.FromString("tcp-12-6666"),
Port: 1000,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-12-6666",
TargetPort: intstr.FromString("udp-12-6666"),
Port: 1000,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-13-6666",
TargetPort: intstr.FromString("tcp-13-6666"),
Port: 1001,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-13-6666",
TargetPort: intstr.FromString("udp-13-6666"),
Port: 1001,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-14-6666",
TargetPort: intstr.FromString("tcp-14-6666"),
Port: 1003,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-14-6666",
TargetPort: intstr.FromString("udp-14-6666"),
Port: 1003,
Protocol: corev1.ProtocolUDP,
},
{
Name: "tcp-15-6666",
TargetPort: intstr.FromString("tcp-15-6666"),
Port: 1004,
Protocol: corev1.ProtocolTCP,
},
{
Name: "udp-15-6666",
TargetPort: intstr.FromString("udp-15-6666"),
Port: 1004,
Protocol: corev1.ProtocolUDP,
},
},
},
}
for i, test := range tests {
svcPorts := test.a.consSvcPorts(test.svcIndex, test.config)
if !reflect.DeepEqual(svcPorts, test.expectSvcPorts) {
t.Errorf("case %d: expect svcPorts: %v, but got svcPorts: %v", i, test.expectSvcPorts, svcPorts)
}
}
}

View File

@ -0,0 +1,123 @@
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud/apis/v1beta1"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "AlibabaCloud-EIP"
AliasSEIP = "EIP-Network"
ReleaseStrategyConfigName = "ReleaseStrategy"
PoolIdConfigName = "PoolId"
ResourceGroupIdConfigName = "ResourceGroupId"
BandwidthConfigName = "Bandwidth"
BandwidthPackageIdConfigName = "BandwidthPackageId"
ChargeTypeConfigName = "ChargeType"
DescriptionConfigName = "Description"
WithEIPAnnotationKey = "k8s.aliyun.com/pod-with-eip"
ReleaseStrategyAnnotationkey = "k8s.aliyun.com/pod-eip-release-strategy"
PoolIdAnnotationkey = "k8s.aliyun.com/eip-public-ip-address-pool-id"
ResourceGroupIdAnnotationkey = "k8s.aliyun.com/eip-resource-group-id"
BandwidthAnnotationkey = "k8s.aliyun.com/eip-bandwidth"
BandwidthPackageIdAnnotationkey = "k8s.aliyun.com/eip-common-bandwidth-package-id"
ChargeTypeConfigAnnotationkey = "k8s.aliyun.com/eip-internet-charge-type"
EIPNameAnnotationKey = "k8s.aliyun.com/eip-name"
EIPDescriptionAnnotationKey = "k8s.aliyun.com/eip-description"
)
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
conf := networkManager.GetNetworkConfig()
pod.Annotations[WithEIPAnnotationKey] = "true"
pod.Annotations[EIPNameAnnotationKey] = pod.GetNamespace() + "/" + pod.GetName()
// parse network configuration
for _, c := range conf {
switch c.Name {
case ReleaseStrategyConfigName:
pod.Annotations[ReleaseStrategyAnnotationkey] = c.Value
case PoolIdConfigName:
pod.Annotations[PoolIdAnnotationkey] = c.Value
case ResourceGroupIdConfigName:
pod.Annotations[ResourceGroupIdAnnotationkey] = c.Value
case BandwidthConfigName:
pod.Annotations[BandwidthAnnotationkey] = c.Value
case BandwidthPackageIdConfigName:
pod.Annotations[BandwidthPackageIdAnnotationkey] = c.Value
case ChargeTypeConfigName:
pod.Annotations[ChargeTypeConfigAnnotationkey] = c.Value
case DescriptionConfigName:
pod.Annotations[EIPDescriptionAnnotationKey] = c.Value
}
}
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
podEip := &v1beta1.PodEIP{}
err := client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, podEip)
if err != nil || podEip.Status.EipAddress == "" {
return pod, nil
}
networkStatus.InternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: podEip.Status.PrivateIPAddress,
},
}
networkStatus.ExternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: podEip.Status.EipAddress,
},
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
return nil
}
func init() {
alibabaCloudProvider.registerPlugin(&EipPlugin{})
}
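// Illustrative sketch of the config-to-annotation mapping performed by OnPodAdded
// above (the values "Follow" and "5" are hypothetical examples):
//
//	NetworkConfParams{Name: ReleaseStrategyConfigName, Value: "Follow"}
//	NetworkConfParams{Name: BandwidthConfigName, Value: "5"}
//
// results in the pod carrying
//
//	k8s.aliyun.com/pod-with-eip: "true"
//	k8s.aliyun.com/pod-eip-release-strategy: "Follow"
//	k8s.aliyun.com/eip-bandwidth: "5"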

View File

@ -0,0 +1,643 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
MultiNlbsNetwork = "AlibabaCloud-Multi-NLBs"
AliasMultiNlbs = "Multi-NLBs-Network"
// ConfigNames defined by OKG
NlbIdNamesConfigName = "NlbIdNames"
// service annotation defined by OKG
LBIDBelongIndexKey = "game.kruise.io/lb-belong-index"
// service label defined by OKG
ServiceBelongNetworkTypeKey = "game.kruise.io/network-type"
ProtocolTCPUDP corev1.Protocol = "TCPUDP"
PrefixReadyReadinessGate = "service.readiness.alibabacloud.com/"
)
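// NlbIdNames is expected in the form "{nlb-id}/{name},{nlb-id}/{name},...",
// e.g. "id-xx-A/dianxin,id-xx-B/liantong,id-xx-C/dianxin,id-xx-D/liantong";
// ids sharing a name are grouped into successive index levels
// (see parseMultiNLBsConfig and its tests below).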
type MultiNlbsPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache [][]bool
// podAllocate maps {pod ns/name} to its allocated lbsPorts, e.g. lbIds [xxx-a, xxx-b] with ports [8001, 8002]
podAllocate map[string]*lbsPorts
mutex sync.RWMutex
}
type lbsPorts struct {
index int
lbIds []string
ports []int32
targetPort []int
protocols []corev1.Protocol
}
func (m *MultiNlbsPlugin) Name() string {
return MultiNlbsNetwork
}
func (m *MultiNlbsPlugin) Alias() string {
return AliasMultiNlbs
}
func (m *MultiNlbsPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
m.mutex.Lock()
defer m.mutex.Unlock()
nlbOptions := options.(provideroptions.AlibabaCloudOptions).NLBOptions
m.minPort = nlbOptions.MinPort
m.maxPort = nlbOptions.MaxPort
m.blockPorts = nlbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList, client.MatchingLabels{ServiceBelongNetworkTypeKey: MultiNlbsNetwork})
if err != nil {
return err
}
m.podAllocate, m.cache = initMultiLBCache(svcList.Items, m.maxPort, m.minPort, m.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: ", MultiNlbsNetwork)
for podNsName, lps := range m.podAllocate {
log.Infof("[%s] pod %s: %v", MultiNlbsNetwork, podNsName, *lps)
}
return nil
}
func initMultiLBCache(svcList []corev1.Service, maxPort, minPort int32, blockPorts []int32) (map[string]*lbsPorts, [][]bool) {
podAllocate := make(map[string]*lbsPorts)
cache := make([][]bool, 0)
for _, svc := range svcList {
index, err := strconv.Atoi(svc.GetAnnotations()[LBIDBelongIndexKey])
if err != nil {
continue
}
lenCache := len(cache)
for i := lenCache; i <= index; i++ {
cacheLevel := make([]bool, int(maxPort-minPort)+1)
for _, p := range blockPorts {
cacheLevel[int(p-minPort)] = true
}
cache = append(cache, cacheLevel)
}
ports := make([]int32, 0)
protocols := make([]corev1.Protocol, 0)
targetPorts := make([]int, 0)
for _, port := range svc.Spec.Ports {
cache[index][(port.Port - minPort)] = true
ports = append(ports, port.Port)
protocols = append(protocols, port.Protocol)
targetPorts = append(targetPorts, port.TargetPort.IntValue())
}
nsName := svc.GetNamespace() + "/" + svc.Spec.Selector[SvcSelectorKey]
if podAllocate[nsName] == nil {
podAllocate[nsName] = &lbsPorts{
index: index,
lbIds: []string{svc.Labels[SlbIdLabelKey]},
ports: ports,
protocols: protocols,
targetPort: targetPorts,
}
} else {
podAllocate[nsName].lbIds = append(podAllocate[nsName].lbIds, svc.Labels[SlbIdLabelKey])
}
}
return podAllocate, cache
}
func (m *MultiNlbsPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var lbNames []string
for _, lbName := range conf.lbNames {
if !util.IsStringInList(lbName, lbNames) {
lbNames = append(lbNames, lbName)
}
}
for _, lbName := range lbNames {
pod.Spec.ReadinessGates = append(pod.Spec.ReadinessGates, corev1.PodReadinessGate{
ConditionType: corev1.PodConditionType(PrefixReadyReadinessGate + pod.GetName() + "-" + strings.ToLower(lbName)),
})
}
return pod, nil
}
func (m *MultiNlbsPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
conf, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
podNsName := pod.GetNamespace() + "/" + pod.GetName()
podLbsPorts, err := m.allocate(conf, podNsName)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
for _, lbId := range conf.idList[podLbsPorts.index] {
// get svc
lbName := conf.lbNames[lbId]
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
}
endPoints := ""
for i, lbId := range conf.idList[podLbsPorts.index] {
// get svc
lbName := conf.lbNames[lbId]
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waitting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", NlbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(conf) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := m.consSvc(podLbsPorts, conf, pod, lbName, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
_, readyCondition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if readyCondition == nil || readyCondition.Status == corev1.ConditionFalse {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
endPoints = endPoints + svc.Status.LoadBalancer.Ingress[0].Hostname + "/" + lbName
if i != len(conf.idList[0])-1 {
endPoints = endPoints + ","
}
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: port.Name,
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: endPoints,
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: port.Name,
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (m *MultiNlbsPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseMultiNLBsConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss does not exist in the cluster, deAllocate all the ports related to it.
for key := range m.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
m.deAllocate(podKey)
}
return nil
}
func init() {
multiNlbsPlugin := MultiNlbsPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&multiNlbsPlugin)
}
type multiNLBsConfig struct {
lbNames map[string]string
idList [][]string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
externalTrafficPolicy corev1.ServiceExternalTrafficPolicyType
*nlbHealthConfig
}
func (m *MultiNlbsPlugin) consSvc(podLbsPorts *lbsPorts, conf *multiNLBsConfig, pod *corev1.Pod, lbName string, c client.Client, ctx context.Context) (*corev1.Service, error) {
var selectId string
for _, lbId := range podLbsPorts.lbIds {
if conf.lbNames[lbId] == lbName {
selectId = lbId
break
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(podLbsPorts.ports); i++ {
if podLbsPorts.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(corev1.ProtocolTCP)),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: corev1.ProtocolTCP,
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(corev1.ProtocolUDP)),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: corev1.ProtocolUDP,
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(podLbsPorts.targetPort[i]) + "-" + strings.ToLower(string(podLbsPorts.protocols[i])),
Port: podLbsPorts.ports[i],
TargetPort: intstr.FromInt(podLbsPorts.targetPort[i]),
Protocol: podLbsPorts.protocols[i],
})
}
}
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: selectId,
SlbConfigHashKey: util.GetHash(conf),
LBHealthCheckFlagAnnotationKey: conf.lBHealthCheckFlag,
}
if conf.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = conf.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = conf.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = conf.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = conf.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = conf.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = conf.lBUnhealthyThreshold
if conf.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = conf.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = conf.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = conf.lBHealthCheckMethod
}
}
svcAnnotations[LBIDBelongIndexKey] = strconv.Itoa(podLbsPorts.index)
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName() + "-" + strings.ToLower(lbName),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
Labels: map[string]string{
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
OwnerReferences: getSvcOwnerReference(c, ctx, pod, conf.isFixed),
},
Spec: corev1.ServiceSpec{
AllocateLoadBalancerNodePorts: ptr.To[bool](false),
ExternalTrafficPolicy: conf.externalTrafficPolicy,
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}, nil
}
func (m *MultiNlbsPlugin) allocate(conf *multiNLBsConfig, nsName string) (*lbsPorts, error) {
m.mutex.Lock()
defer m.mutex.Unlock()
// check if pod is already allocated
if m.podAllocate[nsName] != nil {
return m.podAllocate[nsName], nil
}
// if the pod has not been allocated, allocate new ports to it
var ports []int32
needNum := len(conf.targetPorts)
index := -1
// init cache according to conf.idList
lenCache := len(m.cache)
for i := lenCache; i < len(conf.idList); i++ {
cacheLevel := make([]bool, int(m.maxPort-m.minPort)+1)
for _, p := range m.blockPorts {
cacheLevel[int(p-m.minPort)] = true
}
m.cache = append(m.cache, cacheLevel)
}
// find allocated ports
for i := 0; i < len(m.cache); i++ {
sum := 0
ports = make([]int32, 0)
for j := 0; j < len(m.cache[i]); j++ {
if !m.cache[i][j] {
ports = append(ports, int32(j)+m.minPort)
sum++
if sum == needNum {
index = i
break
}
}
}
if index != -1 {
break
}
}
if index == -1 {
return nil, fmt.Errorf("no available ports found")
}
if index >= len(conf.idList) {
return nil, fmt.Errorf("NlbIdNames configuration have not synced")
}
for _, port := range ports {
m.cache[index][port-m.minPort] = true
}
m.podAllocate[nsName] = &lbsPorts{
index: index,
lbIds: conf.idList[index],
ports: ports,
protocols: conf.protocols,
targetPort: conf.targetPorts,
}
log.Infof("[%s] pod %s allocated: lbIds %v; ports %v", MultiNlbsNetwork, nsName, conf.idList[index], ports)
return m.podAllocate[nsName], nil
}
func (m *MultiNlbsPlugin) deAllocate(nsName string) {
m.mutex.Lock()
defer m.mutex.Unlock()
podLbsPorts := m.podAllocate[nsName]
if podLbsPorts == nil {
return
}
for _, port := range podLbsPorts.ports {
m.cache[podLbsPorts.index][port-m.minPort] = false
}
delete(m.podAllocate, nsName)
log.Infof("[%s] pod %s deallocate: lbIds %s ports %v", MultiNlbsNetwork, nsName, podLbsPorts.lbIds, podLbsPorts.ports)
}
func parseMultiNLBsConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*multiNLBsConfig, error) {
// lbNames format {id}: {name}
lbNames := make(map[string]string)
idList := make([][]string, 0)
nameNums := make(map[string]int)
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeLocal
for _, c := range conf {
switch c.Name {
case NlbIdNamesConfigName:
for _, nlbIdNamesConfig := range strings.Split(c.Value, ",") {
if nlbIdNamesConfig != "" {
idName := strings.Split(nlbIdNamesConfig, "/")
if len(idName) != 2 {
return nil, fmt.Errorf("invalid NlbIdNames %s. You should input as the format {nlb-id-0}/{name-0}", c.Value)
}
id := idName[0]
name := idName[1]
nameNum := nameNums[name]
if nameNum >= len(idList) {
idList = append(idList, []string{id})
} else {
idList[nameNum] = append(idList[nameNum], id)
}
nameNums[name]++
lbNames[id] = name
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
return nil, fmt.Errorf("invalid PortProtocols %s", c.Value)
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid Fixed %s", c.Value)
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeCluster)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeCluster
}
}
}
// check idList
if len(idList) == 0 {
return nil, fmt.Errorf("invalid NlbIdNames. You should input as the format {nlb-id-0}/{name-0}")
}
num := len(idList[0])
for i := 1; i < len(idList); i++ {
if num != len(idList[i]) {
return nil, fmt.Errorf("invalid NlbIdNames. The number of names should be same")
}
num = len(idList[i])
}
// check ports & protocols
if len(ports) == 0 || len(protocols) == 0 {
return nil, fmt.Errorf("invalid PortProtocols, which can not be empty")
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &multiNLBsConfig{
lbNames: lbNames,
idList: idList,
targetPorts: ports,
protocols: protocols,
isFixed: isFixed,
externalTrafficPolicy: externalTrafficPolicy,
nlbHealthConfig: nlbHealthConfig,
}, nil
}

View File

@ -0,0 +1,453 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"reflect"
"sync"
"testing"
)
func TestParseMultiNLBsConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
multiNLBsConfig *multiNLBsConfig
}{
// case 0
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdNamesConfigName,
Value: "id-xx-A/dianxin,id-xx-B/liantong,id-xx-C/dianxin,id-xx-D/liantong",
},
{
Name: PortProtocolsConfigName,
Value: "80/TCP,80/UDP",
},
},
multiNLBsConfig: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
},
},
// case 1
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdNamesConfigName,
Value: "id-xx-A/dianxin,id-xx-B/dianxin,id-xx-C/dianxin,id-xx-D/liantong,id-xx-E/liantong,id-xx-F/liantong",
},
{
Name: PortProtocolsConfigName,
Value: "80/TCP,80/UDP",
},
},
multiNLBsConfig: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "dianxin",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
"id-xx-E": "liantong",
"id-xx-F": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-D",
},
{
"id-xx-B", "id-xx-E",
},
{
"id-xx-C", "id-xx-F",
},
},
},
},
}
for i, tt := range tests {
actual, err := parseMultiNLBsConfig(tt.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(actual.lbNames, tt.multiNLBsConfig.lbNames) {
t.Errorf("case %d: parseMultiNLBsConfig lbNames actual: %v, expect: %v", i, actual.lbNames, tt.multiNLBsConfig.lbNames)
}
if !reflect.DeepEqual(actual.idList, tt.multiNLBsConfig.idList) {
t.Errorf("case %d: parseMultiNLBsConfig idList actual: %v, expect: %v", i, actual.idList, tt.multiNLBsConfig.idList)
}
}
}
func TestAllocate(t *testing.T) {
tests := []struct {
plugin *MultiNlbsPlugin
conf *multiNLBsConfig
nsName string
lbsPorts *lbsPorts
cacheAfter [][]bool
podAllocateAfter map[string]*lbsPorts
}{
// case 0: cache is nil
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: make(map[string]*lbsPorts),
cache: make([][]bool, 0),
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-0",
lbsPorts: &lbsPorts{
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
cacheAfter: [][]bool{{true, true, true}, {false, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
},
// case 1: cache not nil & new pod
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{true, true, false}},
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-1",
lbsPorts: &lbsPorts{
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
cacheAfter: [][]bool{{true, true, false}, {true, true, true}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
"default/test-1": {
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
},
// case 2: cache not nil & old pod
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{true, true, false}},
},
conf: &multiNLBsConfig{
lbNames: map[string]string{
"id-xx-A": "dianxin",
"id-xx-B": "liantong",
"id-xx-C": "dianxin",
"id-xx-D": "liantong",
},
idList: [][]string{
{
"id-xx-A", "id-xx-B",
},
{
"id-xx-C", "id-xx-D",
},
},
targetPorts: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
nsName: "default/test-0",
lbsPorts: &lbsPorts{
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
cacheAfter: [][]bool{{true, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
},
}
for i, tt := range tests {
plugin := tt.plugin
lbsPorts, err := plugin.allocate(tt.conf, tt.nsName)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(lbsPorts, tt.lbsPorts) {
t.Errorf("case %d: allocate actual: %v, expect: %v", i, lbsPorts, tt.lbsPorts)
}
if !reflect.DeepEqual(plugin.podAllocate, tt.podAllocateAfter) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, plugin.podAllocate, tt.podAllocateAfter)
}
if !reflect.DeepEqual(plugin.cache, tt.cacheAfter) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, plugin.cache, tt.cacheAfter)
}
}
}
func TestDeAllocate(t *testing.T) {
tests := []struct {
plugin *MultiNlbsPlugin
nsName string
cacheAfter [][]bool
podAllocateAfter map[string]*lbsPorts
}{
{
plugin: &MultiNlbsPlugin{
maxPort: int32(8002),
minPort: int32(8000),
blockPorts: []int32{8001},
mutex: sync.RWMutex{},
podAllocate: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
"default/test-1": {
index: 1,
lbIds: []string{"id-xx-C", "id-xx-D"},
ports: []int32{8000, 8002},
targetPort: []int{80, 80},
protocols: []corev1.Protocol{corev1.ProtocolTCP, corev1.ProtocolUDP},
},
},
cache: [][]bool{{true, true, false}, {true, true, true}},
},
nsName: "default/test-1",
cacheAfter: [][]bool{{true, true, false}, {false, true, false}},
podAllocateAfter: map[string]*lbsPorts{
"default/test-0": {
index: 0,
lbIds: []string{"id-xx-A", "id-xx-B"},
ports: []int32{8000},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
},
}
for i, tt := range tests {
plugin := tt.plugin
plugin.deAllocate(tt.nsName)
if !reflect.DeepEqual(plugin.podAllocate, tt.podAllocateAfter) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, plugin.podAllocate, tt.podAllocateAfter)
}
if !reflect.DeepEqual(plugin.cache, tt.cacheAfter) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, plugin.cache, tt.cacheAfter)
}
}
}
func TestInitMultiLBCache(t *testing.T) {
tests := []struct {
svcList []corev1.Service
maxPort int32
minPort int32
blockPorts []int32
podAllocate map[string]*lbsPorts
cache [][]bool
}{
{
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
LBIDBelongIndexKey: "0",
},
Labels: map[string]string{
SlbIdLabelKey: "xxx-A",
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
LBIDBelongIndexKey: "0",
},
Labels: map[string]string{
SlbIdLabelKey: "xxx-B",
ServiceBelongNetworkTypeKey: MultiNlbsNetwork,
},
Namespace: "ns-0",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
maxPort: int32(667),
minPort: int32(665),
blockPorts: []int32{},
podAllocate: map[string]*lbsPorts{
"ns-0/pod-A": {
index: 0,
lbIds: []string{"xxx-A", "xxx-B"},
ports: []int32{666},
targetPort: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
cache: [][]bool{{false, true, false}},
},
}
for i, tt := range tests {
podAllocate, cache := initMultiLBCache(tt.svcList, tt.maxPort, tt.minPort, tt.blockPorts)
if !reflect.DeepEqual(podAllocate, tt.podAllocate) {
t.Errorf("case %d: podAllocate actual: %v, expect: %v", i, podAllocate, tt.podAllocate)
}
if !reflect.DeepEqual(cache, tt.cache) {
t.Errorf("case %d: cache actual: %v, expect: %v", i, cache, tt.cache)
}
}
}

View File

@ -122,7 +122,12 @@ func (n NatGwPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
// NetworkReady when all ports have external addresses
if len(strings.Split(pod.Annotations[PortsAnsKey], ",")) == len(podDNat.Status.Entries) {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
}
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
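The readiness rule above simply compares the number of configured ports against the number of DNAT entries that already expose external addresses. A minimal sketch of that comparison, assuming the PortsAnsKey annotation holds a comma-separated port list (the concrete values below are placeholders):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Placeholder values: a pod annotated with two ports and a DNAT status
	// that currently exposes only one of them.
	portsAnnotation := "80,8080"
	readyEntries := 1

	configured := len(strings.Split(portsAnnotation, ","))
	if configured == readyEntries {
		fmt.Println("network ready")
	} else {
		fmt.Printf("waiting: %d/%d ports have external addresses\n", readyEntries, configured)
	}
}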

View File

@ -0,0 +1,640 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alibabacloud
import (
"context"
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"regexp"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
NlbNetwork = "AlibabaCloud-NLB"
AliasNLB = "NLB-Network"
// annotations provided by AlibabaCloud Cloud Controller Manager
LBHealthCheckFlagAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-flag"
LBHealthCheckTypeAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-type"
LBHealthCheckConnectPortAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-connect-port"
LBHealthCheckConnectTimeoutAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-connect-timeout"
LBHealthyThresholdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-healthy-threshold"
LBUnhealthyThresholdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-unhealthy-threshold"
LBHealthCheckIntervalAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-interval"
LBHealthCheckUriAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-uri"
LBHealthCheckDomainAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-domain"
LBHealthCheckMethodAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-method"
// ConfigNames defined by OKG
LBHealthCheckFlagConfigName = "LBHealthCheckFlag"
LBHealthCheckTypeConfigName = "LBHealthCheckType"
LBHealthCheckConnectPortConfigName = "LBHealthCheckConnectPort"
LBHealthCheckConnectTimeoutConfigName = "LBHealthCheckConnectTimeout"
LBHealthCheckIntervalConfigName = "LBHealthCheckInterval"
LBHealthCheckUriConfigName = "LBHealthCheckUri"
LBHealthCheckDomainConfigName = "LBHealthCheckDomain"
LBHealthCheckMethodConfigName = "LBHealthCheckMethod"
LBHealthyThresholdConfigName = "LBHealthyThreshold"
LBUnhealthyThresholdConfigName = "LBUnhealthyThreshold"
)
type NlbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type nlbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
*nlbHealthConfig
}
type nlbHealthConfig struct {
lBHealthCheckFlag string
lBHealthCheckType string
lBHealthCheckConnectPort string
lBHealthCheckConnectTimeout string
lBHealthCheckInterval string
lBHealthCheckUri string
lBHealthCheckDomain string
lBHealthCheckMethod string
lBHealthyThreshold string
lBUnhealthyThreshold string
}
func (n *NlbPlugin) Name() string {
return NlbNetwork
}
func (n *NlbPlugin) Alias() string {
return AliasNLB
}
func (n *NlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
n.mutex.Lock()
defer n.mutex.Unlock()
slbOptions := options.(provideroptions.AlibabaCloudOptions).NLBOptions
n.minPort = slbOptions.MinPort
n.maxPort = slbOptions.MaxPort
n.blockPorts = slbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
if err != nil {
return err
}
n.cache, n.podAllocate = initLbCache(svcList.Items, n.minPort, n.maxPort, n.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: %v", NlbNetwork, n.podAllocate)
return nil
}
func (n *NlbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseNlbConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := n.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waitting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", NlbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := n.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: svc.Status.LoadBalancer.Ingress[0].Hostname,
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (n *NlbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseNlbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss does not exist in cluster, deAllocate all the ports related to it.
for key := range n.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
n.deAllocate(podKey)
}
return nil
}
func init() {
nlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
alibabaCloudProvider.registerPlugin(&nlbPlugin)
}
func (n *NlbPlugin) consSvc(nc *nlbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := n.podAllocate[podKey]
if exist {
slbPorts := strings.Split(allocatedPorts, ":")
lbId = slbPorts[0]
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = n.allocate(nc.lbIds, len(nc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no avaialable ports for %v", nc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(nc.targetPorts); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(nc.targetPorts[i]),
Port: ports[i],
Protocol: nc.protocols[i],
TargetPort: intstr.FromInt(nc.targetPorts[i]),
})
}
loadBalancerClass := "alibabacloud.com/nlb"
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: lbId,
SlbConfigHashKey: util.GetHash(nc),
LBHealthCheckFlagAnnotationKey: nc.lBHealthCheckFlag,
}
if nc.lBHealthCheckFlag == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = nc.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectPortAnnotationKey] = nc.lBHealthCheckConnectPort
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = nc.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = nc.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = nc.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = nc.lBUnhealthyThreshold
if nc.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckDomainAnnotationKey] = nc.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = nc.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = nc.lBHealthCheckMethod
}
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, nc.isFixed),
},
Spec: corev1.ServiceSpec{
ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}
return svc, nil
}
func (n *NlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
n.mutex.Lock()
defer n.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, slbId := range lbIds {
sum := 0
for i := n.minPort; i <= n.maxPort; i++ {
if !n.cache[slbId][i] {
sum++
}
if sum >= num {
lbId = slbId
break
}
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if n.cache[lbId] == nil {
// init cache for new lb
n.cache[lbId] = make(portAllocated, n.maxPort-n.minPort+1)
for i := n.minPort; i <= n.maxPort; i++ {
n.cache[lbId][i] = false
}
// block ports
for _, blockPort := range n.blockPorts {
n.cache[lbId][blockPort] = true
}
}
for p, allocated := range n.cache[lbId] {
if !allocated {
port = p
break
}
}
n.cache[lbId][port] = true
ports = append(ports, port)
}
n.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocate nlb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (n *NlbPlugin) deAllocate(nsName string) {
n.mutex.Lock()
defer n.mutex.Unlock()
allocatedPorts, exist := n.podAllocate[nsName]
if !exist {
return
}
slbPorts := strings.Split(allocatedPorts, ":")
lbId := slbPorts[0]
ports := util.StringToInt32Slice(slbPorts[1], ",")
for _, port := range ports {
n.cache[lbId][port] = false
}
// block ports
for _, blockPort := range n.blockPorts {
n.cache[lbId][blockPort] = true
}
delete(n.podAllocate, nsName)
log.Infof("pod %s deallocate nlb %s ports %v", nsName, lbId, ports)
}
func parseNlbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nlbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
for _, slbId := range strings.Split(c.Value, ",") {
if slbId != "" {
lbIds = append(lbIds, slbId)
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
}
}
nlbHealthConfig, err := parseNlbHealthConfig(conf)
if err != nil {
return nil, err
}
return &nlbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
nlbHealthConfig: nlbHealthConfig,
}, nil
}
func parseNlbHealthConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nlbHealthConfig, error) {
lBHealthCheckFlag := "on"
lBHealthCheckType := "tcp"
lBHealthCheckConnectPort := "0"
lBHealthCheckConnectTimeout := "5"
lBHealthCheckInterval := "10"
lBUnhealthyThreshold := "2"
lBHealthyThreshold := "2"
lBHealthCheckUri := ""
lBHealthCheckDomain := ""
lBHealthCheckMethod := ""
for _, c := range conf {
switch c.Name {
case LBHealthCheckFlagConfigName:
flag := strings.ToLower(c.Value)
if flag != "on" && flag != "off" {
return nil, fmt.Errorf("invalid lb health check flag value: %s", c.Value)
}
lBHealthCheckFlag = flag
case LBHealthCheckTypeConfigName:
checkType := strings.ToLower(c.Value)
if checkType != "tcp" && checkType != "http" {
return nil, fmt.Errorf("invalid lb health check type: %s", c.Value)
}
lBHealthCheckType = checkType
case LBHealthCheckConnectPortConfigName:
portInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect port: %s", c.Value)
}
if portInt < 0 || portInt > 65535 {
return nil, fmt.Errorf("invalid lb health check connect port: %d", portInt)
}
lBHealthCheckConnectPort = c.Value
case LBHealthCheckConnectTimeoutConfigName:
timeoutInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect timeout: %s", c.Value)
}
if timeoutInt < 1 || timeoutInt > 300 {
return nil, fmt.Errorf("invalid lb health check connect timeout: %d", timeoutInt)
}
lBHealthCheckConnectTimeout = c.Value
case LBHealthCheckIntervalConfigName:
intervalInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check interval: %s", c.Value)
}
if intervalInt < 1 || intervalInt > 50 {
return nil, fmt.Errorf("invalid lb health check interval: %d", intervalInt)
}
lBHealthCheckInterval = c.Value
case LBHealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb healthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb healthy threshold: %d", thresholdInt)
}
lBHealthyThreshold = c.Value
case LBUnhealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %d", thresholdInt)
}
lBUnhealthyThreshold = c.Value
case LBHealthCheckUriConfigName:
if validateUri(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check uri: %s", c.Value)
}
lBHealthCheckUri = c.Value
case LBHealthCheckDomainConfigName:
if validateDomain(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check domain: %s", c.Value)
}
lBHealthCheckDomain = c.Value
case LBHealthCheckMethodConfigName:
method := strings.ToLower(c.Value)
if method != "get" && method != "head" {
return nil, fmt.Errorf("invalid lb health check method: %s", c.Value)
}
lBHealthCheckMethod = method
}
}
return &nlbHealthConfig{
lBHealthCheckFlag: lBHealthCheckFlag,
lBHealthCheckType: lBHealthCheckType,
lBHealthCheckConnectPort: lBHealthCheckConnectPort,
lBHealthCheckConnectTimeout: lBHealthCheckConnectTimeout,
lBHealthCheckInterval: lBHealthCheckInterval,
lBHealthCheckUri: lBHealthCheckUri,
lBHealthCheckDomain: lBHealthCheckDomain,
lBHealthCheckMethod: lBHealthCheckMethod,
lBHealthyThreshold: lBHealthyThreshold,
lBUnhealthyThreshold: lBUnhealthyThreshold,
}, nil
}
func validateDomain(domain string) error {
if len(domain) < 1 || len(domain) > 80 {
return fmt.Errorf("the domain length must be between 1 and 80 characters")
}
// Regular expression matches lowercase letters, numbers, dashes and periods
domainRegex := regexp.MustCompile(`^[a-z0-9-.]+$`)
if !domainRegex.MatchString(domain) {
return fmt.Errorf("the domain must only contain lowercase letters, numbers, hyphens, and periods")
}
// make sure the domain name does not start or end with a dash or period
if domain[0] == '-' || domain[0] == '.' || domain[len(domain)-1] == '-' || domain[len(domain)-1] == '.' {
return fmt.Errorf("the domain must not start or end with a hyphen or period")
}
// make sure the domain name does not contain consecutive dots or dashes
if regexp.MustCompile(`(--|\.\.)`).MatchString(domain) {
return fmt.Errorf("the domain must not contain consecutive hyphens or periods")
}
return nil
}
func validateUri(uri string) error {
if len(uri) < 1 || len(uri) > 80 {
return fmt.Errorf("string length must be between 1 and 80 characters")
}
regexPattern := `^/[0-9a-zA-Z.!$%&'*+/=?^_` + "`" + `{|}~-]*$`
matched, err := regexp.MatchString(regexPattern, uri)
if err != nil {
return fmt.Errorf("regex error: %v", err)
}
if !matched {
return fmt.Errorf("string does not match the required pattern")
}
return nil
}
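For orientation, a minimal sketch of the network configuration a GameServer pod might carry for this plugin. Only the Name values (NlbIds, PortProtocols, LBHealthCheckFlag) come from the constants above; the NLB ids and ports are placeholders, and omitted health-check entries fall back to the defaults set in parseNlbHealthConfig:

package main

import (
	"fmt"

	gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
)

func main() {
	// Placeholder ids and ports; only the Name values are taken from the
	// constants defined in this package.
	conf := []gamekruiseiov1alpha1.NetworkConfParams{
		{Name: "NlbIds", Value: "nlb-a,nlb-b"},            // candidate NLB instances
		{Name: "PortProtocols", Value: "80/TCP,7777/UDP"}, // container target ports
		{Name: "LBHealthCheckFlag", Value: "off"},         // skip health-check annotations
	}
	for _, c := range conf {
		fmt.Printf("%s=%s\n", c.Name, c.Value)
	}
}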

View File

@ -0,0 +1,239 @@
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
)
const (
NlbSPNetwork = "AlibabaCloud-NLB-SharedPort"
NlbIdsConfigName = "NlbIds"
)
func init() {
alibabaCloudProvider.registerPlugin(&NlbSpPlugin{})
}
type NlbSpPlugin struct {
}
func (N *NlbSpPlugin) Name() string {
return NlbSPNetwork
}
func (N *NlbSpPlugin) Alias() string {
return ""
}
func (N *NlbSpPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (N *NlbSpPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
podNetConfig := parseNLbSpConfig(networkManager.GetNetworkConfig())
pod.Labels[SlbIdLabelKey] = podNetConfig.lbId
// Get Svc
svc := &corev1.Service{}
err := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: podNetConfig.lbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(c.Create(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
return pod, nil
}
func (N *NlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
podNetConfig := parseNLbSpConfig(networkConfig)
// Get Svc
svc := &corev1.Service{}
err := c.Get(context.Background(), types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: podNetConfig.lbId,
}, svc)
if err != nil {
if errors.IsNotFound(err) {
// Create Svc
return pod, cperrors.ToPluginError(c.Create(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(podNetConfig) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, consNlbSvc(podNetConfig, pod, c, ctx)), cperrors.ApiCallError)
}
_, hasLabel := pod.Labels[SlbIdLabelKey]
// disable network
if networkManager.GetNetworkDisabled() && hasLabel {
newLabels := pod.GetLabels()
delete(newLabels, SlbIdLabelKey)
pod.Labels = newLabels
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// enable network
if !networkManager.GetNetworkDisabled() && !hasLabel {
pod.Labels[SlbIdLabelKey] = podNetConfig.lbId
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
return pod, nil
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkConfig) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, true)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: svc.Status.LoadBalancer.Ingress[0].Hostname,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (N *NlbSpPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
type nlbSpConfig struct {
lbId string
ports []int
protocols []corev1.Protocol
}
func parseNLbSpConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbSpConfig {
var lbIds string
var ports []int
var protocols []corev1.Protocol
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
lbIds = c.Value
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
}
}
return &nlbSpConfig{
lbId: lbIds,
ports: ports,
protocols: protocols,
}
}
func consNlbSvc(nc *nlbSpConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(nc.ports); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(nc.ports[i]),
Port: int32(nc.ports[i]),
Protocol: nc.protocols[i],
TargetPort: intstr.FromInt(nc.ports[i]),
})
}
loadBalancerClass := "alibabacloud.com/nlb"
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: nc.lbId,
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: nc.lbId,
SlbConfigHashKey: util.GetHash(nc),
},
OwnerReferences: getSvcOwnerReference(c, ctx, pod, true),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SlbIdLabelKey: nc.lbId,
},
Ports: svcPorts,
LoadBalancerClass: &loadBalancerClass,
},
}
return svc
}
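The shared-port model boils down to one Service named after the NLB id that selects every pod carrying that id in its labels, so all game servers share the same listener ports. A minimal sketch of that label/selector relationship, with a placeholder NLB id:

package main

import "fmt"

func main() {
	// The label key matches SlbIdLabelKey above; "nlb-xxx" is a placeholder id.
	const slbIdLabelKey = "service.k8s.alibaba/loadbalancer-id"
	lbId := "nlb-xxx"

	podLabels := map[string]string{slbIdLabelKey: lbId}   // stamped by OnPodAdded
	svcSelector := map[string]string{slbIdLabelKey: lbId} // set by consNlbSvc

	fmt.Println("pod selected by shared Service:", podLabels[slbIdLabelKey] == svcSelector[slbIdLabelKey])
}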

View File

@ -0,0 +1,58 @@
package alibabacloud
import (
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
"reflect"
"testing"
)
func TestParseNLbSpConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
nc *nlbSpConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "nlb-xxx",
},
{
Name: PortProtocolsConfigName,
Value: "80/UDP",
},
},
nc: &nlbSpConfig{
protocols: []corev1.Protocol{corev1.ProtocolUDP},
ports: []int{80},
lbId: "nlb-xxx",
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "nlb-xxx",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
nc: &nlbSpConfig{
protocols: []corev1.Protocol{corev1.ProtocolTCP},
ports: []int{80},
lbId: "nlb-xxx",
},
},
}
for i, test := range tests {
expect := test.nc
actual := parseNLbSpConfig(test.conf)
if !reflect.DeepEqual(expect, actual) {
t.Errorf("case %d: expect nlbSpConfig is %v, but actually is %v", i, expect, actual)
}
}
}

View File

@ -0,0 +1,330 @@
package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
"reflect"
"sigs.k8s.io/controller-runtime/pkg/client"
"sync"
"testing"
)
func TestNLBAllocateDeAllocate(t *testing.T) {
test := struct {
lbIds []string
nlb *NlbPlugin
num int
podKey string
}{
lbIds: []string{"xxx-A"},
nlb: &NlbPlugin{
maxPort: int32(712),
minPort: int32(512),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]string),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
}
lbId, ports := test.nlb.allocate(test.lbIds, test.num, test.podKey)
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if test.nlb.cache[lbId][port] == false {
t.Errorf("Allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range ports {
if test.nlb.cache[lbId][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
func TestParseNlbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
nlbConfig *nlbConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
{
Name: LBHealthCheckFlagConfigName,
Value: "On",
},
{
Name: LBHealthCheckTypeConfigName,
Value: "HTTP",
},
{
Name: LBHealthCheckConnectPortConfigName,
Value: "6000",
},
{
Name: LBHealthCheckConnectTimeoutConfigName,
Value: "100",
},
{
Name: LBHealthCheckIntervalConfigName,
Value: "30",
},
{
Name: LBHealthCheckUriConfigName,
Value: "/another?valid",
},
{
Name: LBHealthCheckDomainConfigName,
Value: "www.test.com",
},
{
Name: LBHealthCheckMethodConfigName,
Value: "HEAD",
},
{
Name: LBHealthyThresholdConfigName,
Value: "5",
},
{
Name: LBUnhealthyThresholdConfigName,
Value: "5",
},
},
nlbConfig: &nlbConfig{
lbIds: []string{"xxx-A"},
targetPorts: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "http",
lBHealthCheckConnectPort: "6000",
lBHealthCheckConnectTimeout: "100",
lBHealthCheckInterval: "30",
lBHealthCheckUri: "/another?valid",
lBHealthCheckDomain: "www.test.com",
lBHealthCheckMethod: "head",
lBHealthyThreshold: "5",
lBUnhealthyThreshold: "5",
},
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
nlbConfig: &nlbConfig{
lbIds: []string{"xxx-A", "xxx-B"},
targetPorts: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
isFixed: true,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
},
},
}
for i, test := range tests {
sc, err := parseNlbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(test.nlbConfig, sc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.nlbConfig, sc)
}
}
}
func TestNlbPlugin_consSvc(t *testing.T) {
loadBalancerClass := "alibabacloud.com/nlb"
type fields struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
}
type args struct {
config *nlbConfig
pod *corev1.Pod
client client.Client
ctx context.Context
}
tests := []struct {
name string
fields fields
args args
want *corev1.Service
}{
{
name: "convert svc cache exist",
fields: fields{
maxPort: 3000,
minPort: 1,
cache: map[string]portAllocated{
"default/test-pod": map[int32]bool{},
},
podAllocate: map[string]string{
"default/test-pod": "clb-xxx:80,81",
},
},
args: args{
config: &nlbConfig{
lbIds: []string{"clb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
},
pod: &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
UID: "32fqwfqfew",
},
},
client: nil,
ctx: context.Background(),
},
want: &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: "clb-xxx",
SlbConfigHashKey: util.GetHash(&nlbConfig{
lbIds: []string{"clb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
nlbHealthConfig: &nlbHealthConfig{
lBHealthCheckFlag: "on",
lBHealthCheckType: "tcp",
lBHealthCheckConnectPort: "0",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
},
}),
LBHealthCheckFlagAnnotationKey: "on",
LBHealthCheckTypeAnnotationKey: "tcp",
LBHealthCheckConnectPortAnnotationKey: "0",
LBHealthCheckConnectTimeoutAnnotationKey: "5",
LBHealthCheckIntervalAnnotationKey: "10",
LBUnhealthyThresholdAnnotationKey: "2",
LBHealthyThresholdAnnotationKey: "2",
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "pod",
Name: "test-pod",
UID: "32fqwfqfew",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
LoadBalancerClass: &loadBalancerClass,
Selector: map[string]string{
SvcSelectorKey: "test-pod",
},
Ports: []corev1.ServicePort{{
Name: "82",
Port: 80,
Protocol: "TCP",
TargetPort: intstr.IntOrString{
Type: 0,
IntVal: 82,
},
},
},
},
},
},
}
for _, tt := range tests {
c := &NlbPlugin{
maxPort: tt.fields.maxPort,
minPort: tt.fields.minPort,
cache: tt.fields.cache,
podAllocate: tt.fields.podAllocate,
}
got, err := c.consSvc(tt.args.config, tt.args.pod, tt.args.client, tt.args.ctx)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("consSvc() = %v, want %v", got, tt.want)
}
}
}
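A note on the podAllocate bookkeeping exercised above: each pod maps to a single string of the form "<lbId>:<port1>,<port2>,...", e.g. the "clb-xxx:80,81" fixture, which consSvc splits back into the LB id and its ports. A minimal standalone sketch of that encoding:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// podAllocate record format: "<lbId>:<port1>,<port2>,..."
	record := "clb-xxx:80,81"
	parts := strings.SplitN(record, ":", 2)
	lbId, ports := parts[0], strings.Split(parts[1], ",")
	fmt.Println(lbId, ports) // clb-xxx [80 81]
}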

View File

@ -18,35 +18,49 @@ package alibabacloud
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/pointer"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
SlbNetwork = "AlibabaCloud-SLB"
AliasSLB = "LB-Network"
SlbIdsConfigName = "SlbIds"
PortProtocolsConfigName = "PortProtocols"
SlbListenerOverrideKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners"
SlbIdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id"
SlbIdLabelKey = "service.k8s.alibaba/loadbalancer-id"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
SlbConfigHashKey = "game.kruise.io/network-config-hash"
SlbNetwork = "AlibabaCloud-SLB"
AliasSLB = "LB-Network"
SlbIdsConfigName = "SlbIds"
PortProtocolsConfigName = "PortProtocols"
ExternalTrafficPolicyTypeConfigName = "ExternalTrafficPolicyType"
SlbListenerOverrideKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners"
SlbIdAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id"
SlbIdLabelKey = "service.k8s.alibaba/loadbalancer-id"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
SlbConfigHashKey = "game.kruise.io/network-config-hash"
)
const (
// annotations provided by AlibabaCloud Cloud Controller Manager
LBHealthCheckSwitchAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-health-check-switch"
LBHealthCheckProtocolPortAnnotationKey = "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port"
// ConfigNames defined by OKG
LBHealthCheckSwitchConfigName = "LBHealthCheckSwitch"
LBHealthCheckProtocolPortConfigName = "LBHealthCheckProtocolPort"
)
type portAllocated map[int32]bool
@ -54,6 +68,7 @@ type portAllocated map[int32]bool
type SlbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
@ -64,6 +79,19 @@ type slbConfig struct {
targetPorts []int
protocols []corev1.Protocol
isFixed bool
externalTrafficPolicyType corev1.ServiceExternalTrafficPolicyType
lBHealthCheckSwitch string
lBHealthCheckProtocolPort string
lBHealthCheckFlag string
lBHealthCheckType string
lBHealthCheckConnectTimeout string
lBHealthCheckInterval string
lBHealthCheckUri string
lBHealthCheckDomain string
lBHealthCheckMethod string
lBHealthyThreshold string
lBUnhealthyThreshold string
}
func (s *SlbPlugin) Name() string {
@ -80,6 +108,7 @@ func (s *SlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOpt
slbOptions := options.(provideroptions.AlibabaCloudOptions).SLBOptions
s.minPort = slbOptions.MinPort
s.maxPort = slbOptions.MaxPort
s.blockPorts = slbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
@ -87,27 +116,39 @@ func (s *SlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOpt
return err
}
s.cache, s.podAllocate = initLbCache(svcList.Items, s.minPort, s.maxPort)
s.cache, s.podAllocate = initLbCache(svcList.Items, s.minPort, s.maxPort, s.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: %v", SlbNetwork, s.podAllocate)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32) (map[string]portAllocated, map[string]string) {
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[SlbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
// init cache for that lb
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort)
for i := minPort; i < maxPort; i++ {
newCache[lbId] = make(portAllocated, maxPort-minPort+1)
for i := minPort; i <= maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
// fill in cache for that lb
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
value, ok := newCache[lbId][port]
if !ok || !value {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
}
if len(ports) != 0 {
@ -115,44 +156,52 @@ func initLbCache(svcList []corev1.Service, minPort, maxPort int32) (map[string]p
}
}
}
log.Infof("[%s] podAllocate cache complete initialization: %v", SlbNetwork, newPodAllocate)
return newCache, newPodAllocate
}
func (s *SlbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
err := c.Create(ctx, s.consSvc(sc, pod, c, ctx))
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
return pod, nil
}
func (s *SlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
// get svc
svc := &corev1.Service{}
err := c.Get(ctx, types.NamespacedName{
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(c.Create(ctx, s.consSvc(sc, pod, c, ctx)), cperrors.ApiCallError)
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waitting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", SlbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[SlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
@ -160,7 +209,11 @@ func (s *SlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.C
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, s.consSvc(sc, pod, c, ctx)), cperrors.ApiCallError)
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
@ -182,6 +235,21 @@ func (s *SlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.C
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
@ -221,7 +289,10 @@ func (s *SlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.C
func (s *SlbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
sc, err := parseLbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var podKeys []string
if sc.isFixed {
@ -261,7 +332,7 @@ func (s *SlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []
// find lb with adequate ports
for _, slbId := range lbIds {
sum := 0
for i := s.minPort; i < s.maxPort; i++ {
for i := s.minPort; i <= s.maxPort; i++ {
if !s.cache[slbId][i] {
sum++
}
@ -271,15 +342,23 @@ func (s *SlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []
}
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if s.cache[lbId] == nil {
s.cache[lbId] = make(portAllocated, s.maxPort-s.minPort)
for i := s.minPort; i < s.maxPort; i++ {
// init cache for new lb
s.cache[lbId] = make(portAllocated, s.maxPort-s.minPort+1)
for i := s.minPort; i <= s.maxPort; i++ {
s.cache[lbId][i] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
}
for p, allocated := range s.cache[lbId] {
@ -312,6 +391,10 @@ func (s *SlbPlugin) deAllocate(nsName string) {
for _, port := range ports {
s.cache[lbId][port] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
delete(s.podAllocate, nsName)
log.Infof("pod %s deallocate slb %s ports %v", nsName, lbId, ports)
@ -324,11 +407,24 @@ func init() {
alibabaCloudProvider.registerPlugin(&slbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *slbConfig {
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*slbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeCluster
lBHealthCheckSwitch := "on"
lBHealthCheckProtocolPort := ""
lBHealthCheckFlag := "off"
lBHealthCheckType := "tcp"
lBHealthCheckConnectTimeout := "5"
lBHealthCheckInterval := "10"
lBUnhealthyThreshold := "2"
lBHealthyThreshold := "2"
lBHealthCheckUri := ""
lBHealthCheckDomain := ""
lBHealthCheckMethod := ""
for _, c := range conf {
switch c.Name {
case SlbIdsConfigName:
@ -357,14 +453,105 @@ func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *slbConfig {
continue
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeLocal)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeLocal
}
case LBHealthCheckSwitchConfigName:
checkSwitch := strings.ToLower(c.Value)
if checkSwitch != "on" && checkSwitch != "off" {
return nil, fmt.Errorf("invalid lb health check switch value: %s", c.Value)
}
lBHealthCheckSwitch = checkSwitch
case LBHealthCheckFlagConfigName:
flag := strings.ToLower(c.Value)
if flag != "on" && flag != "off" {
return nil, fmt.Errorf("invalid lb health check flag value: %s", c.Value)
}
lBHealthCheckFlag = flag
case LBHealthCheckTypeConfigName:
checkType := strings.ToLower(c.Value)
if checkType != "tcp" && checkType != "http" {
return nil, fmt.Errorf("invalid lb health check type: %s", c.Value)
}
lBHealthCheckType = checkType
case LBHealthCheckProtocolPortConfigName:
if validateHttpProtocolPort(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check protocol port: %s", c.Value)
}
lBHealthCheckProtocolPort = c.Value
case LBHealthCheckConnectTimeoutConfigName:
timeoutInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check connect timeout: %s", c.Value)
}
if timeoutInt < 1 || timeoutInt > 300 {
return nil, fmt.Errorf("invalid lb health check connect timeout: %d", timeoutInt)
}
lBHealthCheckConnectTimeout = c.Value
case LBHealthCheckIntervalConfigName:
intervalInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb health check interval: %s", c.Value)
}
if intervalInt < 1 || intervalInt > 50 {
return nil, fmt.Errorf("invalid lb health check interval: %d", intervalInt)
}
lBHealthCheckInterval = c.Value
case LBHealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb healthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb healthy threshold: %d", thresholdInt)
}
lBHealthyThreshold = c.Value
case LBUnhealthyThresholdConfigName:
thresholdInt, err := strconv.Atoi(c.Value)
if err != nil {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %s", c.Value)
}
if thresholdInt < 2 || thresholdInt > 10 {
return nil, fmt.Errorf("invalid lb unhealthy threshold: %d", thresholdInt)
}
lBUnhealthyThreshold = c.Value
case LBHealthCheckUriConfigName:
if validateUri(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check uri: %s", c.Value)
}
lBHealthCheckUri = c.Value
case LBHealthCheckDomainConfigName:
if validateDomain(c.Value) != nil {
return nil, fmt.Errorf("invalid lb health check domain: %s", c.Value)
}
lBHealthCheckDomain = c.Value
case LBHealthCheckMethodConfigName:
method := strings.ToLower(c.Value)
if method != "get" && method != "head" {
return nil, fmt.Errorf("invalid lb health check method: %s", c.Value)
}
lBHealthCheckMethod = method
}
}
return &slbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
}
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
externalTrafficPolicyType: externalTrafficPolicy,
lBHealthCheckSwitch: lBHealthCheckSwitch,
lBHealthCheckFlag: lBHealthCheckFlag,
lBHealthCheckType: lBHealthCheckType,
lBHealthCheckProtocolPort: lBHealthCheckProtocolPort,
lBHealthCheckConnectTimeout: lBHealthCheckConnectTimeout,
lBHealthCheckInterval: lBHealthCheckInterval,
lBHealthCheckUri: lBHealthCheckUri,
lBHealthCheckDomain: lBHealthCheckDomain,
lBHealthCheckMethod: lBHealthCheckMethod,
lBHealthyThreshold: lBHealthyThreshold,
lBUnhealthyThreshold: lBUnhealthyThreshold,
}, nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
@ -375,7 +562,7 @@ func getPorts(ports []corev1.ServicePort) []int32 {
return ret
}
func (s *SlbPlugin) consSvc(sc *slbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
func (s *SlbPlugin) consSvc(sc *slbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
@ -386,38 +573,76 @@ func (s *SlbPlugin) consSvc(sc *slbConfig, pod *corev1.Pod, c client.Client, ctx
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = s.allocate(sc.lbIds, len(sc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no avaialable ports for %v", sc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(sc.targetPorts); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(sc.targetPorts[i]),
Port: ports[i],
Protocol: sc.protocols[i],
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
if sc.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), corev1.ProtocolTCP),
Port: ports[i],
Protocol: corev1.ProtocolTCP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), corev1.ProtocolUDP),
Port: ports[i],
Protocol: corev1.ProtocolUDP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), sc.protocols[i]),
Port: ports[i],
Protocol: sc.protocols[i],
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
}
}
svcAnnotations := map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: lbId,
SlbConfigHashKey: util.GetHash(sc),
LBHealthCheckFlagAnnotationKey: sc.lBHealthCheckFlag,
LBHealthCheckSwitchAnnotationKey: sc.lBHealthCheckSwitch,
}
if sc.lBHealthCheckSwitch == "on" {
svcAnnotations[LBHealthCheckTypeAnnotationKey] = sc.lBHealthCheckType
svcAnnotations[LBHealthCheckConnectTimeoutAnnotationKey] = sc.lBHealthCheckConnectTimeout
svcAnnotations[LBHealthCheckIntervalAnnotationKey] = sc.lBHealthCheckInterval
svcAnnotations[LBHealthyThresholdAnnotationKey] = sc.lBHealthyThreshold
svcAnnotations[LBUnhealthyThresholdAnnotationKey] = sc.lBUnhealthyThreshold
if sc.lBHealthCheckType == "http" {
svcAnnotations[LBHealthCheckProtocolPortAnnotationKey] = sc.lBHealthCheckProtocolPort
svcAnnotations[LBHealthCheckDomainAnnotationKey] = sc.lBHealthCheckDomain
svcAnnotations[LBHealthCheckUriAnnotationKey] = sc.lBHealthCheckUri
svcAnnotations[LBHealthCheckMethodAnnotationKey] = sc.lBHealthCheckMethod
}
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
SlbListenerOverrideKey: "true",
SlbIdAnnotationKey: lbId,
SlbConfigHashKey: util.GetHash(sc),
},
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, sc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: sc.externalTrafficPolicyType,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
@ -427,8 +652,8 @@ func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
@ -440,11 +665,27 @@ func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}
func validateHttpProtocolPort(protocolPort string) error {
protocolPorts := strings.Split(protocolPort, ",")
for _, pp := range protocolPorts {
protocol := strings.Split(pp, ":")[0]
if protocol != "http" && protocol != "https" {
return fmt.Errorf("invalid http protocol: %s", protocol)
}
port := strings.Split(pp, ":")[1]
_, err := strconv.Atoi(port)
if err != nil {
return fmt.Errorf("invalid http port: %s", port)
}
}
return nil
}
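Illustrative inputs for validateHttpProtocolPort, written as they might sit in a package-local test; the values are examples only, not fixtures from this change:

package alibabacloud

import "testing"

// Example inputs only: "http:80,https:443" should pass, a non-http protocol should fail.
func TestValidateHttpProtocolPortExamples(t *testing.T) {
	if err := validateHttpProtocolPort("http:80,https:443"); err != nil {
		t.Errorf("expected the value to be accepted, got %v", err)
	}
	if err := validateHttpProtocolPort("tcp:80"); err == nil {
		t.Errorf("expected a non-http protocol to be rejected")
	}
}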

View File

@ -7,6 +7,7 @@ import (
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -20,8 +21,10 @@ import (
)
const (
SlbSPNetwork = "AlibabaCloud-SLB-SharedPort"
SvcSLBSPLabel = "game.kruise.io/AlibabaCloud-SLB-SharedPort"
SlbSPNetwork = "AlibabaCloud-SLB-SharedPort"
SvcSLBSPLabel = "game.kruise.io/AlibabaCloud-SLB-SharedPort"
ManagedServiceNamesConfigName = "ManagedServiceNames"
ManagedServiceSelectorConfigName = "ManagedServiceSelector"
)
const (
@ -42,9 +45,12 @@ type SlbSpPlugin struct {
}
type lbSpConfig struct {
lbIds []string
ports []int
protocols []corev1.Protocol
lbIds []string
ports []int
protocols []corev1.Protocol
managedServiceNames []string
managedServiceSelectorKey string
managedServiceSelectorValue string
}
func (s *SlbSpPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
@ -111,6 +117,7 @@ func (s *SlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context
if networkManager.GetNetworkDisabled() && hasLabel {
newLabels := pod.GetLabels()
delete(newLabels, SlbIdLabelKey)
delete(newLabels, podNetConfig.managedServiceSelectorKey)
pod.Labels = newLabels
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
@ -120,6 +127,7 @@ func (s *SlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context
// enable network
if !networkManager.GetNetworkDisabled() && !hasLabel {
pod.Labels[SlbIdLabelKey] = podSlbId
pod.Labels[podNetConfig.managedServiceSelectorKey] = podNetConfig.managedServiceSelectorValue
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
@ -130,6 +138,41 @@ func (s *SlbSpPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context
return pod, nil
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, true)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
for _, svcName := range podNetConfig.managedServiceNames {
managedSvc := &corev1.Service{}
getErr := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: svcName,
}, managedSvc)
if getErr != nil {
return pod, cperrors.ToPluginError(getErr, cperrors.ApiCallError)
}
toUpDateManagedSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, managedSvc, true)
if err != nil {
return pod, err
}
if toUpDateManagedSvc {
err := c.Update(ctx, managedSvc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
@ -295,18 +338,29 @@ func parseLbSpConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *lbSpConfig
var lbIds []string
var ports []int
var protocols []corev1.Protocol
var managedServiceNames []string
var managedServiceSelectorKey string
var managedServiceSelectorValue string
for _, c := range conf {
switch c.Name {
case SlbIdsConfigName:
lbIds = parseLbIds(c.Value)
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
case ManagedServiceNamesConfigName:
managedServiceNames = strings.Split(c.Value, ",")
case ManagedServiceSelectorConfigName:
selectorKV := strings.Split(c.Value, "=")
if len(selectorKV) == 2 {
managedServiceSelectorKey = selectorKV[0]
managedServiceSelectorValue = selectorKV[1]
}
}
}
return &lbSpConfig{
lbIds: lbIds,
ports: ports,
protocols: protocols,
lbIds: lbIds,
ports: ports,
protocols: protocols,
managedServiceNames: managedServiceNames,
managedServiceSelectorKey: managedServiceSelectorKey,
managedServiceSelectorValue: managedServiceSelectorValue,
}
}


@ -3,6 +3,7 @@ package alibabacloud
import (
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"reflect"
@ -105,11 +106,22 @@ func TestParseLbSpConfig(t *testing.T) {
Name: SlbIdsConfigName,
Value: "lb-xxa",
},
{
Name: ManagedServiceNamesConfigName,
Value: "service-clusterIp",
},
{
Name: ManagedServiceSelectorConfigName,
Value: "game=v1",
},
},
podNetConfig: &lbSpConfig{
lbIds: []string{"lb-xxa"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
lbIds: []string{"lb-xxa"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
managedServiceNames: []string{"service-clusterIp"},
managedServiceSelectorKey: "game",
managedServiceSelectorValue: "v1",
},
},
}
@ -121,3 +133,32 @@ func TestParseLbSpConfig(t *testing.T) {
}
}
}
func TestParsePortProtocols(t *testing.T) {
tests := []struct {
value string
ports []int
protocols []corev1.Protocol
}{
{
value: "80",
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
{
value: "8080/UDP,80/TCP",
ports: []int{8080, 80},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP},
},
}
for i, test := range tests {
actualPorts, actualProtocols := parsePortProtocols(test.value)
if !util.IsSliceEqual(actualPorts, test.ports) {
t.Errorf("case %d: expect ports is %v, but actually is %v", i, test.ports, actualPorts)
}
if !reflect.DeepEqual(actualProtocols, test.protocols) {
t.Errorf("case %d: expect protocols is %v, but actually is %v", i, test.protocols, actualProtocols)
}
}
}


@ -17,14 +17,14 @@ limitations under the License.
package alibabacloud
import (
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"reflect"
"sync"
"testing"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
func TestAllocateDeAllocate(t *testing.T) {
@ -72,10 +72,7 @@ func TestAllocateDeAllocate(t *testing.T) {
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
lbIds []string
ports []int
protocols []corev1.Protocol
isFixed bool
slbConfig *slbConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
@ -87,11 +84,73 @@ func TestParseLbConfig(t *testing.T) {
Name: PortProtocolsConfigName,
Value: "80",
},
{
Name: LBHealthCheckSwitchConfigName,
Value: "off",
},
{
Name: LBHealthCheckFlagConfigName,
Value: "off",
},
{
Name: LBHealthCheckTypeConfigName,
Value: "HTTP",
},
{
Name: LBHealthCheckConnectPortConfigName,
Value: "6000",
},
{
Name: LBHealthCheckConnectTimeoutConfigName,
Value: "100",
},
{
Name: LBHealthCheckIntervalConfigName,
Value: "30",
},
{
Name: LBHealthCheckUriConfigName,
Value: "/another?valid",
},
{
Name: LBHealthCheckDomainConfigName,
Value: "www.test.com",
},
{
Name: LBHealthCheckMethodConfigName,
Value: "HEAD",
},
{
Name: LBHealthyThresholdConfigName,
Value: "5",
},
{
Name: LBUnhealthyThresholdConfigName,
Value: "5",
},
{
Name: LBHealthCheckProtocolPortConfigName,
Value: "http:80",
},
},
slbConfig: &slbConfig{
lbIds: []string{"xxx-A"},
targetPorts: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
externalTrafficPolicyType: corev1.ServiceExternalTrafficPolicyTypeCluster,
isFixed: false,
lBHealthCheckSwitch: "off",
lBHealthCheckFlag: "off",
lBHealthCheckType: "http",
lBHealthCheckConnectTimeout: "100",
lBHealthCheckInterval: "30",
lBHealthCheckUri: "/another?valid",
lBHealthCheckDomain: "www.test.com",
lBHealthCheckMethod: "head",
lBHealthyThreshold: "5",
lBUnhealthyThreshold: "5",
lBHealthCheckProtocolPort: "http:80",
},
lbIds: []string{"xxx-A"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
isFixed: false,
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
@ -107,27 +166,39 @@ func TestParseLbConfig(t *testing.T) {
Name: FixedConfigName,
Value: "true",
},
{
Name: ExternalTrafficPolicyTypeConfigName,
Value: "Local",
},
},
slbConfig: &slbConfig{
lbIds: []string{"xxx-A", "xxx-B"},
targetPorts: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
externalTrafficPolicyType: corev1.ServiceExternalTrafficPolicyTypeLocal,
isFixed: true,
lBHealthCheckSwitch: "on",
lBHealthCheckFlag: "off",
lBHealthCheckType: "tcp",
lBHealthCheckConnectTimeout: "5",
lBHealthCheckInterval: "10",
lBUnhealthyThreshold: "2",
lBHealthyThreshold: "2",
lBHealthCheckUri: "",
lBHealthCheckDomain: "",
lBHealthCheckMethod: "",
lBHealthCheckProtocolPort: "",
},
lbIds: []string{"xxx-A", "xxx-B"},
ports: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
isFixed: true,
},
}
for _, test := range tests {
sc := parseLbConfig(test.conf)
if !reflect.DeepEqual(test.lbIds, sc.lbIds) {
t.Errorf("lbId expect: %v, actual: %v", test.lbIds, sc.lbIds)
for i, test := range tests {
sc, err := parseLbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !util.IsSliceEqual(test.ports, sc.targetPorts) {
t.Errorf("ports expect: %v, actual: %v", test.ports, sc.targetPorts)
}
if !reflect.DeepEqual(test.protocols, sc.protocols) {
t.Errorf("protocols expect: %v, actual: %v", test.protocols, sc.protocols)
}
if test.isFixed != sc.isFixed {
t.Errorf("isFixed expect: %v, actual: %v", test.isFixed, sc.isFixed)
if !reflect.DeepEqual(test.slbConfig, sc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.slbConfig, sc)
}
}
}
@ -137,17 +208,21 @@ func TestInitLbCache(t *testing.T) {
svcList []corev1.Service
minPort int32
maxPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
}{
minPort: 512,
maxPort: 712,
minPort: 512,
maxPort: 712,
blockPorts: []int32{593},
cache: map[string]portAllocated{
"xxx-A": map[int32]bool{
666: true,
593: true,
},
"xxx-B": map[int32]bool{
555: true,
593: true,
},
},
podAllocate: map[string]string{
@ -202,7 +277,7 @@ func TestInitLbCache(t *testing.T) {
},
}
actualCache, actualPodAllocate := initLbCache(test.svcList, test.minPort, test.maxPort)
actualCache, actualPodAllocate := initLbCache(test.svcList, test.minPort, test.maxPort, test.blockPorts)
for lb, pa := range test.cache {
for port, isAllocated := range pa {
if actualCache[lb][port] != isAllocated {


@ -0,0 +1,185 @@
English | [Chinese](./README.zh_CN.md)
For game workloads running OKG in AWS EKS clusters, routing traffic directly to Pod ports through a network load balancer is the foundation of high-performance, real-time service discovery. Dynamic port mapping on an NLB shortens the forwarding chain and avoids the performance loss introduced by kube-proxy load balancing, which is particularly important for session-based battle game servers. For a GameServerSet whose network type is AmazonWebServices-NLB, the AmazonWebServices-NLB network plugin schedules an NLB, automatically allocates ports, creates listeners and target groups, and associates each target group with a Kubernetes Service through the TargetGroupBinding CRD. If the cluster uses the VPC-CNI, traffic is forwarded directly to the Pod IP; otherwise it is forwarded through the ClusterIP. The process is considered successful once the GameServer's network reaches the Ready state.
![image](./../../docs/images/aws-nlb.png)
## AmazonWebServices-NLB Configuration
### Plugin Configuration
```toml
[aws]
enable = true
[aws.nlb]
# Specify the range of free ports that NLB can use to allocate external access ports for pods, with a maximum range of 50 (closed interval)
# The limit of 50 comes from AWS's limit on the number of listeners, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
max_port = 32050
min_port = 32001
```
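The comments above mention the 50-port limit. As a hedged sketch (validateNlbPortRange is an illustrative helper, not part of the plugin), the constraint on min_port/max_port can be checked like this:
```go
package main

import "fmt"

// validateNlbPortRange illustrates the constraint described above: the closed
// interval [minPort, maxPort] may cover at most 50 ports because AWS limits
// the number of listeners per NLB.
func validateNlbPortRange(minPort, maxPort int32) error {
	if minPort > maxPort {
		return fmt.Errorf("min_port %d is greater than max_port %d", minPort, maxPort)
	}
	if n := maxPort - minPort + 1; n > 50 {
		return fmt.Errorf("port range covers %d ports, exceeding the AWS listener limit of 50", n)
	}
	return nil
}

func main() {
	fmt.Println(validateNlbPortRange(32001, 32050)) // <nil>: exactly 50 ports
	fmt.Println(validateNlbPortRange(32001, 32100)) // error: 100 ports
}
```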
### Preparation
Because of differences in AWS's design, mapping NLB ports to Pod ports requires three kinds of CRD resources: Listener, TargetGroup, and TargetGroupBinding.
#### Deploy elbv2-controller
Definition and controller for the Listener/TargetGroup CRDs: https://github.com/aws-controllers-k8s/elbv2-controller. This project links Kubernetes resources with AWS cloud resources. Download the chart from https://gallery.ecr.aws/aws-controllers-k8s/elbv2-chart; example value.yaml:
```yaml
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxxxxxx:role/test"
aws:
region: "us-east-1"
endpoint_url: "https://elasticloadbalancing.us-east-1.amazonaws.com"
```
The key to deploying this project is authorizing the Kubernetes ServiceAccount to access the NLB APIs, which is best done through an IAM role:
##### Step 1: Enable the OIDC provider for the EKS cluster
1. Sign in to the AWS Management Console.
2. Navigate to the EKS console: https://console.aws.amazon.com/eks/
3. Select your cluster.
4. On the cluster details page, ensure that the OIDC provider is enabled. Obtain the OIDC provider URL for the EKS cluster. In the "Configuration" section of the cluster details page, find the "OpenID Connect provider URL".
##### Step 2: Configure the IAM role trust policy
1. In the IAM console, create a new identity provider and select "OpenID Connect".
- For the Provider URL, enter the OIDC provider URL of your EKS cluster.
- For Audience, enter: `sts.amazonaws.com`
2. In the IAM console, create a new IAM role and select "Custom trust policy".
- Use the following trust policy to allow EKS to use this role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:ack-elbv2-controller",
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
}
}
}
]
}
```
- Replace `<AWS_ACCOUNT_ID>`, `<REGION>`, `<OIDC_ID>`, `<NAMESPACE>`, and `<SERVICE_ACCOUNT_NAME>` with your actual values.
- Attach the `ElasticLoadBalancingFullAccess` policy to the role.
#### Deploy AWS Load Balancer Controller
CRD and controller for TargetGroupBinding: https://github.com/kubernetes-sigs/aws-load-balancer-controller/
Official deployment documentation: https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html. It is likewise authorized by granting an IAM role to a Kubernetes ServiceAccount.
### Parameters
#### NlbARNs
- Meaning: The ARN(s) of the NLB(s) to use. Multiple ARNs may be specified, and the NLBs must be created in AWS in advance.
- Format: Separate each nlbARN with a comma. For example: arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e
- Support for change: Yes
#### NlbVPCId
- Meaning: The ID of the VPC where the NLB resides; required for creating AWS target groups.
- Format: String. For example: vpc-0bbc9f9f0ffexxxxx
- Support for change: Yes
#### NlbHealthCheck
- Meaning: Health check parameters for the NLB target groups; can be left empty to use the default values.
- Format: Separate each configuration with a comma. For example: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2"
- Support for change: Yes
- Parameter explanation (a short parsing sketch follows this list):
- **healthCheckEnabled**: Indicates whether health checks are enabled. If the target type is lambda, health checks are disabled by default but can be enabled. If the target type is instance, ip, or alb, health checks are always enabled and cannot be disabled.
- **healthCheckIntervalSeconds**: The approximate amount of time, in seconds, between health checks of an individual target. The range is 5-300. If the target group protocol is TCP, TLS, UDP, TCP_UDP, HTTP, or HTTPS, the default is 30 seconds. If the target group protocol is GENEVE, the default is 10 seconds. If the target type is lambda, the default is 35 seconds.
- **healthCheckPath**: The destination for health checks on the targets. For HTTP/HTTPS health checks, this is the path. For the GRPC protocol version, this is the path of a custom health check method, in the format /package.service/method. The default is /AWS.ALB/healthcheck.
- **healthCheckPort**: The port the load balancer uses when performing health checks on targets. The default is traffic-port, which is the port on which each target receives traffic from the load balancer. If the protocol is GENEVE, the default is port 80.
- **healthCheckProtocol**: The protocol the load balancer uses when performing health checks on targets. For Application Load Balancers, the default is HTTP. For Network Load Balancers and Gateway Load Balancers, the default is TCP. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
- **healthCheckTimeoutSeconds**: The amount of time, in seconds, during which no response from a target means a failed health check. The range is 2-120 seconds. For target groups with a protocol of HTTP, the default is 6 seconds. For target groups with a protocol of TCP, TLS, or HTTPS, the default is 10 seconds. For target groups with a protocol of GENEVE, the default is 5 seconds. If the target type is lambda, the default is 30 seconds.
- **healthyThresholdCount**: The number of consecutive health check successes required before considering a target healthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 5. For target groups with a protocol of GENEVE, the default is 5. If the target type is lambda, the default is 5.
- **unhealthyThresholdCount**: The number of consecutive health check failures required before considering a target unhealthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 2. For target groups with a protocol of GENEVE, the default is 2. If the target type is lambda, the default is 5.
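A minimal, illustrative sketch of parsing such a comma-separated key:value string; the plugin's actual logic lives in parseLbConfig and converts the values into typed fields, while parseNlbHealthCheck below is a hypothetical helper:
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNlbHealthCheck splits "k1:v1,k2:v2,..." into a map; malformed entries
// are ignored here (the plugin logs a warning for them).
func parseNlbHealthCheck(value string) map[string]string {
	out := map[string]string{}
	for _, kv := range strings.Split(value, ",") {
		parts := strings.Split(kv, ":")
		if len(parts) != 2 {
			continue
		}
		out[parts[0]] = parts[1]
	}
	return out
}

func main() {
	hc := parseNlbHealthCheck("healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPort:8081")
	interval, _ := strconv.Atoi(hc["healthCheckIntervalSeconds"])
	fmt.Println(hc["healthCheckEnabled"], interval, hc["healthCheckPort"]) // true 30 8081
}
```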
#### PortProtocols
- Meaning: Ports and protocols exposed by the pod; multiple port/protocol pairs may be specified. If the protocol is omitted, TCP is assumed (see the sketch after this list).
- Format: port1/protocol1,port2/protocol2,... (protocol should be uppercase)
- Support for change: Yes
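A hedged sketch of how a PortProtocols value maps to port/protocol pairs, including the TCP fallback when the protocol suffix is omitted; splitPortProtocols is an illustrative helper, not the plugin's code (the plugin handles this inside parseLbConfig):
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitPortProtocols turns "port1/protocol1,port2,..." into parallel slices;
// entries without a protocol suffix default to TCP.
func splitPortProtocols(value string) (ports []int, protocols []string) {
	for _, pp := range strings.Split(value, ",") {
		parts := strings.Split(pp, "/")
		port, err := strconv.Atoi(parts[0])
		if err != nil {
			continue // entries with a non-numeric port are ignored
		}
		protocol := "TCP"
		if len(parts) == 2 {
			protocol = parts[1]
		}
		ports = append(ports, port)
		protocols = append(protocols, protocol)
	}
	return ports, protocols
}

func main() {
	ports, protocols := splitPortProtocols("10000/UDP,10001,10002/TCP")
	fmt.Println(ports, protocols) // [10000 10001 10002] [UDP TCP TCP]
}
```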
#### Fixed
- Meaning: Whether the access port is fixed. If yes, even if the pod is deleted and rebuilt, the mapping between the internal and external networks will not change.
- Format: false / true
- Support for change: Yes
#### AllowNotReadyContainers
- Meaning: Names of containers that should continue to receive traffic during in-place upgrades; multiple containers may be specified.
- Format: {containerName_0},{containerName_1},... For example: sidecar
- Support for change: Not changeable during in-place upgrades
#### Annotations
- Meaning: Annotations to add to the Service; multiple annotations may be specified.
- Format: key1:value1,key2:value2...
- Support for change: Yes
### Usage Example
```shell
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gs-demo
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AmazonWebServices-NLB
networkConf:
- name: NlbARNs
value: "arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/okg-test/yyyyyyyyyyyyyyyy"
- name: NlbVPCId
value: "vpc-0bbc9f9f0ffexxxxx"
- name: PortProtocols
value: "80/TCP"
- name: NlbHealthCheck
value: "healthCheckIntervalSeconds:15"
gameServerTemplate:
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/gs-demo/gameserver:network
name: gameserver
EOF
```
Check the network status of the GameServer:
```yaml
networkStatus:
createTime: "2024-05-30T03:34:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- endPoint: okg-test-yyyyyyyyyyyyyyyy.elb.us-east-1.amazonaws.com
ip: ""
ports:
- name: "80"
port: 32034
protocol: TCP
internalAddresses:
- ip: 10.10.7.154
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-05-30T03:34:14Z"
networkType: AmazonWebServices-NLB
```
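The endPoint above is derived from the NLB ARN: the plugin extracts the region and load balancer name (see generateNlbEndpoint in this changeset) and assembles the standard NLB DNS name. A minimal sketch of the same derivation, with nlbEndpointFromARN as an illustrative stand-in:
```go
package main

import (
	"fmt"
	"strings"
)

// nlbEndpointFromARN extracts the region and load balancer name from an NLB
// ARN and builds the corresponding DNS name.
func nlbEndpointFromARN(arn string) string {
	parts := strings.Split(arn, ":")
	if len(parts) != 6 {
		return ""
	}
	region := parts[3]
	name := strings.ReplaceAll(strings.TrimPrefix(parts[5], "loadbalancer/net/"), "/", "-")
	return fmt.Sprintf("%s.elb.%s.amazonaws.com", name, region)
}

func main() {
	arn := "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/okg-test/yyyyyyyyyyyyyyyy"
	fmt.Println(nlbEndpointFromARN(arn)) // okg-test-yyyyyyyyyyyyyyyy.elb.us-east-1.amazonaws.com
}
```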


@ -0,0 +1,183 @@
Chinese | [English](./README.md)
For game workloads running OKG in AWS EKS clusters, routing traffic directly to Pod ports through a network load balancer is the foundation of high-performance, real-time service discovery. Dynamic port mapping on an NLB shortens the forwarding chain and avoids the performance loss introduced by kube-proxy load balancing, which is particularly important for session-based battle game servers. For a GameServerSet whose network type is AmazonWebServices-NLB, the AmazonWebServices-NLB network plugin schedules an NLB, automatically allocates ports, creates listeners and target groups, and associates each target group with a Kubernetes Service through the TargetGroupBinding CRD. If the cluster uses the VPC-CNI, traffic is forwarded directly to the Pod IP; otherwise it is forwarded through the ClusterIP. The process is considered successful once the GameServer's network reaches the Ready state.
![image](./../../docs/images/aws-nlb.png)
## AmazonWebServices-NLB Configuration
### Plugin Configuration
```toml
[aws]
enable = true
[aws.nlb]
# Specify the range of free ports that the NLB can use to allocate external access ports for pods; the range can cover at most 50 ports (closed interval)
# The limit of 50 comes from AWS's limit on the number of listeners, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
max_port = 32050
min_port = 32001
```
### Preparation
Because of differences in AWS's design, mapping NLB ports to Pod ports requires three kinds of CRD resources: Listener, TargetGroup, and TargetGroupBinding.
#### Deploy elbv2-controller
Definition and controller for the Listener/TargetGroup CRDs: https://github.com/aws-controllers-k8s/elbv2-controller. This project links Kubernetes resources with AWS cloud resources. Download the chart from https://gallery.ecr.aws/aws-controllers-k8s/elbv2-chart; example value.yaml:
```yaml
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxxxxxx:role/test"
aws:
region: "us-east-1"
endpoint_url: "https://elasticloadbalancing.us-east-1.amazonaws.com"
```
The key to deploying this project is authorizing the Kubernetes ServiceAccount to access the NLB APIs, which is best done through an IAM role:
##### Step 1: Enable the OIDC provider for the EKS cluster
1. Sign in to the AWS Management Console.
2. Navigate to the EKS console: https://console.aws.amazon.com/eks/
3. Select your cluster.
4. On the cluster details page, ensure that the OIDC provider is enabled. Obtain the OIDC provider URL for the EKS cluster. In the "Configuration" section of the cluster details page, find the "OpenID Connect provider URL".
##### Step 2: Configure the IAM role trust policy
1. In the IAM console, create a new identity provider and select "OpenID Connect".
- For the Provider URL, enter the OIDC provider URL of your EKS cluster.
- For Audience, enter: `sts.amazonaws.com`
2. In the IAM console, create a new IAM role and select "Custom trust policy".
- Use the following trust policy to allow EKS to use this role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:ack-elbv2-controller",
"oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
}
}
}
]
}
```
- Replace `<AWS_ACCOUNT_ID>`, `<REGION>`, `<OIDC_ID>`, `<NAMESPACE>`, and `<SERVICE_ACCOUNT_NAME>` with your actual values.
- Attach the `ElasticLoadBalancingFullAccess` policy to the role.
#### Deploy AWS Load Balancer Controller
CRD and controller for TargetGroupBinding: https://github.com/kubernetes-sigs/aws-load-balancer-controller/
Official deployment documentation: https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html. It is likewise authorized by granting an IAM role to a Kubernetes ServiceAccount.
### Parameters
#### NlbARNs
- Meaning: The ARN(s) of the NLB(s) to use. Multiple ARNs may be specified, and the NLBs must be created in AWS in advance.
- Format: Separate the ARNs with commas. For example: arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e
- Support for change: Yes
#### NlbVPCId
- Meaning: The ID of the VPC where the NLB resides; required for creating AWS target groups.
- Format: String. For example: vpc-0bbc9f9f0ffexxxxx
- Support for change: Yes
#### NlbHealthCheck
- Meaning: Health check parameters for the NLB target groups; can be left empty to use the default values.
- Format: Separate the settings with commas. For example: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2"
- Support for change: Yes
- Parameter explanation:
- **healthCheckEnabled**: Indicates whether health checks are enabled. If the target type is lambda, health checks are disabled by default but can be enabled. If the target type is instance, ip, or alb, health checks are always enabled and cannot be disabled.
- **healthCheckIntervalSeconds**: The interval, in seconds, between health checks of an individual target. The range is 5-300 seconds. If the target group protocol is TCP, TLS, UDP, TCP_UDP, HTTP, or HTTPS, the default is 30 seconds. If the target group protocol is GENEVE, the default is 10 seconds. If the target type is lambda, the default is 35 seconds.
- **healthCheckPath**: [HTTP/HTTPS health checks] The destination path for health checks on the targets. [HTTP1 or HTTP2 protocol version] The ping path; the default is /. [GRPC protocol version] The path of a custom health check method, in the format /package.service/method; the default is /AWS.ALB/healthcheck.
- **healthCheckPort**: The port the load balancer uses when performing health checks on targets. If the protocol is HTTP, HTTPS, TCP, TLS, UDP, or TCP_UDP, the default is traffic-port, which is the port on which each target receives traffic from the load balancer. If the protocol is GENEVE, the default is port 80.
- **healthCheckProtocol**: The protocol the load balancer uses when performing health checks on targets. For Application Load Balancers, the default is HTTP. For Network Load Balancers and Gateway Load Balancers, the default is TCP. If the target group protocol is HTTP or HTTPS, TCP health checks are not supported. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
- **healthCheckTimeoutSeconds**: The amount of time, in seconds, during which no response from a target means a failed health check. The range is 2-120 seconds. For target groups with a protocol of HTTP, the default is 6 seconds. For target groups with a protocol of TCP, TLS, or HTTPS, the default is 10 seconds. For target groups with a protocol of GENEVE, the default is 5 seconds. If the target type is lambda, the default is 30 seconds.
- **healthyThresholdCount**: The number of consecutive health check successes required before considering a target healthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 5. For target groups with a protocol of GENEVE, the default is 5. If the target type is lambda, the default is 5.
- **unhealthyThresholdCount**: The number of consecutive health check failures required before considering a target unhealthy. The range is 2-10. If the target group protocol is TCP, TCP_UDP, UDP, TLS, HTTP, or HTTPS, the default is 2. If the target group protocol is GENEVE, the default is 2. If the target type is lambda, the default is 5.
#### PortProtocols
- Meaning: Ports and protocols exposed by the pod; multiple port/protocol pairs may be specified.
- Format: port1/protocol1,port2/protocol2,... (the protocol must be uppercase)
- Support for change: Yes
#### Fixed
- Meaning: Whether the access port is fixed. If true, the mapping between internal and external networks does not change even if the pod is deleted and recreated.
- Format: false / true
- Support for change: Yes
#### AllowNotReadyContainers
- Meaning: Names of containers that should continue to receive traffic during in-place upgrades; multiple containers may be specified.
- Format: {containerName_0},{containerName_1},... For example: sidecar
- Support for change: Not changeable during in-place upgrades
#### Annotations
- Meaning: Annotations to add to the Service; multiple annotations may be specified.
- Format: key1:value1,key2:value2...
- Support for change: Yes
### Usage Example
```shell
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gs-demo
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AmazonWebServices-NLB
networkConf:
- name: NlbARNs
value: "arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/okg-test/yyyyyyyyyyyyyyyy"
- name: NlbVPCId
value: "vpc-0bbc9f9f0ffexxxxx"
- name: PortProtocols
value: "80/TCP"
- name: NlbHealthCheck
value: "healthCheckIntervalSeconds:15"
gameServerTemplate:
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/gs-demo/gameserver:network
name: gameserver
EOF
```
Check the network status of the GameServer:
```yaml
networkStatus:
createTime: "2024-05-30T03:34:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- endPoint: okg-test-yyyyyyyyyyyyyyyy.elb.us-east-1.amazonaws.com
ip: ""
ports:
- name: "80"
port: 32034
protocol: TCP
internalAddresses:
- ip: 10.10.7.154
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-05-30T03:34:14Z"
networkType: AmazonWebServices-NLB
```


@ -0,0 +1,62 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
log "k8s.io/klog/v2"
"github.com/openkruise/kruise-game/cloudprovider"
)
const (
AmazonsWebServices = "AmazonsWebServices"
)
var (
amazonsWebServicesProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return AmazonsWebServices
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin adds a network plugin to the AmazonWebServices cloud provider's plugin map
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
log.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewAmazonsWebServicesProvider() (cloudprovider.CloudProvider, error) {
return amazonsWebServicesProvider, nil
}


@ -0,0 +1,835 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
ackv1alpha1 "github.com/aws-controllers-k8s/elbv2-controller/apis/v1alpha1"
"github.com/kr/pretty"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/tools/cache"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
elbv2api "sigs.k8s.io/aws-load-balancer-controller/apis/elbv2/v1beta1"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
)
const (
NlbNetwork = "AmazonWebServices-NLB"
AliasNlb = "NLB-Network"
NlbARNsConfigName = "NlbARNs"
NlbVPCIdConfigName = "NlbVPCId"
NlbHealthCheckConfigName = "NlbHealthCheck"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
NlbAnnotations = "Annotations"
NlbARNAnnoKey = "service.beta.kubernetes.io/aws-load-balancer-nlb-arn"
NlbPortAnnoKey = "service.beta.kubernetes.io/aws-load-balancer-nlb-port"
AWSTargetGroupSyncStatus = "aws-load-balancer-nlb-target-group-synced"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
NlbConfigHashKey = "game.kruise.io/network-config-hash"
ResourceTagKey = "managed-by"
ResourceTagValue = "game.kruise.io"
)
const (
healthCheckEnabled = "healthCheckEnabled"
healthCheckIntervalSeconds = "healthCheckIntervalSeconds"
healthCheckPath = "healthCheckPath"
healthCheckPort = "healthCheckPort"
healthCheckProtocol = "healthCheckProtocol"
healthCheckTimeoutSeconds = "healthCheckTimeoutSeconds"
healthyThresholdCount = "healthyThresholdCount"
unhealthyThresholdCount = "unhealthyThresholdCount"
listenerActionType = "forward"
)
type portAllocated map[int32]bool
type nlbPorts struct {
arn string
ports []int32
}
type NlbPlugin struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]*nlbPorts
mutex sync.RWMutex
}
type backend struct {
targetPort int
protocol corev1.Protocol
}
type healthCheck struct {
healthCheckEnabled *bool
healthCheckIntervalSeconds *int64
healthCheckPath *string
healthCheckPort *string
healthCheckProtocol *string
healthCheckTimeoutSeconds *int64
healthyThresholdCount *int64
unhealthyThresholdCount *int64
}
type nlbConfig struct {
loadBalancerARNs []string
healthCheck *healthCheck
vpcID string
backends []*backend
isFixed bool
annotations map[string]string
}
func startWatchTargetGroup(ctx context.Context) error {
var err error
go func() {
err = watchTargetGroup(ctx)
}()
return err
}
func watchTargetGroup(ctx context.Context) error {
scheme := runtime.NewScheme()
utilruntime.Must(ackv1alpha1.AddToScheme(scheme))
utilruntime.Must(elbv2api.AddToScheme(scheme))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Metrics: metricsserver.Options{
BindAddress: "0",
},
Scheme: scheme,
})
if err != nil {
return err
}
informer, err := mgr.GetCache().GetInformer(ctx, &ackv1alpha1.TargetGroup{})
if err != nil {
return fmt.Errorf("failed to get informer: %v", err)
}
if _, err := informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
handleTargetGroupEvent(ctx, mgr.GetClient(), obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
handleTargetGroupEvent(ctx, mgr.GetClient(), newObj)
},
}); err != nil {
return fmt.Errorf("failed to add event handler: %v", err)
}
log.Info("Start to watch TargetGroups successfully")
return mgr.Start(ctx)
}
func handleTargetGroupEvent(ctx context.Context, c client.Client, obj interface{}) {
targetGroup, ok := obj.(*ackv1alpha1.TargetGroup)
if !ok {
log.Warning("Failed to convert event.Object to TargetGroup")
return
}
if targetGroup.Labels[AWSTargetGroupSyncStatus] == "false" {
targetGroupARN, err := getACKTargetGroupARN(targetGroup)
if err != nil {
return
}
log.Infof("targetGroup sync request watched, start to sync %s/%s, ARN: %s",
targetGroup.GetNamespace(), targetGroup.GetName(), targetGroupARN)
err = syncListenerAndTargetGroupBinding(ctx, c, targetGroup, &targetGroupARN)
if err != nil {
log.Errorf("syncListenerAndTargetGroupBinding by targetGroup %s error %v",
pretty.Sprint(targetGroup), err)
return
}
patch := client.RawPatch(types.MergePatchType,
[]byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"true"}}}`, AWSTargetGroupSyncStatus)))
err = c.Patch(ctx, targetGroup, patch)
if err != nil {
log.Warningf("patch targetGroup %s %s error %v",
pretty.Sprint(targetGroup), AWSTargetGroupSyncStatus, err)
}
}
}
func (n *NlbPlugin) Name() string {
return NlbNetwork
}
func (n *NlbPlugin) Alias() string {
return AliasNlb
}
func (n *NlbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
n.mutex.Lock()
defer n.mutex.Unlock()
err := startWatchTargetGroup(ctx)
if err != nil {
return err
}
nlbOptions, ok := options.(provideroptions.AmazonsWebServicesOptions)
if !ok {
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to nlbOptions"), cperrors.InternalError)
}
n.minPort = nlbOptions.NLBOptions.MinPort
n.maxPort = nlbOptions.NLBOptions.MaxPort
svcList := &corev1.ServiceList{}
err = c.List(ctx, svcList, client.MatchingLabels{ResourceTagKey: ResourceTagValue})
if err != nil {
return err
}
n.initLbCache(svcList.Items)
if err != nil {
return err
}
log.Infof("[%s] podAllocate cache complete initialization: %s", NlbNetwork, pretty.Sprint(n.podAllocate))
return nil
}
func (n *NlbPlugin) initCache(nlbARN string) {
if n.cache[nlbARN] == nil {
n.cache[nlbARN] = make(portAllocated, n.maxPort-n.minPort+1)
for j := n.minPort; j <= n.maxPort; j++ {
n.cache[nlbARN][j] = false
}
}
}
func (n *NlbPlugin) initLbCache(svcList []corev1.Service) {
if n.cache == nil {
n.cache = make(map[string]portAllocated)
}
if n.podAllocate == nil {
n.podAllocate = make(map[string]*nlbPorts)
}
for _, svc := range svcList {
lbARN := svc.Annotations[NlbARNAnnoKey]
if lbARN != "" {
n.initCache(lbARN)
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= n.maxPort && port >= n.minPort {
n.cache[lbARN][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
n.podAllocate[svc.GetNamespace()+"/"+svc.GetName()] = &nlbPorts{arn: lbARN, ports: ports}
}
}
}
}
func (n *NlbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NlbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
lbConfig := parseLbConfig(networkConfig)
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(n.syncTargetGroupAndService(lbConfig, pod, c, ctx), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(lbConfig) != svc.GetAnnotations()[NlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(n.syncTargetGroupAndService(lbConfig, pod, c, ctx), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() {
return pod, cperrors.ToPluginError(c.DeleteAllOf(ctx, &elbv2api.TargetGroupBinding{},
client.InNamespace(pod.GetNamespace()),
client.MatchingLabels(map[string]string{ResourceTagKey: ResourceTagValue, SvcSelectorKey: pod.GetName()})),
cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() {
selector := client.MatchingLabels{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
}
var tgbList elbv2api.TargetGroupBindingList
err = c.List(ctx, &tgbList, selector)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
if len(tgbList.Items) != len(svc.Spec.Ports) {
var tgList ackv1alpha1.TargetGroupList
err = c.List(ctx, &tgList, selector)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
patch := client.RawPatch(types.MergePatchType,
[]byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"false"}}}`, AWSTargetGroupSyncStatus)))
for _, tg := range tgList.Items {
err = c.Patch(ctx, &tg, patch)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
EndPoint: generateNlbEndpoint(svc.Annotations[NlbARNAnnoKey]),
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func generateNlbEndpoint(nlbARN string) string {
const arnPartsCount = 6
const loadBalancerPrefix = "loadbalancer/net/"
parts := strings.Split(nlbARN, ":")
if len(parts) != arnPartsCount {
return ""
}
region := parts[3]
loadBalancerName := strings.ReplaceAll(strings.TrimPrefix(parts[5], loadBalancerPrefix), "/", "-")
return fmt.Sprintf("%s.elb.%s.amazonaws.com", loadBalancerName, region)
}
func (n *NlbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range n.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
n.deAllocate(podKey)
}
return nil
}
func (n *NlbPlugin) allocate(lbARNs []string, num int, nsName string) *nlbPorts {
n.mutex.Lock()
defer n.mutex.Unlock()
// Initialize cache for each lbARN if not already done
for _, nlbARN := range lbARNs {
n.initCache(nlbARN)
}
// Find lbARN with enough free ports
selectedARN := n.findLbWithFreePorts(lbARNs, num)
if selectedARN == "" {
return nil
}
// Allocate ports
ports := n.allocatePorts(selectedARN, num)
n.podAllocate[nsName] = &nlbPorts{arn: selectedARN, ports: ports}
log.Infof("pod %s allocate nlb %s ports %v", nsName, selectedARN, ports)
return &nlbPorts{arn: selectedARN, ports: ports}
}
func (n *NlbPlugin) findLbWithFreePorts(lbARNs []string, num int) string {
for _, nlbARN := range lbARNs {
freePorts := 0
for i := n.minPort; i <= n.maxPort && freePorts < num; i++ {
if !n.cache[nlbARN][i] {
freePorts++
}
}
if freePorts >= num {
return nlbARN
}
}
return ""
}
func (n *NlbPlugin) allocatePorts(lbARN string, num int) []int32 {
var ports []int32
for i := 0; i < num; i++ {
for p := n.minPort; p <= n.maxPort; p++ {
if !n.cache[lbARN][p] {
n.cache[lbARN][p] = true
ports = append(ports, p)
break
}
}
}
return ports
}
func (n *NlbPlugin) deAllocate(nsName string) {
n.mutex.Lock()
defer n.mutex.Unlock()
allocatedPorts, exist := n.podAllocate[nsName]
if !exist {
return
}
lbARN := allocatedPorts.arn
ports := allocatedPorts.ports
for _, port := range ports {
n.cache[lbARN][port] = false
}
delete(n.podAllocate, nsName)
log.Infof("pod %s deallocate nlb %s ports %v", nsName, lbARN, ports)
}
func init() {
nlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
amazonsWebServicesProvider.registerPlugin(&nlbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbConfig {
var lbARNs []string
var hc healthCheck
var vpcId string
backends := make([]*backend, 0)
isFixed := false
annotations := map[string]string{}
for _, c := range conf {
switch c.Name {
case NlbARNsConfigName:
for _, nlbARN := range strings.Split(c.Value, ",") {
if nlbARN != "" {
lbARNs = append(lbARNs, nlbARN)
}
}
case NlbHealthCheckConfigName:
for _, healthCheckConf := range strings.Split(c.Value, ",") {
confKV := strings.Split(healthCheckConf, ":")
if len(confKV) == 2 {
switch confKV[0] {
case healthCheckEnabled:
v, err := strconv.ParseBool(confKV[1])
if err != nil {
continue
}
hc.healthCheckEnabled = &v
case healthCheckIntervalSeconds:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthCheckIntervalSeconds = &v
case healthCheckPath:
hc.healthCheckPath = &confKV[1]
case healthCheckPort:
hc.healthCheckPort = &confKV[1]
case healthCheckProtocol:
hc.healthCheckProtocol = &confKV[1]
case healthCheckTimeoutSeconds:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthCheckTimeoutSeconds = &v
case healthyThresholdCount:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.healthyThresholdCount = &v
case unhealthyThresholdCount:
v, err := strconv.ParseInt(confKV[1], 10, 64)
if err != nil {
continue
}
hc.unhealthyThresholdCount = &v
}
} else {
log.Warningf("nlb %s %s is invalid", NlbHealthCheckConfigName, confKV)
}
}
case NlbVPCIdConfigName:
vpcId = c.Value
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
var protocol corev1.Protocol
if len(ppSlice) != 2 {
protocol = corev1.ProtocolTCP
} else {
protocol = corev1.Protocol(ppSlice[1])
}
backends = append(backends, &backend{
targetPort: port,
protocol: protocol,
})
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case NlbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("nlb %s %s is invalid", NlbAnnotations, c.Value)
}
}
}
}
return &nlbConfig{
loadBalancerARNs: lbARNs,
healthCheck: &hc,
vpcID: vpcId,
backends: backends,
isFixed: isFixed,
annotations: annotations,
}
}
func getACKTargetGroupARN(tg *ackv1alpha1.TargetGroup) (string, error) {
if len(tg.Status.Conditions) == 0 {
return "", fmt.Errorf("targetGroup status not ready")
}
if tg.Status.Conditions[0].Status != "True" {
return "", fmt.Errorf("targetGroup status error: %s %s",
*tg.Status.Conditions[0].Message, *tg.Status.Conditions[0].Reason)
}
if tg.Status.ACKResourceMetadata != nil && tg.Status.ACKResourceMetadata.ARN != nil {
return string(*tg.Status.ACKResourceMetadata.ARN), nil
} else {
return "", fmt.Errorf("targetGroup status not ready")
}
}
func (n *NlbPlugin) syncTargetGroupAndService(config *nlbConfig,
pod *corev1.Pod, client client.Client, ctx context.Context) error {
var ports []int32
var lbARN string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := n.podAllocate[podKey]
if !exist {
allocatedPorts = n.allocate(config.loadBalancerARNs, len(config.backends), podKey)
if allocatedPorts == nil {
return fmt.Errorf("no NLB has %d enough available ports for %s", len(config.backends), podKey)
}
}
lbARN = allocatedPorts.arn
ports = allocatedPorts.ports
ownerReference := getOwnerReference(client, ctx, pod, config.isFixed)
for i := range ports {
targetGroupName := fmt.Sprintf("%s-%d", pod.GetName(), ports[i])
protocol := string(config.backends[i].protocol)
targetPort := int64(config.backends[i].targetPort)
var targetTypeIP = string(ackv1alpha1.TargetTypeEnum_ip)
_, err := controllerutil.CreateOrUpdate(ctx, client, &ackv1alpha1.TargetGroup{
ObjectMeta: metav1.ObjectMeta{
Name: targetGroupName,
Namespace: pod.GetNamespace(),
OwnerReferences: ownerReference,
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
AWSTargetGroupSyncStatus: "false",
},
Annotations: map[string]string{
NlbARNAnnoKey: lbARN,
NlbPortAnnoKey: fmt.Sprintf("%d", ports[i]),
},
},
Spec: ackv1alpha1.TargetGroupSpec{
HealthCheckEnabled: config.healthCheck.healthCheckEnabled,
HealthCheckIntervalSeconds: config.healthCheck.healthCheckIntervalSeconds,
HealthCheckPath: config.healthCheck.healthCheckPath,
HealthCheckPort: config.healthCheck.healthCheckPort,
HealthCheckProtocol: config.healthCheck.healthCheckProtocol,
HealthCheckTimeoutSeconds: config.healthCheck.healthCheckTimeoutSeconds,
HealthyThresholdCount: config.healthCheck.healthyThresholdCount,
UnhealthyThresholdCount: config.healthCheck.unhealthyThresholdCount,
Name: &targetGroupName,
Protocol: &protocol,
Port: &targetPort,
VPCID: &config.vpcID,
TargetType: &targetTypeIP,
Tags: []*ackv1alpha1.Tag{{Key: ptr.To[string](ResourceTagKey),
Value: ptr.To[string](ResourceTagValue)}},
},
}, func() error { return nil })
if err != nil {
return err
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.backends); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(config.backends[i].targetPort),
Port: ports[i],
Protocol: config.backends[i].protocol,
TargetPort: intstr.FromInt(config.backends[i].targetPort),
})
}
annotations := map[string]string{
NlbARNAnnoKey: lbARN,
NlbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
_, err := controllerutil.CreateOrUpdate(ctx, client, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: ownerReference,
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: pod.GetName(),
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}, func() error { return nil })
if err != nil {
return err
}
return nil
}
func syncListenerAndTargetGroupBinding(ctx context.Context, client client.Client,
tg *ackv1alpha1.TargetGroup, targetGroupARN *string) error {
actionType := listenerActionType
port, err := strconv.ParseInt(tg.Annotations[NlbPortAnnoKey], 10, 64)
if err != nil {
return err
}
lbARN := tg.Annotations[NlbARNAnnoKey]
podName := tg.Labels[SvcSelectorKey]
_, err = controllerutil.CreateOrUpdate(ctx, client, &ackv1alpha1.Listener{
ObjectMeta: metav1.ObjectMeta{
Name: tg.GetName(),
Namespace: tg.GetNamespace(),
OwnerReferences: tg.GetOwnerReferences(),
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: podName,
},
},
Spec: ackv1alpha1.ListenerSpec{
Protocol: tg.Spec.Protocol,
Port: &port,
LoadBalancerARN: &lbARN,
DefaultActions: []*ackv1alpha1.Action{
{
TargetGroupARN: targetGroupARN,
Type: &actionType,
},
},
Tags: []*ackv1alpha1.Tag{{Key: ptr.To[string](ResourceTagKey),
Value: ptr.To[string](ResourceTagValue)}},
},
}, func() error { return nil })
if err != nil {
return err
}
var targetTypeIP = elbv2api.TargetTypeIP
_, err = controllerutil.CreateOrUpdate(ctx, client, &elbv2api.TargetGroupBinding{
ObjectMeta: metav1.ObjectMeta{
Name: tg.GetName(),
Namespace: tg.GetNamespace(),
OwnerReferences: tg.GetOwnerReferences(),
Labels: map[string]string{
ResourceTagKey: ResourceTagValue,
SvcSelectorKey: podName,
},
},
Spec: elbv2api.TargetGroupBindingSpec{
TargetGroupARN: *targetGroupARN,
TargetType: &targetTypeIP,
ServiceRef: elbv2api.ServiceReference{
Name: podName,
Port: intstr.FromInt(int(port)),
},
},
}, func() error { return nil })
if err != nil {
return err
}
return nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func getOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

View File

@ -0,0 +1,288 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package amazonswebservices
import (
"reflect"
"sync"
"testing"
"github.com/kr/pretty"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
)
func TestAllocateDeAllocate(t *testing.T) {
tests := []struct {
loadBalancerARNs []string
nlb *NlbPlugin
num int
podKey string
}{
{
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
nlb: &NlbPlugin{
maxPort: int32(1000),
minPort: int32(951),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]*nlbPorts),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
},
{
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
nlb: &NlbPlugin{
maxPort: int32(955),
minPort: int32(951),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]*nlbPorts),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 6,
},
}
for _, test := range tests {
allocatedPorts := test.nlb.allocate(test.loadBalancerARNs, test.num, test.podKey)
if int(test.nlb.maxPort-test.nlb.minPort+1) < test.num && allocatedPorts != nil {
t.Errorf("insufficient available ports but NLB was still allocated: %s",
pretty.Sprint(allocatedPorts))
}
if allocatedPorts == nil {
continue
}
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range allocatedPorts.ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if test.nlb.cache[allocatedPorts.arn][port] == false {
t.Errorf("allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range allocatedPorts.ports {
if test.nlb.cache[allocatedPorts.arn][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
}
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
loadBalancerARNs []string
healthCheck *healthCheck
backends []*backend
isFixed bool
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbARNsConfigName,
Value: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870"},
healthCheck: &healthCheck{},
backends: []*backend{
{
targetPort: 80,
protocol: corev1.ProtocolTCP,
},
},
isFixed: false,
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbARNsConfigName,
Value: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870,arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
},
{
Name: NlbHealthCheckConfigName,
Value: "healthCheckEnabled:true,healthCheckIntervalSeconds:30,healthCheckPath:/health,healthCheckPort:8081,healthCheckProtocol:HTTP,healthCheckTimeoutSeconds:10,healthyThresholdCount:5,unhealthyThresholdCount:2",
},
{
Name: PortProtocolsConfigName,
Value: "10000/UDP,10001,10002/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
loadBalancerARNs: []string{"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870", "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e"},
healthCheck: &healthCheck{
healthCheckEnabled: ptr.To[bool](true),
healthCheckIntervalSeconds: ptr.To[int64](30),
healthCheckPath: ptr.To[string]("/health"),
healthCheckPort: ptr.To[string]("8081"),
healthCheckProtocol: ptr.To[string]("HTTP"),
healthCheckTimeoutSeconds: ptr.To[int64](10),
healthyThresholdCount: ptr.To[int64](5),
unhealthyThresholdCount: ptr.To[int64](2),
},
backends: []*backend{
{
targetPort: 10000,
protocol: corev1.ProtocolUDP,
},
{
targetPort: 10001,
protocol: corev1.ProtocolTCP,
},
{
targetPort: 10002,
protocol: corev1.ProtocolTCP,
},
},
isFixed: true,
},
}
for _, test := range tests {
sc := parseLbConfig(test.conf)
if !reflect.DeepEqual(test.loadBalancerARNs, sc.loadBalancerARNs) {
t.Errorf("loadBalancerARNs expect: %v, actual: %v", test.loadBalancerARNs, sc.loadBalancerARNs)
}
if !reflect.DeepEqual(test.healthCheck, sc.healthCheck) {
t.Errorf("healthCheck expect: %s, actual: %s", pretty.Sprint(test.healthCheck), pretty.Sprint(sc.healthCheck))
}
if !reflect.DeepEqual(test.backends, sc.backends) {
t.Errorf("ports expect: %s, actual: %s", pretty.Sprint(test.backends), pretty.Sprint(sc.backends))
}
if test.isFixed != sc.isFixed {
t.Errorf("isFixed expect: %v, actual: %v", test.isFixed, sc.isFixed)
}
}
}
func TestInitLbCache(t *testing.T) {
test := struct {
n *NlbPlugin
svcList []corev1.Service
cache map[string]portAllocated
podAllocate map[string]*nlbPorts
}{
n: &NlbPlugin{
minPort: 951,
maxPort: 1000,
},
cache: map[string]portAllocated{
"arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870": map[int32]bool{
988: true,
},
"arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e": map[int32]bool{
951: true,
999: true,
},
},
podAllocate: map[string]*nlbPorts{
"ns-0/name-0": {
arn: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
ports: []int32{988},
},
"ns-1/name-1": {
arn: "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
ports: []int32{951, 999},
},
},
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
NlbARNAnnoKey: "arn:aws:elasticloadbalancing:us-east-1:888888888888:loadbalancer/net/aaa/3b332e6841f23870",
},
Labels: map[string]string{ResourceTagKey: ResourceTagValue},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 988,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
NlbARNAnnoKey: "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/net/bbb/5fe74944d794d27e",
},
Labels: map[string]string{ResourceTagKey: ResourceTagValue},
Namespace: "ns-1",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-B",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(8080),
Port: 951,
Protocol: corev1.ProtocolTCP,
},
{
TargetPort: intstr.FromInt(8081),
Port: 999,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
}
test.n.initLbCache(test.svcList)
for arn, pa := range test.cache {
for port, isAllocated := range pa {
if test.n.cache[arn][port] != isAllocated {
t.Errorf("nlb arn %s port %d isAllocated, expect: %t, actual: %t", arn, port, isAllocated, test.n.cache[arn][port])
}
}
}
if !reflect.DeepEqual(test.n.podAllocate, test.podAllocate) {
t.Errorf("podAllocate expect %v, but actully got %v", test.podAllocate, test.n.podAllocate)
}
}


@ -17,13 +17,13 @@ limitations under the License.
package cloudprovider
import (
"flag"
"github.com/BurntSushi/toml"
"github.com/openkruise/kruise-game/cloudprovider/options"
"k8s.io/klog/v2"
)
import "flag"
var Opt *Options
type Options struct {
@ -43,25 +43,39 @@ type ConfigFile struct {
}
type CloudProviderConfig struct {
KubernetesOptions CloudProviderOptions
AlibabaCloudOptions CloudProviderOptions
KubernetesOptions CloudProviderOptions
AlibabaCloudOptions CloudProviderOptions
VolcengineOptions CloudProviderOptions
AmazonsWebServicesOptions CloudProviderOptions
TencentCloudOptions CloudProviderOptions
JdCloudOptions CloudProviderOptions
HwCloudOptions CloudProviderOptions
}
type tomlConfigs struct {
Kubernetes options.KubernetesOptions `toml:"kubernetes"`
AlibabaCloud options.AlibabaCloudOptions `toml:"alibabacloud"`
Kubernetes options.KubernetesOptions `toml:"kubernetes"`
AlibabaCloud options.AlibabaCloudOptions `toml:"alibabacloud"`
Volcengine options.VolcengineOptions `toml:"volcengine"`
AmazonsWebServices options.AmazonsWebServicesOptions `toml:"aws"`
TencentCloud options.TencentCloudOptions `toml:"tencentcloud"`
JdCloud options.JdCloudOptions `toml:"jdcloud"`
HwCloud options.HwCloudOptions `toml:"hwcloud"`
}
func (cf *ConfigFile) Parse() *CloudProviderConfig {
var config tomlConfigs
if _, err := toml.DecodeFile(cf.Path, &config); err != nil {
klog.Fatal(err)
}
return &CloudProviderConfig{
KubernetesOptions: config.Kubernetes,
AlibabaCloudOptions: config.AlibabaCloud,
KubernetesOptions: config.Kubernetes,
AlibabaCloudOptions: config.AlibabaCloud,
VolcengineOptions: config.Volcengine,
AmazonsWebServicesOptions: config.AmazonsWebServices,
TencentCloudOptions: config.TencentCloud,
JdCloudOptions: config.JdCloud,
HwCloudOptions: config.HwCloud,
}
}
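For context, both of the newly wired providers are configured through the TOML file that `Parse` reads. A minimal sketch of the added sections is below: the `[jdcloud]` block mirrors the JdCloud README later in this change, while the `[hwcloud]` block and its key names (`max_port`, `min_port`, `block_ports`) are assumptions inferred from the `HwCloudOptions`/`ELBOptions` fields, not confirmed syntax.
```toml
# JdCloud provider, as documented in the JdCloud-NLB README.
[jdcloud]
enable = true
[jdcloud.nlb]
max_port = 700
min_port = 500

# HwCloud provider. The section and key names below are assumptions
# derived from the ELB option fields (MinPort, MaxPort, BlockPorts).
[hwcloud]
enable = true
[hwcloud.elb]
max_port = 700
min_port = 500
block_ports = []
```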

View File

@ -0,0 +1,799 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hwcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
PortProtocolsConfigName = "PortProtocols"
ExternalTrafficPolicyTypeConfigName = "ExternalTrafficPolicyType"
PublishNotReadyAddressesConfigName = "PublishNotReadyAddresses"
ElbIdAnnotationKey = "kubernetes.io/elb.id"
ElbIdsConfigName = "ElbIds"
ElbClassAnnotationKey = "kubernetes.io/elb.class"
ElbClassConfigName = "ElbClass"
ElbAvailableZoneAnnotationKey = "kubernetes.io/elb.availability-zones"
ElbAvailableZoneAnnotationConfigName = "ElbAvailableZone"
ElbConnLimitAnnotationKey = "kubernetes.io/elb.connection-limit"
ElbConnLimitConfigName = "ElbConnLimit"
ElbSubnetAnnotationKey = "kubernetes.io/elb.subnet-id"
ElbSubnetConfigName = "ElbSubnetId"
ElbEipAnnotationKey = "kubernetes.io/elb.eip-id"
ElbEipConfigName = "ElbEipId"
ElbEipKeepAnnotationKey = "kubernetes.io/elb.keep-eip"
ElbEipKeepConfigName = "ElbKeepd"
ElbEipAutoCreateOptionAnnotationKey = "kubernetes.io/elb.eip-auto-create-option"
ElbEipAutoCreateOptionConfigName = "ElbEipAutoCreateOption"
ElbLbAlgorithmAnnotationKey = "kubernetes.io/elb.lb-algorithm"
ElbLbAlgorithmConfigName = "ElbLbAlgorithm"
ElbSessionAffinityFlagAnnotationKey = "kubernetes.io/elb.session-affinity-flag"
ElbSessionAffinityFlagConfigName = "ElbSessionAffinityFlag"
ElbSessionAffinityOptionAnnotationKey = "kubernetes.io/elb.session-affinity-option"
ElbSessionAffinityOptionConfigName = "ElbSessionAffinityOption"
ElbTransparentClientIPAnnotationKey = "kubernetes.io/elb.enable-transparent-client-ip"
ElbTransparentClientIPConfigName = "ElbTransparentClientIP"
ElbXForwardedHostAnnotationKey = "kubernetes.io/elb.x-forwarded-host"
ElbXForwardedHostConfigName = "ElbXForwardedHost"
ElbTlsRefAnnotationKey = "kubernetes.io/elb.default-tls-container-ref"
ElbTlsRefConfigName = "ElbTlsRef"
ElbIdleTimeoutAnnotationKey = "kubernetes.io/elb.idle-timeout"
ElbIdleTimeoutConfigName = "ElbIdleTimeout"
ElbRequestTimeoutAnnotationKey = "kubernetes.io/elb.request-timeout"
ElbRequestTimeoutConfigName = "ElbRequestTimeout"
ElbResponseTimeoutAnnotationKey = "kubernetes.io/elb.response-timeout"
ElbResponseTimeoutConfigName = "ElbResponseTimeout"
ElbEnableCrossVPCAnnotationKey = "kubernetes.io/elb.enable-cross-vpc"
ElbEnableCrossVPCConfigName = "ElbEnableCrossVPC"
ElbL4FlavorIDAnnotationKey = "kubernetes.io/elb.l4-flavor-id"
ElbL4FlavorIDConfigName = "ElbL4FlavorID"
ElbL7FlavorIDAnnotationKey = "kubernetes.io/elb.l7-flavor-id"
ElbL7FlavorIDConfigName = "ElbL7FlavorID"
LBHealthCheckSwitchAnnotationKey = "kubernetes.io/elb.health-check-flag"
LBHealthCheckSwitchConfigName = "LBHealthCheckFlag"
LBHealthCheckOptionAnnotationKey = "kubernetes.io/elb.health-check-option"
LBHealthCHeckOptionConfigName = "LBHealthCheckOption"
)
const (
ElbConfigHashKey = "game.kruise.io/network-config-hash"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
ProtocolTCPUDP corev1.Protocol = "TCPUDP"
FixedConfigName = "Fixed"
ElbNetwork = "HwCloud-ELB"
AliasELB = "ELB-Network"
ElbClassDedicated = "dedicated"
ElbClassShared = "shared"
ElbLbAlgorithmRoundRobin = "ROUND_ROBIN"
ElbLbAlgorithmLeastConn = "LEAST_CONNECTIONS"
ElbLbAlgorithmSourceIP = "SOURCE_IP"
)
type portAllocated map[int32]bool
type ElbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type elbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
elbClass string
elbConnLimit int32
elbLbAlgorithm string
elbSessionAffinityFlag string
elbSessionAffinityOption string
elbTransparentClientIP bool
elbXForwardedHost bool
elbIdleTimeout int32
elbRequestTimeout int32
elbResponseTimeout int32
externalTrafficPolicyType corev1.ServiceExternalTrafficPolicyType
publishNotReadyAddresses bool
lBHealthCheckSwitch string
lBHealtchCheckOption string
}
func (s *ElbPlugin) Name() string {
return ElbNetwork
}
func (s *ElbPlugin) Alias() string {
return AliasELB
}
func (s *ElbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
s.mutex.Lock()
defer s.mutex.Unlock()
elbOptions := options.(provideroptions.HwCloudOptions).ELBOptions
s.minPort = elbOptions.MinPort
s.maxPort = elbOptions.MaxPort
s.blockPorts = elbOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := c.List(ctx, svcList)
if err != nil {
return err
}
s.cache, s.podAllocate = initLbCache(svcList.Items, s.minPort, s.maxPort, s.blockPorts)
log.Infof("[%s] podAllocate cache complete initialization: %v", ElbNetwork, s.podAllocate)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Annotations[ElbIdAnnotationKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
// init cache for that lb
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort+1)
for i := minPort; i <= maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
// fill in cache for that lb
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
value, ok := newCache[lbId][port]
if !ok || !value {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("svc %s/%s allocate elb %s ports %v", svc.Namespace, svc.Name, lbId, ports)
}
}
}
return newCache, newPodAllocate
}
func (s *ElbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (s *ElbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
// get svc
svc := &corev1.Service{}
err = c.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
return pod, cperrors.ToPluginError(c.Create(ctx, service), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// old svc remain
if svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[%s] waitting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", ElbNetwork, svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(sc) != svc.GetAnnotations()[ElbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
service, err := s.consSvc(sc, pod, c, ctx)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
return pod, cperrors.ToPluginError(c.Update(ctx, service), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(c.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if svc.Status.LoadBalancer.Ingress == nil {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(c, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := c.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (s *ElbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, c)
networkConfig := networkManager.GetNetworkConfig()
sc, err := parseLbConfig(networkConfig)
if err != nil {
return cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range s.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
s.deAllocate(podKey)
}
return nil
}
func (s *ElbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
s.mutex.Lock()
defer s.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, slbId := range lbIds {
sum := 0
for i := s.minPort; i <= s.maxPort; i++ {
if !s.cache[slbId][i] {
sum++
}
if sum >= num {
lbId = slbId
break
}
}
}
if lbId == "" {
return "", nil
}
// select ports
for i := 0; i < num; i++ {
var port int32
if s.cache[lbId] == nil {
// init cache for new lb
s.cache[lbId] = make(portAllocated, s.maxPort-s.minPort+1)
for i := s.minPort; i <= s.maxPort; i++ {
s.cache[lbId][i] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
}
for p, allocated := range s.cache[lbId] {
if !allocated {
port = p
break
}
}
s.cache[lbId][port] = true
ports = append(ports, port)
}
s.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocate slb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (s *ElbPlugin) deAllocate(nsName string) {
s.mutex.Lock()
defer s.mutex.Unlock()
allocatedPorts, exist := s.podAllocate[nsName]
if !exist {
return
}
slbPorts := strings.Split(allocatedPorts, ":")
lbId := slbPorts[0]
ports := util.StringToInt32Slice(slbPorts[1], ",")
for _, port := range ports {
s.cache[lbId][port] = false
}
// block ports
for _, blockPort := range s.blockPorts {
s.cache[lbId][blockPort] = true
}
delete(s.podAllocate, nsName)
log.Infof("pod %s deallocate slb %s ports %v", nsName, lbId, ports)
}
func init() {
elbPlugin := ElbPlugin{
mutex: sync.RWMutex{},
}
hwCloudProvider.registerPlugin(&elbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*elbConfig, error) {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
externalTrafficPolicy := corev1.ServiceExternalTrafficPolicyTypeCluster
publishNotReadyAddresses := false
elbClass := ElbClassDedicated
elbConnLimit := int32(-1)
elbLbAlgorithm := ElbLbAlgorithmRoundRobin
elbSessionAffinityFlag := "off"
elbSessionAffinityOption := ""
elbTransparentClientIP := false
elbXForwardedHost := false
elbIdleTimeout := int32(-1)
elbRequestTimeout := int32(-1)
elbResponseTimeout := int32(-1)
lBHealthCheckSwitch := "on"
LBHealthCHeckOptionConfig := ""
for _, c := range conf {
switch c.Name {
case ElbIdsConfigName:
for _, slbId := range strings.Split(c.Value, ",") {
if slbId != "" {
lbIds = append(lbIds, slbId)
}
}
if len(lbIds) <= 0 {
return nil, fmt.Errorf("no elb id found, must specify at least one elb id")
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case ExternalTrafficPolicyTypeConfigName:
if strings.EqualFold(c.Value, string(corev1.ServiceExternalTrafficPolicyTypeLocal)) {
externalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeLocal
}
case PublishNotReadyAddressesConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
publishNotReadyAddresses = v
case ElbClassConfigName:
if strings.EqualFold(c.Value, string(ElbClassShared)) {
elbClass = ElbClassShared
}
case ElbConnLimitConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb connection limit value: %s", c.Value)
continue
}
elbConnLimit = int32(v)
case ElbLbAlgorithmConfigName:
if strings.EqualFold(c.Value, ElbLbAlgorithmRoundRobin) {
elbLbAlgorithm = ElbLbAlgorithmRoundRobin
}
if strings.EqualFold(c.Value, ElbLbAlgorithmLeastConn) {
elbLbAlgorithm = ElbLbAlgorithmLeastConn
}
if strings.EqualFold(c.Value, ElbLbAlgorithmSourceIP) {
elbLbAlgorithm = ElbLbAlgorithmSourceIP
}
case ElbSessionAffinityFlagConfigName:
if strings.EqualFold(c.Value, "on") {
elbSessionAffinityFlag = "on"
}
case ElbSessionAffinityOptionConfigName:
if !json.Valid([]byte(c.Value)) {
return nil, fmt.Errorf("invalid elb session affinity option value: %s", c.Value)
}
elbSessionAffinityOption = c.Value
case ElbTransparentClientIPConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb transparent client ip value: %s", c.Value)
continue
}
elbTransparentClientIP = v
case ElbXForwardedHostConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb x forwarded host value: %s", c.Value)
continue
}
elbXForwardedHost = v
case ElbIdleTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb idle timeout value: %s", c.Value)
continue
}
if v >= 0 && v <= 4000 {
elbIdleTimeout = int32(v)
} else {
_ = fmt.Errorf("ignore invalid elb idle timeout value: %s", c.Value)
continue
}
case ElbRequestTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb request timeout value: %s", c.Value)
continue
}
if v >= 1 && v <= 300 {
elbRequestTimeout = int32(v)
} else {
_ = fmt.Errorf("ignore invalid elb request timeout value: %s", c.Value)
continue
}
case ElbResponseTimeoutConfigName:
v, err := strconv.Atoi(c.Value)
if err != nil {
_ = fmt.Errorf("ignore invalid elb response timeout value: %s", c.Value)
continue
}
if v >= 1 && v <= 300 {
elbResponseTimeout = int32(v)
} else {
_ = fmt.Errorf("ignore invalid elb response timeout value: %s", c.Value)
continue
}
case LBHealthCheckSwitchConfigName:
checkSwitch := strings.ToLower(c.Value)
if checkSwitch != "on" && checkSwitch != "off" {
return nil, fmt.Errorf("invalid lb health check switch value: %s", c.Value)
}
lBHealthCheckSwitch = checkSwitch
case LBHealthCHeckOptionConfigName:
if json.Valid([]byte(c.Value)) {
LBHealthCHeckOptionConfig = c.Value
} else {
return nil, fmt.Errorf("invalid lb health check option value: %s", c.Value)
}
}
}
return &elbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
externalTrafficPolicyType: externalTrafficPolicy,
publishNotReadyAddresses: publishNotReadyAddresses,
elbClass: elbClass,
elbConnLimit: elbConnLimit,
elbLbAlgorithm: elbLbAlgorithm,
elbSessionAffinityFlag: elbSessionAffinityFlag,
elbSessionAffinityOption: elbSessionAffinityOption,
elbTransparentClientIP: elbTransparentClientIP,
elbXForwardedHost: elbXForwardedHost,
elbIdleTimeout: elbIdleTimeout,
elbRequestTimeout: elbRequestTimeout,
elbResponseTimeout: elbResponseTimeout,
lBHealthCheckSwitch: lBHealthCheckSwitch,
lBHealtchCheckOption: LBHealthCHeckOptionConfig,
}, nil
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (s *ElbPlugin) consSvc(sc *elbConfig, pod *corev1.Pod, c client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := s.podAllocate[podKey]
if exist {
slbPorts := strings.Split(allocatedPorts, ":")
lbId = slbPorts[0]
ports = util.StringToInt32Slice(slbPorts[1], ",")
} else {
lbId, ports = s.allocate(sc.lbIds, len(sc.targetPorts), podKey)
if lbId == "" && ports == nil {
return nil, fmt.Errorf("there are no avaialable ports for %v", sc.lbIds)
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(sc.targetPorts); i++ {
if sc.protocols[i] == ProtocolTCPUDP {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(corev1.ProtocolTCP))),
Port: ports[i],
Protocol: corev1.ProtocolTCP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(corev1.ProtocolUDP))),
Port: ports[i],
Protocol: corev1.ProtocolUDP,
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
} else {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: fmt.Sprintf("%s-%s", strconv.Itoa(sc.targetPorts[i]), strings.ToLower(string(sc.protocols[i]))),
Port: ports[i],
Protocol: sc.protocols[i],
TargetPort: intstr.FromInt(sc.targetPorts[i]),
})
}
}
svcAnnotations := map[string]string{
ElbIdAnnotationKey: lbId,
ElbConfigHashKey: util.GetHash(sc),
ElbClassAnnotationKey: sc.elbClass,
ElbLbAlgorithmAnnotationKey: sc.elbLbAlgorithm,
ElbSessionAffinityFlagAnnotationKey: sc.elbSessionAffinityFlag,
ElbSessionAffinityOptionAnnotationKey: sc.elbSessionAffinityOption,
ElbTransparentClientIPAnnotationKey: strconv.FormatBool(sc.elbTransparentClientIP),
ElbXForwardedHostAnnotationKey: strconv.FormatBool(sc.elbXForwardedHost),
LBHealthCheckSwitchAnnotationKey: sc.lBHealthCheckSwitch,
}
if sc.elbClass != ElbClassDedicated {
svcAnnotations[ElbConnLimitAnnotationKey] = strconv.Itoa(int(sc.elbConnLimit))
}
if sc.elbIdleTimeout != -1 {
svcAnnotations[ElbIdleTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbIdleTimeout))
}
if sc.elbRequestTimeout != -1 {
svcAnnotations[ElbRequestTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbRequestTimeout))
}
if sc.elbResponseTimeout != -1 {
svcAnnotations[ElbResponseTimeoutAnnotationKey] = strconv.Itoa(int(sc.elbResponseTimeout))
}
if sc.lBHealthCheckSwitch == "on" {
svcAnnotations[LBHealthCheckOptionAnnotationKey] = sc.lBHealtchCheckOption
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: svcAnnotations,
OwnerReferences: getSvcOwnerReference(c, ctx, pod, sc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
ExternalTrafficPolicy: sc.externalTrafficPolicyType,
PublishNotReadyAddresses: sc.publishNotReadyAddresses,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}
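The HwCloud-ELB plugin lands in this change without an accompanying README, so the snippet below is only a minimal, hypothetical GameServerSet sketch showing how the configuration names defined above (ElbIds, PortProtocols, Fixed) would be passed through networkConf; the ELB id and container image are placeholders.
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
  name: elb-demo
  namespace: default
spec:
  replicas: 1
  network:
    networkType: HwCloud-ELB
    networkConf:
      - name: ElbIds
        # placeholder ELB id; multiple ids are separated by ","
        value: "elb-xxxxxxxx"
      - name: PortProtocols
        # expose container port 7777 over UDP
        value: "7777/UDP"
      - name: Fixed
        value: "false"
  gameServerTemplate:
    spec:
      containers:
        - name: game-server
          image: your-game-image:latest  # placeholder image
```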

View File

@ -0,0 +1,45 @@
package hwcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
HwCloud = "HwCloud"
)
var (
hwCloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return HwCloud
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// registerPlugin registers a network plugin to the HwCloud provider
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewHwCloudProvider() (cloudprovider.CloudProvider, error) {
return hwCloudProvider, nil
}

View File

@ -0,0 +1,193 @@
English | [中文](./README.md)
Based on JdCloud Container Service, combined with OKG, various network model plugins are provided for game scenarios.
## JdCloud-NLB configuration
JdCloud Container Service supports the reuse of NLB (Network Load Balancer) in Kubernetes. Different services (svcs) can use different ports of the same NLB. As a result, the JdCloud-NLB network plugin will record the port allocation for each NLB. For services that specify the network type as JdCloud-NLB, the JdCloud-NLB network plugin will automatically allocate a port and create a service object. Once it detects that the public IP of the svc has been successfully created, the GameServer's network will transition to the Ready state, completing the process.
### plugin configuration
```toml
[jdcloud]
enable = true
[jdcloud.nlb]
#To allocate external access ports for Pods, you need to define the idle port ranges that the NLB (Network Load Balancer) can use. The maximum range for each port segment is 200 ports.
max_port = 700
min_port = 500
```
### Parameter
#### NlbIds
- Meaning: the IDs of the NLBs. You can fill in more than one; the NLBs must be created in JdCloud beforehand.
- Value: each nlbId is divided by `,`. For example: `netlb-aaa,netlb-bbb,...`
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: `port1/protocol1`,`port2/protocol2`,... The protocol names must be in uppercase letters.
- Configurable: Y
#### Fixed
- Meaning: whether the mapping relationship is fixed. If so, the mapping relationship remains unchanged even if the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated service is assigned a NodePort. This can be set to false only in NLB passthrough mode.
- Value: false / true
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the names of containers that are allowed to be not ready during in-place updates, so that traffic is not cut off.
- Value: {containerName_0},{containerName_1},... For example: sidecar
- Configurable: it cannot be changed during the in-place update process.
#### Annotations
- Meaning: the annotations added to the service.
- Value: key1:value1,key2:value2...
- Configurable: Y
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: nlb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: JdCloud-NLB
networkConf:
- name: NlbIds
#Fill in Jdcloud Cloud LoadBalancer Id here
value: netlb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T08:00:20Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
ports:
- name: "8211"
port: 531
protocol: UDP
internalAddresses:
- ip: 10.0.0.95
ports:
- name: "8211"
port: 8211
protocol: UDP
lastTransitionTime: "2024-11-04T08:00:20Z"
networkType: JdCloud-NLB
```
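To consume the allocated public endpoint programmatically, one option (a sketch, assuming a GameServer named `nlb-0` in the `default` namespace and the networkStatus layout shown above) is to read it from the GameServer status via JSONPath:
```bash
kubectl get gameserver nlb-0 -n default \
  -o jsonpath='{.status.networkStatus.externalAddresses[0].ip}'
```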
## JdCloud-EIP configuration
JdCloud Container Service supports binding an Elastic Public IP (EIP) directly to a pod in Kubernetes, allowing the pod to communicate with the external network directly. Note the following constraints:
- The cluster's network plugin must be Yunjian-CNI; clusters created with Flannel cannot be used.
- For the usage restrictions of Elastic Public IPs, please refer to the JdCloud Elastic Public IP product documentation.
- The EIP-Controller component must be installed.
- The Elastic Public IP will not be deleted when the pod is destroyed.
### Parameter
#### BandwidthConfigName
- Meaning: the bandwidth of the Elastic Public IP, measured in Mbps. The value range is [1, 1024].
- Value: must be an integer
- Configurable: Y
#### ChargeTypeConfigName
- Meaning: the billing method of the Elastic Public IP
- Value: string, `postpaid_by_usage` / `postpaid_by_duration`
- Configurable: Y
#### FixedEIPConfigName
- Meaning: whether to fix the Elastic Public IP. If so, the EIP will not change when the pod is recreated.
- Value: string, "false" / "true"
- Configurable: Y
#### AssignEIPConfigName
- Meaning: whether to designate a specific Elastic Public IP. If true, provide the ID of the Elastic Public IP via EIPIdConfigName; otherwise an EIP will be allocated automatically.
- Value: string, "false" / "true"
#### EIPIdConfigName
- Meaning: if a specific Elastic Public IP is designated, its ID must be provided; the component will automatically look it up and bind it (see the pod-annotation sketch below).
- Value: string, for example `fip-xxxxxxxx`
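Under the hood, the JdCloud-EIP plugin forwards these parameters to the EIP controller as pod annotations (see the plugin source later in this change). A rough sketch of the annotations a configured pod ends up with is shown below; all values are placeholders.
```yaml
metadata:
  annotations:
    jdos.jd.com/eip.enable: "true"
    jdos.jd.com/eip-name: "default/eip-0"
    jdos.jd.com/eip.bandwith: "10"                   # from Bandwidth
    jdos.jd.com/eip.chargeMode: "postpaid_by_usage"  # from ChargeType
    jdos.jd.com/eip.static: "false"                  # from Fixed
```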
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip
namespace: default
spec:
replicas: 3
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
network:
networkType: JdCloud-EIP
networkConf:
- name: "Bandwidth"
value: "10"
- name: "ChargeType"
value: postpaid_by_usage
- name: "Fixed"
value: "false"
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-11-04T10:53:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
internalAddresses:
- ip: 10.0.0.95
lastTransitionTime: "2024-11-04T10:53:14Z"
networkType: JdCloud-EIP
```

View File

@ -0,0 +1,192 @@
中文 | [English](./README.md)
基于京东云容器服务,针对游戏场景,结合OKG提供各网络模型插件。
## JdCloud-NLB 相关配置
京东云容器服务支持在k8s中对NLB复用的机制,不同的svc可以使用同一个NLB的不同端口。由此,JdCloud-NLB network plugin将记录各NLB对应的端口分配情况;对于指定了网络类型为JdCloud-NLB的服务,JdCloud-NLB网络插件将会自动分配一个端口并创建一个service对象,待检测到svc公网IP创建成功后,GameServer的网络变为Ready状态,该过程执行完成。
### plugin配置
```toml
[jdcloud]
enable = true
[jdcloud.nlb]
#填写nlb可使用的空闲端口段,用于为pod分配外部接入端口,范围最大为200
max_port = 700
min_port = 500
```
### 参数
#### NlbIds
- 含义:填写nlb的id,可填写多个,需要先在【京东云】中创建好nlb。
- 填写格式:各个nlbId用`,`分割。例如:netlb-aaa,netlb-bbb,...
- 是否支持变更:是
#### PortProtocols
- 含义:pod暴露的端口及协议,支持填写多个端口/协议
- 填写格式:port1/protocol1,port2/protocol2,...(协议需大写)
- 是否支持变更:是
#### Fixed
- 含义:是否固定访问IP/端口。若是,即使pod删除重建,网络内外映射关系不会改变
- 填写格式:false / true
- 是否支持变更:是
#### AllocateLoadBalancerNodePorts
- 含义:生成的service是否分配nodeport,仅在nlb的直通模式(passthrough)下才能设置为false
- 填写格式:true / false
- 是否支持变更:是
#### AllowNotReadyContainers
- 含义:在容器原地升级时允许不断流的对应容器名称,可填写多个
- 填写格式:{containerName_0},{containerName_1},... 例如:sidecar
- 是否支持变更:在原地升级过程中不可变更
#### Annotations
- 含义:添加在service上的anno,可填写多个
- 填写格式:key1:value1,key2:value2...
- 是否支持变更:是
### 使用示例
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: nlb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: JdCloud-NLB
networkConf:
- name: NlbIds
#Fill in Jdcloud Cloud LoadBalancer Id here
value: netlb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
EOF
```
检查GameServer中的网络状态:
```
networkStatus:
createTime: "2024-11-04T08:00:20Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
ports:
- name: "8211"
port: 531
protocol: UDP
internalAddresses:
- ip: 10.0.0.95
ports:
- name: "8211"
port: 8211
protocol: UDP
lastTransitionTime: "2024-11-04T08:00:20Z"
networkType: JdCloud-NLB
```
## JdCloud-EIP 相关配置
京东云容器服务支持在k8s中让一个 pod 和弹性公网 IP 直接进行绑定,可以让 pod 直接与外部网络进行通信。
- 集群的网络插件使用 yunjian-CNI,不可使用 flannel 创建集群
- 弹性公网 IP 使用限制,请具体参考京东云弹性公网 IP 产品文档
- 安装 EIP-Controller 组件
- 弹性公网 IP 不会随 POD 的销毁而删除
### 参数
#### BandwidthConfigName
- 含义:弹性公网IP的带宽,单位为 Mbps,取值范围为 [1,1024]
- 填写格式:必须填整数,且不带单位
- 是否支持变更:是
#### ChargeTypeConfigName
- 含义:弹性公网IP的计费方式,取值:按量计费(postpaid_by_usage)、包年包月(postpaid_by_duration)
- 填写格式:字符串
- 是否支持变更:是
#### FixedEIPConfigName
- 含义:是否固定弹性公网IP。若是,即使pod删除重建,弹性公网IP也不会改变
- 填写格式:"false" / "true",字符串
- 是否支持变更:是
#### AssignEIPConfigName
- 含义:是否指定使用某个弹性公网IP,若是请填写 true,否则自动分配一个EIP
- 填写格式:"false" / "true",字符串
#### EIPIdConfigName
- 含义:若指定使用某个弹性公网IP,则必须填写弹性公网IP的ID,组件会自动进行查询和绑定
- 填写格式:字符串,例如:fip-xxxxxxxx
### 使用示例
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip
namespace: default
spec:
replicas: 3
gameServerTemplate:
spec:
containers:
- args:
- /data/server/start.sh
command:
- /bin/bash
image: gss-cn-north-1.jcr.service.jdcloud.com/gsshosting/pal:v1
name: game-server
network:
networkType: JdCloud-EIP
networkConf:
- name: "Bandwidth"
value: "10"
- name: "ChargeType"
value: postpaid_by_usage
- name: "Fixed"
value: "false"
EOF
```
检查GameServer中的网络状态:
```
networkStatus:
createTime: "2024-11-04T10:53:14Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xxx.xxx
internalAddresses:
- ip: 10.0.0.95
lastTransitionTime: "2024-11-04T10:53:14Z"
networkType: JdCloud-EIP
```

View File

@ -0,0 +1,111 @@
package jdcloud
import (
"context"
cerr "errors"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "JdCloud-EIP"
AliasSEIP = "EIP-Network"
EIPIdConfigName = "EIPId"
EIPIdAnnotationKey = "jdos.jd.com/eip.id"
EIPIfaceAnnotationKey = "jdos.jd.com/eip.iface"
EIPAnnotationKey = "jdos.jd.com/eip.ip"
BandwidthConfigName = "Bandwidth"
BandwidthAnnotationkey = "jdos.jd.com/eip.bandwith"
ChargeTypeConfigName = "ChargeType"
ChargeTypeAnnotationkey = "jdos.jd.com/eip.chargeMode"
EnableEIPAnnotationKey = "jdos.jd.com/eip.enable"
FixedEIPConfigName = "Fixed"
FixedEIPAnnotationKey = "jdos.jd.com/eip.static"
EIPNameAnnotationKey = "jdos.jd.com/eip-name"
AssignEIPConfigName = "AssignEIP"
AssignEIPAnnotationKey = "jdos.jd.com/eip.userAssign"
)
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
conf := networkManager.GetNetworkConfig()
pod.Annotations[EnableEIPAnnotationKey] = "true"
pod.Annotations[EIPNameAnnotationKey] = pod.GetNamespace() + "/" + pod.GetName()
//parse network configuration
for _, c := range conf {
switch c.Name {
case BandwidthConfigName:
pod.Annotations[BandwidthAnnotationkey] = c.Value
case ChargeTypeConfigName:
pod.Annotations[ChargeTypeAnnotationkey] = c.Value
case FixedEIPConfigName:
pod.Annotations[FixedEIPAnnotationKey] = c.Value
}
}
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
if enable, ok := pod.Annotations[EnableEIPAnnotationKey]; !ok || (ok && enable != "true") {
return pod, errors.ToPluginError(cerr.New("eip plugin is not enabled"), errors.InternalError)
}
if _, ok := pod.Annotations[EIPIdAnnotationKey]; !ok {
return pod, nil
}
if _, ok := pod.Annotations[EIPAnnotationKey]; !ok {
return pod, nil
}
networkStatus.ExternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Annotations[EIPAnnotationKey],
},
}
networkStatus.InternalAddresses = []gamekruiseiov1alpha1.NetworkAddress{
{
IP: pod.Status.PodIP,
},
}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err := networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
return nil
}
func init() {
jdcloudProvider.registerPlugin(&EipPlugin{})
}

View File

@ -0,0 +1 @@
package jdcloud

View File

@ -0,0 +1,61 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
Jdcloud = "Jdcloud"
)
var (
jdcloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (jp *Provider) Name() string {
return Jdcloud
}
func (jp *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if jp.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return jp.plugins, nil
}
// registerPlugin registers a network plugin to the JdCloud provider
func (jp *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
jp.plugins[name] = plugin
}
func NewJdcloudProvider() (cloudprovider.CloudProvider, error) {
return jdcloudProvider, nil
}

View File

@ -0,0 +1,581 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
LbType_NLB = "nlb"
)
type JdNLBElasticIp struct {
ElasticIpId string `json:"elasticIpId"`
}
type JdNLBAlgorithm string
const (
JdNLBDefaultConnIdleTime int = 600
JdNLBAlgorithmRoundRobin JdNLBAlgorithm = "RoundRobin"
JdNLBAlgorithmLeastConn JdNLBAlgorithm = "LeastConn"
JdNLBAlgorithmIpHash JdNLBAlgorithm = "IpHash"
)
type JdNLBListenerBackend struct {
ProxyProtocol bool `json:"proxyProtocol"`
Algorithm JdNLBAlgorithm `json:"algorithm"`
}
type JdNLBListener struct {
Protocol string `json:"protocol"`
ConnectionIdleTimeSeconds int `json:"connectionIdleTimeSeconds"`
Backend *JdNLBListenerBackend `json:"backend"`
}
type JdNLB struct {
Version string `json:"version"`
LoadBalancerId string `json:"loadBalancerId"`
LoadBalancerType string `json:"loadBalancerType"`
Internal bool `json:"internal"`
Listeners []*JdNLBListener `json:"listeners"`
}
const (
JdNLBVersion = "v1"
NlbNetwork = "JdCloud-NLB"
AliasNLB = "NLB-Network"
NlbIdLabelKey = "service.beta.kubernetes.io/jdcloud-loadbalancer-id"
NlbIdsConfigName = "NlbIds"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
AllocateLoadBalancerNodePorts = "AllocateLoadBalancerNodePorts"
NlbAnnotations = "Annotations"
NlbConfigHashKey = "game.kruise.io/network-config-hash"
NlbSpecAnnotationKey = "service.beta.kubernetes.io/jdcloud-load-balancer-spec"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
NlbAlgorithm = "service.beta.kubernetes.io/jdcloud-lb-algorithm"
NlbConnectionIdleTime = "service.beta.kubernetes.io/jdcloud-lb-idle-time"
)
type portAllocated map[int32]bool
type NlbPlugin struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
}
type nlbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
annotations map[string]string
allocateLoadBalancerNodePorts bool
algorithm string
connIdleTimeSeconds int
}
func (c *NlbPlugin) Name() string {
return NlbNetwork
}
func (c *NlbPlugin) Alias() string {
return AliasNLB
}
func (c *NlbPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
c.mutex.Lock()
defer c.mutex.Unlock()
nlbOptions, ok := options.(provideroptions.JdCloudOptions)
if !ok {
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to nlbOptions"), cperrors.InternalError)
}
c.minPort = nlbOptions.NLBOptions.MinPort
c.maxPort = nlbOptions.NLBOptions.MaxPort
svcList := &corev1.ServiceList{}
err := client.List(ctx, svcList)
if err != nil {
return err
}
c.cache, c.podAllocate = initLbCache(svcList.Items, c.minPort, c.maxPort)
return nil
}
func initLbCache(svcList []corev1.Service, minPort, maxPort int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[NlbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort)
for i := minPort; i < maxPort; i++ {
newCache[lbId][i] = false
}
}
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
}
}
}
log.Infof("[%s] podAllocate cache complete initialization: %v", NlbNetwork, newPodAllocate)
return newCache, newPodAllocate
}
func (c *NlbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (c *NlbPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
config := parseLbConfig(networkConfig)
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(client.Create(ctx, c.consSvc(config, pod, client, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(config) != svc.GetAnnotations()[NlbConfigHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(client.Update(ctx, c.consSvc(config, pod, client, ctx)), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if len(svc.Status.LoadBalancer.Ingress) == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (c *NlbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range c.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
c.deAllocate(podKey)
}
return nil
}
func (c *NlbPlugin) allocate(lbIds []string, num int, nsName string) (string, []int32) {
c.mutex.Lock()
defer c.mutex.Unlock()
var ports []int32
var lbId string
// find lb with adequate ports
for _, nlbId := range lbIds {
sum := 0
for i := c.minPort; i < c.maxPort; i++ {
if !c.cache[nlbId][i] {
sum++
}
if sum >= num {
lbId = nlbId
break
}
}
}
// select ports
for i := 0; i < num; i++ {
var port int32
if c.cache[lbId] == nil {
c.cache[lbId] = make(portAllocated, c.maxPort-c.minPort)
for i := c.minPort; i < c.maxPort; i++ {
c.cache[lbId][i] = false
}
}
for p, allocated := range c.cache[lbId] {
if !allocated {
port = p
break
}
}
c.cache[lbId][port] = true
ports = append(ports, port)
}
c.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("pod %s allocate nlb %s ports %v", nsName, lbId, ports)
return lbId, ports
}
func (c *NlbPlugin) deAllocate(nsName string) {
c.mutex.Lock()
defer c.mutex.Unlock()
allocatedPorts, exist := c.podAllocate[nsName]
if !exist {
return
}
nlbPorts := strings.Split(allocatedPorts, ":")
lbId := nlbPorts[0]
ports := util.StringToInt32Slice(nlbPorts[1], ",")
for _, port := range ports {
c.cache[lbId][port] = false
}
delete(c.podAllocate, nsName)
log.Infof("pod %s deallocate nlb %s ports %v", nsName, lbId, ports)
}
func init() {
JdNlbPlugin := NlbPlugin{
mutex: sync.RWMutex{},
}
jdcloudProvider.registerPlugin(&JdNlbPlugin)
}
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *nlbConfig {
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
allocateLoadBalancerNodePorts := true
annotations := map[string]string{}
algo := string(JdNLBAlgorithmRoundRobin)
idleTime := JdNLBDefaultConnIdleTime
for _, c := range conf {
switch c.Name {
case NlbIdsConfigName:
for _, clbId := range strings.Split(c.Value, ",") {
if clbId != "" {
lbIds = append(lbIds, clbId)
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case NlbAlgorithm:
algo = c.Value
case NlbConnectionIdleTime:
t, err := strconv.Atoi(c.Value)
if err == nil {
idleTime = t
}
case AllocateLoadBalancerNodePorts:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
allocateLoadBalancerNodePorts = v
case NlbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("nlb annotation %s is invalid", annoKV[0])
}
}
}
}
return &nlbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
annotations: annotations,
allocateLoadBalancerNodePorts: allocateLoadBalancerNodePorts,
algorithm: algo,
connIdleTimeSeconds: idleTime,
}
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
func (c *NlbPlugin) consSvc(config *nlbConfig, pod *corev1.Pod, client client.Client, ctx context.Context) *corev1.Service {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := c.podAllocate[podKey]
if exist {
nlbPorts := strings.Split(allocatedPorts, ":")
lbId = nlbPorts[0]
ports = util.StringToInt32Slice(nlbPorts[1], ",")
} else {
lbId, ports = c.allocate(config.lbIds, len(config.targetPorts), podKey)
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.targetPorts); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(config.targetPorts[i]),
Port: ports[i],
Protocol: config.protocols[i],
TargetPort: intstr.FromInt(config.targetPorts[i]),
})
}
annotations := map[string]string{
NlbIdLabelKey: lbId,
NlbSpecAnnotationKey: getNLBSpec(svcPorts, lbId, config.algorithm, config.connIdleTimeSeconds),
NlbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: getSvcOwnerReference(client, ctx, pod, config.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
AllocateLoadBalancerNodePorts: ptr.To[bool](config.allocateLoadBalancerNodePorts),
},
}
return svc
}
func getNLBSpec(ports []corev1.ServicePort, lbId, algorithm string, connIdleTimeSeconds int) string {
jdNlb := _getNLBSpec(ports, lbId, algorithm, connIdleTimeSeconds)
bytes, err := json.Marshal(jdNlb)
if err != nil {
return ""
}
return string(bytes)
}
func _getNLBSpec(ports []corev1.ServicePort, lbId, algorithm string, connIdleTimeSeconds int) *JdNLB {
var listeners = make([]*JdNLBListener, len(ports))
for k, v := range ports {
listeners[k] = &JdNLBListener{
Protocol: strings.ToUpper(string(v.Protocol)),
ConnectionIdleTimeSeconds: connIdleTimeSeconds,
Backend: &JdNLBListenerBackend{
Algorithm: JdNLBAlgorithm(algorithm),
},
}
}
return &JdNLB{
Version: JdNLBVersion,
LoadBalancerId: lbId,
LoadBalancerType: LbType_NLB,
Internal: false,
Listeners: listeners,
}
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}
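For reference, the value that `getNLBSpec` writes to the `service.beta.kubernetes.io/jdcloud-load-balancer-spec` annotation serializes to JSON roughly as follows (a sketch with a placeholder load balancer id, one TCP listener, and the default algorithm and idle time):
```json
{
  "version": "v1",
  "loadBalancerId": "netlb-xxxxx",
  "loadBalancerType": "nlb",
  "internal": false,
  "listeners": [
    {
      "protocol": "TCP",
      "connectionIdleTimeSeconds": 600,
      "backend": { "proxyProtocol": false, "algorithm": "RoundRobin" }
    }
  ]
}
```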

View File

@ -0,0 +1,344 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jdcloud
import (
"context"
"k8s.io/utils/ptr"
"reflect"
"sync"
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestAllocateDeAllocate(t *testing.T) {
test := struct {
lbIds []string
nlb *NlbPlugin
num int
podKey string
}{
lbIds: []string{"xxx-A"},
nlb: &NlbPlugin{
maxPort: int32(700),
minPort: int32(500),
cache: make(map[string]portAllocated),
podAllocate: make(map[string]string),
mutex: sync.RWMutex{},
},
podKey: "xxx/xxx",
num: 3,
}
lbId, ports := test.nlb.allocate(test.lbIds, test.num, test.podKey)
if _, exist := test.nlb.podAllocate[test.podKey]; !exist {
t.Errorf("podAllocate[%s] is empty after allocated", test.podKey)
}
for _, port := range ports {
if port > test.nlb.maxPort || port < test.nlb.minPort {
t.Errorf("allocate port %d, unexpected", port)
}
if test.nlb.cache[lbId][port] == false {
t.Errorf("Allocate port %d failed", port)
}
}
test.nlb.deAllocate(test.podKey)
for _, port := range ports {
if test.nlb.cache[lbId][port] == true {
t.Errorf("deAllocate port %d failed", port)
}
}
if _, exist := test.nlb.podAllocate[test.podKey]; exist {
t.Errorf("podAllocate[%s] is not empty after deallocated", test.podKey)
}
}
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
lbIds []string
ports []int
protocols []corev1.Protocol
isFixed bool
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
lbIds: []string{"xxx-A"},
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
isFixed: false,
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: NlbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
{
Name: FixedConfigName,
Value: "true",
},
},
lbIds: []string{"xxx-A", "xxx-B"},
ports: []int{81, 82, 83},
protocols: []corev1.Protocol{corev1.ProtocolUDP, corev1.ProtocolTCP, corev1.ProtocolTCP},
isFixed: true,
},
}
for _, test := range tests {
sc := parseLbConfig(test.conf)
if !reflect.DeepEqual(test.lbIds, sc.lbIds) {
t.Errorf("lbId expect: %v, actual: %v", test.lbIds, sc.lbIds)
}
if !util.IsSliceEqual(test.ports, sc.targetPorts) {
t.Errorf("ports expect: %v, actual: %v", test.ports, sc.targetPorts)
}
if !reflect.DeepEqual(test.protocols, sc.protocols) {
t.Errorf("protocols expect: %v, actual: %v", test.protocols, sc.protocols)
}
if test.isFixed != sc.isFixed {
t.Errorf("isFixed expect: %v, actual: %v", test.isFixed, sc.isFixed)
}
}
}
func TestInitLbCache(t *testing.T) {
test := struct {
svcList []corev1.Service
minPort int32
maxPort int32
cache map[string]portAllocated
podAllocate map[string]string
}{
minPort: 500,
maxPort: 700,
cache: map[string]portAllocated{
"xxx-A": map[int32]bool{
666: true,
},
"xxx-B": map[int32]bool{
555: true,
},
},
podAllocate: map[string]string{
"ns-0/name-0": "xxx-A:666",
"ns-1/name-1": "xxx-B:555",
},
svcList: []corev1.Service{
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
NlbIdLabelKey: "xxx-A",
},
Namespace: "ns-0",
Name: "name-0",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-A",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(80),
Port: 666,
Protocol: corev1.ProtocolTCP,
},
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
NlbIdLabelKey: "xxx-B",
},
Namespace: "ns-1",
Name: "name-1",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "pod-B",
},
Ports: []corev1.ServicePort{
{
TargetPort: intstr.FromInt(8080),
Port: 555,
Protocol: corev1.ProtocolTCP,
},
},
},
},
},
}
actualCache, actualPodAllocate := initLbCache(test.svcList, test.minPort, test.maxPort)
for lb, pa := range test.cache {
for port, isAllocated := range pa {
if actualCache[lb][port] != isAllocated {
t.Errorf("lb %s port %d isAllocated, expect: %t, actual: %t", lb, port, isAllocated, actualCache[lb][port])
}
}
}
if !reflect.DeepEqual(actualPodAllocate, test.podAllocate) {
t.Errorf("podAllocate expect %v, but actully got %v", test.podAllocate, actualPodAllocate)
}
}
func TestNlbPlugin_consSvc(t *testing.T) {
type fields struct {
maxPort int32
minPort int32
cache map[string]portAllocated
podAllocate map[string]string
}
type args struct {
config *nlbConfig
pod *corev1.Pod
client client.Client
ctx context.Context
}
tests := []struct {
name string
fields fields
args args
want *corev1.Service
}{
{
name: "convert svc cache exist",
fields: fields{
maxPort: 3000,
minPort: 1,
cache: map[string]portAllocated{
"default/test-pod": map[int32]bool{},
},
podAllocate: map[string]string{
"default/test-pod": "nlb-xxx:80,81",
},
},
args: args{
config: &nlbConfig{
lbIds: []string{"nlb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
annotations: map[string]string{
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
},
allocateLoadBalancerNodePorts: true,
},
pod: &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
UID: "32fqwfqfew",
},
},
client: nil,
ctx: context.Background(),
},
want: &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
NlbConfigHashKey: util.GetHash(&nlbConfig{
lbIds: []string{"nlb-xxx"},
targetPorts: []int{82},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
},
isFixed: false,
annotations: map[string]string{
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
},
allocateLoadBalancerNodePorts: true,
}),
"service.beta.kubernetes.io/jdcloud-load-balancer-spec": "{}",
"service.beta.kubernetes.io/jdcloud-loadbalancer-id": "nlb-xxx",
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "pod",
Name: "test-pod",
UID: "32fqwfqfew",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: "test-pod",
},
Ports: []corev1.ServicePort{{
Name: "82",
Port: 80,
Protocol: "TCP",
TargetPort: intstr.IntOrString{
Type: 0,
IntVal: 82,
},
},
},
AllocateLoadBalancerNodePorts: ptr.To[bool](true),
},
},
},
}
for _, tt := range tests {
c := &NlbPlugin{
maxPort: tt.fields.maxPort,
minPort: tt.fields.minPort,
cache: tt.fields.cache,
podAllocate: tt.fields.podAllocate,
}
if got := c.consSvc(tt.args.config, tt.args.pod, tt.args.client, tt.args.ctx); !reflect.DeepEqual(got, tt.want) {
t.Errorf("consSvc() = %v, want %v", got, tt.want)
}
}
}

View File

@ -18,44 +18,48 @@ package kubernetes
import (
"context"
"net"
"strconv"
"strings"
"sync"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"net"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
"sync"
)
const (
HostPortNetwork = "Kubernetes-HostPort"
//ContainerPortsKey represents the configuration key when using hostPort.
//Its corresponding value format is as follows, containerName:port1/protocol1,port2/protocol2,... e.g. game-server:25565/TCP
//When no protocol is specified, TCP is used by default
// ContainerPortsKey represents the configuration key when using hostPort.
// Its corresponding value format is as follows, containerName:port1/protocol1,port2/protocol2,... e.g. game-server:25565/TCP
// When no protocol is specified, TCP is used by default
ContainerPortsKey = "ContainerPorts"
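// PortSameAsHost indicates that the container port should be set to the same value as the allocated host port.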
PortSameAsHost = "SameAsHost"
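// ProtocolTCPUDP indicates that the same port should be exposed over both TCP and UDP, sharing one host port.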
ProtocolTCPUDP = "TCPUDP"
)
type HostPortPlugin struct {
maxPort int32
minPort int32
isAllocated map[string]bool
portAmount map[int32]int
amountStat []int
mutex sync.RWMutex
maxPort int32
minPort int32
podAllocated map[string]string
portAmount map[int32]int
amountStat []int
mutex sync.RWMutex
}
func init() {
hostPortPlugin := HostPortPlugin{
mutex: sync.RWMutex{},
isAllocated: make(map[string]bool),
mutex: sync.RWMutex{},
podAllocated: make(map[string]string),
}
kubernetesProvider.registerPlugin(&hostPortPlugin)
}
@ -69,30 +73,32 @@ func (hpp *HostPortPlugin) Alias() string {
}
func (hpp *HostPortPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("Receiving pod %s/%s ADD Operation", pod.GetNamespace(), pod.GetName())
podNow := &corev1.Pod{}
err := c.Get(ctx, types.NamespacedName{
Namespace: pod.GetNamespace(),
Name: pod.GetName(),
}, podNow)
// There is a pod with same ns/name exists in cluster, do not allocate
if err == nil {
return pod, nil
log.Infof("There is a pod with same ns/name(%s/%s) exists in cluster, do not allocate", pod.GetNamespace(), pod.GetName())
return pod, errors.NewPluginError(errors.InternalError, "There is a pod with same ns/name exists in cluster")
}
if !k8serrors.IsNotFound(err) {
return pod, errors.NewPluginError(errors.ApiCallError, err.Error())
}
if _, ok := hpp.isAllocated[pod.GetNamespace()+"/"+pod.GetName()]; ok {
return pod, nil
}
networkManager := utils.NewNetworkManager(pod, c)
conf := networkManager.GetNetworkConfig()
containerPortsMap, containerProtocolsMap, numToAlloc := parseConfig(conf, pod)
hostPorts := hpp.allocate(numToAlloc, pod.GetNamespace()+"/"+pod.GetName())
log.Infof("pod %s/%s allocated hostPorts %v", pod.GetNamespace(), pod.GetName(), hostPorts)
var hostPorts []int32
if str, ok := hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()]; ok {
hostPorts = util.StringToInt32Slice(str, ",")
log.Infof("pod %s/%s use hostPorts %v , which are allocated before", pod.GetNamespace(), pod.GetName(), hostPorts)
} else {
hostPorts = hpp.allocate(numToAlloc, pod.GetNamespace()+"/"+pod.GetName())
log.Infof("pod %s/%s allocated hostPorts %v", pod.GetNamespace(), pod.GetName(), hostPorts)
}
// patch pod container ports
containers := pod.Spec.Containers
@ -100,12 +106,30 @@ func (hpp *HostPortPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx cont
if ports, ok := containerPortsMap[container.Name]; ok {
containerPorts := container.Ports
for i, port := range ports {
containerPort := corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPorts[numToAlloc-1],
Protocol: containerProtocolsMap[container.Name][i],
// -1 means same as host
if port == -1 {
port = hostPorts[numToAlloc-1]
}
protocol := containerProtocolsMap[container.Name][i]
hostPort := hostPorts[numToAlloc-1]
if protocol == ProtocolTCPUDP {
containerPorts = append(containerPorts,
corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: corev1.ProtocolTCP,
}, corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: corev1.ProtocolUDP,
})
} else {
containerPorts = append(containerPorts, corev1.ContainerPort{
ContainerPort: port,
HostPort: hostPort,
Protocol: protocol,
})
}
containerPorts = append(containerPorts, containerPort)
numToAlloc--
}
containers[cIndex].Ports = containerPorts
@ -116,20 +140,20 @@ func (hpp *HostPortPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx cont
}
func (hpp *HostPortPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("Receiving pod %s/%s UPDATE Operation", pod.GetNamespace(), pod.GetName())
node := &corev1.Node{}
err := c.Get(ctx, types.NamespacedName{
Name: pod.Spec.NodeName,
}, node)
if err != nil {
if k8serrors.IsNotFound(err) {
return pod, nil
}
return pod, errors.NewPluginError(errors.ApiCallError, err.Error())
}
iip, eip := getAddress(node)
nodeIp := getAddress(node)
networkManager := utils.NewNetworkManager(pod, c)
status, _ := networkManager.GetNetworkStatus()
if status != nil {
return pod, nil
}
iNetworkPorts := make([]gamekruiseiov1alpha1.NetworkPort, 0)
eNetworkPorts := make([]gamekruiseiov1alpha1.NetworkPort, 0)
@ -152,16 +176,24 @@ func (hpp *HostPortPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx co
}
}
// network not ready
if len(iNetworkPorts) == 0 || len(eNetworkPorts) == 0 || pod.Status.PodIP == "" {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, errors.ToPluginError(err, errors.InternalError)
}
networkStatus := gamekruiseiov1alpha1.NetworkStatus{
InternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: iip,
IP: pod.Status.PodIP,
Ports: iNetworkPorts,
},
},
ExternalAddresses: []gamekruiseiov1alpha1.NetworkAddress{
{
IP: eip,
IP: nodeIp,
Ports: eNetworkPorts,
},
},
@ -173,7 +205,8 @@ func (hpp *HostPortPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx co
}
func (hpp *HostPortPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
if _, ok := hpp.isAllocated[pod.GetNamespace()+"/"+pod.GetName()]; !ok {
log.Infof("Receiving pod %s/%s DELETE Operation", pod.GetNamespace(), pod.GetName())
if _, ok := hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()]; !ok {
return nil
}
@ -209,16 +242,20 @@ func (hpp *HostPortPlugin) Init(c client.Client, options cloudprovider.CloudProv
return err
}
for _, pod := range podList.Items {
var hostPorts []int32
if pod.GetAnnotations()[gamekruiseiov1alpha1.GameServerNetworkType] == HostPortNetwork {
for _, container := range pod.Spec.Containers {
for _, port := range container.Ports {
if port.HostPort >= hpp.minPort && port.HostPort <= hpp.maxPort {
newPortAmount[port.HostPort]++
hpp.isAllocated[pod.GetNamespace()+"/"+pod.GetName()] = true
hostPorts = append(hostPorts, port.HostPort)
}
}
}
}
if len(hostPorts) != 0 {
hpp.podAllocated[pod.GetNamespace()+"/"+pod.GetName()] = util.Int32SliceToString(hostPorts, ",")
}
}
size := 0
@ -234,6 +271,7 @@ func (hpp *HostPortPlugin) Init(c client.Client, options cloudprovider.CloudProv
hpp.portAmount = newPortAmount
hpp.amountStat = newAmountStat
log.Infof("[Kubernetes-HostPort] podAllocated init: %v", hpp.podAllocated)
return nil
}
@ -251,7 +289,7 @@ func (hpp *HostPortPlugin) allocate(num int, nsname string) []int32 {
hpp.amountStat[index+1]++
}
hpp.isAllocated[nsname] = true
hpp.podAllocated[nsname] = util.Int32SliceToString(hostPorts, ",")
return hostPorts
}
@ -266,7 +304,7 @@ func (hpp *HostPortPlugin) deAllocate(hostPorts []int32, nsname string) {
hpp.amountStat[amount-1]++
}
delete(hpp.isAllocated, nsname)
delete(hpp.podAllocated, nsname)
}
func verifyContainerName(containerName string, pod *corev1.Pod) bool {
@ -278,35 +316,36 @@ func verifyContainerName(containerName string, pod *corev1.Pod) bool {
return false
}
func getAddress(node *corev1.Node) (string, string) {
var eip string
var iip string
func getAddress(node *corev1.Node) string {
nodeIp := ""
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeExternalIP && net.ParseIP(a.Address) != nil {
eip = a.Address
nodeIp = a.Address
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeExternalDNS {
eip = a.Address
nodeIp = a.Address
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalIP && net.ParseIP(a.Address) != nil {
iip = a.Address
if nodeIp == "" {
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalIP && net.ParseIP(a.Address) != nil {
nodeIp = a.Address
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalDNS {
nodeIp = a.Address
}
}
}
for _, a := range node.Status.Addresses {
if a.Type == corev1.NodeInternalDNS {
iip = a.Address
}
}
return iip, eip
return nodeIp
}
func parseConfig(conf []gamekruiseiov1alpha1.NetworkConfParams, pod *corev1.Pod) (map[string][]int32, map[string][]corev1.Protocol, int) {
@ -323,9 +362,15 @@ func parseConfig(conf []gamekruiseiov1alpha1.NetworkConfParams, pod *corev1.Pod)
for _, portString := range strings.Split(cpSlice[1], ",") {
ppSlice := strings.Split(portString, "/")
// handle port
port, err := strconv.ParseInt(ppSlice[0], 10, 32)
if err != nil {
continue
var port int64
var err error
if ppSlice[0] == PortSameAsHost {
port = -1
} else {
port, err = strconv.ParseInt(ppSlice[0], 10, 32)
if err != nil {
continue
}
}
numToAlloc++
ports = append(ports, int32(port))

View File

@ -15,7 +15,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/pointer"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
@ -78,23 +78,6 @@ func (i IngressPlugin) Init(client client.Client, options cloudprovider.CloudPro
}
func (i IngressPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
conf := networkManager.GetNetworkConfig()
ic, err := parseIngConfig(conf, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
err = c.Create(ctx, consSvc(ic, pod, c, ctx))
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
err = c.Create(ctx, consIngress(ic, pod, c, ctx))
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
return pod, nil
}
@ -239,7 +222,7 @@ func parseIngConfig(conf []gamekruiseiov1alpha1.NetworkConfParams, pod *corev1.P
return ingConfig{}, fmt.Errorf("%s", paramsError)
}
case IngressClassNameKey:
ic.ingressClassName = pointer.String(c.Value)
ic.ingressClassName = ptr.To[string](c.Value)
case TlsSecretNameKey:
ic.tlsSecretName = c.Value
case TlsHostsKey:
@ -375,8 +358,8 @@ func consOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, i
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
@ -388,8 +371,8 @@ func consOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, i
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}

View File

@ -2,14 +2,16 @@ package kubernetes
import (
"fmt"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
"reflect"
"testing"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"reflect"
"testing"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestParseIngConfig(t *testing.T) {
@ -187,8 +189,8 @@ func TestConsIngress(t *testing.T) {
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
}
@ -378,8 +380,8 @@ func TestConsSvc(t *testing.T) {
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: pointer.BoolPtr(true),
BlockOwnerDeletion: pointer.BoolPtr(true),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
}

View File

@ -0,0 +1,279 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"context"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"strconv"
"strings"
)
const (
NodePortNetwork = "Kubernetes-NodePort"
PortProtocolsConfigName = "PortProtocols"
SvcSelectorDisabledKey = "game.kruise.io/svc-selector-disabled"
)
type NodePortPlugin struct {
}
func (n *NodePortPlugin) Name() string {
return NodePortNetwork
}
func (n *NodePortPlugin) Alias() string {
return ""
}
func (n *NodePortPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (n *NodePortPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return pod, nil
}
func (n *NodePortPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
networkConfig := networkManager.GetNetworkConfig()
npc, err := parseNodePortConfig(networkConfig)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ParameterError, err.Error())
}
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
return pod, cperrors.ToPluginError(client.Create(ctx, consNodePortSvc(npc, pod, client, ctx)), cperrors.ApiCallError)
}
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
// update svc
if util.GetHash(npc) != svc.GetAnnotations()[ServiceHashKey] {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
return pod, cperrors.ToPluginError(client.Update(ctx, consNodePortSvc(npc, pod, client, ctx)), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Selector[SvcSelectorKey] == pod.GetName() {
newSelector := svc.Spec.Selector
newSelector[SvcSelectorDisabledKey] = pod.GetName()
delete(svc.Spec.Selector, SvcSelectorKey)
svc.Spec.Selector = newSelector
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Selector[SvcSelectorDisabledKey] == pod.GetName() {
newSelector := svc.Spec.Selector
newSelector[SvcSelectorKey] = pod.GetName()
delete(svc.Spec.Selector, SvcSelectorDisabledKey)
svc.Spec.Selector = newSelector
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
toUpDateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
node := &corev1.Node{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.Spec.NodeName,
}, node)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if pod.Status.PodIP == "" {
// Pod IP not exist, Network NotReady
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
if port.NodePort == 0 {
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
instrEPort := intstr.FromInt(int(port.NodePort))
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: getAddress(node),
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
externalAddresses = append(externalAddresses, externalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
func (n *NodePortPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
kubernetesProvider.registerPlugin(&NodePortPlugin{})
}
type nodePortConfig struct {
ports []int
protocols []corev1.Protocol
isFixed bool
}
func parseNodePortConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) (*nodePortConfig, error) {
var ports []int
var protocols []corev1.Protocol
isFixed := false
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
ports, protocols = parsePortProtocols(c.Value)
case FixedKey:
var err error
isFixed, err = strconv.ParseBool(c.Value)
if err != nil {
return nil, err
}
}
}
return &nodePortConfig{
ports: ports,
protocols: protocols,
isFixed: isFixed,
}, nil
}
func parsePortProtocols(value string) ([]int, []corev1.Protocol) {
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
for _, pp := range strings.Split(value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
return ports, protocols
}
func consNodePortSvc(npc *nodePortConfig, pod *corev1.Pod, c client.Client, ctx context.Context) *corev1.Service {
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(npc.ports); i++ {
svcPorts = append(svcPorts, corev1.ServicePort{
Name: strconv.Itoa(npc.ports[i]),
Port: int32(npc.ports[i]),
Protocol: npc.protocols[i],
TargetPort: intstr.FromInt(npc.ports[i]),
})
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npc),
},
OwnerReferences: consOwnerReference(c, ctx, pod, npc.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
},
}
return svc
}

View File

@ -0,0 +1,185 @@
package kubernetes
import (
"reflect"
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/pkg/util"
)
func TestParseNPConfig(t *testing.T) {
tests := []struct {
conf []gamekruiseiov1alpha1.NetworkConfParams
podNetConfig *nodePortConfig
}{
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
podNetConfig: &nodePortConfig{
ports: []int{80},
protocols: []corev1.Protocol{corev1.ProtocolTCP},
},
},
{
conf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: PortProtocolsConfigName,
Value: "8021/UDP",
},
},
podNetConfig: &nodePortConfig{
ports: []int{8021},
protocols: []corev1.Protocol{corev1.ProtocolUDP},
},
},
}
for _, test := range tests {
podNetConfig, _ := parseNodePortConfig(test.conf)
if !reflect.DeepEqual(podNetConfig, test.podNetConfig) {
t.Errorf("expect podNetConfig: %v, but actual: %v", test.podNetConfig, podNetConfig)
}
}
}
func TestConsNPSvc(t *testing.T) {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Pod",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
},
}
// case 0
npcCase0 := &nodePortConfig{
ports: []int{
80,
8080,
},
protocols: []corev1.Protocol{
corev1.ProtocolTCP,
corev1.ProtocolTCP,
},
isFixed: false,
}
svcCase0 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npcCase0),
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: "pod-3",
},
Ports: []corev1.ServicePort{
{
Name: "80",
Port: int32(80),
TargetPort: intstr.FromInt(80),
Protocol: corev1.ProtocolTCP,
},
{
Name: "8080",
Port: int32(8080),
TargetPort: intstr.FromInt(8080),
Protocol: corev1.ProtocolTCP,
},
},
},
}
// case 1
npcCase1 := &nodePortConfig{
ports: []int{
8021,
},
protocols: []corev1.Protocol{
corev1.ProtocolUDP,
},
isFixed: false,
}
svcCase1 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-3",
Namespace: "ns",
Annotations: map[string]string{
ServiceHashKey: util.GetHash(npcCase1),
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "v1",
Kind: "Pod",
Name: "pod-3",
UID: "bff0afd6-bb30-4641-8607-8329547324eb",
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeNodePort,
Selector: map[string]string{
SvcSelectorKey: "pod-3",
},
Ports: []corev1.ServicePort{
{
Name: "8021",
Port: int32(8021),
TargetPort: intstr.FromInt(8021),
Protocol: corev1.ProtocolUDP,
},
},
},
}
tests := []struct {
npc *nodePortConfig
svc *corev1.Service
}{
{
npc: npcCase0,
svc: svcCase0,
},
{
npc: npcCase1,
svc: svcCase1,
},
}
for i, test := range tests {
actual := consNodePortSvc(test.npc, pod, nil, nil)
if !reflect.DeepEqual(actual, test.svc) {
t.Errorf("case %d: expect service: %v , but actual: %v", i, test.svc, actual)
}
}
}

View File

@ -18,10 +18,17 @@ package manager
import (
"context"
"github.com/openkruise/kruise-game/cloudprovider/hwcloud"
"github.com/openkruise/kruise-game/cloudprovider/jdcloud"
"github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud"
aws "github.com/openkruise/kruise-game/cloudprovider/amazonswebservices"
"github.com/openkruise/kruise-game/cloudprovider/kubernetes"
"github.com/openkruise/kruise-game/cloudprovider/tencentcloud"
volcengine "github.com/openkruise/kruise-game/cloudprovider/volcengine"
corev1 "k8s.io/api/core/v1"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
@ -116,5 +123,57 @@ func NewProviderManager() (*ProviderManager, error) {
}
}
if configs.VolcengineOptions.Valid() && configs.VolcengineOptions.Enabled() {
// build and register volcengine cloud provider
vcp, err := volcengine.NewVolcengineProvider()
if err != nil {
log.Errorf("Failed to initialize volcengine provider.because of %s", err.Error())
} else {
pm.RegisterCloudProvider(vcp, configs.VolcengineOptions)
}
}
if configs.AmazonsWebServicesOptions.Valid() && configs.AmazonsWebServicesOptions.Enabled() {
// build and register amazon web services provider
vcp, err := aws.NewAmazonsWebServicesProvider()
if err != nil {
log.Errorf("Failed to initialize amazons web services provider.because of %s", err.Error())
} else {
pm.RegisterCloudProvider(vcp, configs.AmazonsWebServicesOptions)
}
}
if configs.TencentCloudOptions.Valid() && configs.TencentCloudOptions.Enabled() {
// build and register tencent cloud provider
tcp, err := tencentcloud.NewTencentCloudProvider()
if err != nil {
log.Errorf("Failed to initialize tencentcloud provider.because of %s", err.Error())
} else {
pm.RegisterCloudProvider(tcp, configs.TencentCloudOptions)
}
}
if configs.JdCloudOptions.Valid() && configs.JdCloudOptions.Enabled() {
// build and register jdcloud provider
tcp, err := jdcloud.NewJdcloudProvider()
if err != nil {
log.Errorf("Failed to initialize jdcloud provider.because of %s", err.Error())
} else {
pm.RegisterCloudProvider(tcp, configs.JdCloudOptions)
}
}
if configs.HwCloudOptions.Valid() && configs.HwCloudOptions.Enabled() {
// build and register hw cloud provider
hp, err := hwcloud.NewHwCloudProvider()
if err != nil {
log.Errorf("Failed to initialize hwcloud provider.because of %s", err.Error())
} else {
pm.RegisterCloudProvider(hp, configs.HwCloudOptions)
}
} else {
log.Warningf("HwCloudProvider is not enabled, enable flag is %v, config valid flag is %v", configs.HwCloudOptions.Enabled(), configs.HwCloudOptions.Valid())
}
return pm, nil
}

View File

@ -3,23 +3,43 @@ package options
type AlibabaCloudOptions struct {
Enable bool `toml:"enable"`
SLBOptions SLBOptions `toml:"slb"`
NLBOptions NLBOptions `toml:"nlb"`
}
type SLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
type NLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (o AlibabaCloudOptions) Valid() bool {
// SLB valid
slbOptions := o.SLBOptions
if slbOptions.MaxPort-slbOptions.MinPort != 200 {
for _, blockPort := range slbOptions.BlockPorts {
if blockPort >= slbOptions.MaxPort || blockPort <= slbOptions.MinPort {
return false
}
}
if int(slbOptions.MaxPort-slbOptions.MinPort)-len(slbOptions.BlockPorts) >= 200 {
return false
}
if slbOptions.MinPort <= 0 {
return false
}
return true
// NLB valid
nlbOptions := o.NLBOptions
for _, blockPort := range nlbOptions.BlockPorts {
if blockPort >= nlbOptions.MaxPort || blockPort <= nlbOptions.MinPort {
return false
}
}
return nlbOptions.MinPort > 0
}
func (o AlibabaCloudOptions) Enabled() bool {

View File

@ -0,0 +1,31 @@
package options
// https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-limits.html
// Listeners per Network Load Balancer is 50
const maxPortRange = 50
type AmazonsWebServicesOptions struct {
Enable bool `toml:"enable"`
NLBOptions AWSNLBOptions `toml:"nlb"`
}
type AWSNLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
}
func (ao AmazonsWebServicesOptions) Valid() bool {
nlbOptions := ao.NLBOptions
if nlbOptions.MaxPort-nlbOptions.MinPort+1 > maxPortRange {
return false
}
if nlbOptions.MinPort < 1 || nlbOptions.MaxPort > 65535 {
return false
}
return true
}
func (ao AmazonsWebServicesOptions) Enabled() bool {
return ao.Enable
}

View File

@ -0,0 +1,32 @@
package options
type HwCloudOptions struct {
Enable bool `toml:"enable"`
ELBOptions ELBOptions `toml:"elb"`
}
type ELBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (o HwCloudOptions) Valid() bool {
elbOptions := o.ELBOptions
for _, blockPort := range elbOptions.BlockPorts {
if blockPort >= elbOptions.MaxPort || blockPort <= elbOptions.MinPort {
return false
}
}
if int(elbOptions.MaxPort-elbOptions.MinPort)-len(elbOptions.BlockPorts) > 200 {
return false
}
if elbOptions.MinPort <= 0 {
return false
}
return true
}
func (o HwCloudOptions) Enabled() bool {
return o.Enable
}

View File

@ -0,0 +1,28 @@
package options
type JdCloudOptions struct {
Enable bool `toml:"enable"`
NLBOptions JdNLBOptions `toml:"nlb"`
}
type JdNLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
}
func (v JdCloudOptions) Valid() bool {
nlbOptions := v.NLBOptions
if nlbOptions.MaxPort > 65535 {
return false
}
if nlbOptions.MinPort < 1 {
return false
}
return true
}
func (v JdCloudOptions) Enabled() bool {
return v.Enable
}

View File

@ -0,0 +1,13 @@
package options
type TencentCloudOptions struct {
Enable bool `toml:"enable"`
}
func (o TencentCloudOptions) Enabled() bool {
return o.Enable
}
func (o TencentCloudOptions) Valid() bool {
return true
}

View File

@ -0,0 +1,35 @@
package options
type VolcengineOptions struct {
Enable bool `toml:"enable"`
CLBOptions CLBOptions `toml:"clb"`
}
type CLBOptions struct {
MaxPort int32 `toml:"max_port"`
MinPort int32 `toml:"min_port"`
BlockPorts []int32 `toml:"block_ports"`
}
func (v VolcengineOptions) Valid() bool {
clbOptions := v.CLBOptions
for _, blockPort := range clbOptions.BlockPorts {
if blockPort >= clbOptions.MaxPort || blockPort < clbOptions.MinPort {
return false
}
}
if clbOptions.MaxPort > 65535 {
return false
}
if clbOptions.MinPort < 1 {
return false
}
return true
}
func (v VolcengineOptions) Enabled() bool {
return v.Enable
}

View File

@ -0,0 +1,208 @@
package tencentcloud
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"github.com/openkruise/kruise-game/pkg/util"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/util/intstr"
kruisev1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
ClbNetwork = "TencentCloud-CLB"
AliasCLB = "CLB-Network"
ClbIdsConfigName = "ClbIds"
PortProtocolsConfigName = "PortProtocols"
CLBPortMappingAnnotation = "networking.cloud.tencent.com/clb-port-mapping"
EnableCLBPortMappingAnnotation = "networking.cloud.tencent.com/enable-clb-port-mapping"
CLBPortMappingResultAnnotation = "networking.cloud.tencent.com/clb-port-mapping-result"
CLBPortMappingStatuslAnnotation = "networking.cloud.tencent.com/clb-port-mapping-status"
)
type ClbPlugin struct{}
type portProtocol struct {
port int
protocol string
}
type clbConfig struct {
targetPorts []portProtocol
}
type portMapping struct {
Port int `json:"port"`
Protocol string `json:"protocol"`
Address string `json:"address"`
}
func (p *ClbPlugin) Name() string {
return ClbNetwork
}
func (p *ClbPlugin) Alias() string {
return AliasCLB
}
func (p *ClbPlugin) Init(c client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
return nil
}
func (p *ClbPlugin) OnPodAdded(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
return p.reconcile(c, pod, ctx)
}
func (p *ClbPlugin) OnPodUpdated(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
if pod.DeletionTimestamp != nil {
return pod, nil
}
return p.reconcile(c, pod, ctx)
}
// Ensure the annotation of pod is correct.
func (p *ClbPlugin) reconcile(c client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
networkManager := utils.NewNetworkManager(pod, c)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
pod, err := networkManager.UpdateNetworkStatus(kruisev1alpha1.NetworkStatus{
CurrentNetworkState: kruisev1alpha1.NetworkWaiting,
}, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
clbConf, err := parseLbConfig(networkConfig)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ParameterError)
}
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil && !apierrors.IsNotFound(err) {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
if pod.Annotations == nil {
pod.Annotations = make(map[string]string)
}
pod.Annotations[CLBPortMappingAnnotation] = getClbPortMappingAnnotation(clbConf, gss)
enableCLBPortMapping := "true"
if networkManager.GetNetworkDisabled() {
enableCLBPortMapping = "false"
}
pod.Annotations[EnableCLBPortMappingAnnotation] = enableCLBPortMapping
if pod.Annotations[CLBPortMappingStatuslAnnotation] == "Ready" {
if result := pod.Annotations[CLBPortMappingResultAnnotation]; result != "" {
mappings := []portMapping{}
if err := json.Unmarshal([]byte(result), &mappings); err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
if len(mappings) != 0 {
internalAddresses := make([]kruisev1alpha1.NetworkAddress, 0)
externalAddresses := make([]kruisev1alpha1.NetworkAddress, 0)
for _, mapping := range mappings {
ss := strings.Split(mapping.Address, ":")
if len(ss) != 2 {
continue
}
lbIP := ss[0]
lbPort, err := strconv.Atoi(ss[1])
if err != nil {
continue
}
port := mapping.Port
instrIPort := intstr.FromInt(port)
instrEPort := intstr.FromInt(lbPort)
portName := instrIPort.String()
protocol := corev1.Protocol(mapping.Protocol)
internalAddresses = append(internalAddresses, kruisev1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []kruisev1alpha1.NetworkPort{
{
Name: portName,
Port: &instrIPort,
Protocol: protocol,
},
},
})
externalAddresses = append(externalAddresses, kruisev1alpha1.NetworkAddress{
IP: lbIP,
Ports: []kruisev1alpha1.NetworkPort{
{
Name: portName,
Port: &instrEPort,
Protocol: protocol,
},
},
})
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = kruisev1alpha1.NetworkReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
}
}
}
}
return pod, nil
}
func (p *ClbPlugin) OnPodDeleted(c client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
return nil
}
func init() {
clbPlugin := ClbPlugin{}
tencentCloudProvider.registerPlugin(&clbPlugin)
}
func getClbPortMappingAnnotation(clbConf *clbConfig, gss *kruisev1alpha1.GameServerSet) string {
poolName := fmt.Sprintf("%s-%s", gss.Namespace, gss.Name)
var buf strings.Builder
for _, pp := range clbConf.targetPorts {
buf.WriteString(fmt.Sprintf("%d %s %s\n", pp.port, pp.protocol, poolName))
}
return buf.String()
}
var ErrMissingPortProtocolsConfig = fmt.Errorf("missing %s config", PortProtocolsConfigName)
func parseLbConfig(conf []kruisev1alpha1.NetworkConfParams) (*clbConfig, error) {
ports := []portProtocol{}
for _, c := range conf {
switch c.Name {
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
protocol := "TCP"
if len(ppSlice) == 2 {
protocol = ppSlice[1]
}
ports = append(ports, portProtocol{
port: port,
protocol: protocol,
})
}
}
}
if len(ports) == 0 {
return nil, ErrMissingPortProtocolsConfig
}
return &clbConfig{
targetPorts: ports,
}, nil
}

View File

@ -0,0 +1,74 @@
package tencentcloud
import (
"reflect"
"testing"
kruisev1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
)
func TestParseLbConfig(t *testing.T) {
tests := []struct {
conf []kruisev1alpha1.NetworkConfParams
clbConfig *clbConfig
}{
{
conf: []kruisev1alpha1.NetworkConfParams{
{
Name: ClbIdsConfigName,
Value: "xxx-A",
},
{
Name: PortProtocolsConfigName,
Value: "80",
},
},
clbConfig: &clbConfig{
targetPorts: []portProtocol{
{
port: 80,
protocol: "TCP",
},
},
},
},
{
conf: []kruisev1alpha1.NetworkConfParams{
{
Name: ClbIdsConfigName,
Value: "xxx-A,xxx-B,",
},
{
Name: PortProtocolsConfigName,
Value: "81/UDP,82,83/TCP",
},
},
clbConfig: &clbConfig{
targetPorts: []portProtocol{
{
port: 81,
protocol: "UDP",
},
{
port: 82,
protocol: "TCP",
},
{
port: 83,
protocol: "TCP",
},
},
},
},
}
for i, test := range tests {
lc, err := parseLbConfig(test.conf)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(test.clbConfig, lc) {
t.Errorf("case %d: lbId expect: %v, actual: %v", i, test.clbConfig, lc)
}
}
}

View File

@ -0,0 +1,59 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package tencentcloud
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
TencentCloud = "TencentCloud"
)
var tencentCloudProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (ap *Provider) Name() string {
return TencentCloud
}
func (ap *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if ap.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return ap.plugins, nil
}
// register plugin of cloud provider and different cloud providers
func (ap *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
ap.plugins[name] = plugin
}
func NewTencentCloudProvider() (cloudprovider.CloudProvider, error) {
return tencentCloudProvider, nil
}

View File

@ -0,0 +1,85 @@
package utils
import (
"context"
kruisePub "github.com/openkruise/kruise-api/apps/pub"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/client"
"strings"
)
func AllowNotReadyContainers(c client.Client, ctx context.Context, pod *corev1.Pod, svc *corev1.Service, isSvcShared bool) (bool, cperrors.PluginError) {
// get lifecycleState
lifecycleState, exist := pod.GetLabels()[kruisePub.LifecycleStateKey]
// get gss
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err != nil {
return false, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// get allowNotReadyContainers
var allowNotReadyContainers []string
for _, kv := range gss.Spec.Network.NetworkConf {
if kv.Name == gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName {
for _, allowNotReadyContainer := range strings.Split(kv.Value, ",") {
if allowNotReadyContainer != "" {
allowNotReadyContainers = append(allowNotReadyContainers, allowNotReadyContainer)
}
}
}
}
// PreInplaceUpdating
if exist && lifecycleState == string(kruisePub.LifecycleStatePreparingUpdate) {
// ensure PublishNotReadyAddresses is true when containers pre-updating
if !svc.Spec.PublishNotReadyAddresses && util.IsContainersPreInplaceUpdating(pod, gss, allowNotReadyContainers) {
svc.Spec.PublishNotReadyAddresses = true
return true, nil
}
// ensure remove finalizer
if svc.Spec.PublishNotReadyAddresses || !util.IsContainersPreInplaceUpdating(pod, gss, allowNotReadyContainers) {
pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] = "false"
}
} else {
pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] = "true"
if !svc.Spec.PublishNotReadyAddresses {
return false, nil
}
if isSvcShared {
// ensure PublishNotReadyAddresses is false when all pods are updated
if gss.Status.UpdatedReplicas == gss.Status.Replicas {
podList := &corev1.PodList{}
err := c.List(ctx, podList, &client.ListOptions{
Namespace: gss.GetNamespace(),
LabelSelector: labels.SelectorFromSet(map[string]string{
gamekruiseiov1alpha1.GameServerOwnerGssKey: gss.GetName(),
})})
if err != nil {
return false, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
for _, p := range podList.Items {
_, condition := util.GetPodConditionFromList(p.Status.Conditions, corev1.PodReady)
if condition == nil || condition.Status != corev1.ConditionTrue {
return false, nil
}
}
svc.Spec.PublishNotReadyAddresses = false
return true, nil
}
} else {
_, condition := util.GetPodConditionFromList(pod.Status.Conditions, corev1.PodReady)
if condition == nil || condition.Status != corev1.ConditionTrue {
return false, nil
}
svc.Spec.PublishNotReadyAddresses = false
return true, nil
}
}
return false, nil
}

View File

@ -0,0 +1,289 @@
package utils
import (
"context"
kruisePub "github.com/openkruise/kruise-api/apps/pub"
kruiseV1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
kruiseV1beta1 "github.com/openkruise/kruise-api/apps/v1beta1"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"testing"
)
var (
scheme = runtime.NewScheme()
)
func init() {
utilruntime.Must(gamekruiseiov1alpha1.AddToScheme(scheme))
utilruntime.Must(kruiseV1beta1.AddToScheme(scheme))
utilruntime.Must(kruiseV1alpha1.AddToScheme(scheme))
utilruntime.Must(corev1.AddToScheme(scheme))
}
func TestAllowNotReadyContainers(t *testing.T) {
tests := []struct {
// input
pod *corev1.Pod
svc *corev1.Service
gss *gamekruiseiov1alpha1.GameServerSet
isSvcShared bool
podElse []*corev1.Pod
// output
inplaceUpdateNotReadyBlocker string
isSvcUpdated bool
}{
// When svc is not shared, pod updated, svc should not publish NotReadyAddresses
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case0-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStateUpdating),
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case0",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: true,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case0",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "true",
isSvcUpdated: true,
},
// When svc is not shared & pod is pre-updating & svc PublishNotReadyAddresses is false, svc should publish NotReadyAddresses
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case1-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStatePreparingUpdate),
gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker: "true",
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case1",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: false,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case1",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v2.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "true",
isSvcUpdated: true,
},
// When svc is not shared & pod is pre-updating & svc PublishNotReadyAddresses is true, finalizer of pod should be removed to enter next stage
{
pod: &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case2-0",
UID: "xxx0",
Labels: map[string]string{
kruisePub.LifecycleStateKey: string(kruisePub.LifecycleStatePreparingUpdate),
gamekruiseiov1alpha1.GameServerOwnerGssKey: "case2",
},
},
Status: corev1.PodStatus{
Conditions: []corev1.PodCondition{
{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
},
},
ContainerStatuses: []corev1.ContainerStatus{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
svc: &corev1.Service{
Spec: corev1.ServiceSpec{
PublishNotReadyAddresses: true,
},
},
gss: &gamekruiseiov1alpha1.GameServerSet{
TypeMeta: metav1.TypeMeta{
Kind: "GameServerSet",
APIVersion: "game.kruise.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: "xxx",
Name: "case2",
UID: "xxx0",
},
Spec: gamekruiseiov1alpha1.GameServerSetSpec{
Network: &gamekruiseiov1alpha1.Network{
NetworkConf: []gamekruiseiov1alpha1.NetworkConfParams{
{
Name: gamekruiseiov1alpha1.AllowNotReadyContainersNetworkConfName,
Value: "name_B",
},
},
},
GameServerTemplate: gamekruiseiov1alpha1.GameServerTemplate{
PodTemplateSpec: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "name_A",
Image: "v1.0",
},
{
Name: "name_B",
Image: "v1.0",
},
},
},
},
},
},
},
isSvcShared: false,
inplaceUpdateNotReadyBlocker: "false",
isSvcUpdated: false,
},
}
for i, test := range tests {
objs := []client.Object{test.gss, test.pod, test.svc}
c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(objs...).Build()
actual, err := AllowNotReadyContainers(c, context.TODO(), test.pod, test.svc, test.isSvcShared)
if err != nil {
t.Errorf("case %d: %s", i, err.Error())
}
if actual != test.isSvcUpdated {
t.Errorf("case %d: expect isSvcUpdated is %v but actually got %v", i, test.isSvcUpdated, actual)
}
if test.pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker] != test.inplaceUpdateNotReadyBlocker {
t.Errorf("case %d: expect inplaceUpdateNotReadyBlocker is %v but actually got %v", i, test.inplaceUpdateNotReadyBlocker, test.pod.GetLabels()[gamekruiseiov1alpha1.InplaceUpdateNotReadyBlocker])
}
}
}

View File

@ -0,0 +1,124 @@
English | [中文](./README.zh_CN.md)
The Volcengine Kubernetes Engine supports the CLB reuse mechanism in Kubernetes: different Services can use different ports of the same CLB. The Volcengine-CLB network plugin therefore records the port allocation of each CLB. For GameServers whose network type is specified as Volcengine-CLB, the plugin automatically allocates a port and creates a Service object. Once the public IP in the Service's ingress field is successfully created, the GameServer network turns to the Ready state and the process is complete.
![image](https://github.com/lizhipeng629/kruise-game/assets/110802158/209de309-b9b7-4ba8-b2fb-da0d299e2edb)
## Volcengine-CLB configuration
### plugin configuration
```toml
[volcengine]
enable = true
[volcengine.clb]
#Fill in the free port range that the CLB can use to allocate external access ports to pods. The maximum port range is 200.
max_port = 700
min_port = 500
```
### Parameter
#### ClbIds
- Meaning: the CLB IDs. You can fill in more than one; the CLBs need to be created in Volcengine in advance.
- Value: CLB IDs separated by `,`. For example: `clb-9zeo7prq1m25ctpfrw1m7`,`clb-bp1qz7h50yd3w58h2f8je`,...
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported (see the fragment after this list).
- Value: `port1/protocol1`,`port2/protocol2`,... The protocol names must be in uppercase letters.
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated Service is assigned NodePorts. This can be set to false only in CLB passthrough mode.
- Value: false / true
- Configurable: Y
#### Fixed
- Meaning: whether the mapping relationship is fixed. If so, the mapping remains unchanged even if the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates, during which traffic will not be cut off.
- Value: {containerName_0},{containerName_1},... e.g. sidecar
- Configurable: it cannot be changed during the in-place update process.
#### Annotations
- Meaning: the annotations added to the Service.
- Value: key1:value1,key2:value2,...
- Configurable: Y
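As a minimal sketch of a multi-port configuration, the `networkConf` fragment below assumes a game server that additionally exposes UDP port 7777 (the CLB ID and port numbers are illustrative only):
```yaml
network:
  networkType: Volcengine-CLB
  networkConf:
    - name: ClbIds
      value: clb-xxxxx
    - name: PortProtocols
      # multiple pairs follow the {port}/{protocol} format, protocols in uppercase
      value: "80/TCP,7777/UDP"
```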
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gss-2048-clb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: Volcengine-CLB
networkConf:
- name: ClbIds
#Fill in Volcengine Cloud LoadBalancer Id here
value: clb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048:v1.0
name: app-2048
volumeMounts:
- name: shared-dir
mountPath: /var/www/html/js
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048-sidecar:v1.0
name: sidecar
args:
- bash
- -c
- rsync -aP /app/js/* /app/scripts/ && while true; do echo 11;sleep 2; done
volumeMounts:
- name: shared-dir
mountPath: /app/scripts
volumes:
- name: shared-dir
emptyDir: {}
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-01-19T08:19:49Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xx.xxx
ports:
- name: "80"
port: 6611
protocol: TCP
internalAddresses:
- ip: 172.16.200.60
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-01-19T08:19:49Z"
networkType: Volcengine-CLB
```
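Once the network is Ready, the allocated address can also be consumed programmatically. The sketch below is only an illustration of how a matchmaker or game service might read it; it assumes the kruise-game v1alpha1 Go API used elsewhere in this changeset (a `GameServer` object whose `status.networkStatus` holds the addresses shown above), and the object name `gss-2048-clb-0` is hypothetical.
```go
// Illustrative sketch (not part of the plugin): read the external CLB endpoint
// of a GameServer. Field names follow the kruise-game v1alpha1 API; the
// GameServer name used here is hypothetical.
package main

import (
	"context"
	"fmt"

	gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	ctrlconfig "sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	scheme := runtime.NewScheme()
	_ = gamekruiseiov1alpha1.AddToScheme(scheme)

	c, err := client.New(ctrlconfig.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	gs := &gamekruiseiov1alpha1.GameServer{}
	if err := c.Get(context.Background(),
		types.NamespacedName{Namespace: "default", Name: "gss-2048-clb-0"}, gs); err != nil {
		panic(err)
	}

	// Print every external ip:port pair published by the network plugin.
	for _, addr := range gs.Status.NetworkStatus.ExternalAddresses {
		for _, p := range addr.Ports {
			if p.Port != nil {
				fmt.Printf("external endpoint: %s:%s/%s\n", addr.IP, p.Port.String(), p.Protocol)
			}
		}
	}
}
```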

View File

@ -0,0 +1,125 @@
Chinese | [English](./README.md)
Volcengine Kubernetes Engine (VKE) supports CLB reuse in Kubernetes: different Services can use different ports of the same CLB. The Volcengine-CLB network plugin therefore records the port allocation of each CLB. For GameServers whose network type is Volcengine-CLB, the plugin automatically allocates a port and creates a Service object; once the public IP in the Service's ingress field has been created, the GameServer network turns Ready and the process is complete.
![image](https://github.com/lizhipeng629/kruise-game/assets/110802158/209de309-b9b7-4ba8-b2fb-da0d299e2edb)
## Volcengine-CLB configuration
### Plugin configuration
```toml
[volcengine]
enable = true
[volcengine.clb]
# Fill in the free port range that the CLB can use to expose pods externally. The range may cover at most 200 ports.
max_port = 700
min_port = 500
```
### Parameters
#### ClbIds
- Meaning: the CLB instance id(s). More than one can be filled in. The CLBs must be created in Volcengine in advance.
- Value: clbIds separated by `,`. For example: `clb-9zeo7prq1m25ctpfrw1m7,clb-bp1qz7h50yd3w58h2f8je,...`
- Configurable: Y
#### PortProtocols
- Meaning: the ports and protocols exposed by the pod; multiple port/protocol pairs are supported.
- Value: `port1/protocol1,port2/protocol2,...` The protocol names must be in uppercase letters.
- Configurable: Y
#### Fixed
- Meaning: whether the access IP/port mapping is fixed. If so, the internal and external mapping remains unchanged even if the pod is deleted and recreated.
- Value: false / true
- Configurable: Y
#### AllocateLoadBalancerNodePorts
- Meaning: whether the generated Service is assigned NodePorts. This can be set to false only in CLB passthrough mode.
- Value: true / false
- Configurable: Y
#### AllowNotReadyContainers
- Meaning: the container names that are allowed to be not ready during in-place updates, so traffic to them is not cut off; multiple names can be filled in.
- Value: `{containerName_0},{containerName_1},...` e.g. `sidecar`
- Configurable: it cannot be changed during the in-place update process.
#### Annotations
- Meaning: the annotations added to the Service; multiple entries can be filled in.
- Value: `key1:value1,key2:value2,...`
- Configurable: Y
### Example
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gss-2048-clb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: Volcengine-CLB
networkConf:
- name: ClbIds
#Fill in Volcengine Cloud LoadBalancer Id here
value: clb-xxxxx
- name: PortProtocols
#Fill in the exposed ports and their corresponding protocols here.
#If there are multiple ports, the format is as follows: {port1}/{protocol1},{port2}/{protocol2}...
#If the protocol is not filled in, the default is TCP
value: 80/TCP
- name: AllocateLoadBalancerNodePorts
# Whether the generated service is assigned nodeport.
value: "true"
- name: Fixed
#Fill in here whether a fixed IP is required [optional] ; Default is false
value: "false"
- name: Annotations
#Fill in the anno related to clb on the service
#The format is as follows: {key1}:{value1},{key2}:{value2}...
value: "key1:value1,key2:value2"
gameServerTemplate:
spec:
containers:
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048:v1.0
name: app-2048
volumeMounts:
- name: shared-dir
mountPath: /var/www/html/js
- image: cr-helm2-cn-beijing.cr.volces.com/kruise/2048-sidecar:v1.0
name: sidecar
args:
- bash
- -c
- rsync -aP /app/js/* /app/scripts/ && while true; do echo 11;sleep 2; done
volumeMounts:
- name: shared-dir
mountPath: /app/scripts
volumes:
- name: shared-dir
emptyDir: {}
EOF
```
Check the network status in GameServer:
```
networkStatus:
createTime: "2024-01-19T08:19:49Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: xxx.xxx.xx.xxx
ports:
- name: "80"
port: 6611
protocol: TCP
internalAddresses:
- ip: 172.16.200.60
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-01-19T08:19:49Z"
networkType: Volcengine-CLB
```

View File

@ -0,0 +1,688 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volcengine
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
log "k8s.io/klog/v2"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
cperrors "github.com/openkruise/kruise-game/cloudprovider/errors"
provideroptions "github.com/openkruise/kruise-game/cloudprovider/options"
"github.com/openkruise/kruise-game/cloudprovider/utils"
"github.com/openkruise/kruise-game/pkg/util"
)
const (
ClbNetwork = "Volcengine-CLB"
AliasCLB = "CLB-Network"
ClbIdLabelKey = "service.beta.kubernetes.io/volcengine-loadbalancer-id"
ClbIdsConfigName = "ClbIds"
PortProtocolsConfigName = "PortProtocols"
FixedConfigName = "Fixed"
AllocateLoadBalancerNodePorts = "AllocateLoadBalancerNodePorts"
ClbAnnotations = "Annotations"
ClbConfigHashKey = "game.kruise.io/network-config-hash"
ClbIdAnnotationKey = "service.beta.kubernetes.io/volcengine-loadbalancer-id"
ClbAddressTypeKey = "service.beta.kubernetes.io/volcengine-loadbalancer-address-type"
ClbAddressTypePublic = "PUBLIC"
ClbSchedulerKey = "service.beta.kubernetes.io/volcengine-loadbalancer-scheduler"
ClbSchedulerWRR = "wrr"
SvcSelectorKey = "statefulset.kubernetes.io/pod-name"
EnableClbScatterConfigName = "EnableClbScatter"
EnableMultiIngressConfigName = "EnableMultiIngress"
)
type portAllocated map[int32]bool
type ClbPlugin struct {
maxPort int32
minPort int32
blockPorts []int32
cache map[string]portAllocated
podAllocate map[string]string
mutex sync.RWMutex
lastScatterIdx int // round-robin index used when scatter is enabled
}
type clbConfig struct {
lbIds []string
targetPorts []int
protocols []corev1.Protocol
isFixed bool
annotations map[string]string
allocateLoadBalancerNodePorts bool
enableClbScatter bool // scatter allocations across multiple CLBs in round-robin order
enableMultiIngress bool // expose every ingress IP when the CLB reports more than one
}
func (c *ClbPlugin) Name() string {
return ClbNetwork
}
func (c *ClbPlugin) Alias() string {
return AliasCLB
}
func (c *ClbPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
log.Infof("[CLB] Init called, options: %+v", options)
c.mutex.Lock()
defer c.mutex.Unlock()
clbOptions, ok := options.(provideroptions.VolcengineOptions)
if !ok {
log.Errorf("[CLB] failed to convert options to clbOptions: %+v", options)
return cperrors.ToPluginError(fmt.Errorf("failed to convert options to clbOptions"), cperrors.InternalError)
}
c.minPort = clbOptions.CLBOptions.MinPort
c.maxPort = clbOptions.CLBOptions.MaxPort
c.blockPorts = clbOptions.CLBOptions.BlockPorts
svcList := &corev1.ServiceList{}
err := client.List(ctx, svcList)
if err != nil {
log.Errorf("[CLB] client.List failed: %v", err)
return err
}
c.cache, c.podAllocate = initLbCache(svcList.Items, c.minPort, c.maxPort, c.blockPorts)
log.Infof("[CLB] Init finished, minPort=%d, maxPort=%d, blockPorts=%v, svcCount=%d", c.minPort, c.maxPort, c.blockPorts, len(svcList.Items))
return nil
}
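// initLbCache rebuilds the per-CLB port-allocation cache and the pod-to-ports map
// from the existing LoadBalancer Services, so that allocations survive controller restarts.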
func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []int32) (map[string]portAllocated, map[string]string) {
newCache := make(map[string]portAllocated)
newPodAllocate := make(map[string]string)
for _, svc := range svcList {
lbId := svc.Labels[ClbIdLabelKey]
if lbId != "" && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
if newCache[lbId] == nil {
newCache[lbId] = make(portAllocated, maxPort-minPort)
for i := minPort; i < maxPort; i++ {
newCache[lbId][i] = false
}
}
// block ports
for _, blockPort := range blockPorts {
newCache[lbId][blockPort] = true
}
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
}
}
}
log.Infof("[%s] podAllocate cache complete initialization: %v", ClbNetwork, newPodAllocate)
return newCache, newPodAllocate
}
func (c *ClbPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
log.Infof("[CLB] OnPodAdded called for pod %s/%s", pod.GetNamespace(), pod.GetName())
return pod, nil
}
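// OnPodUpdated reconciles the per-pod Service: it creates the Service if it is missing,
// rebuilds it when the network-config hash changes, switches between LoadBalancer and
// ClusterIP when the network is disabled or re-enabled, and finally publishes the
// network status back to the pod.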
func (c *ClbPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, cperrors.PluginError) {
log.Infof("[CLB] OnPodUpdated called for pod %s/%s", pod.GetNamespace(), pod.GetName())
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, err := networkManager.GetNetworkStatus()
if err != nil {
log.Errorf("[CLB] GetNetworkStatus failed: %v", err)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkConfig := networkManager.GetNetworkConfig()
log.V(4).Infof("[CLB] NetworkConfig: %+v", networkConfig)
config := parseLbConfig(networkConfig)
log.V(4).Infof("[CLB] Parsed clbConfig: %+v", config)
if networkStatus == nil {
log.Infof("[CLB] networkStatus is nil, set NetworkNotReady for pod %s/%s", pod.GetNamespace(), pod.GetName())
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}, pod)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
networkStatus = &gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkNotReady,
}
}
// get svc
svc := &corev1.Service{}
err = client.Get(ctx, types.NamespacedName{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
}, svc)
if err != nil {
if errors.IsNotFound(err) {
log.Infof("[CLB] Service not found for pod %s/%s, will create new svc", pod.GetNamespace(), pod.GetName())
svc, err := c.consSvc(config, pod, client, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
return pod, cperrors.ToPluginError(client.Create(ctx, svc), cperrors.ApiCallError)
}
log.Errorf("[CLB] client.Get svc failed: %v", err)
return pod, cperrors.NewPluginError(cperrors.ApiCallError, err.Error())
}
if len(svc.OwnerReferences) > 0 && svc.OwnerReferences[0].Kind == "Pod" && svc.OwnerReferences[0].UID != pod.UID {
log.Infof("[CLB] waiting old svc %s/%s deleted. old owner pod uid is %s, but now is %s", svc.Namespace, svc.Name, svc.OwnerReferences[0].UID, pod.UID)
return pod, nil
}
// update svc
if util.GetHash(config) != svc.GetAnnotations()[ClbConfigHashKey] {
log.Infof("[CLB] config hash changed for pod %s/%s, updating svc", pod.GetNamespace(), pod.GetName())
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
return pod, cperrors.NewPluginError(cperrors.InternalError, err.Error())
}
newSvc, err := c.consSvc(config, pod, client, ctx)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
return pod, cperrors.ToPluginError(client.Update(ctx, newSvc), cperrors.ApiCallError)
}
// disable network
if networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
log.V(4).Infof("[CLB] Network disabled, set svc type to ClusterIP for pod %s/%s", pod.GetNamespace(), pod.GetName())
svc.Spec.Type = corev1.ServiceTypeClusterIP
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// enable network
if !networkManager.GetNetworkDisabled() && svc.Spec.Type == corev1.ServiceTypeClusterIP {
log.V(4).Infof("[CLB] Network enabled, set svc type to LoadBalancer for pod %s/%s", pod.GetNamespace(), pod.GetName())
svc.Spec.Type = corev1.ServiceTypeLoadBalancer
return pod, cperrors.ToPluginError(client.Update(ctx, svc), cperrors.ApiCallError)
}
// network not ready
if len(svc.Status.LoadBalancer.Ingress) == 0 {
log.Infof("[CLB] svc %s/%s has no ingress, network not ready", svc.Namespace, svc.Name)
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkNotReady
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
// allow not ready containers
if util.IsAllowNotReadyContainers(networkManager.GetNetworkConfig()) {
log.V(4).Infof("[CLB] AllowNotReadyContainers enabled for pod %s/%s", pod.GetNamespace(), pod.GetName())
toUpDateSvc, err := utils.AllowNotReadyContainers(client, ctx, pod, svc, false)
if err != nil {
return pod, err
}
if toUpDateSvc {
err := client.Update(ctx, svc)
if err != nil {
return pod, cperrors.ToPluginError(err, cperrors.ApiCallError)
}
}
}
// network ready
networkReady(svc, pod, networkStatus, config)
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
return pod, cperrors.ToPluginError(err, cperrors.InternalError)
}
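// networkReady fills the network status with the CLB ingress address(es) mapped to the
// Service ports (one external address per ingress IP when EnableMultiIngress is on) and
// the pod IP mapped to the target ports, then marks the network Ready.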
func networkReady(svc *corev1.Service, pod *corev1.Pod, networkStatus *gamekruiseiov1alpha1.NetworkStatus, config *clbConfig) {
internalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
externalAddresses := make([]gamekruiseiov1alpha1.NetworkAddress, 0)
// check whether multi-ingress-IP support is enabled
if config.enableMultiIngress && len(svc.Status.LoadBalancer.Ingress) > 1 {
// multi-ingress mode: create a separate external address for every ingress IP
for _, ingress := range svc.Status.LoadBalancer.Ingress {
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
// one external address per ingress IP
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: ingress.IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
externalAddresses = append(externalAddresses, externalAddress)
}
}
} else {
// single-ingress mode (original behavior)
if len(svc.Status.LoadBalancer.Ingress) > 0 {
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
instrEPort := intstr.FromInt(int(port.Port))
externalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: svc.Status.LoadBalancer.Ingress[0].IP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrEPort,
Protocol: port.Protocol,
},
},
}
externalAddresses = append(externalAddresses, externalAddress)
}
}
}
// internal addresses are built the same way in both modes
for _, port := range svc.Spec.Ports {
instrIPort := port.TargetPort
internalAddress := gamekruiseiov1alpha1.NetworkAddress{
IP: pod.Status.PodIP,
Ports: []gamekruiseiov1alpha1.NetworkPort{
{
Name: instrIPort.String(),
Port: &instrIPort,
Protocol: port.Protocol,
},
},
}
internalAddresses = append(internalAddresses, internalAddress)
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
}
func (c *ClbPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) cperrors.PluginError {
log.Infof("[CLB] OnPodDeleted called for pod %s/%s", pod.GetNamespace(), pod.GetName())
networkManager := utils.NewNetworkManager(pod, client)
networkConfig := networkManager.GetNetworkConfig()
sc := parseLbConfig(networkConfig)
var podKeys []string
if sc.isFixed {
log.Infof("[CLB] isFixed=true, check gss for pod %s/%s", pod.GetNamespace(), pod.GetName())
gss, err := util.GetGameServerSetOfPod(pod, client, ctx)
if err != nil && !errors.IsNotFound(err) {
return cperrors.ToPluginError(err, cperrors.ApiCallError)
}
// gss exists in cluster, do not deAllocate.
if err == nil && gss.GetDeletionTimestamp() == nil {
log.Infof("[CLB] gss exists, skip deAllocate for pod %s/%s", pod.GetNamespace(), pod.GetName())
return nil
}
// gss not exists in cluster, deAllocate all the ports related to it.
for key := range c.podAllocate {
gssName := pod.GetLabels()[gamekruiseiov1alpha1.GameServerOwnerGssKey]
if strings.Contains(key, pod.GetNamespace()+"/"+gssName) {
podKeys = append(podKeys, key)
}
}
} else {
podKeys = append(podKeys, pod.GetNamespace()+"/"+pod.GetName())
}
for _, podKey := range podKeys {
log.Infof("[CLB] deAllocate for podKey %s", podKey)
c.deAllocate(podKey)
}
return nil
}
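// allocate picks a CLB from lbIds that still has num free ports (round-robin across
// CLBs when scatter is enabled, first-fit otherwise), marks those ports as used and
// records the allocation under nsName.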
func (c *ClbPlugin) allocate(lbIds []string, num int, nsName string, enableClbScatter ...bool) (string, []int32, error) {
c.mutex.Lock()
defer c.mutex.Unlock()
log.Infof("[CLB] allocate called, lbIds=%v, num=%d, nsName=%s, scatter=%v", lbIds, num, nsName, enableClbScatter)
if len(lbIds) == 0 {
return "", nil, fmt.Errorf("no load balancer IDs provided")
}
var ports []int32
var lbId string
useScatter := false
if len(enableClbScatter) > 0 {
useScatter = enableClbScatter[0]
}
if useScatter && len(lbIds) > 0 {
log.V(4).Infof("[CLB] scatter enabled, round robin from idx %d", c.lastScatterIdx)
// round-robin allocation
startIdx := c.lastScatterIdx % len(lbIds)
for i := 0; i < len(lbIds); i++ {
idx := (startIdx + i) % len(lbIds)
clbId := lbIds[idx]
if c.cache[clbId] == nil {
// we assume that an empty cache is always allocatable
c.newCacheForSingleLb(clbId)
lbId = clbId
c.lastScatterIdx = idx + 1 // start from the next CLB next time
break
}
sum := 0
for p := c.minPort; p < c.maxPort; p++ {
if !c.cache[clbId][p] {
sum++
}
if sum >= num {
lbId = clbId
c.lastScatterIdx = idx + 1 // start from the next CLB next time
break
}
}
if lbId != "" {
break
}
}
} else {
log.V(4).Infof("[CLB] scatter disabled, use default order")
// original first-fit order
for _, clbId := range lbIds {
if c.cache[clbId] == nil {
c.newCacheForSingleLb(clbId)
lbId = clbId
break
}
sum := 0
for i := c.minPort; i < c.maxPort; i++ {
if !c.cache[clbId][i] {
sum++
}
if sum >= num {
lbId = clbId
break
}
}
if lbId != "" {
break
}
}
}
if lbId == "" {
return "", nil, fmt.Errorf("unable to find load balancer with %d available ports", num)
}
// Find available ports sequentially
portCount := 0
for port := c.minPort; port < c.maxPort && portCount < num; port++ {
if !c.cache[lbId][port] {
c.cache[lbId][port] = true
ports = append(ports, port)
portCount++
}
}
// Check if we found enough ports
if len(ports) < num {
// Rollback: release allocated ports
for _, port := range ports {
c.cache[lbId][port] = false
}
return "", nil, fmt.Errorf("insufficient available ports on load balancer %s: found %d, need %d", lbId, len(ports), num)
}
c.podAllocate[nsName] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("[CLB] pod %s allocate clb %s ports %v", nsName, lbId, ports)
return lbId, ports, nil
}
// newCacheForSingleLb initializes the port allocation cache for a single load balancer. MUST BE CALLED IN LOCK STATE
func (c *ClbPlugin) newCacheForSingleLb(lbId string) {
if c.cache[lbId] == nil {
c.cache[lbId] = make(portAllocated, c.maxPort-c.minPort+1)
for i := c.minPort; i <= c.maxPort; i++ {
c.cache[lbId][i] = false
}
// block ports
for _, blockPort := range c.blockPorts {
c.cache[lbId][blockPort] = true
}
}
}
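// deAllocate returns the ports previously allocated under nsName to the pool and
// removes the record; blocked ports stay reserved.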
func (c *ClbPlugin) deAllocate(nsName string) {
c.mutex.Lock()
defer c.mutex.Unlock()
log.Infof("[CLB] deAllocate called for nsName=%s", nsName)
allocatedPorts, exist := c.podAllocate[nsName]
if !exist {
log.Warningf("[CLB] deAllocate: nsName=%s not found in podAllocate", nsName)
return
}
clbPorts := strings.Split(allocatedPorts, ":")
lbId := clbPorts[0]
ports := util.StringToInt32Slice(clbPorts[1], ",")
for _, port := range ports {
c.cache[lbId][port] = false
}
// block ports
for _, blockPort := range c.blockPorts {
c.cache[lbId][blockPort] = true
}
delete(c.podAllocate, nsName)
log.Infof("pod %s deallocate clb %s ports %v", nsName, lbId, ports)
}
func init() {
clbPlugin := ClbPlugin{
mutex: sync.RWMutex{},
}
volcengineProvider.registerPlugin(&clbPlugin)
}
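// parseLbConfig turns the GameServerSet network configuration parameters into a
// clbConfig; entries that fail to parse are skipped and the defaults are kept.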
func parseLbConfig(conf []gamekruiseiov1alpha1.NetworkConfParams) *clbConfig {
log.Infof("[CLB] parseLbConfig called, conf=%+v", conf)
var lbIds []string
ports := make([]int, 0)
protocols := make([]corev1.Protocol, 0)
isFixed := false
allocateLoadBalancerNodePorts := true
annotations := map[string]string{}
enableClbScatter := false
enableMultiIngress := false
for _, c := range conf {
switch c.Name {
case ClbIdsConfigName:
seenIds := make(map[string]struct{})
for _, clbId := range strings.Split(c.Value, ",") {
if clbId != "" {
if _, exists := seenIds[clbId]; !exists {
lbIds = append(lbIds, clbId)
seenIds[clbId] = struct{}{}
}
}
}
case PortProtocolsConfigName:
for _, pp := range strings.Split(c.Value, ",") {
ppSlice := strings.Split(pp, "/")
port, err := strconv.Atoi(ppSlice[0])
if err != nil {
continue
}
ports = append(ports, port)
if len(ppSlice) != 2 {
protocols = append(protocols, corev1.ProtocolTCP)
} else {
protocols = append(protocols, corev1.Protocol(ppSlice[1]))
}
}
case FixedConfigName:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
isFixed = v
case AllocateLoadBalancerNodePorts:
v, err := strconv.ParseBool(c.Value)
if err != nil {
continue
}
allocateLoadBalancerNodePorts = v
case ClbAnnotations:
for _, anno := range strings.Split(c.Value, ",") {
annoKV := strings.Split(anno, ":")
if len(annoKV) == 2 {
annotations[annoKV[0]] = annoKV[1]
} else {
log.Warningf("clb annotation %s is invalid", annoKV[0])
}
}
case EnableClbScatterConfigName:
v, err := strconv.ParseBool(c.Value)
if err == nil {
enableClbScatter = v
}
case EnableMultiIngressConfigName:
v, err := strconv.ParseBool(c.Value)
if err == nil {
enableMultiIngress = v
}
}
}
return &clbConfig{
lbIds: lbIds,
protocols: protocols,
targetPorts: ports,
isFixed: isFixed,
annotations: annotations,
allocateLoadBalancerNodePorts: allocateLoadBalancerNodePorts,
enableClbScatter: enableClbScatter,
enableMultiIngress: enableMultiIngress,
}
}
func getPorts(ports []corev1.ServicePort) []int32 {
var ret []int32
for _, port := range ports {
ret = append(ret, port.Port)
}
return ret
}
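// consSvc builds the per-pod LoadBalancer Service, reusing the ports already
// allocated to the pod if present and allocating new ones otherwise.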
func (c *ClbPlugin) consSvc(config *clbConfig, pod *corev1.Pod, client client.Client, ctx context.Context) (*corev1.Service, error) {
var ports []int32
var lbId string
podKey := pod.GetNamespace() + "/" + pod.GetName()
allocatedPorts, exist := c.podAllocate[podKey]
if exist {
clbPorts := strings.Split(allocatedPorts, ":")
lbId = clbPorts[0]
ports = util.StringToInt32Slice(clbPorts[1], ",")
} else {
var err error
lbId, ports, err = c.allocate(config.lbIds, len(config.targetPorts), podKey, config.enableClbScatter)
if err != nil {
log.Errorf("[CLB] pod %s allocate clb failed: %v", podKey, err)
return nil, err
}
}
svcPorts := make([]corev1.ServicePort, 0)
for i := 0; i < len(config.targetPorts); i++ {
portName := fmt.Sprintf("%d-%s", config.targetPorts[i], strings.ToLower(string(config.protocols[i])))
svcPorts = append(svcPorts, corev1.ServicePort{
Name: portName,
Port: ports[i],
Protocol: config.protocols[i],
TargetPort: intstr.FromInt(config.targetPorts[i]),
})
}
annotations := map[string]string{
ClbSchedulerKey: ClbSchedulerWRR,
ClbAddressTypeKey: ClbAddressTypePublic,
ClbIdAnnotationKey: lbId,
ClbConfigHashKey: util.GetHash(config),
}
for key, value := range config.annotations {
annotations[key] = value
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: pod.GetName(),
Namespace: pod.GetNamespace(),
Annotations: annotations,
OwnerReferences: getSvcOwnerReference(client, ctx, pod, config.isFixed),
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeLoadBalancer,
Selector: map[string]string{
SvcSelectorKey: pod.GetName(),
},
Ports: svcPorts,
AllocateLoadBalancerNodePorts: ptr.To[bool](config.allocateLoadBalancerNodePorts),
},
}
return svc, nil
}
func getSvcOwnerReference(c client.Client, ctx context.Context, pod *corev1.Pod, isFixed bool) []metav1.OwnerReference {
ownerReferences := []metav1.OwnerReference{
{
APIVersion: pod.APIVersion,
Kind: pod.Kind,
Name: pod.GetName(),
UID: pod.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
if isFixed {
gss, err := util.GetGameServerSetOfPod(pod, c, ctx)
if err == nil {
ownerReferences = []metav1.OwnerReference{
{
APIVersion: gss.APIVersion,
Kind: gss.Kind,
Name: gss.GetName(),
UID: gss.GetUID(),
Controller: ptr.To[bool](true),
BlockOwnerDeletion: ptr.To[bool](true),
},
}
}
}
return ownerReferences
}

File diff suppressed because it is too large

View File

@ -0,0 +1,206 @@
package volcengine
import (
"context"
"encoding/json"
"fmt"
"strconv"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider"
"github.com/openkruise/kruise-game/cloudprovider/errors"
"github.com/openkruise/kruise-game/cloudprovider/utils"
corev1 "k8s.io/api/core/v1"
log "k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
EIPNetwork = "Volcengine-EIP"
AliasSEIP = "EIP-Network"
ReleaseStrategyConfigName = "ReleaseStrategy"
PoolIdConfigName = "PoolId"
ResourceGroupIdConfigName = "ResourceGroupId"
BandwidthConfigName = "Bandwidth"
BandwidthPackageIdConfigName = "BandwidthPackageId"
ChargeTypeConfigName = "ChargeType"
DescriptionConfigName = "Description"
VkeAnnotationPrefix = "vke.volcengine.com"
UseExistEIPAnnotationKey = "vke.volcengine.com/primary-eip-id"
WithEIPAnnotationKey = "vke.volcengine.com/primary-eip-allocate"
EipAttributeAnnotationKey = "vke.volcengine.com/primary-eip-attributes"
EipStatusKey = "vke.volcengine.com/allocated-eips"
DefaultEipConfig = "{\"type\": \"Elastic\"}"
)
type eipStatus struct {
EipId string `json:"EipId,omitempty"` // EIP instance ID
EipAddress string `json:"EipAddress,omitempty"` // public address of the EIP instance
EniId string `json:"EniId,omitempty"` // ID of the pod's elastic network interface (ENI)
EniIp string `json:"niIp,omitempty"` // private IPv4 address of the pod's ENI
}
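// EipPlugin implements the Volcengine-EIP network type by translating the GameServer
// network configuration into VKE pod annotations; the EIP itself is allocated and bound
// by VKE, and its status is read back from the vke.volcengine.com/allocated-eips annotation.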
type EipPlugin struct {
}
func (E EipPlugin) Name() string {
return EIPNetwork
}
func (E EipPlugin) Alias() string {
return AliasSEIP
}
func (E EipPlugin) Init(client client.Client, options cloudprovider.CloudProviderOptions, ctx context.Context) error {
log.Infof("Initializing Volcengine EIP plugin")
return nil
}
func (E EipPlugin) OnPodAdded(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("begin to handle PodAdded for pod name %s, namespace %s", pod.Name, pod.Namespace)
networkManager := utils.NewNetworkManager(pod, client)
// fetch the network configuration parameters
networkConfs := networkManager.GetNetworkConfig()
log.Infof("pod %s/%s network configs: %+v", pod.Namespace, pod.Name, networkConfs)
if networkManager.GetNetworkType() != EIPNetwork {
log.Infof("pod %s/%s network type is not %s, skipping", pod.Namespace, pod.Name, EIPNetwork)
return pod, nil
}
log.Infof("processing pod %s/%s with Volcengine EIP network", pod.Namespace, pod.Name)
// check whether an existing EIP ID is configured via UseExistEIPAnnotationKey
eipID := ""
if pod.Annotations == nil {
log.Infof("pod %s/%s has no annotations, initializing", pod.Namespace, pod.Name)
pod.Annotations = make(map[string]string)
}
eipConfig := make(map[string]interface{})
// extract parameters from the configuration
for _, conf := range networkConfs {
log.Infof("processing network config for pod %s/%s: %s=%s", pod.Namespace, pod.Name, conf.Name, conf.Value)
switch conf.Name {
case UseExistEIPAnnotationKey:
pod.Annotations[UseExistEIPAnnotationKey] = conf.Value
eipID = conf.Value
log.Infof("pod %s/%s using existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
case "billingType":
var err error
eipConfig[conf.Name], err = strconv.ParseInt(conf.Value, 10, 64)
if err != nil {
log.Infof("failed to parse billingType for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("pod %s/%s billingType set to: %v", pod.Namespace, pod.Name, eipConfig[conf.Name])
case "bandwidth":
var err error
eipConfig[conf.Name], err = strconv.ParseInt(conf.Value, 10, 64)
if err != nil {
log.Infof("failed to parse bandwidth for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("pod %s/%s bandwidth set to: %v", pod.Namespace, pod.Name, eipConfig[conf.Name])
default:
eipConfig[conf.Name] = conf.Value
log.Infof("pod %s/%s setting %s to: %v", pod.Namespace, pod.Name, conf.Name, conf.Value)
}
}
// update the pod annotations
if eipID != "" {
// use the existing EIP
log.Infof("pod %s/%s using existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
pod.Annotations[UseExistEIPAnnotationKey] = eipID
} else {
// allocate a new EIP
if len(eipConfig) == 0 {
eipConfig["description"] = "Created by the OKG Volcengine EIP plugin. Do not delete or modify."
}
configs, _ := json.Marshal(eipConfig)
log.Infof("pod %s/%s allocating new EIP with config: %s", pod.Namespace, pod.Name, string(configs))
pod.Annotations[WithEIPAnnotationKey] = DefaultEipConfig
pod.Annotations[EipAttributeAnnotationKey] = string(configs)
}
log.Infof("completed OnPodAdded for pod %s/%s", pod.Namespace, pod.Name)
return pod, nil
}
func (E EipPlugin) OnPodUpdated(client client.Client, pod *corev1.Pod, ctx context.Context) (*corev1.Pod, errors.PluginError) {
log.Infof("begin to handle PodUpdated for pod name %s, namespace %s", pod.Name, pod.Namespace)
networkManager := utils.NewNetworkManager(pod, client)
networkStatus, _ := networkManager.GetNetworkStatus()
if networkStatus == nil {
log.Infof("network status is nil for pod %s/%s, updating to waiting state", pod.Namespace, pod.Name)
pod, err := networkManager.UpdateNetworkStatus(gamekruiseiov1alpha1.NetworkStatus{
CurrentNetworkState: gamekruiseiov1alpha1.NetworkWaiting,
}, pod)
if err != nil {
log.Infof("failed to update network status for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(err, errors.InternalError)
}
return pod, nil
}
podEipStatus := []eipStatus{}
if str, ok := pod.Annotations[EipStatusKey]; ok {
log.Infof("found EIP status annotation for pod %s/%s: %s", pod.Namespace, pod.Name, str)
err := json.Unmarshal([]byte(str), &podEipStatus)
if err != nil {
log.Infof("failed to unmarshal EipStatusKey for pod %s/%s: %v", pod.Namespace, pod.Name, err)
return pod, errors.ToPluginError(fmt.Errorf("failed to unmarshal EipStatusKey, err: %w", err), errors.ParameterError)
}
log.Infof("updating network status for pod %s/%s, internal IP: %s, external IP: %s",
pod.Namespace, pod.Name, podEipStatus[0].EniIp, podEipStatus[0].EipAddress)
var internalAddresses []gamekruiseiov1alpha1.NetworkAddress
var externalAddresses []gamekruiseiov1alpha1.NetworkAddress
for _, eipStatus := range podEipStatus {
internalAddresses = append(internalAddresses, gamekruiseiov1alpha1.NetworkAddress{
IP: eipStatus.EniIp,
})
externalAddresses = append(externalAddresses, gamekruiseiov1alpha1.NetworkAddress{
IP: eipStatus.EipAddress,
})
}
networkStatus.InternalAddresses = internalAddresses
networkStatus.ExternalAddresses = externalAddresses
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
log.Infof("network for pod %s/%s is ready, EIP: %s", pod.Namespace, pod.Name, podEipStatus[0].EipAddress)
pod, err = networkManager.UpdateNetworkStatus(*networkStatus, pod)
if err != nil {
log.Infof("failed to update network status for pod %s/%s: %v", pod.Namespace, pod.Name, err)
}
return pod, errors.ToPluginError(err, errors.InternalError)
}
log.Infof("no EIP status found for pod %s/%s, waiting for allocation", pod.Namespace, pod.Name)
return pod, nil
}
func (E EipPlugin) OnPodDeleted(client client.Client, pod *corev1.Pod, ctx context.Context) errors.PluginError {
log.Infof("handling pod deletion for pod %s/%s", pod.Namespace, pod.Name)
// check whether any extra handling is needed
if pod.Annotations != nil {
if eipID, ok := pod.Annotations[UseExistEIPAnnotationKey]; ok {
log.Infof("pod %s/%s being deleted had existing EIP ID: %s", pod.Namespace, pod.Name, eipID)
}
if _, ok := pod.Annotations[WithEIPAnnotationKey]; ok {
log.Infof("pod %s/%s being deleted had allocated EIP", pod.Namespace, pod.Name)
}
}
log.Infof("completed deletion handling for pod %s/%s", pod.Namespace, pod.Name)
return nil
}
func init() {
volcengineProvider.registerPlugin(&EipPlugin{})
}

View File

@ -0,0 +1,182 @@
package volcengine
import (
"context"
"encoding/json"
"testing"
"github.com/openkruise/kruise-game/apis/v1alpha1"
gamekruiseiov1alpha1 "github.com/openkruise/kruise-game/apis/v1alpha1"
"github.com/openkruise/kruise-game/cloudprovider/alibabacloud/apis/v1beta1"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestEipPlugin_Init(t *testing.T) {
plugin := EipPlugin{}
assert.Equal(t, EIPNetwork, plugin.Name())
assert.Equal(t, AliasSEIP, plugin.Alias())
err := plugin.Init(nil, nil, context.Background())
assert.NoError(t, err)
}
func TestEipPlugin_OnPodAdded_UseExistingEIP(t *testing.T) {
// build a test pod
networkConf := []v1alpha1.NetworkConfParams{}
networkConf = append(networkConf, v1alpha1.NetworkConfParams{
Name: UseExistEIPAnnotationKey,
Value: "eip-12345",
})
jsonStr, _ := json.Marshal(networkConf)
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
v1alpha1.GameServerNetworkConf: string(jsonStr),
},
},
}
// build a fake client
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
// run the plugin hook
plugin := EipPlugin{}
updatedPod, err := plugin.OnPodAdded(fakeClient, pod, context.Background())
// verify the result
assert.NoError(t, err)
assert.Equal(t, "eip-12345", updatedPod.Annotations[UseExistEIPAnnotationKey])
assert.Equal(t, EIPNetwork, updatedPod.Annotations[v1alpha1.GameServerNetworkType])
jErr := json.Unmarshal([]byte(updatedPod.Annotations[v1alpha1.GameServerNetworkConf]), &networkConf)
assert.NoError(t, jErr)
}
func addKvToParams(networkConf []v1alpha1.NetworkConfParams, keys []string, values []string) []v1alpha1.NetworkConfParams {
// append each key/value pair to the network conf params
for i := 0; i < len(keys); i++ {
networkConf = append(networkConf, v1alpha1.NetworkConfParams{
Name: keys[i],
Value: values[i],
})
}
return networkConf
}
func TestEipPlugin_OnPodAdded_NewEIP(t *testing.T) {
networkConf := []v1alpha1.NetworkConfParams{}
networkConf = addKvToParams(networkConf, []string{"name", "isp", "bandwidth", "description", "billingType"},
[]string{"eip-demo", "BGP", "100", "demo for pods eip", "2"})
jsonStr, _ := json.Marshal(networkConf)
// build a test pod with the relevant annotations
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
v1alpha1.GameServerNetworkConf: string(jsonStr),
},
},
}
// build a fake client
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
// run the plugin hook
plugin := EipPlugin{}
updatedPod, err := plugin.OnPodAdded(fakeClient, pod, context.Background())
// verify the result
assert.NoError(t, err)
assert.Equal(t, DefaultEipConfig, updatedPod.Annotations[WithEIPAnnotationKey])
assert.Equal(t, EIPNetwork, updatedPod.Annotations[v1alpha1.GameServerNetworkType])
attributeStr, ok := pod.Annotations[EipAttributeAnnotationKey]
assert.True(t, ok)
attributes := make(map[string]interface{})
jErr := json.Unmarshal([]byte(attributeStr), &attributes)
assert.NoError(t, jErr)
assert.Equal(t, "eip-demo", attributes["name"])
assert.Equal(t, "BGP", attributes["isp"])
assert.Equal(t, float64(100), attributes["bandwidth"])
assert.Equal(t, "demo for pods eip", attributes["description"])
assert.Equal(t, float64(2), attributes["billingType"])
}
func TestEipPlugin_OnPodUpdated_WithNetworkStatus(t *testing.T) {
// build a test pod with a network status annotation
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
"cloud.kruise.io/network-status": `{"currentNetworkState":"Waiting"}`,
},
},
Status: corev1.PodStatus{},
}
// build a fake client with the PodEIP scheme registered
scheme := runtime.NewScheme()
_ = corev1.AddToScheme(scheme)
_ = v1beta1.AddToScheme(scheme)
_ = gamekruiseiov1alpha1.AddToScheme(scheme)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(pod).
Build()
// run the plugin hook
plugin := EipPlugin{}
// Ensure network status includes EIP information
networkStatus := &v1alpha1.NetworkStatus{}
networkStatus.ExternalAddresses = []v1alpha1.NetworkAddress{{IP: "203.0.113.1"}}
networkStatus.InternalAddresses = []v1alpha1.NetworkAddress{{IP: "10.0.0.1"}}
networkStatus.CurrentNetworkState = gamekruiseiov1alpha1.NetworkReady
networkStatusBytes, jErr := json.Marshal(networkStatus)
assert.NoError(t, jErr)
pod.Annotations[v1alpha1.GameServerNetworkStatus] = string(networkStatusBytes)
updatedPod, err := plugin.OnPodUpdated(fakeClient, pod, context.Background())
assert.NoError(t, err)
// read back the network status written by OnPodUpdated
jErr = json.Unmarshal([]byte(updatedPod.Annotations[v1alpha1.GameServerNetworkStatus]), &networkStatus)
assert.NoError(t, jErr)
// verify the result
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "Ready")
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "203.0.113.1")
assert.Contains(t, updatedPod.Annotations[v1alpha1.GameServerNetworkStatus], "10.0.0.1")
}
func TestEipPlugin_OnPodDeleted(t *testing.T) {
plugin := EipPlugin{}
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pod",
Namespace: "default",
Annotations: map[string]string{
v1alpha1.GameServerNetworkType: EIPNetwork,
"cloud.kruise.io/network-status": `{"currentNetworkState":"Waiting"}`,
},
},
Status: corev1.PodStatus{},
}
err := plugin.OnPodDeleted(nil, pod, context.Background())
assert.Nil(t, err)
}

View File

@ -0,0 +1,61 @@
/*
Copyright 2024 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volcengine
import (
"github.com/openkruise/kruise-game/cloudprovider"
"k8s.io/klog/v2"
)
const (
Volcengine = "Volcengine"
)
var (
volcengineProvider = &Provider{
plugins: make(map[string]cloudprovider.Plugin),
}
)
type Provider struct {
plugins map[string]cloudprovider.Plugin
}
func (vp *Provider) Name() string {
return Volcengine
}
func (vp *Provider) ListPlugins() (map[string]cloudprovider.Plugin, error) {
if vp.plugins == nil {
return make(map[string]cloudprovider.Plugin), nil
}
return vp.plugins, nil
}
// registerPlugin registers a network plugin with the Volcengine provider
func (vp *Provider) registerPlugin(plugin cloudprovider.Plugin) {
name := plugin.Name()
if name == "" {
klog.Fatal("empty plugin name")
}
vp.plugins[name] = plugin
}
func NewVolcengineProvider() (cloudprovider.CloudProvider, error) {
return volcengineProvider, nil
}

View File

@ -0,0 +1,31 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: cert
namespace: system
spec:
commonName: kruise-game-controller-manager
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE)
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
secretName: kruise-game-certs
usages:
- server auth
- client auth
privateKey:
algorithm: RSA
size: 2048
rotationPolicy: Never
issuerRef:
name: selfsigned-issuer
kind: Issuer
group: cert-manager.io

View File

@ -0,0 +1,5 @@
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml

View File

@ -0,0 +1,16 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames

View File

@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.0
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.16.5
name: poddnats.alibabacloud.com
spec:
group: alibabacloud.com
@ -21,14 +20,19 @@ spec:
description: PodDNAT is the Schema for the poddnats API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object

View File

@ -0,0 +1,113 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
name: podeips.alibabacloud.com
spec:
group: alibabacloud.com
names:
kind: PodEIP
listKind: PodEIPList
plural: podeips
singular: podeip
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: PodEIP is the Schema for the podeips API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: PodEIPSpec defines the desired state of PodEIP
properties:
allocationID:
type: string
allocationType:
description: AllocationType ip type and release strategy
properties:
releaseAfter:
type: string
releaseStrategy:
allOf:
- enum:
- Follow
- TTL
- Never
- enum:
- Follow
- TTL
- Never
description: ReleaseStrategy is the type for eip release strategy
type: string
type:
default: Auto
description: IPAllocType is the type for eip alloc strategy
enum:
- Auto
- Static
type: string
required:
- releaseStrategy
- type
type: object
bandwidthPackageID:
type: string
required:
- allocationID
- allocationType
type: object
status:
description: PodEIPStatus defines the observed state of PodEIP
properties:
bandwidthPackageID:
description: BandwidthPackageID
type: string
eipAddress:
description: eip
type: string
internetChargeType:
type: string
isp:
type: string
name:
type: string
networkInterfaceID:
description: eni
type: string
podLastSeen:
description: PodLastSeen is the timestamp when pod resource last seen
format: date-time
type: string
privateIPAddress:
type: string
publicIpAddressPoolID:
type: string
resourceGroupID:
type: string
status:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,125 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.5
name: dedicatedclblisteners.networking.cloud.tencent.com
spec:
group: networking.cloud.tencent.com
names:
kind: DedicatedCLBListener
listKind: DedicatedCLBListenerList
plural: dedicatedclblisteners
singular: dedicatedclblistener
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: CLB ID
jsonPath: .spec.lbId
name: LbId
type: string
- description: Port of CLB Listener
jsonPath: .spec.lbPort
name: LbPort
type: integer
- description: Pod name of target pod
jsonPath: .spec.targetPod.podName
name: Pod
type: string
- description: State of the dedicated clb listener
jsonPath: .status.state
name: State
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: DedicatedCLBListener is the Schema for the dedicatedclblisteners
API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: DedicatedCLBListenerSpec defines the desired state of DedicatedCLBListener
properties:
extensiveParameters:
type: string
lbId:
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
lbPort:
format: int64
type: integer
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
lbRegion:
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
protocol:
enum:
- TCP
- UDP
type: string
x-kubernetes-validations:
- message: Value is immutable
rule: self == oldSelf
targetPod:
properties:
podName:
type: string
targetPort:
format: int64
type: integer
required:
- podName
- targetPort
type: object
required:
- lbId
- lbPort
- protocol
type: object
status:
description: DedicatedCLBListenerStatus defines the observed state of
DedicatedCLBListener
properties:
address:
type: string
listenerId:
type: string
message:
type: string
state:
enum:
- Bound
- Available
- Pending
- Failed
- Deleting
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

View File

@ -20,7 +20,7 @@ bases:
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
# - ../prometheus
@ -38,39 +38,39 @@ patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- manager_webhook_patch.yaml
- manager_webhook_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
#- webhookcainjection_patch.yaml
- webhookcainjection_patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
# fieldref:
# fieldpath: metadata.namespace
#- name: CERTIFICATE_NAME
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
#- name: SERVICE_NAMESPACE # namespace of the service
# objref:
# kind: Service
# version: v1
# name: webhook-service
# fieldref:
# fieldpath: metadata.namespace
#- name: SERVICE_NAME
# objref:
# kind: Service
# version: v1
# name: webhook-service
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service

View File

@ -9,15 +9,16 @@ spec:
containers:
- name: manager
ports:
- containerPort: 9443
- containerPort: 9876
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
- mountPath: /tmp/webhook-certs/
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert
secretName: kruise-game-certs
optional: false

View File

@ -1,8 +1,15 @@
# This patch add annotation to admission webhook config and
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1beta1
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: mutating-webhook-configuration
name: mutating-webhook
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)

View File

@ -9,3 +9,40 @@ enable = true
[alibabacloud.slb]
max_port = 700
min_port = 500
block_ports = [593]
[alibabacloud.nlb]
max_port = 1502
min_port = 1000
block_ports = [1025, 1434, 1068]
[hwcloud]
enable = true
[hwcloud.elb]
max_port = 700
min_port = 500
block_ports = []
[volcengine]
enable = true
[volcengine.clb]
max_port = 600
min_port = 550
block_ports = [593]
[aws]
enable = false
[aws.nlb]
max_port = 30050
min_port = 30001
[jdcloud]
enable = false
[jdcloud.nlb]
max_port = 700
min_port = 500
[tencentcloud]
enable = true
[tencentcloud.clb]
min_port = 700
max_port = 750

View File

@ -39,6 +39,9 @@ spec:
args:
- --leader-elect=false
- --provider-config=/etc/kruise-game/config.toml
- --api-server-qps=5
- --api-server-qps-burst=10
- --enable-cert-generation=false
image: controller:latest
name: manager
env:

View File

@ -2,7 +2,6 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
@ -13,19 +12,49 @@ rules:
- create
- patch
- apiGroups:
- admissionregistration.k8s.io
- ""
resources:
- mutatingwebhookconfigurations
- nodes
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
- persistentvolumeclaims/status
- persistentvolumes/status
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/status
- services/status
verbs:
- get
- patch
- update
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- create
@ -38,6 +67,7 @@ rules:
- alibabacloud.com
resources:
- poddnats
- podeips
verbs:
- get
- list
@ -46,6 +76,7 @@ rules:
- alibabacloud.com
resources:
- poddnats/status
- podeips/status
verbs:
- get
- apiGroups:
@ -62,17 +93,6 @@ rules:
- apps.kruise.io
resources:
- podprobemarkers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps.kruise.io
resources:
- statefulsets
verbs:
- create
@ -91,88 +111,32 @@ rules:
- patch
- update
- apiGroups:
- ""
- elbv2.k8s.aws
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- targetgroupbindings
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
- elbv2.services.k8s.aws
resources:
- pods/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- services
- listeners
- targetgroups
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services/status
verbs:
- get
- patch
- update
- apiGroups:
- game.kruise.io
resources:
- gameservers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- game.kruise.io
resources:
- gameservers/finalizers
verbs:
- update
- apiGroups:
- game.kruise.io
resources:
- gameservers/status
verbs:
- get
- patch
- update
- apiGroups:
- game.kruise.io
resources:
- gameserversets
verbs:
- create
@ -185,17 +149,37 @@ rules:
- apiGroups:
- game.kruise.io
resources:
- gameservers/finalizers
- gameserversets/finalizers
verbs:
- update
- apiGroups:
- game.kruise.io
resources:
- gameservers/status
- gameserversets/status
verbs:
- get
- patch
- update
- apiGroups:
- networking.cloud.tencent.com
resources:
- dedicatedclblisteners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- networking.cloud.tencent.com
resources:
- dedicatedclblisteners/status
verbs:
- get
- apiGroups:
- networking.k8s.io
resources:

View File

@ -0,0 +1,297 @@
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: index-offset-scheduler
namespace: kruise-game-system
---
# clusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: 'true'
name: index-offset-scheduler
rules:
- apiGroups:
- ''
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resourceNames:
- kube-scheduler
- index-offset-scheduler
resources:
- leases
verbs:
- get
- list
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leasecandidates
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- pods
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ''
resources:
- bindings
- pods/binding
verbs:
- create
- apiGroups:
- ''
resources:
- pods/status
verbs:
- patch
- update
- apiGroups:
- ''
resources:
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups:
- apps
- extensions
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- storage.k8s.io
resources:
- csinodes
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- csidrivers
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- csistoragecapacities
verbs:
- get
- list
- watch
- apiGroups:
- ""
resourceNames:
- kube-scheduler
- index-offset-scheduler
resources:
- endpoints
verbs:
- delete
- get
- patch
- update
---
# ClusterRoleBinding: index-offset-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: index-offset-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
name: index-offset-scheduler
namespace: kruise-game-system
roleRef:
kind: ClusterRole
name: index-offset-scheduler
apiGroup: rbac.authorization.k8s.io
---
# ClusterRoleBinding: system:volume-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: index-offset-scheduler-as-volume-scheduler
subjects:
- kind: ServiceAccount
name: index-offset-scheduler
namespace: kruise-game-system
roleRef:
kind: ClusterRole
name: system:volume-scheduler
apiGroup: rbac.authorization.k8s.io
---
# RoleBinding: apiserver
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: index-offset-scheduler-extension-apiserver-authentication-reader
namespace: kube-system
roleRef:
kind: Role
name: extension-apiserver-authentication-reader
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: index-offset-scheduler
namespace: kruise-game-system
---
# configmap
apiVersion: v1
kind: ConfigMap
metadata:
name: index-offset-scheduler-config
namespace: kruise-game-system
data:
scheduler-config.yaml: |
# stable v1 after version 1.25
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
leaderElection:
leaderElect: false
resourceNamespace: kruise-game-system
resourceName: index-offset-scheduler
profiles:
- schedulerName: index-offset-scheduler
plugins:
score:
enabled:
- name: index-offset-scheduler
---
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: index-offset-scheduler
namespace: kruise-game-system
labels:
app: index-offset-scheduler
spec:
replicas: 1
selector:
matchLabels:
app: index-offset-scheduler
template:
metadata:
labels:
app: index-offset-scheduler
spec:
serviceAccountName: index-offset-scheduler
containers:
- name: scheduler
# change your image
image: openkruise/kruise-game-scheduler-index-offset:v1.0
imagePullPolicy: Always
command:
- /app/index-offset-scheduler
- --config=/etc/kubernetes/scheduler-config.yaml
- --v=5
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 500m
memory: 512Mi
volumeMounts:
- name: config
mountPath: /etc/kubernetes
# imagePullSecrets:
# - name: <your image pull secret>
volumes:
- name: config
configMap:
name: index-offset-scheduler-config
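# Usage note (a sketch, not part of the upstream manifest): workloads opt in to this
# scheduler by setting schedulerName in their pod spec. For a GameServerSet this would
# look roughly like:
#
#   spec:
#     gameServerTemplate:
#       spec:
#         schedulerName: index-offset-scheduler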


@ -0,0 +1,2 @@
resources:
- index-offset-scheduler.yaml


@ -1,2 +1,6 @@
resources:
- manifests.yaml
- service.yaml
configurations:
- kustomizeconfig.yaml


@ -0,0 +1,24 @@
# the following config is for teaching kustomize where to look at when substituting vars.
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
namespace:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
varReference:
- path: metadata/annotations


@ -0,0 +1,65 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: mutating-webhook
webhooks:
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: kruise-game-system
path: /mutate-v1-pod
failurePolicy: Fail
matchPolicy: Equivalent
name: mgameserverset.kb.io
rules:
- operations:
- CREATE
- UPDATE
- DELETE
apiGroups:
- ""
apiVersions:
- v1
resources:
- pods
objectSelector:
matchExpressions:
- key: game.kruise.io/owner-gss
operator: Exists
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook
webhooks:
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: kruise-game-system
path: /validate-v1alpha1-gss
failurePolicy: Fail
matchPolicy: Equivalent
name: vgameserverset.kb.io
namespaceSelector: {}
objectSelector: {}
rules:
- apiGroups:
- game.kruise.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- gameserversets
sideEffects: None
timeoutSeconds: 10


@ -7,6 +7,6 @@ metadata:
spec:
ports:
- port: 443
targetPort: 9876
targetPort: webhook-server
selector:
control-plane: controller-manager


@ -2,7 +2,7 @@
OpenKruiseGame allows you to set the states of game servers. You can manually set the value of opsState or DeletionPriority for a game server. You can also use the service quality feature to automatically set the value of opsState or DeletionPriority for a game server. During scale-in, a proper GameServerSet workload is selected for scale-in based on the states of game servers. The scale-in rules are as follows:
1. Scale in game servers based on the opsState values. Scale in the game servers for which the opsState values are `WaitToBeDeleted`, `None`, and `Maintaining` in sequence.
1. Scale in game servers based on the opsState values. Scale in the game servers for which the opsState values are `WaitToBeDeleted`, `None`, `Allocated`, and `Maintaining` in sequence.
2. If two or more game servers have the same opsState value, they are sorted by their DeletionPriority values. The game server with the largest DeletionPriority value is deleted first.
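For example, to steer the next scale-in toward a specific instance, you can set these fields directly on its GameServer object. The following is a minimal sketch; the field names opsState and deletionPriority follow the GameServer API:
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServer
metadata:
  name: minecraft-1
spec:
  opsState: WaitToBeDeleted   # scaled in before game servers whose opsState is None
  deletionPriority: 10        # among equal opsState values, the highest priority is deleted first
```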


@ -58,12 +58,31 @@ OpenKruiseGame has the following core features:
<table style="border-collapse: collapse;">
<tr style="border: none;">
<td style="border: none;"><center><img src="../images/bilibili-logo.png" width="120"> </center></td>
<td style="border: none;"><center><img src="../images/hypergryph-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="../images/shangyou-logo.jpeg" width="120" ></center></td>
<td style="border: none;"><center><img src="../images/guanying-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="../images/booming-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="../images/xingzhe-logo.png" width="120" ></center> </td>
<td style="border: none;"><center><img src="../images/lilith-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="../images/hypergryph-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/jjworld-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/bilibili-logo.png" width="80"> </center></td>
<td style="border: none;"><center><img src="../images/shangyou-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="../images/yahaha-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/xingzhe-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="../images/juren-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/baibian-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="../images/chillyroom-logo.png" width="80" ></center></td>
<td style="border: none;"><center><img src="../images/wuduan-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/yostar-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/bekko-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/xingchao-logo.png" width="80" ></center> </td>
</tr>
<tr style="border: none;">
<td style="border: none;"><center><img src="../images/wanglong-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/guanying-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/booming-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/gsshosting-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/yongshi-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/360-logo.png" width="80" ></center> </td>
<td style="border: none;"><center><img src="../images/vma-logo.png" width="80" ></center> </td>
</tr>
</table>


@ -38,8 +38,24 @@ type GameServerTemplate struct {
// Requests and claims for persistent volumes.
VolumeClaimTemplates []corev1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"`
// ReclaimPolicy indicates the reclaim policy for GameServer.
// Default is Cascade.
ReclaimPolicy GameServerReclaimPolicy `json:"reclaimPolicy,omitempty"`
}
type GameServerReclaimPolicy string
const (
// CascadeGameServerReclaimPolicy indicates that GameServer is deleted when the pod is deleted.
// The age of GameServer is exactly the same as that of the pod.
CascadeGameServerReclaimPolicy GameServerReclaimPolicy = "Cascade"
// DeleteGameServerReclaimPolicy indicates that GameServers will be deleted when replicas of GameServerSet decreases.
// The GameServer will not be deleted when the corresponding pod is deleted due to manual deletion, update, eviction, etc.
DeleteGameServerReclaimPolicy GameServerReclaimPolicy = "Delete"
)
```
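For instance, a GameServerSet that keeps its GameServers across pod recreation and only reclaims them when replicas decrease would set the policy in the template. The following is a minimal sketch, assuming reclaimPolicy sits under gameServerTemplate as the struct above suggests (the image is a placeholder):
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
  name: minecraft
spec:
  replicas: 3
  gameServerTemplate:
    reclaimPolicy: Delete   # GameServers are reclaimed only when replicas decrease
    spec:
      containers:
      - name: minecraft
        image: nginx        # placeholder image
```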
#### UpdateStrategy
@ -241,6 +257,24 @@ type GameServerSpec struct {
// Whether to perform network isolation and cut off the access layer network
// Default is false
NetworkDisabled bool `json:"networkDisabled,omitempty"`
// Containers can be used to make the corresponding GameServer container fields
// different from the fields defined by GameServerTemplate in GameServerSetSpec.
Containers []GameServerContainer `json:"containers,omitempty"`
}
type GameServerContainer struct {
// Name indicates the name of the container to update.
Name string `json:"name"`
// Image indicates the image of the container to update.
// When Image is updated, pod.spec.containers[*].image is updated immediately.
Image string `json:"image,omitempty"`
// Resources indicates the resources of the container to update.
// When Resources is updated, pod.spec.containers[*].resources is not updated immediately;
// it takes effect when the pod is recreated.
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
}
```
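As an illustration, overriding a single container on one GameServer could look like the following. This is a minimal sketch; the container name and image are placeholders:
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServer
metadata:
  name: minecraft-0
spec:
  containers:
  - name: minecraft         # must match a container defined in the GameServerTemplate
    image: nginx:1.27       # image changes are applied to the pod in place
    resources:
      limits:
        cpu: "2"            # resource changes take effect when the pod is recreated
```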


@ -1,5 +1,7 @@
## Feature overview
### Auto Scaling-down
Compared with stateless services, game servers have stricter requirements for automatic scaling, especially when scaling down.
The differences between individual game servers become more pronounced over time, so scale-down must be precise; coarse-grained scaling mechanisms can easily cause side effects such as player disconnections, resulting in significant business losses.
@ -59,7 +61,7 @@ spec:
periodSeconds: 15
triggers:
- type: external
metricType: Value
metricType: AverageValue
metadata:
scalerAddress: kruise-game-external-scaler.kruise-game-system:6000
@ -96,4 +98,110 @@ NAME STATE OPSSTATE DP UP
minecraft-1 Ready None 0 0
minecraft-2 Ready None 0 0
```
### Auto Scaling-up
In addition to the automatic scaling-down policy, you can also set an automatic scaling-up policy.
#### Scaling with resource metrics or custom metrics
Native Kubernetes supports auto scaling based on CPU utilization; a complete YAML example (a KEDA ScaledObject with a cpu trigger) is as follows:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: minecraft # Fill in the name of the corresponding GameServerSet
spec:
scaleTargetRef:
name: minecraft # Fill in the name of the corresponding GameServerSet
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
pollingInterval: 30
minReplicaCount: 0
advanced:
horizontalPodAutoscalerConfig:
behavior: # Inherit from HPA behavior, refer to https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior
scaleDown:
stabilizationWindowSeconds: 45 # Set the scaling-down stabilization window time to 45 seconds
policies:
- type: Percent
value: 100
periodSeconds: 15
triggers:
- type: external
metricType: AverageValue
metadata:
scalerAddress: kruise-game-external-scaler.kruise-game-system:6000
- type: cpu
metricType: Utilization # Allowed types are 'Utilization' or 'AverageValue'
metadata:
value: "50"
```
Under load testing of the game server, you can see that the game servers begin to scale up:
```bash
kubectl get gss
NAME DESIRED CURRENT UPDATED READY MAINTAINING WAITTOBEDELETED AGE
minecraft 5 5 5 0 0 0 7s
# After a while
kubectl get gss
NAME DESIRED CURRENT UPDATED READY MAINTAINING WAITTOBEDELETED AGE
minecraft 20 20 20 20 0 0 137s
```
#### Set the minimum number of game servers whose opsState is None
OKG supports setting a minimum number of game servers whose opsState is None. When the current number of such game servers falls below the set value, OKG automatically scales up new game servers until the minimum is met.
The configuration method is as follows; in this example, the minimum number of game servers whose opsState is None is 3:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: minecraft # Fill in the name of the corresponding GameServerSet
spec:
scaleTargetRef:
name: minecraft # Fill in the name of the corresponding GameServerSet
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
pollingInterval: 30
minReplicaCount: 0
advanced:
horizontalPodAutoscalerConfig:
behavior: # Inherit from HPA behavior, refer to https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior
scaleDown:
stabilizationWindowSeconds: 45 # Set the scaling-down stabilization window time to 45 seconds
policies:
- type: Percent
value: 100
periodSeconds: 15
triggers:
- type: external
metricType: AverageValue
metadata:
minAvailable: "3" # 设置opsState为None的游戏服的最小个数
scalerAddress: kruise-game-external-scaler.kruise-game-system:6000
```
First apply a GameServerSet with 1 replica. After the next KEDA polling cycle, two new game servers are scaled up immediately. At this point the number of game servers whose opsState is None is no longer less than the minAvailable value, and the scale-up process is complete.
```bash
kubectl get gs
NAME STATE OPSSTATE DP UP AGE
minecraft-0 Ready None 0 0 7s
# After a while
kubectl get gs
NAME STATE OPSSTATE DP UP AGE
minecraft-0 Ready None 0 0 20s
minecraft-1 Ready None 0 0 5s
minecraft-2 Ready None 0 0 5s
```


@ -133,6 +133,7 @@ OpenKruiseGame supports the following network plugins:
- AlibabaCloud-NATGW
- AlibabaCloud-SLB
- AlibabaCloud-SLB-SharedPort
- Volcengine-EIP
---
@ -421,7 +422,7 @@ SlbIds
PortProtocols
- Meaning: the ports in the pod to be exposed and the protocols. You can specify multiple ports and protocols.
- Value: in the format of port1/protocol1,port2/protocol2,... The protocol names must be in uppercase letters.
- Value: in the format of port1/protocol1,port2/protocol2,... (a port exposed on both TCP and UDP is written as 8000/TCPUDP; see the fragment after this list). The protocol names must be in uppercase letters.
- Configuration change supported or not: yes.
Fixed
@ -430,6 +431,84 @@ Fixed
- Value: false or true.
- Configuration change supported or not: yes.
ExternalTrafficPolicyType
- Meaning: the external traffic policy of the Service. If set to Local, the load balancer forwards traffic only to pods on the local node, so the client source IP is preserved without SNAT.
- Value: Local/Cluster. Default value is Cluster.
- Configuration change supported or not: no. It may interact with the fixed IP/port mapping relationship, so changing it is not recommended.
AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates; traffic will not be cut off while they update.
- Value: {containerName_0},{containerName_1},... Example: sidecar
- Configuration change supported or not: it cannot be changed during the in-place updating process.
LBHealthCheckSwitch
- Meaning: whether to enable health check
- Format: "on" means on, "off" means off. Default is on
- Whether to support changes: Yes
LBHealthCheckFlag
- Meaning: Whether to enable http type health check
- Format: "on" means on, "off" means off. Default is on
- Whether to support changes: Yes
LBHealthCheckType
- Meaning: Health Check Protocol
- Format: fill in "tcp" or "http", the default is tcp
- Whether to support changes: Yes
LBHealthCheckConnectTimeout
- Meaning: Maximum timeout for health check response.
- Format: Unit: seconds. The value range is [1, 300]. The default value is "5"
- Whether to support changes: Yes
LBHealthyThreshold
- Meaning: the number of consecutive successful health checks required before the backend server's health check status changes from failed to successful.
- Format: Value range [2, 10]. Default value is "2"
- Whether to support changes: Yes
LBUnhealthyThreshold
- Meaning: the number of consecutive failed health checks required before the backend server's health check status changes from successful to failed.
- Format: Value range [2, 10]. The default value is "2"
- Whether to support changes: Yes
LBHealthCheckInterval
- Meaning: health check interval.
- Format: Unit: seconds. The value range is [1, 50]. The default value is "10"
- Whether to support changes: Yes
LBHealthCheckProtocolPort
- Meaning: the protocols & ports of HTTP type health check.
- Format: Multiple values are separated by ','. e.g. https:443,http:80
- Whether to support changes: Yes
LBHealthCheckUri
- Meaning: The corresponding uri when the health check type is HTTP.
- Format: The length is 1~80 characters; only letters, digits, and certain special characters can be used. Must start with a forward slash (/). Such as "/test/index.html"
- Whether to support changes: Yes
LBHealthCheckDomain
- Meaning: The corresponding domain name when the health check type is HTTP.
- Format: The length of a specific domain name is limited to 1~80 characters. Only lowercase letters, numbers, dashes (-), and half-width periods (.) can be used.
- Whether to support changes: Yes
LBHealthCheckMethod
- Meaning: The corresponding method when the health check type is HTTP.
- Format: "GET" or "HEAD"
- Whether to support changes: Yes
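For instance, a networkConf fragment combining these parameters might look like the following. This is a sketch only; the surrounding GameServerSet and networkType follow this plugin's usual example format:
```yaml
network:
  networkConf:
  - name: PortProtocols
    value: "80/TCP,8000/TCPUDP"    # 8000 is exposed on both TCP and UDP
  - name: ExternalTrafficPolicyType
    value: "Local"                 # preserve the client source IP
  - name: LBHealthCheckSwitch
    value: "off"                   # disable health checks in this example
```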
#### Plugin configuration
```
[alibabacloud]
@ -473,6 +552,787 @@ PortProtocols
- Value: in the format of port1/protocol1,port2/protocol2,... The protocol names must be in uppercase letters.
- Configuration change supported or not: no. The configuration change can be supported in future.
AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates; traffic will not be cut off while they update.
- Value: {containerName_0},{containerName_1},... Example: sidecar
- Configuration change supported or not: it cannot be changed during the in-place updating process.
#### Plugin configuration
None
---
### AlibabaCloud-NLB
#### Plugin name
`AlibabaCloud-NLB`
#### Cloud Provider
AlibabaCloud
#### Plugin description
- AlibabaCloud-NLB enables game servers to be accessed from the Internet by using Layer 4 Network Load Balancer (NLB) of Alibaba Cloud. AlibabaCloud-NLB uses different ports of the same NLB instance to forward Internet traffic to different game servers. The NLB instance only forwards traffic, but does not implement load balancing.
- This network plugin supports network isolation.
#### Network parameters
NlbIds
- Meaning: the NLB instance IDs. You can fill in multiple IDs.
- Value: in the format of nlbId-0,nlbId-1,... An example value can be "nlb-ji8l844c0qzii1x6mc,nlb-26jbknebrjlejt5abu"
- Configuration change supported or not: yes. You can add new nlbIds at the end. However, it is recommended not to change existing nlbId that is in use.
PortProtocols
- Meaning: the ports in the pod to be exposed and the protocols. You can specify multiple ports and protocols.
- Value: in the format of port1/protocol1,port2/protocol2,... The protocol names must be in uppercase letters.
- Configuration change supported or not: yes.
Fixed
- Meaning: whether the mapping relationship is fixed. If the mapping relationship is fixed, the mapping relationship remains unchanged even if the pod is deleted and recreated.
- Value: false or true.
- Configuration change supported or not: yes.
AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates; traffic will not be cut off while they update.
- Value: {containerName_0},{containerName_1},... Example: sidecar
- Configuration change supported or not: it cannot be changed during the in-place updating process.
LBHealthCheckFlag
- Meaning: Whether to enable health check
- Format: "on" means on, "off" means off. Default is on
- Whether to support changes: Yes
LBHealthCheckType
- Meaning: Health Check Protocol
- Format: fill in "tcp" or "http", the default is tcp
- Whether to support changes: Yes
LBHealthCheckConnectPort
- Meaning: Server port for health check.
- Format: Value range [0, 65535]. Default value is "0"
- Whether to support changes: Yes
LBHealthCheckConnectTimeout
- Meaning: Maximum timeout for health check response.
- Format: Unit: seconds. The value range is [1, 300]. The default value is "5"
- Whether to support changes: Yes
LBHealthyThreshold
- Meaning: the number of consecutive successful health checks required before the backend server's health check status changes from failed to successful.
- Format: Value range [2, 10]. Default value is "2"
- Whether to support changes: Yes
LBUnhealthyThreshold
- Meaning: the number of consecutive failed health checks required before the backend server's health check status changes from successful to failed.
- Format: Value range [2, 10]. The default value is "2"
- Whether to support changes: Yes
LBHealthCheckInterval
- Meaning: health check interval.
- Format: Unit: seconds. The value range is [1, 50]. The default value is "10"
- Whether to support changes: Yes
LBHealthCheckUri
- Meaning: The corresponding uri when the health check type is HTTP.
- Format: The length is 1~80 characters; only letters, digits, and certain special characters can be used. Must start with a forward slash (/). Such as "/test/index.html"
- Whether to support changes: Yes
LBHealthCheckDomain
- Meaning: The corresponding domain name when the health check type is HTTP.
- Format: The length of a specific domain name is limited to 1~80 characters. Only lowercase letters, numbers, dashes (-), and half-width periods (.) can be used.
- Whether to support changes: Yes
LBHealthCheckMethod
- Meaning: The corresponding method when the health check type is HTTP.
- Format: "GET" or "HEAD"
- Whether to support changes: Yes
#### Plugin configuration
```
[alibabacloud]
enable = true
[alibabacloud.nlb]
# Specify the range of available ports of the NLB instance. Ports in this range can be used to forward Internet traffic to pods. In this example, the range includes 500 ports.
max_port = 1500
min_port = 1000
```
#### Example
```
cat <<EOF | kubectl apply -f -
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gs-nlb
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkConf:
- name: NlbIds
value: nlb-muyo7fv6z646ygcxxx
- name: PortProtocols
value: "80"
- name: Fixed
value: "true"
networkType: AlibabaCloud-NLB
gameServerTemplate:
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/gs-demo/gameserver:network
name: gameserver
EOF
```
The network status of GameServer would be as follows:
```
networkStatus:
createTime: "2024-04-28T12:41:56Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- endPoint: nlb-muyo7fv6z646ygcxxx.cn-xxx.nlb.aliyuncs.com
ip: ""
ports:
- name: "80"
port: 1047
protocol: TCP
internalAddresses:
- ip: 172.16.0.1
ports:
- name: "80"
port: 80
protocol: TCP
lastTransitionTime: "2024-04-28T12:41:56Z"
networkType: AlibabaCloud-NLB
```
Clients can access the game server by using nlb-muyo7fv6z646ygcxxx.cn-xxx.nlb.aliyuncs.com:1047
---
### AlibabaCloud-EIP
#### Plugin name
`AlibabaCloud-EIP`
#### Cloud Provider
AlibabaCloud
#### Plugin description
- Allocate a separate EIP for each GameServer
- The exposed public access port is consistent with the port listened on in the container; access is controlled by the security group.
- It is necessary to install the latest version of the ack-extend-network-controller component in the ACK cluster. For details, please refer to the [component description page](https://cs.console.aliyun.com/#/next/app-catalog/ack/incubator/ack-extend-network-controller).
#### Network parameters
ReleaseStrategy
- Meaning: Specifies the EIP release policy.
- Value:
- Follow: follows the lifecycle of the pod that is associated with the EIP. This is the default value.
- Never: does not release the EIP. You need to release the EIP manually when you no longer need it (by running 'kubectl delete podeip {gameserver name} -n {gameserver namespace}').
- You can also specify a timeout period for the EIP. For example, if you set it to 5m30s, the EIP is released 5.5 minutes after the pod is deleted. Go duration expressions are supported.
- Configuration change supported or not: no.
PoolId
- Meaning: Specifies the EIP address pool from which the EIP is allocated. It could be nil.
- Configuration change supported or not: no.
ResourceGroupId
- Meaning: Specifies the resource group to which the EIP belongs. It could be nil.
- Configuration change supported or not: no.
Bandwidth
- Meaning: Specifies the maximum bandwidth of the EIP. Unit: Mbit/s. It could be nil. Default is 5.
- Configuration change supported or not: no.
BandwidthPackageId
- Meaning: Specifies the EIP bandwidth plan that you want to use.
- Configuration change supported or not: no.
ChargeType
- Meaning: Specifies the metering method of the EIP.
- Value
- PayByTraffic: Fees are charged based on data transfer.
- PayByBandwidth: Fees are charged based on bandwidth usage.
- Configuration change supported or not: no.
Description
- Meaning: The description of EIP resource
- Configuration change supported or not: no.
#### Plugin configuration
None
#### Example
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip-nginx
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AlibabaCloud-EIP
networkConf:
- name: ReleaseStrategy
value: Never
- name: Bandwidth
value: "3"
- name: ChargeType
value: PayByTraffic
gameServerTemplate:
spec:
containers:
- image: nginx
name: nginx
```
The network status of GameServer would be as follows:
```yaml
networkStatus:
createTime: "2023-07-17T10:10:18Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: 47.98.xxx.xxx
internalAddresses:
- ip: 192.168.1.51
lastTransitionTime: "2023-07-17T10:10:18Z"
networkType: AlibabaCloud-EIP
```
The generated podeip eip-nginx-0 would be as follows
```yaml
apiVersion: alibabacloud.com/v1beta1
kind: PodEIP
metadata:
annotations:
k8s.aliyun.com/eip-controller: ack-extend-network-controller
creationTimestamp: "2023-07-17T09:58:12Z"
finalizers:
- podeip-controller.alibabacloud.com/finalizer
generation: 1
name: eip-nginx-1
namespace: default
resourceVersion: "41443319"
uid: 105a9575-998e-4e17-ab91-8f2597eeb55f
spec:
allocationID: eip-xxx
allocationType:
releaseStrategy: Never
type: Auto
status:
eipAddress: 47.98.xxx.xxx
internetChargeType: PayByTraffic
isp: BGP
networkInterfaceID: eni-xxx
podLastSeen: "2023-07-17T10:36:02Z"
privateIPAddress: 192.168.1.51
resourceGroupID: rg-xxx
status: InUse
```
In addition, the generated EIP resource will be named after {pod namespace}/{pod name} in the Alibaba Cloud console, corresponding one-to-one with each game server.
---
### AlibabaCloud-NLB-SharedPort
#### Plugin name
`AlibabaCloud-NLB-SharedPort`
#### Cloud Provider
AlibabaCloud
#### Plugin description
- AlibabaCloud-NLB-SharedPort enables game servers to be accessed from the Internet by using Layer 4 NLB of Alibaba Cloud, which is similar to AlibabaCloud-SLB-SharedPort.
This network plugin applies to stateless network services, such as proxy or gateway, in gaming scenarios.
- This network plugin supports network isolation.
#### Network parameters
SlbIds
- Meaning: the NLB instance IDs. You can specify multiple NLB instance IDs.
- Value: an example value can be nlb-9zeo7prq1m25ctpfrw1m7
- Configuration change supported or not: no.
PortProtocols
- Meaning: the ports in the pod to be exposed and the protocols. You can specify multiple ports and protocols.
- Value: in the format of port1/protocol1,port2/protocol2,... The protocol names must be in uppercase letters.
- Configuration change supported or not: no.
AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates; traffic will not be cut off while they update.
- Value: {containerName_0},{containerName_1},... Example: sidecar
- Configuration change supported or not: it cannot be changed during the in-place updating process.
#### Plugin configuration
None
#### Example
Deploy a GameServerSet with two containers, one named app-2048 and the other named sidecar.
Specify the network parameter AllowNotReadyContainers as sidecar;
the pod will then continue to serve traffic while the sidecar is updated in place.
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: gss-2048-nlb
namespace: default
spec:
replicas: 3
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
podUpdatePolicy: InPlaceIfPossible
network:
networkType: AlibabaCloud-NLB-SharedPort
networkConf:
- name: NlbIds
value: nlb-26jbknebrjlejt5abu
- name: PortProtocols
value: 80/TCP
- name: AllowNotReadyContainers
value: sidecar
gameServerTemplate:
spec:
containers:
- image: registry.cn-beijing.aliyuncs.com/acs/2048:v1.0
name: app-2048
volumeMounts:
- name: shared-dir
mountPath: /var/www/html/js
- image: registry.cn-beijing.aliyuncs.com/acs/2048-sidecar:v1.0
name: sidecar
args:
- bash
- -c
- rsync -aP /app/js/* /app/scripts/ && while true; do echo 11;sleep 2; done
volumeMounts:
- name: shared-dir
mountPath: /app/scripts
volumes:
- name: shared-dir
emptyDir: {}
```
After successful deployment, update the sidecar image to v2.0 and observe the corresponding endpoint:
```bash
kubectl get ep -w | grep nlb-26jbknebrjlejt5abu
nlb-26jbknebrjlejt5abu 192.168.0.8:80,192.168.0.82:80,192.168.63.228:80 10m
```
After the entire update process completes, you can see that the endpoints have not changed, which indicates that no backends were removed and traffic was never cut off.
---
### TencentCloud-CLB
#### Plugin name
`TencentCloud-CLB`
#### Cloud Provider
TencentCloud
#### Plugin description
- TencentCloud-CLB enables game servers to be accessed from the Internet by using Cloud Load Balancer (CLB) of Tencent Cloud. TencentCloud-CLB uses different ports of the same CLB instance for different game servers. The CLB instance only forwards traffic, but does not implement load balancing.
- The [tke-extend-network-controller](https://github.com/tkestack/tke-extend-network-controller) network plugin needs to be installed (can be installed through the TKE application market).
- This network plugin supports network isolation.
#### Network parameters
ClbIds
- Meaning: the CLB instance IDs. You can fill in multiple IDs.
- Value: in the format of clbId-0,clbId-1,... An example value can be "lb-9zeo7prq1m25ctpfrw1m7,lb-bp1qz7h50yd3w58h2f8je"
- Configuration change supported or not: yes. You can add new clbIds at the end. However, it is recommended not to change an existing clbId that is in use.
PortProtocols
- Meaning: the ports in the pod to be exposed and the protocols. You can specify multiple ports and protocols.
- Value: in the format of port1/protocol1,port2/protocol2,... The protocol names must be in uppercase letters.
- Configuration change supported or not: yes.
#### Plugin configuration
```
[tencentcloud]
enable = true
[tencentcloud.clb]
# Specify the range of available ports of the CLB instance. Ports in this range can be used to forward Internet traffic to pods. In this example, the range includes 200 ports.
min_port = 1000
max_port = 1100
```
#### Example
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: clb-nginx
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: TencentCloud-CLB
networkConf:
- name: ClbIds
value: "lb-3ip9k5kr,lb-4ia8k0yh"
- name: PortProtocols
value: "80/TCP,7777/UDP"
gameServerTemplate:
spec:
containers:
- image: nginx
name: nginx
```
The network status of GameServer would be as follows:
```yaml
networkStatus:
createTime: "2024-10-28T03:16:20Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: 139.155.64.52
ports:
- name: "80"
port: 1002
protocol: TCP
- ip: 139.155.64.52
ports:
- name: "7777"
port: 1003
protocol: UDP
internalAddresses:
- ip: 172.16.7.106
ports:
- name: "80"
port: 80
protocol: TCP
- ip: 172.16.7.106
ports:
- name: "7777"
port: 7777
protocol: UDP
lastTransitionTime: "2024-10-28T03:16:20Z"
networkType: TencentCloud-CLB
```
---
### HwCloud-ELB
#### Plugin name
`HwCloud-ELB`
#### Cloud Provider
HwCloud
#### Plugin description
- HwCloud-ELB enables game servers to be accessed from the Internet by using Layer 4 Load Balancer (ELB) of Huawei Cloud. ELB is a type of Server Load Balancer (SLB). HwCloud-ELB uses different ports of the same ELB instance to forward Internet traffic to different game servers. The ELB instance only forwards traffic, but does not implement load balancing.
- This network plugin supports network isolation.
#### Network parameters
ElbIds
- Meaning: the ELB instance IDs. You can fill in multiple IDs; at least one is required.
- Value: in the format of elbId-0,elbId-1,... An example value can be "lb-9zeo7prq1m25ctpfrw1m7,lb-bp1qz7h50yd3w58h2f8je"
- Configuration change supported or not: yes. You can add new elbIds at the end. However, it is recommended not to change existing elbId that is in use.
PortProtocols
- Meaning: the ports in the pod to be exposed and the protocols. You can specify multiple ports and protocols.
- Value: in the format of port1/protocol1,port2/protocol2,... (a port exposed on both TCP and UDP is written as 8000/TCPUDP; see the example after the plugin configuration). The protocol names must be in uppercase letters.
- Configuration change supported or not: yes.
Fixed
- Meaning: whether the mapping relationship is fixed. If the mapping relationship is fixed, the mapping relationship remains unchanged even if the pod is deleted and recreated.
- Value: false or true.
- Configuration change supported or not: yes.
AllowNotReadyContainers
- Meaning: the containers that are allowed to be not ready during in-place updates; traffic will not be cut off while they update.
- Value: {containerName_0},{containerName_1},... Example: sidecar
- Configuration change supported or not: it cannot be changed during the in-place updating process.
ExternalTrafficPolicyType
- Meaning: the external traffic policy of the Service. If set to Local, the load balancer forwards traffic only to pods on the local node, so the client source IP is preserved without SNAT.
- Value: Local/Cluster. Default value is Cluster.
- Configuration change supported or not: no. It may interact with the fixed IP/port mapping relationship, so changing it is not recommended.
The LB configuration parameters are consistent with those of the Huawei Cloud CCM; see https://github.com/kubernetes-sigs/cloud-provider-huaweicloud/blob/master/docs/usage-guide.md
LBHealthCheckFlag
- Meaning: Whether to enable health check
- Format: "on" means on, "off" means off. Default is on
- Whether to support changes: Yes
LBHealthCheckOption
- Meaning: Health Check Config
- Format: JSON string, e.g. {"delay": 3, "timeout": 15, "max_retries": 3}
- Whether to support changes: Yes
ElbClass
- Meaning: the Huawei Cloud load balancer class
- Format: dedicated or shared (default dedicated)
- Whether to support changes: No
ElbConnLimit
- Meaning: the connection limit of the ELB; only takes effect with the shared class
- Format: the value ranges from -1 to 2147483647. The default value is -1
- Whether to support changes: No
ElbLbAlgorithm
- Meaning: Specifies the load balancing algorithm of the backend server group
- Format: ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP default ROUND_ROBIN
- Whether to support changes: Yes
ElbSessionAffinityFlag
- Meaning: Specifies whether to enable session affinity
- Format: on, off default off
- Whether to support changes: Yes
ElbSessionAffinityOption
- Meaning: Specifies the sticky session timeout duration in minutes.
- Format: json string like {"type": "SOURCE_IP", "persistence_timeout": 15}
- Whether to support changes: Yes
ElbTransparentClientIP
- Meaning: Specifies whether to pass source IP addresses of the clients to backend servers
- Format: true or false default false
- Whether to support changes: Yes
ElbXForwardedHost
- Meaning: Specifies whether to rewrite the X-Forwarded-Host header
- Format: true or false default false
- Whether to support changes: Yes
ElbIdleTimeout
- Meaning: Specifies the idle timeout for the listener
- Format: 0 to 4000 default not set, use lb default value
- Whether to support changes: Yes
ElbRequestTimeout
- Meaning: Specifies the request timeout for the listener.
- Format: 1 to 300 default not set, use lb default value
- Whether to support changes: Yes
ElbResponseTimeout
- Meaning: Specifies the response timeout for the listener
- Format: 1 to 300 default not set, use lb default value
- Whether to support changes: Yes
#### Plugin configuration
```
[hwcloud]
enable = true
[hwcloud.elb]
max_port = 700
min_port = 500
block_ports = []
```
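This diff does not include a deployment example for HwCloud-ELB; a minimal sketch following the pattern of the other load-balancer plugins (the ELB instance ID is a placeholder) could look like this:
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
  name: gs-elb
  namespace: default
spec:
  replicas: 1
  updateStrategy:
    rollingUpdate:
      podUpdatePolicy: InPlaceIfPossible
  network:
    networkType: HwCloud-ELB
    networkConf:
    - name: ElbIds
      value: "elb-xxx"              # placeholder ELB instance ID
    - name: PortProtocols
      value: "80/TCP,8000/TCPUDP"   # 8000 is exposed on both TCP and UDP
    - name: Fixed
      value: "true"
  gameServerTemplate:
    spec:
      containers:
      - image: nginx
        name: nginx
```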
---
### Volcengine-EIP
#### Plugin name
`Volcengine-EIP`
#### Cloud Provider
Volcengine
#### Plugin description
- Allocates or binds a dedicated Elastic IP (EIP) from Volcengine for each GameServer. You can specify an existing EIP via annotation or `networkConf`, or let the system allocate a new EIP automatically.
- The exposed public access port is consistent with the port listened to in the container. Security group policies need to be configured by the user.
- Suitable for game server scenarios that require public network access.
- Requires the `vpc-cni-controlplane` component to be installed in the cluster. For details, see [component documentation](https://www.volcengine.com/docs/6460/101015).
#### Network parameters
> For more parameters, refer to: https://www.volcengine.com/docs/6460/1152127
name
- EIP name. If not specified, the system will generate one automatically.
- Whether to support changes: no.
isp
- EIP type.
- Whether to support changes: no.
projectName
- Meaning: Project name to which the EIP belongs. Default is `default`.
- Whether to support changes: no.
bandwidth
- Meaning: Peak bandwidth in Mbps. Optional.
- Whether to support changes: no.
bandwidthPackageId
- Meaning: Shared bandwidth package ID to bind. Optional. If not set, EIP will not be bound to a shared bandwidth package.
- Whether to support changes: no.
billingType
- Meaning: EIP billing type.
- Value:
- 2: (default) Pay-by-bandwidth.
- 3: Pay-by-traffic.
- Whether to support changes: no.
description
- Meaning: Description of the EIP resource.
- Whether to support changes: no.
#### Annotation parameters
- `vke.volcengine.com/primary-eip-id`: Specify an existing EIP ID. The Pod will bind this EIP at startup.
#### Plugin configuration
None
#### Example
```yaml
apiVersion: game.kruise.io/v1alpha1
kind: GameServerSet
metadata:
name: eip-nginx
namespace: default
spec:
replicas: 1
updateStrategy:
rollingUpdate:
podUpdatePolicy: InPlaceIfPossible
network:
networkType: Volcengine-EIP
gameServerTemplate:
spec:
containers:
- image: nginx
name: nginx
```
The network status of the generated GameServer is as follows:
```yaml
networkStatus:
createTime: "2025-01-17T10:10:18Z"
currentNetworkState: Ready
desiredNetworkState: Ready
externalAddresses:
- ip: 106.xx.xx.xx
internalAddresses:
- ip: 192.168.1.51
lastTransitionTime: "2025-01-17T10:10:18Z"
networkType: Volcengine-EIP
```
Pod annotation example:
```yaml
metadata:
annotations:
vke.volcengine.com/primary-eip-id: eip-xxx
vke.volcengine.com/primary-eip-attributes: '{"bandwidth":3,"billingType":"2"}'
```
The EIP resource will be named `{pod namespace}/{pod name}` in the Volcengine console, corresponding one-to-one with each GameServer.
---

BIN docs/images/360-logo.png (new, 70 KiB)
BIN docs/images/aws-nlb.png (new, 656 KiB)
BIN docs/images/bekko-logo.png (new, 106 KiB)
(Several additional binary image files under docs/images/ were added or updated in this diff; contents not shown.)