Kubeflow Trainer


Overview

Kubeflow Trainer is a Kubernetes-native project for fine-tuning large language models (LLMs) and for scalable, distributed training of machine learning (ML) models across various frameworks, including PyTorch, JAX, TensorFlow, and others.

You can integrate other ML libraries, such as HuggingFace, DeepSpeed, or Megatron-LM, with Kubeflow Trainer to orchestrate their ML training on Kubernetes.

Kubeflow Trainer enables you to effortlessly develop your LLMs with the Kubeflow Python SDK and to build Kubernetes-native Training Runtimes using Kubernetes Custom Resource APIs.
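As a rough illustration of the Custom Resource workflow, the sketch below shows what a minimal TrainJob manifest might look like. This is a hedged example, not authoritative documentation: the field names follow the alpha-stage TrainJob API and may change, and it assumes a ClusterTrainingRuntime named `torch-distributed` is installed in the cluster.

```yaml
# Sketch of a TrainJob custom resource (alpha API; fields may change).
apiVersion: trainer.kubeflow.org/v1alpha1
kind: TrainJob
metadata:
  name: pytorch-example   # hypothetical job name
spec:
  runtimeRef:
    # Assumes this runtime is installed; it defines how the
    # distributed PyTorch training pods are launched.
    name: torch-distributed
  trainer:
    numNodes: 2            # number of training nodes to run
    resourcesPerNode:
      limits:
        nvidia.com/gpu: 1  # one GPU per node
```

You would apply a manifest like this with `kubectl apply -f`, and the Trainer controller creates the underlying distributed training pods according to the referenced runtime.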


Kubeflow Trainer Introduction

The following KubeCon + CloudNativeCon 2024 talk provides an overview of Kubeflow Trainer capabilities:


Getting Started

Please check the official Kubeflow documentation to install and get started with Kubeflow Trainer.

Community

The following links provide information on how to get involved in the community:

Contributing

Please refer to the CONTRIBUTING guide.

Changelog

Please refer to the CHANGELOG.

Kubeflow Training Operator V1

The Kubeflow Trainer project is currently in alpha status, and its APIs may change. If you are using Kubeflow Training Operator V1, please refer to this migration document.

The Kubeflow community will maintain the Training Operator V1 source code on the release-1.9 branch.

You can find the documentation for Kubeflow Training Operator V1 in these guides.

Acknowledgement

This project originally started as a distributed training operator for TensorFlow; we later merged efforts from the other Kubeflow training operators to provide a unified and simplified experience for both users and developers. We are very grateful to everyone who filed issues or helped resolve them, asked and answered questions, and took part in inspiring discussions. We would also like to thank everyone who has contributed to and maintained the original operators.