# Distributed training

In this part, we will implement distributed training in a Kubeflow Pipeline. The method used is TensorFlow's `MultiWorkerMirroredStrategy` on CPU. In this tutorial, we will learn how to schedule pods onto a specific node and how each worker communicates with the others.

## Before training

Before running the training program, we need to complete the following setup:

* Add a label to each node.
* Deploy the service YAML files so the workers can reach each other.
* Add a label name to each pod in the pipeline.
* Use `add_node_selector_constraint` to schedule each pod onto a specific node (sketched below).
* Add the `TF_CONFIG` environment variable to the training code (sketched below).

### Adding tags to each node

Labeling each node lets the subsequent steps pin a pod to that specific node. Run the following command to label a node; an example of the resulting output is shown in **Figure 1**.

```commandline
# add a label to a node
kubectl label nodes <node-name> <label-key>=<label-value>
```
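With the node labeled, the pipeline can pin each worker pod to it and tag the pod so a Service can select it. Below is a minimal sketch using the Kubeflow Pipelines v1 SDK; the pipeline name, container image, command, and the label keys/values are hypothetical placeholders, and the node-selector key must match the label you applied with `kubectl label nodes`.

```python
import kfp.dsl as dsl


@dsl.pipeline(
    name="distributed-training",
    description="Pin each TensorFlow worker pod to a labeled node.",
)
def train_pipeline():
    # Hypothetical image and command; replace with your training container.
    worker = dsl.ContainerOp(
        name="worker-0",
        image="your-registry/tf-worker:latest",
        command=["python", "train.py"],
    )
    # Pod label: must match the selector in the worker's Service YAML
    # so the other workers can reach this pod through the service name.
    worker.add_pod_label("app", "tf-worker-0")
    # Node selector: key/value must match the label added with
    # `kubectl label nodes <node-name> <label-key>=<label-value>`.
    worker.add_node_selector_constraint("your-label-key", "your-label-value")
```

A second `ContainerOp` with its own pod label and node-selector constraint would be added the same way for each additional worker.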
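Each worker's training code also needs a `TF_CONFIG` environment variable describing the whole cluster, so that `MultiWorkerMirroredStrategy` can find its peers. A minimal sketch follows; the service hostnames, the port, and the model are placeholder assumptions, and in practice each pod sets its own `task` index (0 for the first worker, 1 for the next, and so on).

```python
import json
import os

import tensorflow as tf

# Hypothetical cluster layout: two workers reachable through the
# Kubernetes Services deployed earlier. Hostnames and port are placeholders.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["tf-worker-0:2222", "tf-worker-1:2222"],
    },
    "task": {"type": "worker", "index": 0},  # unique index per pod
})

# TF_CONFIG is read when the strategy is constructed, so set it first.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are replicated on every worker.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```

The hostnames in the `cluster` dictionary should resolve to the Services deployed earlier, which is why the pod labels and service selectors must line up.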