This guide explains how to set up cluster federation, which lets you control multiple Kubernetes clusters.

Prerequisites

This guide assumes that you have a running Kubernetes cluster. If not, then head over to the getting started guides to bring up a cluster.

This guide also assumes that you have downloaded a Kubernetes release from here and extracted it into a directory; all the commands in this guide are run from that directory.

$ curl -L https://github.com/kubernetes/kubernetes/releases/download/v1.4.0/kubernetes.tar.gz | tar xvzf -
$ cd kubernetes

This guide also assumes that you have Docker installed and running locally, i.e. on the machine where you run the commands described in this guide.
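For example, you can confirm that Docker is running and reachable with:

$ docker version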

Setting up a federation control plane

Setting up federation requires running the federation control plane, which consists of etcd, federation-apiserver (via the hyperkube binary) and federation-controller-manager (also via the hyperkube binary). You can run these binaries as pods on an existing Kubernetes cluster.

Note: This is a new mechanism to turn up Kubernetes Cluster Federation. If you want to follow the old mechanism, please refer to the section Previous Federation turn up mechanism at the end of this guide.

Initial setup

Create a directory to store the configs required to turn up federation and export that directory path in the environment variable FEDERATION_OUTPUT_ROOT. This can be an existing directory, but it is highly recommended to create a separate directory so that it is easier to clean up later.

$ export FEDERATION_OUTPUT_ROOT="${PWD}/_output/federation"
$ mkdir -p "${FEDERATION_OUTPUT_ROOT}"

Initialize the setup.

$ federation/deploy/deploy.sh init

Optionally, you can create/edit ${FEDERATION_OUTPUT_ROOT}/values.yaml to customize any value in federation/manifests/federation/values.yaml. Example:

apiserverRegistry: "gcr.io/myrepository"
apiserverVersion: "v1.5.0-alpha.0.1010+892a6d7af59c0b"
controllerManagerRegistry: "gcr.io/myrepository"
controllerManagerVersion: "v1.5.0-alpha.0.1010+892a6d7af59c0b"

This example assumes that you have built and pushed the hyperkube image to the given repository with the given tag.

Getting images

To run the federation control plane components as pods, you first need the images for all the components. You can either use the official release images or you can build them yourself from HEAD.

Using official release images

As part of every Kubernetes release, official release images are pushed to gcr.io/google_containers. To use the images in this repository, set the container image fields in the following configs to point to them. The gcr.io/google_containers/hyperkube image includes the federation-apiserver and federation-controller-manager binaries, so you can point the corresponding configs for those components at the hyperkube image.
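For example, a container spec in one of those configs might point at the release image like this (a sketch only; the tag shown is an assumption, and the exact manifest layout in your checkout may differ):

containers:
- name: federation-apiserver
  # Illustrative tag; use the release you downloaded.
  image: gcr.io/google_containers/hyperkube:v1.4.0
  command:
  - /hyperkube
  - federation-apiserver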

Building and pushing images from HEAD

To build the binaries, check out the Kubernetes repository and run the following commands from the root of the source directory:

$ federation/develop/develop.sh build_binaries

To build the image and push it to the repository, run:

$ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh build_image
$ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh push

Note: This is going to overwrite the values you might have set for apiserverRegistry, apiserverVersion, controllerManagerRegistry and controllerManagerVersion in your ${FEDERATION_OUTPUT_ROOT}/values.yaml file. Hence, it is not recommended to customize these values in ${FEDERATION_OUTPUT_ROOT}/values.yaml if you are building the images from source.

Running the federation control plane

Once you have the images, you can turn up the federation control plane by running:

$ federation/deploy/deploy.sh deploy_federation

This spins up the federation control plane components as pods managed by Deployments on your existing Kubernetes cluster. It also starts a type: LoadBalancer Service for the federation-apiserver and a PVC backed by a dynamically provisioned PV for etcd. All these components are created in the federation namespace.

You can verify that the pods are available by running the following command:

$ kubectl get deployments --namespace=federation
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
federation-apiserver            1         1         1            1           1m
federation-controller-manager   1         1         1            1           1m
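You can similarly check the federation-apiserver Service and the etcd PVC (the exact resource names may vary in your deployment):

$ kubectl get services,pvc --namespace=federation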

Running deploy.sh also creates a new record in your kubeconfig so that you can talk to the federation apiserver. You can view this by running kubectl config view.
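For example, you can list the contexts to confirm that the new entry is present (this guide later uses the federation-cluster context):

$ kubectl config get-contexts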

Note: Dynamic provisioning for persistent volumes currently works only on AWS, GKE, and GCE. However, you can edit the created Deployments to suit your needs, if required.

Registering Kubernetes clusters with federation

Now that you have the federation control plane up and running, you can start registering Kubernetes clusters.

First of all, you need to create a secret containing the kubeconfig for that Kubernetes cluster, which the federation control plane will use to talk to it. For now, you can create this secret in the host Kubernetes cluster (the one that hosts the federation control plane). When federation starts supporting secrets, you will be able to create this secret there instead. Suppose that your kubeconfig for the Kubernetes cluster is at /cluster1/kubeconfig; you can run the following command to create the secret:

$ kubectl create secret generic cluster1 --namespace=federation --from-file=/cluster1/kubeconfig

Note that the file name should be kubeconfig, since the file name determines the name of the key in the secret.
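You can confirm that the key is named kubeconfig by describing the secret:

$ kubectl describe secret cluster1 --namespace=federation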

Now that the secret is created, you are ready to register the cluster. The YAML file for the cluster will look like this:

apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: cluster1
spec:
  serverAddressByClientCIDRs:
  - clientCIDR: <client-cidr>
    serverAddress: <apiserver-address>
  secretRef:
    name: <secret-name>

You need to insert the appropriate values for <client-cidr>, <apiserver-address> and <secret-name>. <secret-name> here is the name of the secret that you just created. serverAddressByClientCIDRs contains the various server addresses that clients can use, matched by their CIDR. You can set the server’s public IP address with the CIDR "0.0.0.0/0", which all clients will match. In addition, if you want internal clients to use the server’s clusterIP, you can set that as the serverAddress; the client CIDR in that case will be a CIDR that only matches the IPs of pods running in that cluster.
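For example, a filled-in cluster definition might look like this (the server address is illustrative; cluster1 is the secret created above):

apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: cluster1
spec:
  serverAddressByClientCIDRs:
  - clientCIDR: "0.0.0.0/0"
    serverAddress: "https://1.2.3.4"
  secretRef:
    name: cluster1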

Assuming your YAML file is located at /cluster1/cluster.yaml, you can run the following command to register this cluster:

$ kubectl create -f /cluster1/cluster.yaml --context=federation-cluster

By specifying --context=federation-cluster, you direct the request to the federation apiserver. You can ensure that the cluster registration was successful by running:

$ kubectl get clusters --context=federation-cluster
NAME       STATUS    VERSION   AGE
cluster1   Ready               3m

Updating KubeDNS

Once the cluster is registered with the federation, you are all set to use it. But for the cluster to be able to route federation service requests, you need to restart KubeDNS and pass it a --federations flag, which tells it about valid federation DNS hostnames. The flag has the following format:

--federations=${FEDERATION_NAME}=${DNS_DOMAIN_NAME}
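For example, if your federation is named myfederation and its DNS zone is myfederation.example (both values are illustrative), the flag would be:

--federations=myfederation=myfederation.example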

To update KubeDNS with the --federations flag, you can edit the existing KubeDNS replication controller to include that flag in the pod template spec and then delete the existing pod. The replication controller will recreate the pod with the updated template.

To find the name of the existing KubeDNS replication controller, run:

$ kubectl get rc --namespace=kube-system

This will list all the replication controllers. The name of the kube-dns replication controller will look like kube-dns-v18. You can then edit it by running:

$ kubectl edit rc <rc-name> --namespace=kube-system

Add the --federations flag to the args of the kube-dns container in the YAML file that opens after running the above command.
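After the edit, the relevant portion of the pod template spec might look like this (a sketch; the container’s other args and fields are omitted, and the flag value is illustrative):

containers:
- name: kube-dns
  args:
  # Existing args omitted for brevity.
  - --federations=myfederation=myfederation.example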

To delete the existing kube-dns pod, you can first find it by running:

$ kubectl get pods --namespace=kube-system

And then delete it by running:

$ kubectl delete pods <pod-name> --namespace=kube-system

You are now all set to start using federation.

Turn down

To turn down the federation control plane, run the following command:

$ federation/deploy/deploy.sh destroy_federation

Previous Federation turn up mechanism

This section describes the previous mechanism for turning up Kubernetes Cluster Federation. It is recommended that you use the new turn up mechanism instead. If you would like to use this mechanism rather than the new one, please let us know why the new mechanism doesn’t work for your case by filing an issue here - https://github.com/kubernetes/kubernetes/issues/new

Getting images

To run these components as pods, you first need images for all of them. You can either use the official release images or build your own from HEAD.

Using official release images

As part of every release, images are pushed to gcr.io/google_containers. To use these images, set the environment variable FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers. This will always use the latest image. To use the hyperkube image, which includes federation-apiserver and federation-controller-manager, from a specific release, also set the FEDERATION_IMAGE_TAG environment variable.
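For example (the tag is illustrative; pick the release whose images you want):

$ export FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers
$ export FEDERATION_IMAGE_TAG=v1.4.0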

Building and pushing images from HEAD

To run the code from HEAD, you need to build and push your own images. You can build the images using the following command:

$ FEDERATION=true KUBE_RELEASE_RUN_TESTS=n make quick-release

Next, you need to push these images to a registry such as Google Container Registry or Docker Hub, so that your cluster can pull them. If your Kubernetes cluster is running on Google Compute Engine (GCE), you can push the images to gcr.io/<gce-project-name>. The command to push the images will look like:

$ FEDERATION=true FEDERATION_PUSH_REPO_BASE=gcr.io/<gce-project-name> ./build/push-federation-images.sh

Running the federation control plane

Once you have the images, you can run these components as pods on your existing Kubernetes cluster. The command to run these pods on an existing GCE cluster will look like:

$ KUBERNETES_PROVIDER=gce FEDERATION_DNS_PROVIDER=google-clouddns FEDERATION_NAME=myfederation DNS_ZONE_NAME=myfederation.example FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers ./federation/cluster/federation-up.sh

KUBERNETES_PROVIDER is the cloud provider.

FEDERATION_DNS_PROVIDER can be google-clouddns or aws-route53. If it is missing, it will be set appropriately when KUBERNETES_PROVIDER is one of gce, gke or aws. It is used to resolve DNS requests for federation services. The service controller keeps the provider’s DNS records updated as services and pods are updated in the underlying Kubernetes clusters.

FEDERATION_NAME is a name you can choose for your federation. This is the name that will appear in DNS routes.

DNS_ZONE_NAME is the domain to be used for DNS records. You need to buy this domain and configure it such that DNS queries for it are routed to the appropriate provider as per FEDERATION_DNS_PROVIDER.

Running that command creates the federation namespace and two Deployments: federation-apiserver and federation-controller-manager. You can verify that the pods are available by running the following command:

$ kubectl get deployments --namespace=federation
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
federation-apiserver            1         1         1            1           1m
federation-controller-manager   1         1         1            1           1m

Running federation-up.sh also creates a new record in your kubeconfig so that you can talk to the federation apiserver. You can view this by running kubectl config view.

Note: federation-up.sh creates the federation-apiserver pod with an etcd container that is backed by a persistent volume, so as to persist data. This currently works only on AWS, GKE, and GCE. You can edit federation/manifests/federation-apiserver-deployment.yaml to suit your needs, if required.
