

You’ve seen how to configure and deploy pods and containers, using some of the most common configuration parameters. This section dives into additional features that are especially useful for running applications in production.

Using a volume for storage

The container file system only lives as long as the container does, so when a container crashes and restarts, changes to its file system are lost and the container starts again from a clean slate. For storage that persists for the life of a pod, you need a volume. This is especially important for stateful applications, such as key-value stores and databases.

For example, Redis is a key-value cache and store, which we use in the guestbook and other examples. We can add a volume to it to store data as follows:

redis-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      # Provision a fresh volume for the pod
      volumes:
        - name: data
          emptyDir: {}
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        # Mount the volume into the pod
        volumeMounts:
        - mountPath: /redis-master-data
          name: data   # must match the name of the volume, above

emptyDir volumes live for the lifespan of the pod, which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.

In addition to the local disk storage provided by emptyDir, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data; Kubernetes handles details such as mounting and unmounting the devices on the nodes. See the volumes doc for more details.
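
For example, the pod above could keep its Redis data on a pre-existing GCE persistent disk instead of an emptyDir. A minimal sketch, assuming a disk named my-data-disk has already been created in the same zone as the cluster (e.g., with gcloud compute disks create my-data-disk):

      volumes:
        - name: data
          # Attach a pre-existing GCE persistent disk; unlike emptyDir,
          # the data survives pod deletion and rescheduling
          gcePersistentDisk:
            pdName: my-data-disk
            fsType: ext4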

Distributing credentials

Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.

Kubernetes provides a mechanism, called secrets, that facilitates delivery of sensitive credentials to applications. A Secret is a simple resource containing a map of data. For instance, you can create a simple secret with a username and password as follows:

$ kubectl create secret generic mysecret --from-literal=username="admin" --from-literal=password="1234"
secret "mysecret" created

This is equivalent to creating the secret with kubectl create -f and the following YAML:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MTIzNA==
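
The values under data are base64-encoded versions of the literals supplied above. You can compute them yourself; note the -n flag, which keeps echo from appending a newline to the encoded value:

$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "1234" | base64
MTIzNA==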

As with other resources, the created secret can be viewed with get:

$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-zirbw   kubernetes.io/service-account-token   3         3h
mysecret              Opaque                                2         2m

To use the secret, you need to reference it in a pod or pod template. The secret volume source enables you to mount it as an in-memory directory into your containers.

redis-secret-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: supersecret # The "mysecret" secret populates this "supersecret" volume. 
          secret:
            secretName: mysecret
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /var/run/secrets/super # Mount the "supersecret" volume into the pod.
          name: supersecret
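
Each key in the secret becomes a file in the mounted directory, so the application reads the credentials as ordinary files. You can verify this with kubectl exec once the pod is running (substitute the actual pod name generated by the deployment):

$ kubectl exec <redis-pod-name> -- ls /var/run/secrets/super
password
username
$ kubectl exec <redis-pod-name> -- cat /var/run/secrets/super/username
admin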

For more details, see the secrets document, example and design doc.

Authenticating with a private image registry

Secrets can also be used to pass image registry credentials.

The easiest way to create a secret for a Docker registry is:

$ kubectl create secret docker-registry myregistrykey --docker-username=janedoe --docker-password=●●●●●●●●●●● --docker-email=jdoe@example.com
secret "myregistrykey" created

Alternatively, you can do the equivalent with the following steps. First, create a .docker/config.json, such as by running docker login <registry.domain>. Then put the resulting .docker/config.json file into a secret resource. For example:

$ docker login
Username: janedoe
Password: ●●●●●●●●●●●
Email: jdoe@example.com
WARNING: login credentials saved in /Users/jdoe/.docker/config.json.
Login Succeeded

$ echo $(cat ~/.docker/config.json)
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }

$ cat ~/.docker/config.json | base64
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K

$ cat > /tmp/image-pull-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  .dockerconfigjson: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
type: kubernetes.io/dockerconfigjson
EOF

$ kubectl create -f /tmp/image-pull-secret.yaml
secret "myregistrykey" created

Now, you can create pods that reference that secret by adding an imagePullSecrets section to the pod definition.

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey

Helper containers

Pods support running multiple co-located containers. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.

Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server paired with a program that polls a git repository for updates:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: www-data
        emptyDir: {}
      containers:
      - name: nginx
        image: nginx
        # This container reads from the www-data volume
        volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
      - name: git-monitor
        image: myrepo/git-monitor
        env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
        # This container writes to the www-data volume
        volumeMounts:
        - mountPath: /data
          name: www-data

More examples can be found in our blog article and presentation slides.

Resource management

Kubernetes’s scheduler will place applications only where they have adequate CPU and memory, but it can only do so if it knows how many resources they require. If you specify too little CPU, containers could be starved of it whenever too many other containers are scheduled onto the same node. Similarly, if no memory is requested, containers could die unpredictably from running out of memory, which is especially likely for large-memory applications.

If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a LimitRange for the default Namespace. It can be viewed with kubectl describe limitrange limits.) You may explicitly specify the amount of resources required as follows:

redis-resource-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        resources:
          limits:
            # cpu units are cores
            cpu: 500m
            # memory units are bytes
            memory: 64Mi
          requests:
            # cpu units are cores
            cpu: 500m
            # memory units are bytes
            memory: 64Mi

The container will be killed due to OOM (out of memory) if it exceeds its specified memory limit, so specifying a value a little higher than expected generally improves reliability. By specifying a request, the pod is guaranteed to be able to use that much of the resource when needed. See Resource QoS for the difference between resource limits and requests.

If you’re not sure how much of each resource to request, you can first launch the application without specifying resources and use resource usage monitoring to determine appropriate values.
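
The cluster default mentioned above is defined by a LimitRange; if it doesn’t suit your namespace, you can define your own. A minimal sketch (the values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    # Applied as the limit for containers that don't specify one
    default:
      cpu: 100m
      memory: 128Mi
    # Applied as the request for containers that don't specify one
    defaultRequest:
      cpu: 100m
      memory: 128Mi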

Liveness and readiness probes (aka health checks)

Many applications running for long periods of time eventually transition to broken states from which they cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

A common way to probe an application is using HTTP, which can be specified as follows:

nginx-probe-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            # Path to probe; should be cheap, but representative of typical behavior
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
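
Probes need not use HTTP. A liveness check can also run an arbitrary command inside the container, with a non-zero exit code counting as failure. A sketch (the file being checked is illustrative):

        livenessProbe:
          exec:
            # Fails, triggering a restart, if /tmp/healthy is missing
            command: ["cat", "/tmp/healthy"]
          initialDelaySeconds: 30
          timeoutSeconds: 1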

Other times, applications are only temporarily unable to serve, and will recover on their own. Typically in such cases you’d prefer not to kill the application, but don’t want to send it requests, either, since the application won’t respond correctly or at all. A common such scenario is loading large data or configuration files during application startup. Kubernetes provides readiness probes to detect and mitigate such situations. Readiness probes are configured similarly to liveness probes, just using the readinessProbe field. A pod with containers reporting that they are not ready will not receive traffic through Kubernetes services.
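
For example, the nginx container above could declare a readiness probe like this (a sketch; a real application would probe an endpoint that reflects whether its startup work has finished):

        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1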

For more details (e.g., how to specify command-based probes), see the example in the walkthrough, the standalone example, and the documentation.

Handling initialization

Applications often need a set of initialization steps prior to performing their day job, such as waiting for other services to become available, performing one-time configuration, or populating data into a shared volume.

Kubernetes now includes a beta feature known as init containers: one or more containers in a pod that get a chance to run and initialize shared volumes before the other application containers start. An init container is exactly like a regular container, except that it always runs to completion, and each init container must complete successfully before the next one starts. If an init container fails (exits with a non-zero exit code) on a pod whose restartPolicy is Never, the pod fails; otherwise, the init container is restarted until it succeeds or the user deletes the pod.

Since init containers are a beta feature, they are specified by setting the pod.beta.kubernetes.io/init-containers annotation on a pod (or replica set, deployment, daemon set, pet set, or job). The value of the annotation must be a string containing a JSON array of container definitions:

nginx-init-containers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "install",
            "image": "busybox",
            "command": ["wget", "-O", "/work-dir/index.html", "http://kubernetes.io/index.html"],
            "volumeMounts": [
                {
                    "name": "workdir",
                    "mountPath": "/work-dir"
                }
            ]
        }
    ]'
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
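
Once the install init container completes, nginx serves the downloaded page from the shared volume, which you can confirm with kubectl exec:

$ kubectl create -f nginx-init-containers.yaml
pod "nginx" created
$ kubectl exec nginx -- ls /usr/share/nginx/html
index.html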

The status of the init containers is returned in another annotation, pod.beta.kubernetes.io/init-container-statuses, as an array of container statuses (similar to the status.containerStatuses field).
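
One way to inspect that annotation is with a go-template, similar to the termination-message example later in this page (the output is a JSON array of statuses):

$ kubectl get pod nginx -o go-template='{{index .metadata.annotations "pod.beta.kubernetes.io/init-container-statuses"}}'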

Init containers support all of the same features as normal containers, including resource limits, volumes, and security settings. The resource requests and limits for an init container are handled slightly differently than for normal containers: because init containers run one at a time rather than all at once, any limits or quotas are applied based on the largest init container resource quantity, rather than the sum of the quantities. Init containers do not support readiness probes, since they must run to completion before the pod can be ready.

Complete Init Container Documentation

Lifecycle hooks and termination notice

Of course, nodes and applications may fail at any time, but many applications benefit from clean shutdown, such as completing in-flight requests, when the termination is deliberate. To support such cases, Kubernetes supports two kinds of notifications:

- Kubernetes will send SIGTERM to the application, which can be handled in order to effect graceful termination. SIGKILL is sent a configurable number of seconds later if the application has not exited.
- Kubernetes supports the (optional) specification of a pre-stop lifecycle hook, which is executed prior to sending SIGTERM.
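
The grace period between SIGTERM and SIGKILL defaults to 30 seconds and can be tuned per pod via terminationGracePeriodSeconds in the pod spec:

    spec:
      # Allow up to 60 seconds for a clean shutdown after SIGTERM
      terminationGracePeriodSeconds: 60
      containers:
      - name: nginx
        image: nginx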

The specification of a pre-stop hook is similar to that of probes, but without the timing-related parameters. For example:

nginx-lifecycle-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["/usr/sbin/nginx","-s","quit"]

Termination message

In order to achieve a reasonably high level of availability, especially for actively developed applications, it’s important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using kubectl or the UI, in addition to general log collection. It is possible to specify a terminationMessagePath where a container will write its ‘death rattle’, such as assertion failure messages, stack traces, exceptions, and so on. The default path is /dev/termination-log.

Here is a toy example:

pod-w-message.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-w-message
spec:
  containers:
  - name: messager
    image: "ubuntu:14.04"
    command: ["/bin/sh","-c"]
    args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]

The message is recorded along with the other state of the last (i.e., most recent) termination:


$ kubectl create -f ./pod-w-message.yaml
pod "pod-w-message" created
$ sleep 70
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
Sleep expired
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.exitCode}}{{end}}"
0

What’s next?

Learn more about managing deployments.
