Software Development Blogs: Programming, Software Testing, Agile, Project Management

Kubernetes resource graphing with Heapster, InfluxDB and Grafana

Agile Testing - Grig Gheorghiu - Tue, 11/29/2016 - 23:58
I know that the Cloud Native Computing Foundation chose Prometheus as the monitoring platform of choice for Kubernetes, but in this post I'll show you how to quickly get started with graphing CPU, memory, disk and network in a Kubernetes cluster using Heapster, InfluxDB and Grafana.

The documentation in the kubernetes/heapster GitHub repo is actually pretty good. Here's what I did:

$ git clone https://github.com/kubernetes/heapster.git
$ cd heapster/deploy/kube-config/influxdb

Look at the yaml manifests to see if you need to customize anything. I left everything 'as is' and ran:

$ kubectl create -f .
deployment "monitoring-grafana" created
service "monitoring-grafana" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created

Then you can run 'kubectl cluster-info' and look for the monitoring-grafana endpoint. Since the monitoring-grafana service is of type LoadBalancer, if you run your Kubernetes cluster in AWS, the service creation will also create an ELB. By default the ELB security group allows port 80 from everywhere, so I edited it to restrict access to some known IPs.
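
For reference, the endpoint lookup and the security group tightening can be done with something like the following; the security group ID and CIDR are placeholders, not values from this cluster:

$ kubectl cluster-info | grep monitoring-grafana
$ aws ec2 revoke-security-group-ingress --group-id sg-XXXXXXXX --protocol tcp --port 80 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXX --protocol tcp --port 80 --cidr 203.0.113.0/24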

After a few minutes, you should see CPU and memory graphs shown in the Kubernetes dashboard. Here is an example showing pods running in the kube-system namespace:



You can also hit the Grafana endpoint and choose the Cluster or Pods dashboards. Note that if you have a namespace other than default or kube-system, you have to enter its name manually in the namespace field of the Grafana Pods dashboard. Only then will you be able to see data for pods running in that namespace (or at least I had to jump through that hoop).

Here is an example of graphs for the kubernetes-dashboard pod running in the kube-system namespace:


For info on how to customize the Grafana graphs, here's a good post from Deis.

Running an application using Kubernetes on AWS

Agile Testing - Grig Gheorghiu - Wed, 11/23/2016 - 02:13
I've been knee-deep in Kubernetes for the past few weeks, and to say that I like it is an understatement. It's exhilarating to have at your fingertips a distributed platform created by Google's massive brain power.

I'll jump right in and talk about how I installed Kubernetes in AWS and how I created various resources in Kubernetes in order to run a database-backed PHP-based web application.

Installing Kubernetes

I used the tack tool from my laptop running OSX to spin up a Kubernetes cluster in AWS. Tack uses terraform under the hood, which I liked a lot because it makes it very easy to delete all AWS resources and start from scratch while you are experimenting with it. I went with the tack defaults and spun up 3 m3.medium EC2 instances for running etcd and the Kubernetes API, the scheduler and the controller manager in an HA configuration. Tack also provisioned 3 m3.medium EC2 instances as Kubernetes workers/minions, in an EC2 auto-scaling group. Finally, tack spun up a t2.nano EC2 instance to serve as a bastion host for getting access into the Kubernetes cluster. All 7 EC2 instances launched by tack run CoreOS.

Using kubectl

Tack also installs kubectl, which is the Kubernetes command-line management tool. I used kubectl to create the various Kubernetes resources needed to run my application: deployments, services, secrets, config maps, persistent volumes etc. It pays to become familiar with the syntax and arguments of kubectl.

Creating namespaces

One thing I needed to do right off the bat was to think about ways to achieve multi-tenancy in my Kubernetes cluster. This is done with namespaces. Here's my namespace.yaml file:

$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant1

To create the namespace tenant1, I used kubectl create:

$ kubectl create -f namespace.yaml

To list all namespaces:

$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    12d
kube-system   Active    12d
tenant1       Active    11d 

If you don't need a dedicated namespace per tenant, you can just run kubectl commands in the 'default' namespace.
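
If you do use a dedicated namespace, an optional convenience (not part of the original walkthrough) is to make it the default namespace of your current kubectl context, so you don't have to pass --namespace on every command:

$ kubectl config set-context $(kubectl config current-context) --namespace=tenant1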

Creating persistent volumes, storage classes and persistent volume claims

I'll show how you can create two types of Kubernetes persistent volumes in AWS: one based on EFS, and one based on EBS. I chose the EFS one for my web application layer, for things such as shared configuration and media files. I chose the EBS one for my database layer, to be mounted as the data volume.

First, I created an EFS share using the AWS console (although I recommend using terraform to do it automatically, but I am not there yet). I allowed the Kubernetes worker security group to access this share. I noted one of the DNS names available for it, e.g. us-west-2a.fs-c830ab1c.efs.us-west-2.amazonaws.com. I used this Kubernetes manifest to define a persistent volume (PV) based on this EFS share:

$ cat web-pv-efs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-efs-web
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: us-west-2a.fs-c830ab1c.efs.us-west-2.amazonaws.com
    path: "/"

To create the PV, I used kubectl create, and I also specified the namespace tenant1:

$ kubectl create -f web-pv-efs.yaml --namespace tenant1

However, creating a PV is not sufficient. Pods use persistent volume claims (PVC) to refer to persistent volumes in their manifests. So I had to create a PVC:

$ cat web-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi 

$ kubectl create -f web-pvc.yaml --namespace tenant1

Note that a PVC does not refer directly to a PV. The storage specified in the PVC is provisioned from available persistent volumes.
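
A quick way to confirm that the claim found a matching volume is to check its status, which should show as Bound (standard kubectl commands, added here for completeness):

$ kubectl get pvc web-pvc --namespace tenant1
$ kubectl describe pvc web-pvc --namespace tenant1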

Instead of defining a persistent volume for the EBS volume I wanted to use for the database, I created a storage class:

$ cat db-storageclass-ebs.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: db-ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

$ kubectl create -f db-storageclass-ebs.yaml --namespace tenant1

I also created a PVC which does refer directly to the storage class name db-ebs. When the PVC is used in a pod, the underlying resource (i.e. the EBS volume in this case) will be automatically provisioned by Kubernetes.

$ cat db-pvc-ebs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc-ebs
  annotations:
     volume.beta.kubernetes.io/storage-class: 'db-ebs'
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi

$ kubectl create -f db-pvc-ebs.yaml --namespace tenant1

To list the newly created resource, you can use:

$ kubectl get pv,pvc,storageclass --namespace tenant1

Creating secrets and ConfigMaps

I followed the "Persistent Installation of MySQL and Wordpress on Kubernetes" guide to figure out how to create and use Kubernetes secrets. Here is how to create a secret for the MySQL root password, necessary when you spin up a pod based on a Percona or plain MySQL image:
$ echo -n $MYSQL_ROOT_PASSWORD > mysql-root-pass.secret
$ kubectl create secret generic mysql-root-pass --from-file=mysql-root-pass.secret --namespace tenant1 

Kubernetes also has the handy notion of ConfigMap, a resource where you can store either entire configuration files, or key/value properties that you can then use in other Kubernetes resource definitions. For example, I save the GitHub branch and commit environment variables for the code I deploy in a ConfigMap:
$ kubectl create configmap git-config --namespace tenant1 \
    --from-literal=GIT_BRANCH=$GIT_BRANCH \
    --from-literal=GIT_COMMIT=$GIT_COMMIT
I'll show how to use secrets and ConfigMaps in pod definitions a bit later on.
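
In the meantime, both kinds of objects can be inspected after creation, which is handy for catching typos in key names (these are standard kubectl commands, shown here for completeness):

$ kubectl get secret mysql-root-pass --namespace tenant1 -o yaml
$ kubectl describe configmap git-config --namespace tenant1
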
Creating an ECR image pull secret and a service account

We use AWS ECR to store our Docker images. Kubernetes can access images stored in ECR, but you need to jump through a couple of hoops to make that happen. First, you need to create a Kubernetes secret of type dockerconfigjson which encapsulates the ECR credentials in base64 format. Here's a shell script that generates a file called ecr-pull-secret.yaml:

#!/bin/bash

TMP_JSON_CONFIG=/tmp/ecr_config.json

PASSWORD=$(aws --profile default --region us-west-2 ecr get-login | cut -d ' ' -f 6)

cat > $TMP_JSON_CONFIG << EOF
{"https://YOUR_AWS_ECR_ID.dkr.ecr.us-west-2.amazonaws.com":{"username":"AWS","email":"none","password":"$PASSWORD"}}
EOF


BASE64CONFIG=$(cat $TMP_JSON_CONFIG | base64)
cat > ecr-pull-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ecr-key
  namespace: tenant1
data:
  .dockerconfigjson: $BASE64CONFIG
type: kubernetes.io/dockerconfigjson
EOF

rm -rf $TMP_JSON_CONFIG

Once you run the script and generate the file, you can then define a Kubernetes service account that will use this secret:

$ cat service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: tenant1
  name: tenant1-dev
imagePullSecrets:
 - name: ecr-key

Note that the service account refers to the ecr-key secret in the imagePullSecrets property.

As usual, kubectl create will create these resources based on their manifests:

$ kubectl create -f ecr-pull-secret.yaml
$ kubectl create -f service-account.yaml
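
To double-check that the secret and the service account were created, and that the service account does reference the pull secret, you can run:

$ kubectl get secret ecr-key --namespace tenant1
$ kubectl get serviceaccount tenant1-dev --namespace tenant1 -o yaml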

Creating deployments

The atomic unit of scheduling in Kubernetes is a pod. You don't usually create a pod directly (though you can, and I'll show you a case where it makes sense.) Instead, you create a deployment, which keeps track of how many pod replicas you need, and spins up the exact number of pods to fulfill your requirement. A deployment actually creates a replica set under the covers, but in general you don't deal with replica sets directly. Note that deployments are the new recommended way to create multiple pods. The old way, which is still predominant in the documentation, was to use replication controllers.

Here's my deployment manifest for a pod running a database image:

$ cat db-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: db-deployment
  labels:
    app: myapp
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: myapp
        tier: db
    spec:
      containers:
      - name: db
        image: MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/myapp-db:tenant1
        imagePullPolicy: Always
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-pass
              key: mysql-root-pass.secret
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: MYSQL_DATABASE
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: MYSQL_USER
        - name: MYSQL_DUMP_FILE
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: MYSQL_DUMP_FILE
        - name: S3_BUCKET
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: S3_BUCKET
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: ebs
          mountPath: /var/lib/mysql
      volumes:
      - name: ebs
        persistentVolumeClaim:
          claimName:  db-pvc-ebs
      serviceAccount: tenant1-dev

The template section specifies the elements necessary for spinning up new pods. Of particular importance are the labels, which, as we will see, are used by services to select pods that are included in a given service.  The image property specifies the ECR Docker image used to spin up new containers. In my case, the image is called myapp-db and it is tagged with the tenant name tenant1. Here is the Dockerfile from which this image was generated:

$ cat Dockerfile
FROM mysql:5.6

# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y python-pip
RUN pip install awscli

VOLUME /var/lib/mysql

COPY etc/mysql/my.cnf /etc/mysql/my.cnf
COPY scripts/db_setup.sh /usr/local/bin/db_setup.sh

Nothing out of the ordinary here. The image is based on the mysql DockerHub image, specifically version 5.6. A customized my.cnf is copied in, and a db_setup.sh script is copied over so it can be run later.

Some other things to note about the deployment manifest:

  • I made pretty heavy use of secrets and ConfigMap key/values
  • I also used the db-pvc-ebs Persistent Volume Claim and mounted the underlying physical resource (an EBS volume in this case) as /var/lib/mysql
  • I used the tenant1-dev service account, which allows the deployment to pull down the container image from ECR
  • I didn't specify the number of replicas I wanted, which means that 1 pod will be created (the default)

To create the deployment, I ran kubectl:

$ kubectl create -f db-deployment.yaml --record --namespace tenant1

Note that I used the --record flag, which tells Kubernetes to keep a history of the commands used to create or update that deployment. You can show this history with the kubectl rollout history command:

$ kubectl --namespace tenant1 rollout history deployment db-deployment 

To list the running deployments, replica sets and pods, you can use:

$ kubectl get deployments,rs,pods --namespace tenant1 --show-all

Here is another example of a deployment manifest, this time for redis:

$ cat redis-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: myapp
        tier: redis
    spec:
      containers:
        - name: redis
          command: ["redis-server", "/etc/redis/redis.conf", "--requirepass", "$(REDIS_PASSWORD)"]
          image: MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/myapp-redis:tenant1
          imagePullPolicy: Always
          env:
          - name: REDIS_PASSWORD
            valueFrom:
              secretKeyRef:
                name: redis-pass
                key: redis-pass.secret
          ports:
          - containerPort: 6379
            protocol: TCP
      serviceAccount: tenant1-dev

One thing that is different from the db deployment is the way a secret (REDIS_PASSWORD) is used as a command-line parameter for the container command. Make sure you use the $(VARIABLE_NAME) syntax in this case, because that's what Kubernetes expects.

Also note the labels, which have app: myapp in common with the db deployment, but a different value for tier, redis instead of db.

My last deployment example for now is the one for the web application pods:

$ cat web-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: web
        image: MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/myapp-web:tenant1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: web-persistent-storage
          mountPath: /var/www/html/shared
      volumes:
      - name: web-persistent-storage
        persistentVolumeClaim:
          claimName: web-pvc
      serviceAccount: tenant1-dev

Note that replicas is set to 2, so that 2 pods will be launched and kept running at all times. The labels have the same common part app: myapp, but the tier is different, set to frontend.  The persistent volume claim web-pvc for the underlying physical EFS volume is used to mount /var/www/html/shared over EFS.

The image used for the container is derived from a stock ubuntu:14.04 DockerHub image, with apache and php 5.6 installed on top. Something along these lines:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y ntp build-essential binutils zlib1g-dev telnet git acl lzop unzip mcrypt expat xsltproc python-pip curl language-pack-en-base && \
    pip install awscli

RUN export LC_ALL=en_US.UTF-8 && export LC_ALL=en_US.UTF-8 && export LANG=en_US.UTF-8 && \
        apt-get install -y mysql-client-5.6 software-properties-common && add-apt-repository ppa:ondrej/php

RUN apt-get update && \
    apt-get install -y --allow-unauthenticated apache2 apache2-utils libapache2-mod-php5.6 php5.6 php5.6-mcrypt php5.6-curl php-pear php5.6-common php5.6-gd php5.6-dev php5.6-opcache php5.6-json php5.6-mysql

RUN apt-get remove -y libapache2-mod-php5 php7.0-cli php7.0-common php7.0-json php7.0-opcache php7.0-readline php7.0-xml

RUN curl -sSL https://getcomposer.org/composer.phar -o /usr/bin/composer \
    && chmod +x /usr/bin/composer \
    && composer selfupdate

COPY files/apache2-foreground /usr/local/bin/
RUN chmod +x /usr/local/bin/apache2-foreground
EXPOSE 80
CMD bash /usr/local/bin/apache2-foreground

Creating services

In Kubernetes, you are not supposed to refer to individual pods when you want to target the containers running inside them. Instead, you need to use services, which provide endpoints for accessing a set of pods based on a set of labels.

Here is an example of a service for the db-deployment I created above:

$ cat db-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: myapp
spec:
  ports:
    - port: 3306
  selector:
    app: myapp
    tier: db
  clusterIP: None

Note the selector property, which is set to app: myapp and tier: db. By specifying these labels, we make sure that only pods carrying those labels will be included in this service. There is only one deployment whose pod template carries those 2 labels, and that is db-deployment.

Here are similar service manifests for the redis and web deployments:

$ cat redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: myapp
spec:
  ports:
    - port: 6379
  selector:
    app: myapp
    tier: redis
  clusterIP: None

$ cat web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: myapp
spec:
  ports:
    - port: 80
  selector:
    app: myapp
    tier: frontend
  type: LoadBalancer

The selector properties for each service are set so that the proper deployment is included in each service.

One important thing to note in the definition of the web service: its type is set to LoadBalancer. Since Kubernetes is AWS-aware, the service creation will create an actual ELB in AWS, so that the application can be accessible from the outside world. It turns out that this is not the best way to expose applications externally, since this LoadBalancer resource operates only at the TCP layer. What we need is a proper layer 7 load balancer, and in a future post I'll show how to use a Kubernetes ingress controller in conjunction with the traefik proxy to achieve that. In the meantime, here is a KubeCon presentation from Gerred Dillon on "Kubernetes Ingress: Your Router, Your Rules".

To create the services defined above, I used kubectl:

$ kubectl create -f db-service.yaml --namespace tenant1
$ kubectl create -f redis-service.yaml --namespace tenant1
$ kubectl create -f web-service.yaml --namespace tenant1

At this point, the web application can refer to the database 'host' in its configuration files by simply using the name of the database service, which is db in our example. Similarly, the web application can refer to the redis 'host' by using the name of the redis service, which is redis. The Kubernetes magic will make sure calls to db and redis are properly routed to their end destinations, which are the actual containers running those services.
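
A simple way to see this name resolution in action is to resolve the service names from inside one of the pods. This assumes the container image provides getent (the ubuntu-based web image above does), and <web-pod-name> is a placeholder for an actual pod name:

$ kubectl --namespace tenant1 exec -it <web-pod-name> -- getent hosts db
$ kubectl --namespace tenant1 exec -it <web-pod-name> -- getent hosts redis.tenant1.svc.cluster.local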

Running commands inside pods with kubectl exec

Although you are not really supposed to do this in a container world, I found it useful to run a command such as loading a database from a MySQL dump file on a newly created pod. Kubernetes makes this relatively easy via the kubectl exec functionality. Here's how I did it:

DEPLOYMENT=db-deployment
NAMESPACE=tenant1

POD=$(kubectl --namespace $NAMESPACE get pods --show-all | grep $DEPLOYMENT | awk '{print $1}')
echo Running db_setup.sh command on pod $POD
kubectl --namespace $NAMESPACE exec $POD -it /usr/local/bin/db_setup.sh

where db_setup.sh downloads a sql.tar.gz file from S3 and loads it into MySQL.
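
For illustration only, a script like that might look roughly like this. This is a sketch, not the actual db_setup.sh; it assumes the S3_BUCKET, MYSQL_DUMP_FILE, MYSQL_ROOT_PASSWORD and MYSQL_DATABASE environment variables wired into the deployment manifest above, and that the archive contains a single .sql file:

#!/bin/bash
# Hypothetical sketch: fetch a compressed SQL dump from S3 and load it into MySQL.
set -e
aws s3 cp "s3://${S3_BUCKET}/${MYSQL_DUMP_FILE}" /tmp/dump.sql.tar.gz
tar xzf /tmp/dump.sql.tar.gz -C /tmp
# Assumes exactly one .sql file was extracted from the archive.
mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" "${MYSQL_DATABASE}" < /tmp/*.sql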

A handy troubleshooting tool is to get a shell prompt inside a pod. First you get the pod name (via kubectl get pods --show-all), then you run:

$ kubectl --namespace tenant1 exec -it $POD -- bash -il

Sharing volumes across containers

One of the patterns I found useful in docker-compose files is to mount a container volume into another container, for example to check out the source code in a container volume, then mount it as /var/www/html in another container running the web application. This pattern is not extremely well supported in Kubernetes, but you can find your way around it by using init-containers.

Here's an example of creating an individual pod for the sole purpose of running a Capistrano task against the web application source code. Simply running two regular containers inside the same pod would not achieve this goal, because the order of creation for those containers is random. What we need is to force one container to start before any regular containers by declaring it to be an 'init-container'.

$ cat capistrano-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: capistrano
  annotations:
     pod.beta.kubernetes.io/init-containers: '[
            {
                "name": "data4capistrano",
                "image": "MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/myapp-web:tenant1",
                "command": ["cp", "-rH", "/var/www/html/current", "/tmpfsvol/"],
                "volumeMounts": [
                    {
                        "name": "crtvol",
                        "mountPath": "/tmpfsvol"
                    }
                ]
            }
        ]'
spec:
  containers:
  - name: capistrano
    image: MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/capistrano:tenant1
    imagePullPolicy: Always
    command: [ "cap", "$(CAP_STAGE)", "$(CAP_TASK)", "--trace" ]
    env:
    - name: CAP_STAGE
      valueFrom:
        configMapKeyRef:
          name: tenant1-cap-config
          key: CAP_STAGE
    - name: CAP_TASK
      valueFrom:
        configMapKeyRef:
          name: tenant1-cap-config
          key: CAP_TASK
    - name: DEPLOY_TO
      valueFrom:
        configMapKeyRef:
          name: tenant1-cap-config
          key: DEPLOY_TO
    volumeMounts:
    - name: crtvol
      mountPath: /var/www/html
    - name: web-persistent-storage
      mountPath: /var/www/html/shared
  volumes:
  - name: web-persistent-storage
    persistentVolumeClaim:
      claimName: web-pvc
  - name: crtvol
    emptyDir: {}
  restartPolicy: Never
  serviceAccount: tenant1-dev

The logic here is a bit convoluted. Hopefully some readers of this post will know a better way to achieve the same thing. What I am doing here is launching a container based on the myapp-web:tenant1 Docker image, which already contains the source code checked out from GitHub. This container is declared as an init-container, so it's guaranteed to run first. What it does is mount a special Kubernetes volume declared at the bottom of the pod manifest as an emptyDir. This means that Kubernetes will allocate some storage on the node where this pod will run. The data4capistrano container runs a command which copies the contents of the /var/www/html/current directory from the myapp-web image into this storage space mounted as /tmpfsvol inside data4capistrano. One other thing to note is that init-containers are a beta feature currently, so their declaration needs to be embedded into an annotation.

When the regular capistrano container is created inside the pod, it also mounts the same emptyDir volume (which is not empty at this point, because it was populated by the init-container), this time as /var/www/html. It also mounts the shared EFS file system as /var/www/html/shared. With these volumes in place, it has all it needs in order to run Capistrano locally via the cap command. The stage, task, and target directory for Capistrano are passed in via ConfigMap values.

One thing to note is that the RestartPolicy is set to Never for this pod, because we only want to run it once and be done with it.

To run the pod, I used kubectl again:

$ kubectl create -f capistrano-pod.yaml --namespace tenant1

Creating jobs

Kubernetes also has the concept of jobs, which differ from deployments in that they run one instance of a pod and make sure it completes. Jobs are useful for one-off tasks that you want to run, or for periodic tasks such as cron commands. Here is an example of a job manifest which runs a script that uses the twig template engine under the covers in order to generate a configuration file for the web application:

$ cat template-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-template
spec:
  template:
    metadata:
      name: myapp-template
    spec:
      containers:
      - name: myapp-template
        image: MY_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/myapp-template:tenant1
        imagePullPolicy: Always
        command: [ "php", "/root/scripts/templatize.php"]
        env:
        - name: DBNAME
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: MYSQL_DATABASE
        - name: DBUSER
          valueFrom:
            configMapKeyRef:
              name: tenant1-config
              key: MYSQL_USER
        - name: DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-db-pass
              key: mysql-db-pass.secret
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-pass
              key: redis-pass.secret
        volumeMounts:
        - name: web-persistent-storage
          mountPath: /var/www/html/shared
      volumes:
      - name: web-persistent-storage
        persistentVolumeClaim:
          claimName: web-pvc
      restartPolicy: Never
      serviceAccount: tenant1-dev

The templatize.php script substitutes DBNAME, DBUSER, DBPASSWORD and REDIS_PASSWORD with the values passed in the job manifest, obtained from either Kubernetes secrets or ConfigMaps.

To create the job, I used kubectl:

$ kubectl create -f template-job.yaml --namespace tenant1

Performing rolling updates and rollbacks for Kubernetes deployments

Once your application pods are running, you'll need to update the application to a new version. Kubernetes allows you to do a rolling update of your deployments. One advantage of using deployments as opposed to the older method of using replication controllers is that the update process for a deployment happens on the Kubernetes server side, and can be paused and restarted. There are a few ways of doing a rolling update for a deployment (and a recent linux.com article has a good overview as well).

a) You can modify the deployment's yaml file and change a label such as a version or a git commit, then run kubectl apply:

$ kubectl --namespace tenant1 apply -f deployment.yaml

Note from the Kubernetes documentation on updating deployments:

a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template. Other updates, such as scaling the Deployment, will not trigger a rollout.

b) You can use kubectl set to specify a new image for the deployment containers. Example from the documentation:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated

c) You can use kubectl patch to add a unique label to the deployment spec template on the fly. This is the method I've been using, with the label being set to a timestamp:
$ kubectl patch deployment web-deployment --namespace tenant1 -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%Y%M%d%H%M%S'`\"}}}}}"

When updating a deployment, a new replica set will be created for that deployment, and the specified number of pods will be launched by that replica set, while the pods from the old replica set will be shut down. However, the old replica set itself will be preserved, allowing you to perform a rollback if needed. 
If you want to roll back to a previous version, you can use kubectl rollout history to show the revisions of your deployment updates:
$ kubectl --namespace tenant1 rollout history deployment web-deployment
deployments "web-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f web-deployment.yaml --record --namespace tenant1
2 kubectl patch deployment web-deployment --namespace tenant1 -p {"spec":{"template":{"metadata":{"labels":{"date":"1479161196"}}}}}
3 kubectl patch deployment web-deployment --namespace tenant1 -p {"spec":{"template":{"metadata":{"labels":{"date":"1479161573"}}}}}
4 kubectl patch deployment web-deployment --namespace tenant1 -p {"spec":{"template":{"metadata":{"labels":{"date":"1479243444"}}}}}
Now use kubectl rollout undo to roll back to a previous revision:
$ kubectl --namespace tenant1 rollout undo deployments web-deployment --to-revision=3
deployment "web-deployment" rolled back
I should note that all these kubectl commands can be easily executed out of Jenkins pipeline scripts or shell steps. I use a Docker image that wraps kubectl and its keys so that I don't have to install it on the Jenkins worker nodes.
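
A hypothetical illustration of that wrapper pattern (the image name and kubeconfig path are made up; the image would contain the kubectl binary plus a kubeconfig with the cluster credentials):

$ docker run --rm my-kubectl-wrapper:latest \
    kubectl --kubeconfig /etc/kubernetes/kubeconfig --namespace tenant1 \
    rollout history deployment web-deployment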

And there you have it. I hope the examples I provided will shed some light on some aspects of Kubernetes that go past the 'Kubernetes 101' stage. Before I forget, here's a good overview from the official documentation on using Kubernetes in production.

I have a lot more Kubernetes things on my plate, and I hope to write blog posts on all of them. Some of these:

  • ingress controllers based on traefik
  • creation and renewal of Let's Encrypt certificates
  • monitoring
  • logging
  • using the Helm package manager
  • ...and more




Creating EC2 and Route 53 resources with Terraform

Agile Testing - Grig Gheorghiu - Thu, 10/20/2016 - 22:14
Inspired by the great series of Terraform-related posts published on the Gruntwork blog, I've been experimenting with Terraform for the last couple of days. So far, I like it a lot, and I think the point that Yevgeniy Brikman makes in the first post of the series, on why they chose Terraform over other tools (the usual suspects: Chef, Puppet, Ansible), is a very valid one: Terraform is a declarative and client-only orchestration tool that allows you to manage immutable infrastructure. Read that post for more details about why this is a good thing.

In this short post I'll show how I am using Terraform in its Docker image incarnation to create an AWS ELB and a Route 53 CNAME record pointing to the name of the newly-created ELB.

I created a directory called terraform and created this Dockerfile inside it:

$ cat Dockerfile
FROM hashicorp/terraform:full
COPY data /data/
WORKDIR /data

My Terraform configuration files are under terraform/data locally. I have 2 files, one for variable declarations and one for the actual resource declarations.

Here is the variable declaration file:

$ cat data/vars.tf

variable "access_key" {}
variable "secret_key" {}
variable "region" {
  default = "us-west-2"
}

variable "exposed_http_port" {
  description = "The HTTP port exposed by the application"
  default = 8888
}

variable "security_group_id" {
  description = "The ID of the ELB security group"
  default = "sg-SOMEID"
}

variable "host1_id" {
  description = "EC2 Instance ID for ELB Host #1"
  default = "i-SOMEID1"
}

variable "host2_id" {
  description = "EC2 Instance ID for ELB Host #2"
  default = "i-SOMEID2"
}

variable "elb_cname" {
  description = "CNAME for the ELB"
}

variable "route53_zone_id" {
  description = "Zone ID for the Route 53 zone "
  default = "MY_ZONE_ID"
}

Note that I am making some assumptions, namely that the ELB will point to 2 EC2 instances. This is because I know beforehand which 2 instances I want to point it to. In my case, those instances are configured as Rancher hosts, and the ELB's purpose is to expose an internal Rancher load balancer port (say 8888) to the world as port 80.

Here is the resource declaration file:

$ cat data/main.tf

# --------------------------------------------------------
# CONFIGURE THE AWS CONNECTION
# --------------------------------------------------------

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

# --------------------------------------------------------
# CREATE A NEW ELB
# --------------------------------------------------------

resource "aws_elb" "my-elb" {
  name = "MY-ELB"
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]

  listener {
    instance_port = "${var.exposed_http_port}"
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "TCP:${var.exposed_http_port}"
    interval = 10
  }

  instances = ["${var.host1_id}", "${var.host2_id}"]
  security_groups = ["${var.security_group_id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "MY-ELB"
  }
}

# --------------------------------------------------------
# CREATE A ROUTE 53 CNAME FOR THE ELB
# --------------------------------------------------------

resource "aws_route53_record" "my-elb-cname" {
  zone_id = "${var.route53_zone_id}"
  name = "${var.elb_cname}"
  type = "CNAME"
  ttl = "300"
  records = ["${aws_elb.my-elb.dns_name}"]
}

First I declare a provider of type aws, then I declare 2 resources, one of type aws_elb, and another one of type aws_route53_record. These types are all detailed in the very good Terraform documentation for the AWS provider.

The aws_elb resource defines an ELB named my-elb, which points to the 2 EC2 instances mentioned above. The instances are specified by their variable names from vars.tf, using the syntax ${var.VARIABLE_NAME}, e.g. ${var.host1_id}. For the other properties of the ELB resource, consult the Terraform aws_elb documentation.

The aws_route53_record resource defines a CNAME record in the given Route 53 zone file (specified via the route53_zone_id variable). An important thing to note here is that the CNAME points to the name of the ELB just created, via the aws_elb.my-elb.dns_name attribute. This is one of the powerful things you can do in Terraform - reference properties of resources in other resources.

Again, for more details on aws_route53_record, consult the Terraform documentation.

Given these files, I built a local Docker image:

$ docker build -t terraform:local .

I can then run the Terraform 'plan' command to see what Terraform intends to do:

$ docker run -t --rm terraform:local plan \
-var "access_key=$TERRAFORM_AWS_ACCESS_KEY" \
-var "secret_key=$TERRAFORM_AWS_SECRET_KEY" \
-var "exposed_http_port=$LB_EXPOSED_HTTP_PORT" \
-var "elb_cname=$ELB_CNAME"

The nice thing about this is that I can run Terraform in exactly the same way via Jenkins. The variables above are defined in Jenkins either as credentials of type 'secret text' (the 2 AWS keys), or as build parameters of type string. In Jenkins, the Docker image name would be specified as an ECR image, something of the type ECR_ID.dkr.ecr.us-west-2.amazonaws.com/terraform.

After making sure that the plan corresponds to what I expected, I ran the Terraform apply command:

$ docker run -t --rm terraform:local apply \
-var "access_key=$TERRAFORM_AWS_ACCESS_KEY" \
-var "secret_key=$TERRAFORM_AWS_SECRET_KEY" \
-var "exposed_http_port=$LB_EXPOSED_HTTP_PORT" \
-var "elb_cname=$ELB_CNAME"

One thing to note is that the AWS credentials I used are for an IAM user that has only the privileges needed to create the resources I need. I tinkered with the IAM policy generator until I got it right. Terraform will emit various AWS errors when it's not able to make certain calls. Those errors help you add the required privileges to the IAM policies. In my case, here are some examples of policies.

Allow all ELB operations:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476919435000",
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Allow the ec2:DescribeSecurityGroups operation:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476983196000",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Allow the route53:GetHostedZone and GetChange operations:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476986919000",
      "Effect": "Allow",
      "Action": [
        "route53:GetHostedZone",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Allow the creation, changing and listing of Route 53 record sets in a given Route 53 zone file:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476987070000",
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/MY_ZONE_ID"
      ]
    }
  ]
}

Docker orchestration with Rancher

Agile Testing - Grig Gheorghiu - Fri, 09/09/2016 - 20:27
For the last month or so I've been experimenting with Rancher as the orchestration layer for Docker-based deployments. I've been pretty happy with it so far. Here are some of my notes and a few tips and tricks. I also recommend reading through the very good Rancher  documentation. In what follows I'll assume that the cluster management engine used by Rancher is its own engine called Cattle. Rancher also supports Kubernetes, Mesos and Docker Swarm.

Running the Rancher server

I provisioned an EC2 instance, installed Docker on it, then ran this command to launch the Rancher server as a Docker container (it will also get launched automatically if you reboot the EC2 instance):


# docker run -d --restart=always -p 8080:8080 rancher/server

Creating Rancher environments
It's important to think about the various environments you want to manage in Rancher. If you have multiple projects that you want to manage with Rancher, as well as multiple environments for your infrastructure, such as development, staging and production, I recommend you create a Rancher environment per project/infrastructure-environment combination, for example a Rancher environment called proj1dev, another one called proj1stage, another called proj1prod, and similarly for other projects: proj2dev, proj2stage, proj2prod etc.
Tip: Since all containers in the same Rancher environment can by default connect to all other containers in that Rancher environment, having a project/infrastructure-environment combination as detailed above will provide good isolation and security from one project to another, and from one infrastructure environment to another within the same project. I recommend you become familiar with Rancher environments by reading more about them in the documentation.
In what follows I'll assume the current environment is proj1dev.
Creating Rancher API key pairs
Within each environment, create an API key pair. Copy and paste the two keys (one access key and one secret access key) somewhere safe.

Adding Rancher hosts
Within each environment, you need to add Rancher hosts. They are the compute nodes that will run the various Docker containers that you will orchestrate with Rancher. In my case, I provisioned two hosts per environment as EC2 instances running Docker.
In the Rancher UI, when you go to Infrastructure  -> Hosts then click the Add Host button, you should see a docker run command that you can run on each host in order to launch the Rancher Agent on that host. Something like this:
# docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.2 http://your-rancher-server-name.example.com:8080/v1/scripts/5536854597A70149E388:1473267600000:rfQVqxXcvIPulNw72fUOQG66iGM
Note that you need to allow UDP ports 500 and 4500 from each Rancher host to/from any other host and to/from the Rancher server. This is because Rancher uses IPSec tunnels for inter-host communication. The Rancher hosts also need to talk to the Rancher server over port 8080 (or whatever port you have exposed for the Rancher server container).
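
As an illustration (the security group IDs are placeholders), the corresponding AWS security group rules could be added with something like:

$ aws ec2 authorize-security-group-ingress --group-id sg-RANCHERHOSTS \
    --protocol udp --port 500 --source-group sg-RANCHERHOSTS
$ aws ec2 authorize-security-group-ingress --group-id sg-RANCHERHOSTS \
    --protocol udp --port 4500 --source-group sg-RANCHERHOSTS
$ aws ec2 authorize-security-group-ingress --group-id sg-RANCHERSERVER \
    --protocol tcp --port 8080 --source-group sg-RANCHERHOSTS
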
Adding ECR registries
We use ECR as our Docker registry. Within each environment, I had to add our ECR registry. In the Rancher UI, I went to Infrastructure -> Registries, then clicked Add Registry and chose Custom as the registry type. In the attribute fields, I specified:
  • Address: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com
  • Email: none
  • Username: AWS
  • Password: the result of running these commands (you need to install and configure the awscli for this to work):
    • apt-get install python-pip; pip install awscli
    • aws configure (specify the keys for an IAM user allowed to access the ECR registry)
    • aws ecr get-login | cut -d ' ' -f 6

Application architecture
For this example I will consider an application composed of a Web application based on Apache/PHP running in 2 or more containers and mounting its shared files (configuration, media) over NFS. The Web app talks to a MySQL database server mounting its data files over NFS. The Web app containers are behind one or more instances of a Rancher load balancer, and the Rancher LB instances are fronted by an Amazon Elastic Load Balancer.
Rancher stacks
A 'stack' in Rancher corresponds to a set of services defined in a docker-compose YAML file. These services can also have Rancher-specific attributes (such as the desired number of containers aka 'scale', health checks, etc) defined in a special rancher-compose YAML file. I'll show plenty of examples of these files in what follows. My stack naming convention will be projname-environment-stacktype, for example proj1-development-nfs, proj1-development-database etc.
Tip: Try to experiment with creating stacks in the Rancher UI, then either view or export their configurations via the stack settings button in the UI:

This was a life saver for me especially when it comes to lower-level stacks such as NFS or Rancher load balancers. Exporting the configuration will download a zip file containing two files: docker-compose.yml and rancher-compose.yml. It will save you from figuring out on your own the exact syntax you need to use in these files.
Creating an NFS stack
One of the advantages of using Rancher is that it offers an extensive catalog of services ready to be used within your infrastructure. One such service is Convoy NFS. To use it, I started out by going to the Catalog menu option in the Rancher UI, then selecting Convoy NFS. In the following screen I specified proj1-development-nfs as the stack name, as well as the NFS server's IP address and mount point.


Note that I had already set up an EC2 instance to act as an NFS server. I attached an EBS volume per project/environment. So in the example above, I exported a directory called /nfs/development/proj1.
After launching the NFS stack, you should see it in the Stacks screen in the Rancher UI. The stack will consist of 2 services, one called convoy-nfs and the other called convoy-nfs-storagepool:

Once the NFS stack is up and running, you can export its configuration as explained above.

To create or update a stack programmatically, I used the rancher-compose utility and wrapped it inside shell scripts. Here is an example of a shell script that calls rancher-compose to create an NFS stack:
$ cat rancher-nfssetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-nfs --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-nfssetup.yml --rancher-file rancher-compose.yml $COMMAND

Note that there is no command line option for the target Rancher environment. It suffices to use the Rancher API keys for a given environment in order to target that environment.

Here is the docker-compose file for this stack, which I obtained by exporting the stack configuration from the UI:
$ cat docker-compose-nfssetup.yml
convoy-nfs-storagepool:
  labels:
    io.rancher.container.create_agent: 'true'
  command:
    - storagepool-agent
  image: rancher/convoy-agent:v0.9.0
  volumes:
    - /var/run:/host/var/run
    - /run:/host/run
convoy-nfs:
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.container.create_agent: 'true'
  command:
    - volume-agent-nfs
  image: rancher/convoy-agent:v0.9.0
  pid: host
  privileged: true
  volumes:
    - /lib/modules:/lib/modules:ro
    - /proc:/host/proc
    - /var/run:/host/var/run
    - /run:/host/run
    - /etc/docker/plugins:/etc/docker/plugins
Here is the portion of my rancher-compose.yml file that has to do with the NFS stack, again obtained by exporting the NFS stack configuration:
convoy-nfs-storagepool:
  scale: 1
  health_check:
    port: 10241
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET /healthcheck HTTP/1.0
    healthy_threshold: 2
  metadata:
    mount_dir: /nfs/development/proj1
    nfs_server: 172.31.41.108
convoy-nfs:
  health_check:
    port: 10241
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET /healthcheck HTTP/1.0
    healthy_threshold: 2
  metadata:
    mount_dir: /nfs/development/proj1
    nfs_server: 172.31.41.108
    mount_opts: ''

To create the NFS stack, all I need to do at this point is to call:

$ ./rancher-nfssetup.sh up

To inspect the logs for the stack, I can call:

$ ./rancher-nfssetup.sh logs

Note that I passed various arguments to the rancher-compose utility. Most of them are specified as environment variables. This allows me to add the bash script to version control without worrying about credentials, secrets etc. I also use the --env-file .envvars option, which allows me to define environment variables in the .envvars file and have them interpolated by rancher-compose in the various yml files it uses.
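
For completeness, a hypothetical .envvars file might look like this (all values are placeholders):

AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
MYSQL_ROOT_PASSWORD=changeme
MYSQL_DATABASE=proj1db
MYSQL_USER=proj1user
MYSQL_PASSWORD=changeme
MYSQL_DUMP_FILE=proj1.sql
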
Creating volumes using the NFS stack
One of my goals was to attach NFS-based volumes to Docker containers in my infrastructure. To do this, I needed to create volumes in Rancher. One way to do it is to go to Infrastructure -> Storage in the Rancher UI, then go to the area corresponding to the NFS stack you want and click Add Volume, giving the volume a name and a description. Doing it manually is well and good, but I wanted to do it automatically, so I used another bash script around rancher-compose together with another docker-compose file:
$ cat rancher-volsetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-volsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-volsetup.yml --rancher-file rancher-compose.yml $COMMAND

$ cat docker-compose-volsetup.yml
volsetup:
  image: ubuntu:14.04
  labels:
    io.rancher.container.start_once: true
  volumes:
    - volMysqlData:/var/lib/mysql
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs
A few things to note in the docker-compose-volsetup.yml file:
  • I used the ubuntu:14.04 Docker image and I attached two volumes, one called volMysqlData and one called volAppShared. The first one will be mounted in the Docker container as /var/lib/mysql and the second one will be mounted as /var/www/shared. These are arbitrary paths, since my goal was just to create the volumes as Rancher resources.
  • I wanted the volsetup service to run once so that the volumes get created, then stop. For that, I used the special Rancher label io.rancher.container.start_once: true
  • I used as the volume_driver the NFS stack proj1-development-nfs I created above. This is important, because I want these volumes to be created within this NFS stack.
I used the following commands to create and start the proj1-development-volsetup stack, then to show its logs, and finally to shut it down and remove its containers, which are not needed anymore once the volumes get created:

./rancher-volsetup.sh up -d
sleep 30
./rancher-volsetup.sh logs
./rancher-volsetup.sh down
./rancher-volsetup.sh rm --force
I haven't figured out yet how to remove a Rancher stack programmatically, so for these 'helper' type stacks I had to use the Rancher UI to delete them.

At this point, if you look in the /nfs/development/proj1 directory on the NFS server, you should see 2 directories with the same names as the volumes we created.
Creating a database stack
So far I haven't used any custom Docker images. For the database layer of my application, I will want to use a custom image which I will push to the Amazon ECR registry. I will use this image in a docker-compose file in order to set up and start the database in Rancher.
I have a directory called db containing the following Dockerfile:
$ cat Dockerfile
FROM percona

VOLUME /var/lib/mysql

COPY etc/mysql/my.cnf /etc/mysql/my.cnf
COPY scripts/db_setup.sh /usr/local/bin/db_setup.sh

I have a customized MySQL configuration file my.cnf (in my local directory db/etc/mysql) which gets copied to the Docker image as /etc/mysql/my.cnf. I also have a db_setup.sh bash script in my local directory db/scripts which gets copied to /usr/local/bin in the Docker image. In this script I grant rights to a MySQL user used by the Web app, and I also load a MySQL dump file if it exists:
$ cat scripts/db_setup.sh
#!/bin/bash

set -e

host="$1"

until mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD -e "SHOW DATABASES"; do
  >&2 echo "MySQL is unavailable - sleeping"
  sleep 1
done

>&2 echo "MySQL is up - executing GRANT statement"

mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD \
  -e "GRANT ALL ON $MYSQL_DATABASE.* TO $MYSQL_USER@'%' IDENTIFIED BY \"$MYSQL_PASSWORD\""

>&2 echo "Starting to load SQL dump"

mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD $MYSQL_DATABASE < /dbdump/$MYSQL_DUMP_FILE

>&2 echo "Finished loading SQL dump"
Note that the database name, database user name and password, as well as the MySQL root password are all passed in environment variables.
To build this Docker image, I ran:
$ docker build -t my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development .
Note that I tagged the image with the proj1-development tag.
To push this image to Amazon ECR, I first called:
$(aws ecr get-login)
then:
$ docker push my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
To run the db_setup.sh script inside a Docker container in order to set up the database, I put together the following docker-compose file:
$ cat docker-compose-dbsetup.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

db:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.affinity:host_label: dbsetup=proj1
  volumes:
    - volMysqlData:/var/lib/mysql
  volume_driver: proj1-development-nfs
  environment:
    - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD

dbsetup:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.start_once: true
    io.rancher.scheduler.affinity:host_label: dbsetup=proj1
  command: /usr/local/bin/db_setup.sh db
  links:
    - db:db
  volumes:
    - volMysqlData:/var/lib/mysql
    - /dbdump/proj1:/dbdump
  volume_driver: proj1-development-nfs
  environment:
    - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    - MYSQL_DATABASE=$MYSQL_DATABASE
    - MYSQL_USER=$MYSQL_USER
    - MYSQL_PASSWORD=$MYSQL_PASSWORD
    - MYSQL_DUMP_FILE=$MYSQL_DUMP_FILE
A few things to note:
  • there are 3 services in this docker-compose file
    • an ECRCredentials service which connects to Amazon ECR and allows the ECR image db:proj1-development to be used by the other 2 services
    • a db service which runs a Docker container based on the db:proj1-development ECR image, and which launches a MySQL database with the root password set to the value of the MYSQL_ROOT_PASSWORD environment variable
    • a dbsetup service that also runs a Docker container based on the db:proj1-development ECR image, but instead of the default command, which would run MySQL, it runs the db_setup.sh script (specified in the command directive); this service also uses environment variables specifying the database to be loaded from the SQL dump file, as well as the user and password that will get grants to that database
  • the dbsetup service links to the db service via the links directive
  • the dbsetup service is a 'run once then stop' type of service, which is why it has the label io.rancher.container.start_once: true attached
  • both the db and the dbsetup service will run on a Rancher host with the label 'dbsetup=proj1'; this is because we want to load the SQL dump from a file that the dbsetup service can find
    • we will put this file on a specific Rancher host in a directory called /dbdump/proj1, which will then be mounted by the dbsetup container as /dbdump
    • the db_setup.sh script will then load the SQL file called MYSQL_DUMP_FILE from the /dbdump directory
    • this can also work if we'd just put the SQL file in the same NFS volume as the MySQL data files, but I wanted to experiment with host labels in this case
  • wherever NFS volumes are used, for example for volMysqlData, the volume_driver needs to be set to the proper NFS stack, proj1-development-nfs in this case
It goes without saying that mounting the MySQL data files from NFS is a potential performance bottleneck, so you probably wouldn't do this in production. I wanted to experiment with NFS in Rancher, and the performance I've seen in development and staging for some of our projects doesn't seem too bad.
To run a Rancher stack based on this docker-compose-dbsetup.yml file, I used this bash script:
$ cat rancher-dbsetup.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-dbsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-dbsetup.yml --rancher-file rancher-compose.yml $COMMAND
Note that all environment variables referenced in the docker-compose-dbsetup.yml file are set in the .envvars file.
I wanted to run the proj1-development-dbsetup stack and then shut down its services once the dbsetup service completes.  I used these commands as part of a bash script:
./rancher-dbsetup.sh up -d
while :
do
        ./rancher-dbsetup.sh logs --lines "10" > dbsetup.log 2>&1
        grep 'Finished loading SQL dump' dbsetup.log
        result=$?
        if [ $result -eq 0 ]; then
            break
        fi
        echo Waiting 10 seconds for DB load to finish...
        sleep 10
done
./rancher-dbsetup.sh logs
./rancher-dbsetup.sh down
./rancher-dbsetup.sh rm --force
Once the database is set up, I want to launch MySQL and keep it running so it can be used by the Web application. I have a separate docker-compose file for that:
$ cat docker-compose-dblaunch.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true
db:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
  volumes:
    - volMysqlData:/var/lib/mysql
  volume_driver: proj1-development-nfs
The db service is similar to the one in the docker-compose-dbsetup.yml file. In this case the database is all set up, so we don't need anything except the NFS volume to mount the MySQL data files from.
As usual, I have a bash script that calls rancher-compose in order to create a stack called proj1-development-database:
$ cat rancher-dblaunch.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-database --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-dblaunch.yml --rancher-file rancher-compose.yml $COMMAND
I call this script like this:
./rancher-dblaunch.sh up -d
At this point, the proj1-development-database stack is up and running and contains the db service running as a container on one of the Rancher hosts in the Rancher 'proj1dev' environment.
Creating a Web application stack

So far, I've been using either off-the-shelf or slightly customized Docker images. For the Web application stack I will be using more heavily customized images. The building block is a 'base' image whose Dockerfile contains directives for installing commonly used packages and for adding users.

Here is the Dockerfile for a 'base' image running Ubuntu 14.04:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y ntp build-essential binutils zlib1g-dev \
                       git acl cronolog lzop unzip mcrypt expat xsltproc python-pip curl language-pack-en-base
RUN pip install awscli

RUN adduser --uid 501 --ingroup www-data --shell /bin/bash --home /home/myuser myuser
RUN mkdir /home/myuser/.ssh
COPY files/myuser_authorized_keys /home/myuser/.ssh/authorized_keys
RUN chown -R myuser:www-data /home/myuser/.ssh && \
    chmod 700 /home/myuser/.ssh && \
    chmod 600 /home/myuser/.ssh/authorized_keys 

When I built this image, I tagged it as my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/base:proj1-development.

Here is the Dockerfile for an image (based on the base image above) that installs Apache, PHP 5.6 (using a custom apt repository), RVM, Ruby and the compass gem:

FROM  my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/base:proj1-development

RUN export LC_ALL=en_US.UTF-8 && export LANG=en_US.UTF-8 && \
        apt-get install -y mysql-client-5.6 software-properties-common && add-apt-repository ppa:ondrej/php5-5.6

RUN apt-get update && \
    apt-get install -y --allow-unauthenticated apache2 apache2-utils libapache2-mod-php5 \
                       php5 php5-mcrypt php5-curl php-pear php5-gd \
                       php5-dev php5-mysql php5-readline php5-xsl php5-xmlrpc php5-intl

# Install composer
RUN curl -sSL https://getcomposer.org/composer.phar -o /usr/bin/composer \
    && chmod +x /usr/bin/composer \
    && composer selfupdate

# Install rvm and compass gem for SASS image compilation

RUN curl https://raw.githubusercontent.com/rvm/rvm/master/binscripts/rvm-installer -o /tmp/rvm-installer.sh && \
        chmod 755 /tmp/rvm-installer.sh && \
        gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3 && \
        /tmp/rvm-installer.sh stable --path /home/myuser/.rvm --auto-dotfiles --user-install && \
        /home/myuser/.rvm/bin/rvm get stable && \
        /home/myuser/.rvm/bin/rvm reload && \
        /home/myuser/.rvm/bin/rvm autolibs 3

RUN /home/myuser/.rvm/bin/rvm install ruby-2.2.2  && \
        /home/myuser/.rvm/bin/rvm alias create default ruby-2.2.2 && \
        /home/myuser/.rvm/wrappers/ruby-2.2.2/gem install bundler && \
        /home/myuser/.rvm/wrappers/ruby-2.2.2/gem install compass

COPY files/apache2-foreground /usr/local/bin/
EXPOSE 80
CMD ["apache2-foreground"]

When I built this image, I tagged it as  my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development
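One piece not reproduced above is the files/apache2-foreground script that the Dockerfile copies in and uses as the CMD. A plausible minimal version, modeled on what the official Apache-based images do (the actual file in my repo may differ), is simply:

#!/bin/bash
# apache2-foreground -- hypothetical minimal version; the real file in the repo may differ
set -e

# pick up APACHE_RUN_USER, APACHE_PID_FILE and friends
source /etc/apache2/envvars

# remove a stale pid file left over from an unclean container stop
rm -f "$APACHE_PID_FILE"

# keep Apache in the foreground so the container stays up
exec apache2 -DFOREGROUND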

With these 2 images as building blocks, I put together 2 more images, one for building artifacts for the Web application, and one for launching it.

Here is the Dockerfile for an image that builds the artifacts for the Web application:

FROM my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development

ADD ./scripts/app_setup.sh /usr/local/bin/

The heavy lifting takes place in the app_setup.sh script. That's where you would do things such as pulling a specified git branch of the application repo from GitHub, then running composer (if it's a PHP app) or other build tools in order to generate the artifacts necessary for running the application. At the end of this script, I generate a tar.gz of the code plus any artifacts and upload it to S3 so I can use it when I generate the Docker image for the Web app.
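The real app_setup.sh is project specific, so I won't reproduce it, but a skeleton of the flow just described might look like the following (the paths, the composer flags and the final Apache restart are assumptions on my part):

#!/bin/bash
# app_setup.sh -- skeleton only; the real script is project specific
set -e

# pull the requested branch of the application repository
git clone --branch "$GIT_BRANCH" "$GIT_URL" /tmp/app
cd /tmp/app

# build artifacts (composer for a PHP app; other projects use other build tools)
composer install --no-dev --no-interaction

# package code + artifacts under a top-level release/ directory
mkdir -p /tmp/release
cp -a /tmp/app/. /tmp/release/
cd /tmp
tar cfz "$AWS_S3_RELEASE_FILENAME.tar.gz" release

# upload the release to S3 using the S3-specific credentials
export AWS_ACCESS_KEY_ID=$AWS_S3_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_S3_SECRET_ACCESS_KEY
aws s3 --region "$AWS_S3_REGION" cp "$AWS_S3_RELEASE_FILENAME.tar.gz" \
    "s3://$AWS_S3_RELEASE_BUCKET/$AWS_S3_RELEASE_FILENAME.tar.gz"

# the deployment wrapper shown later greps the logs for this Apache restart message
service apache2 restart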

When I built this image, I tagged it as  my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/appsetup:proj1-development

To actually run a Docker container based on the appsetup image, I used this docker-compose file:

$ cat docker-compose-appsetup.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

appsetup:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/appsetup:proj1-development
  labels:
    io.rancher.container.pull_image: always
  command: /usr/local/bin/app_setup.sh
  external_links:
    - proj1-development-database/db:db
  volumes:
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs
  environment:
    - GIT_URL=$GIT_URL
    - GIT_BRANCH=$GIT_BRANCH
    - AWS_S3_REGION=$AWS_S3_REGION
    - AWS_S3_ACCESS_KEY_ID=$AWS_S3_ACCESS_KEY_ID
    - AWS_S3_SECRET_ACCESS_KEY=$AWS_S3_SECRET_ACCESS_KEY
    - AWS_S3_RELEASE_BUCKET=$AWS_S3_RELEASE_BUCKET
    - AWS_S3_RELEASE_FILENAME=$AWS_S3_RELEASE_FILENAME

Some things to note:
  • the command executed when a Docker container based on the appsetup service is launched is /usr/local/bin/app_setup.sh, as specified in the command directive
    • the app_setup.sh script runs commands that connect to the database, hence the need for the appsetup service to link to the MySQL database running in the proj1-development-database stack launched above; for that, I used the external_links directive
  • the appsetup service mounts an NFS volume (volAppShared) as /var/www/shared
    • the volume_driver needs to be proj1-development-nfs
    • before running the service, I created the proper application configuration files under /nfs/development/proj1/volAppShared on the NFS server, specifying things such as the database server name (which needs to be 'db', since that is the name the database container is linked as), the database name, the user name and the password, etc. (see the sketch after this list)
  • the appsetup service uses various environment variables referenced in the environment directive; it will pass these variables to the app_setup.sh script
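To illustrate the configuration file point, here is roughly what seeding the shared volume on the NFS server could look like (the file name db_settings.php, the PHP constants and the values are made up for this example; the real format depends on the application):

# on the NFS server; the file ends up in the containers under /var/www/shared
sudo mkdir -p /nfs/development/proj1/volAppShared
sudo tee /nfs/development/proj1/volAppShared/db_settings.php > /dev/null << 'EOF'
<?php
// 'db' is the name the database container is linked as
define('DB_HOST', 'db');
define('DB_NAME', 'proj1');
define('DB_USER', 'proj1_user');
define('DB_PASSWORD', 'some-user-password');
EOF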
To run the appsetup service, I used another bash script around the rancher-compose command:
$ cat rancher-appsetup.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-appsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-appsetup.yml --rancher-file rancher-compose.yml $COMMAND

Tip: When using its Cattle cluster management engine, Rancher does not add services linked to each other as static entries in /etc/hosts on the containers. Instead, it provides an internal DNS service so that containers in the same environment can reach each other by DNS names as long as they link to each other in docker-compose files. If you go to a shell prompt inside a container, you can ping other containers by name even from one Rancher stack to another. For example, from a web container in the proj1-development-app stack you can ping a database container in the proj1-development-database stack linked in the docker-compose file as db and you would get back a name of the type db.proj1-development-app.rancher.internal.
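A quick way to check this from a shell inside a container (assuming ping and the usual glibc tools are present in the image, as they are in the Ubuntu-based images above):

# from a shell inside an app container
ping -c 1 db          # resolves via Rancher's internal DNS
getent hosts db       # shows the IP the name resolves to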
Tip: There is no need to expose ports from containers within the same Rancher environment. I spent many hours troubleshooting issues related to ports and making sure ports are unique across stacks, only to realize that the internal ports that the services listen on (3306 for MySQL, 80 and 443 for Apache) are reachable from the other containers in the same Rancher environment. The only ports you need exposed to the external world in the architecture I am describing are the load balancer ports, as I'll describe below.
Here is the Dockerfile for an image that runs the Web application:
FROM my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development
# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive

RUN a2enmod headers \
&& a2enmod rewrite \
&& a2enmod ssl

RUN rm -rf /etc/apache2/ports.conf /etc/apache2/sites-enabled/*
ADD etc/apache2/sites-enabled /etc/apache2/sites-enabled
ADD etc/apache2/ports.conf /etc/apache2/ports.conf

ADD release /var/www/html/release
RUN chown -R myuser:www-data /var/www/html/release
This image is based on the apache-php image but adds Apache customizations, as well as the release directory obtained from the tar.gz file uploaded to S3 by the appsetup service.

When I built this image, I tagged it as  my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development

Code deployment

My code deployment process is a bash script (which can be used standalone, or as part of a Jenkins job, or can be turned into a Jenkins pipeline) that first runs the appsetup service in order to generate a tar.gz of the code and artifacts, then downloads it from S3 and uses it as the local release directory to be copied into the app image. The script then pushes the app Docker image to Amazon ECR. The environment variables are either defined in an .envvars file or passed via Jenkins parameters. The script assumes that the Dockerfile for the app image is in the current directory, and that the etc directory structure used for the Apache files in the app image is also in the current directory (they are all checked into the project repository, so Jenkins will find them).

./rancher-appsetup.sh up -d
sleep 20
cp /dev/null appsetup.log
while :
do
        ./rancher-appsetup.sh logs >> appsetup.log 2>&1
        grep 'Restarting web server apache2' appsetup.log
        result=$?
        if [ $result -eq 0 ]; then
            break
        fi
        echo Waiting 10 seconds for app code deployment to finish...
        sleep 10
done
./rancher-appsetup.sh logs
./rancher-appsetup.sh down
./rancher-appsetup.sh rm --force
# download release.tar.gz from S3 and unpack it
set -a
. .envvars
set +a
export AWS_ACCESS_KEY_ID=$AWS_S3_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_S3_SECRET_ACCESS_KEY
rm -rf $AWS_S3_RELEASE_FILENAME.tar.gz
aws s3 --region $AWS_S3_REGION ls s3://$AWS_S3_RELEASE_BUCKET/
aws s3 --region $AWS_S3_REGION cp s3://$AWS_S3_RELEASE_BUCKET/$AWS_S3_RELEASE_FILENAME.tar.gz .
tar xfz $AWS_S3_RELEASE_FILENAME.tar.gz
# build app docker image and push it to ECR
cat << "EOF" > awscreds[default]aws_access_key_id=$AWS_ACCESS_KEY_IDaws_secret_access_key=$AWS_SECRET_ACCESS_KEYEOF
export AWS_SHARED_CREDENTIALS_FILE=./awscreds $(aws ecr --region=$AWS_REGION get-login)/usr/bin/docker build -t my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development ./usr/bin/docker push my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development

Launching the app service

At this point, the Docker image for the app service has been pushed to Amazon ECR, but the service itself hasn't been started. To do that, I use this docker-compose file:

$ cat docker-compose-app.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

app:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development
  labels:
    io.rancher.container.pull_image: always
  external_links:
    - proj1-development-database/db:db
  volumes:
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs

Nothing very different about this file compared to the ones I've shown so far. The app service mounts the volAppShared NFS volume as /var/www/shared, and links to the MySQL database service db already running in the proj1-development-database Rancher stack, giving it the name 'db'.

To run the app service, I use this bash script wrapping rancher-compose:

$ cat rancher-app.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-app --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-app.yml --rancher-file rancher-compose.yml $COMMAND

Since the proj1-development-app stack may already be running with an old version of the app Docker image, I will invoke rancher-app.sh with the force-upgrade option of the rancher-compose command:

./rancher-app.sh up -d --force-upgrade --confirm-upgrade --pull --batch-size "1"

This will perform a rolling upgrade of the app service, by stopping the containers for the app service one at a time (as indicated by the batch-size parameter), then pulling the latest Docker image for the app service, and finally starting each container again. Speaking of 'containers' plural, you can indicate how many containers should run at all times for the app service by adding these lines to rancher-compose.yml:

app:
  scale: 2

In my case, I want 2 containers to run at all times. If you stop one container from the Rancher UI, you will see another one restarted automatically by Rancher in order to preserve the value specified for the 'scale' parameter.

Creating a load balancer stack

When I started to run load balancers in Rancher, I created them via the Rancher UI. I created a new stack, then added a load balancer service to it. It took me a while to figure out that I can then export the stack configuration and generate a docker-compose file and a rancher-compose snippet I can add to my main rancher-compose.yml file.

Here is the docker-compose file I use:

$ cat docker-compose-lbsetup.yml
lb:
  ports:
  - 8000:80
  - 8001:443
  external_links:
  - proj1-development-app/app:app
  labels:
    io.rancher.loadbalancer.ssl.ports: '8001'
    io.rancher.loadbalancer.target.proj1-development-app/app: proj1.dev.mydomain.com:8000=80,8001=443
  tty: true
  image: rancher/load-balancer-service
  stdin_open: true

The ports directive tells the load balancer which ports to expose externally and which ports to map them to. This example shows that port 8000 will be exposed externally and mapped to port 80 on the target service, and port 8001 will be exposed externally and mapped to port 443 on the target service.

The external_links directive tells the load balancer which service to load balance. In this example, it is the app service in the proj1-development-app stack.

The labels directive configures layer 7 load balancing by letting you route requests for a given domain name to a specific port on the target containers. In this example, I want to send HTTP requests coming on port 8000 for proj1.dev.mydomain.com to port 80 on the target containers for the app service, and HTTPS requests coming on port 8001 for the same proj1.dev.mydomain.com name to port 443 on the target containers.

I could have also added a new line under labels, specifying that I want requests for proj1-admin.dev.mydomain.com coming on port 8000 to be sent to a different port on the target containers, assuming that I had Apache configured to listen on that port. You can read more about the load balancing features available in Rancher in the documentation.

Here is the load balancer section in rancher-compose.yml:

lb:
  scale: 2
  load_balancer_config:
    haproxy_config: {}
  default_cert: proj1.dev.mydomain.com
  health_check:
    port: 42
    interval: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    response_timeout: 2000

Note that there is a mention of a default_cert. This is an SSL key + cert that I uploaded to Rancher via the UI by going to Infrastructure -> Certificates and that I named proj1.dev.mydomain.com. The Rancher Catalog does contain an integration for Let's Encrypt but I haven't had a chance to test it yet (from the Rancher Catalog: "The Let's Encrypt Certificate Manager obtains a free (SAN) SSL Certificate from the Let's Encrypt CA and adds it to Rancher's certificate store. Once the certificate is created it is scheduled for auto-renewal 14-days before expiration. The renewed certificate is propagated to all applicable load balancer services.")

Note also that the scale value is 2, which means that there will be 2 containers for the lb service.

Tip: In the Rancher UI, you can open a shell into any container, or view the logs for any container, by going to the Settings icon of that container and choosing Execute Shell or View Logs.

Tip: Rancher load balancers are based on haproxy. You can open a shell into a container running for the lb service, then look at the haproxy configuration file in /etc/haproxy/haproxy.cfg. To troubleshoot haproxy issues, you can enable UDP logging in /etc/rsyslog.conf by removing the comments before the following 2 lines:

#$ModLoad imudp
#$UDPServerRun 514

then restarting the rsyslog service. Then you can restart the haproxy service and inspect its log file in /var/log/haproxy.log.
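Put together, and assuming sysvinit-style service wrappers are available inside the lb container, the steps look roughly like this:

# inside an lb container
sed -i 's/^#\(\$ModLoad imudp\)/\1/' /etc/rsyslog.conf
sed -i 's/^#\(\$UDPServerRun 514\)/\1/' /etc/rsyslog.conf

service rsyslog restart
service haproxy restart

# watch haproxy activity
tail -f /var/log/haproxy.log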
To run the lb service, I use this bash script:

$ cat rancher-lbsetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-lb --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-lbsetup.yml --rancher-file rancher-compose.yml $COMMAND

I want to do a rolling upgrade of the lb service in case anything has changed, so I invoke the rancher-compose wrapper script in a similar way to the one for the app service:

./rancher-lbsetup.sh up -d --force-upgrade --confirm-upgrade --batch-size "1"

Putting it all together in Jenkins

First I created a GitHub repository with the following structure:

  • All docker-compose-*.yml files
  • The rancher-compose.yml file
  • All rancher-*.sh bash scripts wrapping the rancher-compose command
  • A directory for the base Docker image (containing its Dockerfile and any other files that need to go into that image)
  • A directory for the apache-php Docker image
  • A directory for the db Docker image
  • A directory for the appsetup Docker image
  • A Dockerfile in the current directory for the app Docker image
  • An etc directory in the current directory used by the Dockerfile for the app image

Each project/environment combination has a branch created in this GitHub repository. For example, for the proj1 development environment I would create a proj1dev branch which would then contain any customizations I need for this project -- usually stack names, Docker tags, Apache configuration files under the etc directory.

My end goal was to use Jenkins to drive the launching of the Rancher services and the deployment of the code. Eventually I will use a Jenkins Pipeline to string together the various steps of the workflow, but for now I have 5 individual Jenkins jobs which all check out the proj1dev branch of the GitHub repo above. The jobs contain shell-type build steps where I actually call the various rancher bash scripts around rancher-compose. The Jenkins jobs also take parameters corresponding to the environment variables used in the docker-compose files and in the rancher bash scripts. I also use the Credentials section in Jenkins to store any secrets such as the Rancher API keys, AWS keys, S3 keys, ECR keys etc. On the Jenkins master and executor nodes I installed the rancher and rancher-compose CLI utilities (I downloaded the rancher CLI from the footer of the Rancher UI).

Job #1 builds the Docker images discussed above: base, apache-php, db, and appsetup (but not the app image yet).

Job #2 runs rancher-nfssetup.sh and rancher-volsetup.sh in order to set up the NFS stack and the volumes used by the dbsetup, appsetup, db and app services.

Job #3 runs rancher-dbsetup.sh and rancher-dblaunch.sh in order to set up the database via the dbsetup service, then launch the db service.

At this point, everything is ready for deployment of the application.

Job #4 is the code deployment job. It runs the sequence of steps detailed in the Code Deployment section above.

Job #5 is the rolling upgrade job for the app service and the lb service. If those services have never been started before, they will get started. If they are already running, they will be upgraded in a rolling fashion, batch-size containers at a time as I detailed above.

When a new code release needs to be pushed to the proj1dev Rancher environment, I would just run job #4 followed by job #5. Obviously you can string these jobs together in a Jenkins Pipeline, which I intend to do next.
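Until that Pipeline exists, chaining the jobs by hand amounts to running the wrapper scripts in order. A rough sketch of the sequence (job boundaries in comments; run-dbsetup.sh and deploy.sh are placeholder names for the dbsetup wait-loop and code deployment scripts shown earlier):

#!/bin/bash
# rough equivalent of running Jenkins jobs #2 through #5 in sequence
# (job #1, the docker build/push of the base, apache-php, db and appsetup images, is omitted)
set -e

# job #2: NFS stack and named volumes
./rancher-nfssetup.sh up -d
./rancher-volsetup.sh up -d

# job #3: load the SQL dump via the dbsetup service, then launch the db service
./run-dbsetup.sh              # placeholder for the dbsetup wait-loop script shown earlier
./rancher-dblaunch.sh up -d

# job #4: build the release tar.gz via appsetup and push the app image to ECR
./deploy.sh                   # placeholder for the code deployment script shown earlier

# job #5: rolling upgrade of the app and lb services
./rancher-app.sh up -d --force-upgrade --confirm-upgrade --pull --batch-size "1"
./rancher-lbsetup.sh up -d --force-upgrade --confirm-upgrade --batch-size "1"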

Some more Rancher tips and tricks
