How-To – Use the Kubernetes/OpenShift watch parameter in the REST interface

Some api or oapi calls support the watch parameter.

E.g.: https://docs.openshift.com/enterprise/3.1/rest_api/openshift_v1.html#list-or-watch-objects-of-kind-route

list or watch objects of kind Route

GET /oapi/v1/namespaces/{namespace}/routes

Parameters

Type            Name             Description                                                            Required  Schema
QueryParameter  pretty           If ‘true’, then the output is pretty printed.                          false     string
QueryParameter  labelSelector    A selector to restrict the list of returned objects by their labels.   false     string
                                 Defaults to everything.
QueryParameter  fieldSelector    A selector to restrict the list of returned objects by their fields.   false     string
                                 Defaults to everything.
QueryParameter  watch            Watch for changes to the described resources and return them as a      false     boolean
                                 stream of add, update, and remove notifications. Specify
                                 resourceVersion.
QueryParameter  resourceVersion  When specified with a watch call, shows changes that occur after       false     string
                                 that particular version of a resource. Defaults to changes from
                                 the beginning of history.

Let’s check this route:

$ oc get route helloworld-route
NAME               HOST/PORT                                           PATH      SERVICE      LABELS           INSECURE POLICY   TLS TERMINATION
helloworld-route   spring-boot-helloworld.plainjava.appad4.tsi-af.de             helloworld   app=helloworld                    

 

As a user with the necessary rights for this project/namespace, you can fetch the existing route object with:

$ curl -k -H "Authorization: Bearer $(oc whoami -t)" -X GET "https://172.30.0.1/oapi/v1/namespaces/plainjava/routes/helloworld-route"
{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
    "name": "helloworld-route",
    "namespace": "plainjava",
    "selfLink": "/oapi/v1/namespaces/plainjava/routes/helloworld-route",
    "uid": "480cbb83-4e5c-11e6-885c-0050560461a7",
    "resourceVersion": "11567709",
    "creationTimestamp": "2016-07-20T09:28:13Z",
    "labels": {
      "app": "helloworld"
    }
  },
  "spec": {
    "host": "spring-boot-helloworld.plainjava.appad4.tsi-af.de",
    "to": {
      "kind": "Service",
      "name": "helloworld"
    },
    "port": {
      "targetPort": 8080
    }
  },
  "status": {}
}

 

Now set a watch, then modify the host of the route and change it back again. Check the MODIFIED events for the changed host.
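
Such a watch can be established with a call like the following (a sketch; curl’s -N disables output buffering so that the events show up as they arrive, and without resourceVersion the stream starts with ADDED events for the already existing objects):

$ curl -k -N -H "Authorization: Bearer $(oc whoami -t)" "https://172.30.0.1/oapi/v1/namespaces/plainjava/routes?watch=true"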

{"type":"ADDED","object":{"kind":"Route","apiVersion":"v1","metadata":{"name":"helloworld-route","namespace":"plainjava","selfLink":"/oapi/v1/namespaces/plainjava/routes/helloworld-route","uid":"480cbb83-4e5c-11e6-885c-0050560461a7","resourceVersion":"11567709","creationTimestamp":"2016-07-20T09:28:13Z","labels":{"app":"helloworld"}},"spec":{"host":"spring-boot-helloworld.plainjava.appad4.tsi-af.de","to":{"kind":"Service","name":"helloworld"},"port":{"targetPort":8080}},"status":{}}}
{"type":"MODIFIED","object":{"kind":"Route","apiVersion":"v1","metadata":{"name":"helloworld-route","namespace":"plainjava","selfLink":"/oapi/v1/namespaces/plainjava/routes/helloworld-route","uid":"480cbb83-4e5c-11e6-885c-0050560461a7","resourceVersion":"14600520","creationTimestamp":"2016-07-20T09:28:13Z","labels":{"app":"helloworld"}},"spec":{"host":"spring-boot-helloworld-1.plainjava.appad4.tsi-af.de","to":{"kind":"Service","name":"helloworld"},"port":{"targetPort":8080}},"status":{}}}
{"type":"MODIFIED","object":{"kind":"Route","apiVersion":"v1","metadata":{"name":"helloworld-route","namespace":"plainjava","selfLink":"/oapi/v1/namespaces/plainjava/routes/helloworld-route","uid":"480cbb83-4e5c-11e6-885c-0050560461a7","resourceVersion":"14600675","creationTimestamp":"2016-07-20T09:28:13Z","labels":{"app":"helloworld"}},"spec":{"host":"spring-boot-helloworld.plainjava.appad4.tsi-af.de","to":{"kind":"Service","name":"helloworld"},"port":{"targetPort":8080}},"status":{}}}

How-To – Use supervisord in Docker Images

Supervisord is “a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.”

  • To install it via your Dockerfile, get it from the EPEL repository – you have to enable EPEL first, of course.
    FROM rhel7
    ...
    RUN yum install -y --enablerepo=epel supervisor
  • The config is in /etc/supervisord.conf.
    You have to set “nodaemon=true” so that supervisord starts in the foreground:

    [supervisord]
    nodaemon=true
    ...
  • It is important that Unix signals are passed through to supervisord, so that no zombie processes are left behind when the container is deleted.
    • See this blog post about signals in docker containers.
    • Supervisord handles its subprocesses according to the signals it receives. So it is important that it runs as PID 1 in the container and is not started by a shell.
      • So verify that supervisord is started in the exec form, for example as sketched below.
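        A minimal sketch of the exec form in the Dockerfile (assuming supervisord from EPEL with its default config path):
        CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]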
    • If you manage a shell script with supervisord, ensure that you catch the relevant signals within your script and proceed accordingly:
      ...
      function clean_up {
              # Perform program cleanup
              ...
              exit 0
      }
      trap clean_up SIGHUP SIGINT SIGTERM
  • If you want to manage a daemon process with supervisord, you have to ensure that it runs in the foreground – most daemon start commands support this. Otherwise, you could use a wrapper script like the one below. See this post.
    #!/usr/bin/env bash
    set -eu
    pidfile="/var/run/your-daemon.pid"
    command=/usr/sbin/your-daemon
    # Proxy signals
    function kill_app(){
        kill $(cat $pidfile)
        exit 0 # exit okay
    }
    trap "kill_app" SIGINT SIGTERM
    # Launch the daemon (it forks into the background itself and writes its pidfile;
    # don't use 'exec' here, or the watch loop below would never be reached)
    $command
    sleep 2
    # Loop while the pidfile and the process exist
    while [ -f $pidfile ] && kill -0 $(cat $pidfile) ; do
        sleep 0.5
    done
    exit 1 # unexpected exit (non-zero, so supervisord restarts the program)

    “kill -0” doesn’t send any signal; it only checks whether the process exists and whether the permissions are sufficient to signal it.

  • Don’t use “sleep inf” or “tail -f /dev/null” at the end of your scripts to keep them from exiting. They won’t pass Unix signals on. Instead, as described above, use a loop with a short sleep.
  • Also important to mention is that the default behaviour of supervisord is to NOT restart programs that finish with an exit code of “0”.
    • So ensure that, in case of crashes, your programs, daemons etc. exit with a non-zero status, so that supervisord knows it has to restart them (see the program section sketch after the test script below).
  • Supervisord will not end when all programs have finished. So even if all your programs exited with code 0, the supervisor process keeps running. If you want a different behaviour, you have to implement an event listener, for example as sketched below.
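    A sketch of registering such an event listener in supervisord.conf – the listener script path is hypothetical; it must implement the supervisor event protocol and could, for example, call “supervisorctl shutdown” once all programs have exited:
    [eventlistener:exit_watcher]
    command=/usr/local/bin/exit_watcher
    events=PROCESS_STATE_EXITED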
  • If supervisord is killed, it waits for its programs to end before it terminates.
  • Here’s a test script I used for the investigations
    #!/bin/bash
    set -e
    echo "This program is running as PID $$"
    function trap_with_arg() {
        func="$1" ; shift
        for sig ; do
            trap "$func $sig" "$sig"
        done
    }
    function clean_up {
        # Perform program exit
        echo "Trapped: $1"
        if [[ "$1" == "SIGTERM" ]] ; then
            exit 0
        else
            exit 1
        fi
    }
    trap_with_arg clean_up SIGHUP SIGINT SIGTERM
    while /bin/true ; do
        sleep 0.5
    done
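    To put such a script under supervisord’s control, a program section like the following could be used (path and name are placeholders; “autorestart=unexpected” restarts the program only when it exits with a non-zero code):
    [program:testscript]
    command=/usr/local/bin/testscript.sh
    autorestart=unexpected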

How-To – Use YUM installer in containers

Using a RHEL base image, you just use yum the “usual” way to install packages for your container.

Since containers should be small and only contain the really necessary packages, there are some best practices.

  1. Enable only the necessary repositories.
    yum install -y --disablerepo="*" --enablerepo="…" …
    Inside a RHEL7 container, subscription-manager is disabled. On the host system you can check the enabled repositories with: subscription-manager repos --list-enabled
  2. Don’t install documentation with your packages, because you might not need it and it just consumes space.
    yum install/update --setopt=tsflags=nodocs …
  3. Check if it makes sense for you to use “delta rpms”: https://www.certdepot.net/rhel7-get-started-delta-rpms/
    So far it is only available for rhel-7-server-rpms:
    yum install -y --setopt=tsflags=nodocs --disablerepo="*" --enablerepo="rhel-7-server-rpms" deltarpm
  4. Change your yum repository settings permanently for further commands, for example to only get updates from rhel-7-server-rpms:
    RUN yum install -y --setopt=tsflags=nodocs --disablerepo="*" --enablerepo="rhel-7-server-rpms" yum-utils && \
        yum-config-manager --disable "*" && \
        yum-config-manager --enable rhel-7-server-rpms && \
        yum update -y --setopt=tsflags=nodocs
  5. Use the provided PGP keys (check /etc/pki/rpm-gpg):

    rpm --import file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
    && rpm --import http://… \
    ...
  6. How to enable EPEL:

    rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    && rpm --import file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 \
    && yum install -y --enablerepo=epel …
  7. Run yum clean all at the end to keep the image small:

    && yum clean all
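
Putting these practices together, a single Dockerfile RUN layer might look like this (a sketch; <your-packages> is a placeholder):

    RUN yum install -y --setopt=tsflags=nodocs \
            --disablerepo="*" --enablerepo="rhel-7-server-rpms" \
            <your-packages> \
        && yum clean all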

How-To – using images from a central registry in production (atomic registry)

This how-to covers what has to be done to pull an image located in the atomic registry into an OpenShift deployment.

General steps:

  • add a pull secret
  • add the pull secret to the build configuration, or to the service account for plain deployments

See also

https://docs.openshift.com/container-platform/3.3/dev_guide/builds.html

Secret to access the private central docker registry:

oc secrets new-dockercfg registry-appaoc-roambee \
    --docker-server='registry-appaoc.tsi-af.de:443' --docker-password=eyJh...14wA \
    --docker-email=unused --docker-username=unused

Use the provided token as the password.

When using a build configuration, add the pull secret to the build configuration:

oc set build-secret --pull bc/telegraf registry-appaoc-roambee

When using a deployment configuration, add the secret to the service account:

oc secrets add sa/default secrets/registry-appaoc-roambee --for=pull
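
To verify that the secret has been attached for pulling, inspect the service account; the secret should show up under “Image pull secrets” (a sketch, the project name is a placeholder):

oc describe sa/default -n <your-project>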

How-To – run a pod as root

This how-to covers what has to be done to run a pod as root.

We’ll use a project sample here.

oc project sample

Create a new service account

$ oc create serviceaccount useroot 

Add service account to security context constraint anyuid

$ oc adm policy add-scc-to-user anyuid -z useroot -n sample

You can verify the result in the SCC’s list of users:

# oc edit scc anyuid
...
users:
- system:serviceaccount:sample:useroot
...

Add service account to deployment config

$ oc patch dc/myAppNeedsRoot --patch '{"spec":{"template":{"spec":{"serviceAccountName": "useroot"}}}}'
$ oc edit dc myAppNeedsRoot
...
    spec:
      containers:
      ...
      serviceAccount: useroot
      serviceAccountName: useroot
...

This enables a deployed docker container to run as any user (e.g. root). OpenShift ensures that only the necessary security context constraints are used. So to have a container running as root, you also have to ensure that the container explicitly requests it – e.g. by a “USER 0” directive in your Dockerfile or by forcing it with a “runAsUser: 0” entry in your container’s security context, as sketched below. Otherwise OpenShift might decide to choose the “restricted” security context constraint anyway.
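
A minimal sketch of explicitly requesting root in the deployment config’s pod spec (container name and image are placeholders):

    spec:
      containers:
      - name: myappneedsroot
        image: <image>
        securityContext:
          runAsUser: 0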

How-To – push a docker image from a docker registry to the OpenShift registry

This how-to covers what has to be done to push an image that is available in a docker registry into the OpenShift registry.

We’ll use a project sample-prj here. Suppose there is an image foo-image:latest in your docker registry.

oc project sample-prj

You have to find out the clusterIP of the docker-registry service. This requires additional rights!

# oc get svc/docker-registry -n default
NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
docker-registry   <clusterIP>     <none>        5000/TCP

Tag the image with the clusterIP, port 5000, and the project:

docker tag foo-image:latest <clusterIP>:5000/sample-prj/foo-image:latest

Log in to the OpenShift registry:

docker login -u $(oc whoami) -p $(oc whoami -t) -e foo@foo.com <clusterIP>:5000

Push into the OpenShift registry:

docker push <clusterIP>:5000/sample-prj/foo-image:latest

Additionally, you will find an automatically created image stream:

oc get is    # the output now lists foo-image
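
From here on, the image stream can be used like any other one in the project, for example (a sketch):

oc new-app foo-image:latest --name=foo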

IDC Whitepaper: DevOps and Modern Application Development in the Cloud: Red Hat, T-Systems, and Microsoft Offer Managed Hybrid PaaS via Data Trustee Model in Europe

IDC created a whitepaper with Red Hat, describing the hybrid approach for OpenShift on Azure. Furthermore, it gives a current status of PaaS adoption in the market.

Interesting quote from the summary:

The secret sauce in digital transformation is software and application innovation to develop and launch new applications, products and services for customers. Companies that are serious about achieving software development competencies have thriving test and development environments and an appetite to invest in 3rd platform technologies, critically cloud computing.
All digital transformation (DX) initiatives require a cloud adoption strategy, which is fundamental to success. Thriving companies are using cloud as the vehicle to deliver on their DX agenda with scale, speed, and quality. And within the cloud mega-trend, the emphasis is shifting from pure infrastructure to platform as a service (PaaS), where new applications are built. IDC estimates that the Western European PaaS market will grow at a CAGR of 33.1% by 2020 as more companies adopt PaaS to rewrite applications for a cloud deployment. IDC sees this as a viable strategy that can drive speed, consistency, and quality as well as unlock new innovative capabilities.
As the pace of innovation accelerates within businesses, they are demanding that cloud platforms should include even more capabilities — the use of containers, better interoperability, integration, and heterogeneity, for example. Containers are the key building blocks of PaaS, essentially opening the door for the opportunity to run multitenant PaaS in a public cloud.
IDC sees modern PaaS offerings as enablers for deploying applications into public or private cloud infrastructures and for bringing agility and cost advantages. This, in turn, paves the way for newer processes such as DevOps, continuous integration, and continuous delivery as well as applications built with a microservices architecture.

Download paper: idc-paper-azure-hybrid-appagile-march-2017.pdf

How to use Corporate ImageStreams as a centrally maintained trusted source of container images

OpenShift provides a single Namespace containing all the ImageStreams that could be considered part of the platform: all these images are maintained and provided by OpenShift Origin, CentOS, Software Collections Library or Red Hat.

One separate Namespace for your Images

It could be considered good practice to separate all the ImageStreams provided by your own organization into one namespace, declaring them to be the “officially supported ACME Corp container images”. Let’s call this namespace ‘acme-corp’ throughout this article. These container images from ‘acme-corp’ could be provided and maintained by ACME Corp’s IT DevOps Team. You can read more on the interfaces between Dev and Ops on the Red Hat Enterprise Linux Blog.

Accessing them just like those from the ‘openshift’ namespace

OpenShift references Images (the OpenShift configuration item, not the container image itself) in many situations: as part of a BuildConfig or as part of a DeploymentConfig, for example to start a new deployment when an ImageChange trigger is received from an ImageStreamTag.

To receive these triggers, and to use/pull a container image from a different namespace, some configuration needs to be done:

  • Project A (the project using the image from our officially supported ACME Corp container images namespace) must be authorized to pull images from ‘acme-corp’
  • the same holds for each other project that needs to pull images from there
  • this configuration should be automated

Granting access for specific actions between namespaces can be accomplished using

oc adm policy add-role-to-group system:image-puller system:serviceaccounts:project-a -n acme-corp

This will enable project A to pull images from any ImageStream in namespace acme-corp. You might repeat that for each project that shall be allowed to access ‘acme-corp’, but if we assume that all projects shall be granted access to the ‘acme-corp’ ImageStreams, a more elegant way is to modify the project template of OpenShift.

Configuring your Project Template

Modifying OpenShift’s template for projects is a simple operation:

  • a template must be created within the default namespace, and
  • the master must be reconfigured to use this template

So let’s see what the default project template looks like. It is embedded in OpenShift, so we cannot find it somewhere on disk; we need to get it out and store it in a file: `oc adm create-bootstrap-project-template -o yaml > acme-project-template.yaml`. To create (or later on replace) a template in OpenShift use `oc create -f acme-project-template.yaml`.

What you see within this template is a set of defaults configured by OpenShift for each project that gets created. And what we want to achieve is that each newly created project has access to ImageStreams in the ‘acme-corp’ namespace. To grant that access we need to extend the `system:image-pullers` RoleBinding. This is basically the same activity shown above: `oc adm policy add-role-to-group …`

Here you see the complete RoleBinding configuration item including access to namespace ‘acme-corp’. You can find the complete project template as a gitlab snippet.

- kind: RoleBinding
  apiVersion: v1
  groupNames:
  - system:serviceaccounts:acme-corp
  - system:serviceaccounts:${PROJECT_NAME}
  metadata:
    name: system:image-pullers
    namespace: ${PROJECT_NAME}
  roleRef:
    name: system:image-puller
  subjects:
  - kind: SystemGroup
    name: system:serviceaccounts:${PROJECT_NAME}
  userNames: null

Next: `oc replace -f acme-project-template.yaml` to replace/update the template within OpenShift.

Halfway done: we only need to tell the OpenShift master to use this template for each newly created project. Keep in mind, if you are running more than one OpenShift master, you need to do it on each master, as we will modify `/etc/origin/master/master-config.yaml`. And if you are using `oc cluster up`, there is no `master-config.yaml` on your local disk.

What we need to do is to replace the empty `projectRequestTemplate` in the `projectConfig` section with the value `"default/project-request"`.
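
A sketch of the corresponding excerpt of `/etc/origin/master/master-config.yaml`:

projectConfig:
  projectRequestTemplate: "default/project-request"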

I will leave it to the reader how to achieve this goal in the most efficient way, maybe you use openshift-ansible or dsh… In the end, we need to reconfigure and restart all OpenShift masters.

New Project defaults

Each project we create from now on will have access to ImageStreams within the ‘acme-corp’ namespace. Let’s validate:

[goern]$ oc new-project is-testing
Now using project "is-testing" on server "https://openshift.example.com".
[...]

[goern]$ oc get is
No resources found.

[goern]$ oc get is -n openshift
NAME           DOCKER REPO                                       TAGS                         UPDATED
...            registry...com/jboss-fuse-6/fis-karaf-openshift   latest,2.0,1.0 + 2 more...   2 weeks ago
jboss-amq-62   registry...com/jboss-amq-6/amq62-openshift        1.1,1.1-2,latest + 2 more... 2 weeks ago
[...]

[goern]$ oc get is -n acme-corp
NAME    DOCKER REPO                           TAGS     UPDATED
redis   172.30.142.230:5000/acme-corp/redis   latest   6 minutes ago

At this point we are able to use the ‘redis’ ImageStream out of the ‘acme-corp’ namespace. Well done.

Conclusion

By customizing the OpenShift master’s projectConfig, we can not only use a custom project template to grant newly created projects access to other namespaces by default, we can also set cluster-wide node selectors or configure the level of overcommitment.

Have Fun!