Getting started with ARO – Application deployment


This is the fourth blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of the OpenShift Container Platform.

In the first blog post, I compared OpenShift with Kubernetes and showed the benefits of an enterprise-grade solution for container orchestration.

The second blog post introduced some of OpenShift's basic concepts and architecture components.

The third blog post covered how to deploy the ARO solution in Azure.

This last blog post in the series is about how to use the ARO/OpenShift solution to host applications.

In particular, I will walk through the deployment of a three-tier application stack. The stack consists of a database, an API, and a web tier, each running as a separate OpenShift application.

After an Azure Red Hat OpenShift cluster has been successfully deployed, developers, engineers, DevOps, and SREs can easily start using it through the OpenShift web console or through the oc CLI.

With an Azure Active Directory (AAD) connection configured, admins can use their domain credentials to authenticate against the OpenShift API. If you have not yet acquired the API URL, you can do so with the following command and store it in a variable.

apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query apiserverProfile.url -o tsv)

After acquiring the API address, the oc command can be used to log in to the cluster as follows.

oc login $apiServer

Authentication required for https://api.zf34w66z.westeurope.aroapp.io:6443 (openshift)
Username: username
Password: pass
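If AAD integration is not configured, here is a minimal sketch of a non-interactive alternative: the cluster's built-in kubeadmin credentials can be retrieved with az aro list-credentials (reusing the resource group and cluster name from above) and passed to oc login.

kubeadminPassword=$(az aro list-credentials -g dk-os-weu-sbx -n aroncweutest --query kubeadminPassword -o tsv)
oc login $apiServer -u kubeadmin -p $kubeadminPassword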

After a successful login, the oc command can be used to manage the cluster, much like kubectl. First of all, let's turn on command completion to speed up operational tasks on the cluster.

source <(oc completion bash)
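This only lasts for the current shell session. To make completion permanent, the same line can be appended to your shell profile, for example:

echo 'source <(oc completion bash)' >> ~/.bashrc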

Then, for example, show the list of nodes with the following familiar command.

oc get nodes
NAME                                          STATUS   ROLES    AGE    VERSION
aroncweutest-487zs-master-0                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-1                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-2                   Ready    master   108m   v1.16.2
aroncweutest-487zs-worker-westeurope1-dgbrd   Ready    worker   99m    v1.16.2
aroncweutest-487zs-worker-westeurope2-4876d   Ready    worker   98m    v1.16.2
aroncweutest-487zs-worker-westeurope3-vsndp   Ready    worker   99m    v1.16.2

So now that we have our cluster up and running and are connected to it, let's create a project and deploy a few workloads.

Use the following command to create a new project.

oc new-project testproject

The command will provide the following output:

Now using project "testproject" on server "https://api.zf34w66z.westeurope.aroapp.io:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

We will not follow the suggested commands to run some tests, as I assume everybody can copy and paste a few lines. If you are interested in what happens, go and try it for yourself.

We are going to use a Microsoft-provided application stack to show how easy it is to deploy workloads to Azure Red Hat OpenShift. The application consists of a MongoDB database, a Node.js API, and a web frontend.

For the database, we are going to use a template-based deployment approach. Red Hat OpenShift provides a list of templates for different applications. For MongoDB there are two: mongodb-ephemeral and mongodb-persistent. The ephemeral version comes with ephemeral storage, meaning that when the container restarts, the data is lost. Conversely, the persistent version comes with a persistent volume, allowing the container to be restarted or moved between nodes without losing data, which makes it the better choice for production workloads.

List the available templates with the following command.

oc get templates -n openshift

The command will list around 125 templates: databases, web servers, APIs, and so on.

…
mariadb-ephemeral                               MariaDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mariadb-persistent                              MariaDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mongodb-ephemeral                               MongoDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mongodb-persistent                              MongoDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mysql-ephemeral                                 MySQL database service, without persistent storage. For more information abou...   8 (3 generated)   3
mysql-persistent                                MySQL database service, with persistent storage. For more information about u...   9 (3 generated)   4
nginx-example                                   An example Nginx HTTP server and a reverse proxy (nginx) application that ser...   10 (3 blank)      5
nodejs-mongo-persistent                         An example Node.js application with a MongoDB database. For more information...    19 (4 blank)      9
nodejs-mongodb-example                          An example Node.js application with a MongoDB database. For more information...    18 (4 blank)      8
…

As mentioned previously, we will use the mongodb-persistent template to deploy the database onto the cluster. The oc process command takes a template as input, populates it with the provided template parameters, and by default produces JSON output. This output can be piped into the oc create command to create the resources in the project on the fly. If -o yaml is used, the output will be YAML instead of JSON.

oc process openshift//mongodb-persistent \
    -p MONGODB_USER=ratingsuser \
    -p MONGODB_PASSWORD=ratingspassword \
    -p MONGODB_DATABASE=ratingsdb \
    -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -
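The parameter names passed with -p above come from the template itself. If you are unsure which parameters a template accepts, oc process can list them without creating anything:

oc process --parameters -n openshift mongodb-persistent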

If everything works well, output similar to the following should show up.

secret/mongodb created
service/mongodb created
persistentvolumeclaim/mongodb created
deploymentconfig.apps.openshift.io/mongodb created

After a few minutes, execute the following command to list the deployed resources:

oc status 
…
svc/mongodb - 172.30.236.243:27017
  dc/mongodb deploys openshift/mongodb:3.6
    deployment #1 deployed 9 minutes ago - 1 pod
…

The output shows that OpenShift used a deployment config and a deployer pod to deploy the MongoDB database; it configured a replication controller with one pod and exposed the database as a service, but no cluster-external route has been defined.

In the next step we are going to deploy the API server. For the sake of this demo, we are going to use Source-to-Image (S2I) as the build strategy for the API server. The source code is located in a Git repository, which needs to be forked first.

Then the oc new-app command can be used to build a new image from the source code in the Git repository.

oc new-app https://github.com/sl31pn1r/rating-api --strategy=source

The build output shows that the S2I process was able to identify the source code as Node.js 10 code.
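If you want to watch this detection happen yourself, you can stream the build log directly from the build config:

oc logs -f bc/rating-api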

If we execute oc status again, it will show the newly deployed pod with its build-related objects.

oc status
…
svc/rating-api - 172.30.90.52:8080
  dc/rating-api deploys istag/rating-api:latest <-
    bc/rating-api source builds https://github.com/sl31pn1r/rating-api on openshift/nodejs:10-SCL
    deployment #1 deployed 20 seconds ago - 1 pod
…

If you are interested in all the Kubernetes objects deployed so far, execute the oc get all command. It will show, for example, that for this Node.js deployment a build container and a deployment container were used to prepare and deploy the source code into an application container. It also created an image stream and a build config.

The API needs to know where it can access MongoDB. This can be configured through an environment variable in the deployment configuration. The environment variable name must be MONGODB_URI, and the value needs to contain the service FQDN, which is formed as [service name].[project name].svc.cluster.local.
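A minimal sketch of setting the variable from the CLI, reusing the database credentials from the template parameters and the testproject project created earlier (adjust the URI if your application expects a different format):

oc set env dc rating-api MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb.testproject.svc.cluster.local:27017/ratingsdb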

After configuring the environment variable, OpenShift will automatically redeploy the API pod with the new configuration.

To verify that the API can access MongoDB, use the web console or the oc logs PODNAME command and look for a message in the API logs confirming the database connection.
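If you do not want to look up the pod name first, oc logs can also be pointed at the deployment config directly:

oc logs dc/rating-api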

If you want to trigger a new build and deployment whenever you change something in your code, a GitHub webhook must be configured. In the first step, get the GitHub webhook secret.

oc get bc/rating-api -o=jsonpath='{.spec.triggers..github.secret}'

Then retrieve the webhook trigger URL.

oc describe bc/rating-api
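The describe output includes a GitHub webhook URL in which the secret is masked as <secret>; substitute the value retrieved above. On this cluster and project, the assembled URL would look roughly like this:

https://api.zf34w66z.westeurope.aroapp.io:6443/apis/build.openshift.io/v1/namespaces/testproject/buildconfigs/rating-api/webhooks/<secret>/github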

In your GitHub repository, go to Settings → Webhooks and select Add webhook.

Paste the URL output, with the secret replaced, into the Payload URL field and change the Content type to application/json. Leave the Secret field empty on the GitHub page and click Add webhook.

For the web frontend, the same Source-to-Image approach can be followed. First, fork the repository into your own GitHub account.

Then use the same oc new-app command to deploy the application from the source code.

oc new-app https://github.com/sl31pn1r/rating-web --strategy=source

After the successful deployment, the web service needs to know where the API server can be found. In the same way that we used an environment variable to tell the API server where the database was, we can point the web server to the API service's FQDN by creating an API environment variable.

oc set env dc rating-web API=http://rating-api:8080
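To double-check the result, oc set env can also list the variables currently set on the deployment config:

oc set env dc rating-web --list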

The service is now deployed and configured; the only remaining issue is that it cannot yet be reached from the outside world. Kubernetes/OpenShift pods and services are by default only accessible from within the cluster, so the web frontend needs to be exposed.

This can be done with a short command.

oc expose svc/rating-web

After the service has been successfully exposed the external route can be queried with the following command.

oc get route rating-web

The command returns the route, including a host name that can be opened in a web browser.
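If you only need the host name, for example in a script, a jsonpath query extracts it directly (the same pattern we used for the webhook secret):

oc get route rating-web -o jsonpath='{.spec.host}'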

After the successful setup of the web service, configure the GitHub webhook in the same way you did for the API service.

oc get bc/rating-web -o=jsonpath='{.spec.triggers..github.secret}'
oc describe bc/rating-web

Configure the webhook under your GitHub repo’s settings with the secret and the URL collected from the previous two commands’ output.

As a final step, secure the API service in the cluster by creating a network policy. Network policies allow cluster admins and developers to secure their workloads within the cluster by defining where traffic may flow from and to which services. Use the following manifest to create the network policy. This policy only allows ingress traffic to the API service from the web service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: testproject
spec:
  # The policy applies to the API pods...
  podSelector:
    matchLabels:
      app: rating-api
  # ...and only allows ingress traffic from pods labelled as the web frontend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rating-web
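Save the manifest to a file (the name below is just an example) and apply it to the cluster:

oc apply -f api-allow-from-web.yaml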

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.
