Getting started with ARO – Application deployment

This is the fourth blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

In the first blog post I compared OpenShift with Kubernetes and showed the benefits of enterprise grade solutions in container orchestration.

The second blog post introduced some of the OpenShift basic concepts and architecture components.

The third blog post was about how to deploy the ARO solution in Azure.

This last blog post in the series is about how to use the ARO/OpenShift solution to host applications.

In particular, I will walk through a three-tier application stack deployment. The stack has a database, an API and a web tier, each running as a separate OpenShift application.

After the successful deployment of an Azure Red Hat OpenShift cluster, developers, engineers, DevOps and SRE teams can start to use it easily through the portal or through the oc CLI.

With a configured AAD connection, admins can use their domain credentials to authenticate against the OpenShift API. If you have not yet acquired the API URL, you can retrieve it with the following command and store it in a variable.

apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query apiserverProfile.url -o tsv)

After acquiring the API address the oc command can be used to log in to the cluster as follows.

oc login $apiServer

Authentication required for (openshift)
Username: username
Password: pass

After a successful login, the oc command can be used to manage the cluster, similarly to kubectl. First of all, let's turn on command completion to speed up operational tasks on the cluster.

source <(oc completion bash)

Then for example show the list of nodes with the following familiar command.

oc get nodes
NAME                                          STATUS   ROLES    AGE    VERSION
aroncweutest-487zs-master-0                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-1                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-2                   Ready    master   108m   v1.16.2
aroncweutest-487zs-worker-westeurope1-dgbrd   Ready    worker   99m    v1.16.2
aroncweutest-487zs-worker-westeurope2-4876d   Ready    worker   98m    v1.16.2
aroncweutest-487zs-worker-westeurope3-vsndp   Ready    worker   99m    v1.16.2

So now that we have our cluster up and running and we are connected to it, let's create a project and deploy a few workloads.

Use the following command to create a new project.

oc new-project testproject

The command will provide the following output

Now using project "testproject" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node

We will not follow the suggested commands here, as I assume everybody can copy-paste a few lines. If you are interested in what happens, go and try it for yourself.

We are going to use a Microsoft-provided application stack to show how easy it is to deploy workloads to Azure. The application will have a MongoDB database, a Node.js API and a web front end.

For the database we are going to use a template-based deployment approach. Red Hat OpenShift provides a list of templates for different applications. For MongoDB there are two: mongodb-ephemeral and mongodb-persistent. The ephemeral version comes with ephemeral storage, meaning that when the container restarts, the data is lost. The persistent version comes with a persistent volume, allowing the container to be restarted or moved between nodes without data loss, which makes it the better choice for production workloads.
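The essential difference between the two templates is the volume definition on the pod. Roughly (volume name illustrative, field names as in standard Kubernetes manifests):

```yaml
# mongodb-ephemeral: data lives in an emptyDir, lost when the pod goes away
volumes:
  - name: mongodb-data
    emptyDir: {}

# mongodb-persistent: data lives on a PersistentVolumeClaim that outlives the pod
volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb
```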

List the available templates with the following command.

oc get templates -n openshift

The command will list about 125 templates: databases, web servers, APIs, etc. An excerpt:

mariadb-ephemeral                               MariaDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mariadb-persistent                              MariaDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mongodb-ephemeral                               MongoDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mongodb-persistent                              MongoDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mysql-ephemeral                                 MySQL database service, without persistent storage. For more information abou...   8 (3 generated)   3
mysql-persistent                                MySQL database service, with persistent storage. For more information about u...   9 (3 generated)   4
nginx-example                                   An example Nginx HTTP server and a reverse proxy (nginx) application that ser...   10 (3 blank)      5
nodejs-mongo-persistent                         An example Node.js application with a MongoDB database. For more information...    19 (4 blank)      9
nodejs-mongodb-example                          An example Node.js application with a MongoDB database. For more information...    18 (4 blank)      8

As mentioned previously, we will use the mongodb-persistent template to deploy the database onto the cluster. The oc process command takes a template as input, populates it with the provided template variables and produces JSON output by default (with -o yaml the output is YAML instead). This output can be piped into the oc create command to create the resources in the project on the fly.

oc process openshift//mongodb-persistent \
    -p MONGODB_USER=ratingsuser \
    -p MONGODB_PASSWORD=ratingspassword \
    -p MONGODB_DATABASE=ratingsdb \
    -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -

If everything works well a similar output should show up.

secret/mongodb created
service/mongodb created
persistentvolumeclaim/mongodb created

After a few minutes, execute the following command to list the deployed resources.

oc status 
svc/mongodb -
  dc/mongodb deploys openshift/mongodb:3.6
    deployment #1 deployed 9 minutes ago - 1 pod

The output shows that OpenShift used a deployment config and a deployer pod to deploy the MongoDB database; it configured a replication controller with one pod and exposed the database as a service, but no cluster-external route has been defined.

In the next step we are going to deploy the API server. For the sake of this demo we are going to use Source-to-Image (S2I) as the build strategy for the API server. The source code is located in a Git repository, which needs to be forked first.

In the next step the oc new-app command will be used to build a new image from the source code in the git repository.

oc new-app <your-forked-repo-URL> --strategy=source

The build output shows that the S2I process was able to identify the source as Node.js 10 code.

If we execute oc status again, it will show the newly deployed pod with its build related objects.

oc status
svc/rating-api -
  dc/rating-api deploys istag/rating-api:latest <-
    bc/rating-api source builds on openshift/nodejs:10-SCL
    deployment #1 deployed 20 seconds ago - 1 pod

If you are interested in all the Kubernetes objects deployed so far, execute the oc get all command. It will show, for example, that for this Node.js deployment a build container and a deployment container were used to prepare and deploy the source code into an application container. It also created an imagestream and a buildconfig.

The API needs to know where it can access MongoDB. This can be configured through an environment variable in the deployment configuration. The environment variable name must be MONGODB_URI, and the value must be the service FQDN, which is formed as [service name].[project name].svc.cluster.local.
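As a minimal sketch, the connection string can be assembled from the template parameters used earlier and the FQDN pattern above (the standard MongoDB URI format and default port are assumed here):

```shell
# Build the MongoDB connection string the API expects.
# Credentials and database name match the template parameters used earlier;
# the host part follows the [service].[project].svc.cluster.local pattern.
service=mongodb
project=testproject
MONGODB_URI="mongodb://ratingsuser:ratingspassword@${service}.${project}.svc.cluster.local:27017/ratingsdb"
echo "$MONGODB_URI"

# With a logged-in oc session it could then be set on the deployment config:
# oc set env dc/rating-api MONGODB_URI="$MONGODB_URI"
```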

After configuring the environment variable OpenShift will automatically redeploy the API pod with the new configuration.

To verify that the API can access MongoDB, use the web UI or the oc logs PODNAME command and look for a successful database connection message in the output.

If you want to trigger a deployment whenever you change something in your code, a GitHub webhook must be configured. In the first step, get the GitHub secret.

oc get bc/rating-api -o=jsonpath='{.spec.triggers..github.secret}'

Then retrieve the webhook trigger URL.

oc describe bc/rating-api

In your GitHub repository, go to Settings → Webhooks and select Add webhook.

Paste the webhook URL, with the placeholder replaced by the secret retrieved above, into the Payload URL field and change the Content type to application/json. Leave the Secret field empty on the GitHub page and click Add webhook.
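The URL printed by oc describe contains a literal <secret> placeholder which GitHub needs replaced with the real value. A quick sketch of the substitution (URL and secret below are hypothetical):

```shell
# Hypothetical values standing in for the outputs of the two oc commands above
secret="0123456789abcdef"
webhookURL="https://api.cluster.example.com:6443/apis/build.openshift.io/v1/namespaces/testproject/buildconfigs/rating-api/webhooks/<secret>/github"

# Replace the <secret> placeholder with the actual secret
payloadURL="${webhookURL/<secret>/$secret}"
echo "$payloadURL"
```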

For the web front end the same Source-to-Image approach can be followed. First, fork the repository into your own GitHub account.

Then use the same oc new-app command to deploy the application from the source code.

oc new-app <your-forked-repo-URL> --strategy=source

After the successful deployment, the web service needs to know where the API server can be found. The same way we used an environment variable to tell the API server where the database was, we can point the web server to the API service's FQDN by creating an API environment variable.

oc set env dc rating-web API=http://rating-api:8080

The service is now deployed and configured; the only remaining issue is that there is currently no way to access it from the outside world. Kubernetes/OpenShift pods and services are by default only accessible from within the cluster, therefore the web front end needs to be exposed.

This can be done with a short command.

oc expose svc/rating-web

After the service has been successfully exposed the external route can be queried with the following command.

oc get route rating-web

The command returns the route details; the host name shown can be opened in a web browser.

After the successful setup of the web service, configure the GitHub webhook the same way you did for the API service.

oc get bc/rating-web -o=jsonpath='{.spec.triggers..github.secret}'
oc describe bc/rating-web

Configure the webhook under your GitHub repo’s settings with the secret and the URL collected from the previous two commands’ output.

As a final step, secure the API service in the cluster by creating a network policy. Network policies allow cluster admins and developers to secure their workloads within the cluster by defining where traffic may flow from and to which services. Use the following code to create the network policy. This policy only allows ingress to the API service from the web service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: testproject
spec:
  podSelector:
    matchLabels:
      app: rating-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rating-web

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Getting started with Azure Red Hat OpenShift (ARO)

    This is the third blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

    The first blog post compared OpenShift with Kubernetes and showed the benefits of enterprise grade solutions in container orchestration.

    The second blog post introduced some of the OpenShift basic concepts and architecture components.

    This third blog post is about how to deploy the ARO solution in Azure.

    The last blog post will cover how to use the ARO/OpenShift solution to host applications.

    ARO cluster deployment

    Until recently ARO was a preview feature which had to be enabled – this is not necessary anymore.


    I recommend upgrading to the latest Azure CLI. I am using Ubuntu 18.04 as my host for interacting with the Azure API. Use the following command to upgrade only azure-cli before you start.

    sudo apt-get install --only-upgrade azure-cli -y

    After upgrading the Azure CLI, log in to your subscription.

    az login
    az account set -s $subscriptionId

    Resource groups

    It is important to mention, before we start with the cluster deployment, that the user selected for the deployment needs permission to create resource groups in the subscription. An ARO deployment, similarly to AKS, uses two resource groups: one for the managed application and one for the cluster resources, which the deployment creates itself. You cannot simply deploy ARO into an existing resource group.

    I am using a new resource group for all my ARO-related base resources, such as the VNet and the Managed Application.

    az group create --name dk-os-weu-sbx --location westeurope

    VNet and SubNets

    I am creating a new VNet for this deployment; however, it is possible to use an existing VNet with two new subnets for the master and worker nodes.

    # example address range; adjust to your environment
    az network vnet create \
      -g "dk-os-weu-sbx" \
      -n vnet-aro-weu-sbx \
      --address-prefixes 10.0.0.0/22

    After the VNet, I'm creating my two subnets: one for the workers and one for the masters. For the workers it is important to allocate a large enough network range to allow for autoscaling. I am also enabling Azure-internal access to the Microsoft Container Registry service.

    # example address ranges; adjust to your environment
    az network vnet subnet create \
        -g "dk-os-weu-sbx" \
        --vnet-name vnet-aro-weu-sbx \
        -n "snet-aro-weu-master-sbx" \
        --address-prefixes 10.0.0.0/23 \
        --service-endpoints Microsoft.ContainerRegistry

    az network vnet subnet create \
        -g "dk-os-weu-sbx" \
        --vnet-name vnet-aro-weu-sbx \
        -n "snet-aro-weu-worker-sbx" \
        --address-prefixes 10.0.2.0/23 \
        --service-endpoints Microsoft.ContainerRegistry
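    As a quick sanity check when sizing the worker subnet for autoscaling, the address count for a given prefix length can be computed directly:

```shell
# Addresses available in a subnet of a given prefix length
prefix=23
addresses=$(( 2 ** (32 - prefix) ))
echo "A /${prefix} provides ${addresses} addresses"
```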

    After creating the subnets, we need to disable the private link network policies on the subnets.

    az network vnet subnet update \
      -g "dk-os-weu-sbx" \
      --vnet-name vnet-aro-weu-sbx \
      -n "snet-aro-weu-master-sbx" \
      --disable-private-link-service-network-policies true

    az network vnet subnet update \
      -g "dk-os-weu-sbx" \
      --vnet-name vnet-aro-weu-sbx \
      -n "snet-aro-weu-worker-sbx" \
      --disable-private-link-service-network-policies true

    Azure API providers

    To be able to deploy an ARO cluster, several Azure providers need to be enabled first.

    az provider register -n "Microsoft.RedHatOpenShift"  
    az provider register -n "Microsoft.Authorization" 
    az provider register -n "Microsoft.Network" 
    az provider register -n "Microsoft.Compute"
    az provider register -n "Microsoft.ContainerRegistry"
    az provider register -n "Microsoft.ContainerService"
    az provider register -n "Microsoft.KeyVault"
    az provider register -n "Microsoft.Solutions"
    az provider register -n "Microsoft.Storage"

    Red Hat pull secret

    In order to pull Red Hat provided templates and images, an image pull secret needs to be acquired from Red Hat; it can be downloaded from the Red Hat OpenShift Cluster Manager portal.

    Cluster deployment

    Create the cluster with the following command, replacing the relevant parameters with your values.

    az aro create \
      -g "dk-os-weu-sbx" \
      -n "aroncweutest" \
      --vnet vnet-aro-weu-sbx \
      --master-subnet "snet-aro-weu-master-sbx" \
      --worker-subnet "snet-aro-weu-worker-sbx" \
      --cluster-resource-group "aro-aronctest" \
      --location "West Europe" \
      --pull-secret @pull-secret.txt

    If you are planning to use a company domain, use the --domain parameter to define it. The full list of parameters can be found in the Azure CLI documentation.

    Deployment takes about 30-40 minutes, and the output contains some of the relevant connection information.

    apiserverProfile contains the API URL necessary to log in to the cluster.

    consoleProfile contains the URL for the WEB UI.

    The command will deploy a cluster with 3 masters and 3 workers. The masters will be D8s_v3 and the workers D4s_v3 machines, deployed into three different Availability Zones within a region. If these default sizes are not fit for purpose, they can be parameterised in the create command with the --master-vm-size and --worker-vm-size parameters.

    If a deployment fails or the cluster needs to be removed for any other reason, use the following command to delete it. The complete removal of a successfully deployed cluster can take about 30-40 minutes.

    az aro delete -g "dk-os-weu-sbx"   -n "aroncweutest"

    Connecting to a cluster

    After the successful deployment of a cluster, the cluster admin credentials can be acquired with the following command.

    az aro list-credentials --name "aroncweutest" --resource-group "dk-os-weu-sbx"

    The command will return the following JSON.

    {
      "kubeadminPassword": "vJiK7-I9MZ7-RKrPP-9V5Gi",
      "kubeadminUsername": "kubeadmin"
    }

    To log in to the cluster WEB UI, open the URL provided by the consoleProfile.url property and enter the credentials.

    After the successful login, the Dashboard will show the initial cluster health.

    To log in to the API through the CLI, download the oc binary and execute the following command, using the URL from the apiserverProfile output.

    oc login <apiserverProfile.url>

    Then enter the credentials and you can start to use the “oc” command to manage the cluster.

    Azure AD Integration

    With Azure Active Directory (AAD) integration through OAuth, companies can leverage their existing team structures and groups from Active Directory to separate responsibilities and access to the OpenShift cluster.

    To start, first create a few environment variables to support the implementation.

    domain=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query clusterProfile.domain -o tsv)  
    location=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query location -o tsv)  
    apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query apiserverProfile.url -o tsv)  
    webConsole=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query consoleProfile.url -o tsv)  
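    The app registration below also needs an OAuth callback URL. The format sketched here follows the ARO documentation and is an assumption; the placeholder values stand in for the $domain and $location variables set above:

```shell
# Placeholder values; normally taken from the variables populated via az aro show
domain="mycluster"
location="westeurope"

# Assumed callback URL format per the ARO AAD integration docs
oauthCallbackURL="https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD"
echo "$oauthCallbackURL"
```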

    An Azure AD Application needs to be created to integrate OpenShift authentication with Azure AD.

    # assumes $appName, $oauthCallbackURL and $appSecret have been set beforehand
    az ad app create \
      --query appId -o tsv \
      --display-name $appName \
      --reply-urls $oauthCallbackURL \
      --password $appSecret

    Create a new environment variable with the application id.

    appId=$(az ad app list --display-name $appName | jq -r '.[] | "\(.appId)"')

    The application will need an Azure Active Directory Graph scope (Azure Active Directory Graph User.Read) permission.

    az ad app permission add \
     --api 00000002-0000-0000-c000-000000000000 \
     --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
     --id $appId

    Create optional claims to use e-mail with a UPN fallback for authentication.

    cat > manifest.json << EOF
    [
      {
        "name": "upn",
        "source": null,
        "essential": false,
        "additionalProperties": []
      },
      {
        "name": "email",
        "source": null,
        "essential": false,
        "additionalProperties": []
      }
    ]
    EOF

    Configure the optional claims for the application.

    az ad app update \
      --set optionalClaims.idToken=@manifest.json \
      --id $appId

    Configure an OpenShift OpenID authentication secret from the application secret.

    oc create secret generic openid-client-secret-azuread \
      --namespace openshift-config \
      --from-literal=clientSecret=$appSecret

    Create an OpenShift OAuth resource object which connects the cluster with AAD.

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
        - name: AAD
          mappingMethod: claim
          type: OpenID
          openID:
            clientID: $appId
            clientSecret:
              name: openid-client-secret-azuread
            extraScopes:
              - email
              - profile
            extraAuthorizeParameters:
              include_granted_scopes: "true"
            claims:
              preferredUsername:
                - email
                - upn
              name:
                - name
              email:
                - email
            issuer: https://login.microsoftonline.com/$tenantId  # your AAD tenant ID
    Save the YAML as oidc.yaml and apply it to create the resource. (You need to be logged in with the kubeadmin user.)

    The reply URL in the AAD application needs to point to the oauthCallbackURL; this can be changed through the portal.

    oc apply -f oidc.yaml

    After a few minutes you should be able to log in on the OpenShift web UI with any AAD user.

    After choosing the AAD option at login, a user can sign in with their AAD credentials and start to work.

    In my next blog post I am going to continue with some basic application implementations on OpenShift ARO.


      An introduction to OpenShift

      This is the second blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

      In the first blog post I compared OpenShift with Kubernetes and showed the benefits of enterprise grade solutions in container orchestration.

      This, the second blog post, introduces some of the basic OpenShift concepts and architecture components.

      The third blog post is about how to deploy the ARO solution in Azure.

      The last blog post covers how to use the ARO/OpenShift solution to host applications.

      OpenShift Architecture

      OpenShift is a turn-key, enterprise-grade, secure and reliable containerisation solution built on open source Kubernetes, with additional components providing out-of-the-box self-service, dashboards, CI/CD automation, a container image registry, multilingual support and other enterprise-grade Kubernetes extensions.

      The following diagram depicts the architecture of the OpenShift Container Platform, highlighting in green the components which were added or modified by Red Hat.

      Figure 1 – OpenShift container platform Architecture (green ones are modified or new architecture components)

      RHEL CoreOS – The base operating system is Red Hat Enterprise Linux CoreOS. CoreOS is a lightweight RHEL version providing essential OS features and combines the ease of over-the-air updates from Container Linux with the Red Hat Enterprise Linux kernel for container hosts.

      CRI-O – CRI-O is a lightweight Docker alternative. It is a Kubernetes Container Runtime Interface implementation enabling the use of Open Container Initiative (OCI) compatible runtimes. CRI-O supports OCI container images from any container registry.

      Kubernetes – Kubernetes is the de facto, industry-standard container orchestration engine, managing several hosts (masters and workers) to run containers. Kubernetes resources define how applications are built, operated, managed, etc.

      ETCD – ETCD is a distributed database of key-value pairs, storing cluster, Kubernetes object configuration and state information.

      OpenShift Kubernetes Extensions – OpenShift Kubernetes Extensions are Custom Resource Definitions (CRDs) in the Kubernetes ETCD database, providing additional functionality compared to a vanilla Kubernetes deployment.

      Containerized Services – Most internal features run as containers on a Kubernetes environment. These are fulfilling the base infrastructure functions such as networking, authentication, etc.

      Runtimes and xPaaS – These are ready-to-use base container images and templates for developers: a set of base images for JBoss middleware products such as JBoss EAP and ActiveMQ, and for other languages and databases (Java, Node.js, PHP, MongoDB, MySQL, etc.).

      DevOps Tools – A REST API provides the main point of interaction with the platform. The web UI, the CLI or other third-party CI/CD tools can connect to this API and allow end users to interact with the platform.

      With the architecture components described in the previous section, the OpenShift platform provides automated development workflows, allowing developers to concentrate on business outcomes rather than learning about Kubernetes or containerisation in detail.

      Main OpenShift components

      OpenShift Nodes

      Similarly to vanilla Kubernetes, OpenShift makes a distinction between two node types: cluster masters and cluster workers.

      Cluster Masters

      Cluster masters run the services required to control the OpenShift cluster, such as the API Server, etcd and the Controller Manager Server.

      The API Server validates and configures Kubernetes objects.

      The etcd database stores the object configuration information and state.

      The Controller Manager Server watches the etcd database for changes and enforces those through the API server on the Kubernetes objects.

      Kubelet is the service which manages requests related to local containers on the masters.

      CRI-O and Kubelet are running as Systemd managed services.

      Cluster Workers

      Cluster workers run three main services: the Kubelet, kube-proxy and the container runtime CRI-O. Workers are grouped into MachineSet CRDs.

      Kubelet is the service which accepts requests coming from the Controller Manager Server, implementing changes in resources and deploying or destroying resources as requested.

      Kube-Proxy manages communication to the Pods, and across worker nodes.

      CRI-O is the container runtime.

      Similarly to vanilla Kubernetes, the smallest object in an OpenShift cluster is the Pod.

      MachineSets are custom resources grouping nodes, such as worker nodes, to manage autoscaling and the running of Kubernetes compute resources (Pods).

      High Availability is built into the platform by running control plane services on multiple masters and running application resources in ReplicaSets behind Services on worker nodes.


      Operators

      Operators are the preferred method of managing services on the OpenShift control plane. Operators integrate with Kubernetes APIs and CLI tools, performing health checks, managing updates, and ensuring that the service/application remains in a specified state.

      Platform operators

      Operators include critical networking, monitoring and credential services. Platform operators are responsible for managing services related to the entire OpenShift platform, and they provide an API to allow administrators to configure these components.

      Application operators

      Application related operators are managed by the Cluster Operator Lifecycle Management. These operators are either Red Hat Operators or Certified operators from third parties and can be used to manage specific application workloads on the clusters.


      Projects

      Projects are custom resources used in OpenShift to group Kubernetes resources and to provide access for users based on these groupings. Projects can also receive quotas to limit the available resources (number of pods, volumes, etc.). A project allows a team to organise and manage their workloads in isolation from other teams.


      Networking

      OpenShift uses Service, Ingress and Route resources to manage network communication between pods and to route traffic to the pods from cluster-external sources.

      A Service resource exposes a single IP while load balancing traffic between the pods sitting behind it within the cluster.

      A Route resource provides a DNS record, making the service available to cluster external sources.

      The Ingress Operator implements an ingress controller API and enables external access to services running on the OpenShift Container Platform.
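      As an illustration, a minimal Route manifest (names illustrative) only needs to reference the service it exposes; the router assigns a host name when none is given:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-web
spec:
  to:
    kind: Service
    name: my-web
```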

      Service Mesh

      OpenShift Service Mesh provides operational control over service mesh functionality and a way to connect, secure and monitor microservice applications running on the platform. It is based on the Istio project, transparently using a mesh of Envoy proxies to provide discovery, load balancing, service-to-service authentication, failure recovery, metrics and monitoring. The solution also provides A/B testing, canary releases, rate limiting, access control and end-to-end authentication.


      Logging

      An integrated Elasticsearch, Fluentd and Kibana (EFK) stack provides the cluster-wide logging functionality. Fluentd is deployed to each node, collecting all node and container logs and writing them to Elasticsearch. Kibana is the visualisation tool where developers and administrators can create dashboards.


      Monitoring

      OpenShift has an integrated, pre-installed monitoring solution based on the wider Prometheus ecosystem. It monitors cluster components and alerts cluster administrators about issues. It uses Grafana for visualisation with dashboards.


      Metering

      Metering focuses on in-cluster metric data, using Prometheus as the default source of information. Metering enables users to report on namespaces, pods and other Kubernetes resources, and allows the generation of reports with periodic ETL jobs using SQL queries.


      Serverless

      OpenShift Serverless can use Kubernetes native APIs, as well as familiar languages and frameworks, to deploy applications and container workloads. OpenShift Serverless is based on the open source Knative project, providing portability and consistency across hybrid and multi-cloud environments.

      Container-native virtualization

      Container-native virtualization allows administrators and developers to run and manage virtual machine workloads alongside container workloads. It allows the platform to create and manage Linux and Windows virtual machines, import and clone existing virtual machines. It also provides the functionality of live migration of virtual machines between nodes.

      Automation, CI/CD

      OpenShift comes with integrated features such as Source-to-Image (S2I) and image streams to help developers roll out changes to their applications much quicker than in a vanilla Kubernetes environment.

      Docker build

      The Docker build strategy allows developers with Docker containerisation knowledge to define their own Dockerfile-based image builds. It expects a repository with a Dockerfile and all required artefacts.


      Source-to-Image (S2I)

      Source-to-Image can pull code from a repository, detect the necessary runtime, and build and start a base image required to run that specific code in a Pod. If the image builds successfully, it is uploaded to the OpenShift internal image registry and the Pod can be deployed on the platform. External tools can be used to implement some CI features and extend the OpenShift CI/CD functionality, for example with tests.

      Image streams

      Image streams can be used to detect changes in application code or source images and force a Pod rebuild/redeploy to implement the changes. An image stream groups container images marked by tags and can manage the related container lifecycle accordingly. Image streams can automatically update a deployment when a new base image has been released onto the platform.

      OpenShift Pipelines

      With OpenShift Pipelines, developers and cluster administrators can automate the processes of building, testing and deploying application code to the platform. Pipelines make it possible to minimise human error through a consistent process. A pipeline could include compiling code, unit tests, code analysis, security checks, installer creation, container build and deployment. Tekton-based pipeline definitions use Kubernetes CRDs (Custom Resource Definitions) and the control plane to run pipeline tasks, and can be integrated with Jenkins, Knative and others. In OpenShift Pipelines each pipeline step runs in its own container, allowing steps to scale independently.
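      As a minimal sketch of the Tekton model (task name and image are illustrative), a pipeline step is simply a container defined inside a Task custom resource:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-unit-tests
spec:
  steps:
    - name: test
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "unit tests would run here"
```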

      These are the main components and features of OpenShift which help developers and cluster administrators to deliver value to their company and users much faster and easier.

      In the next blog post, I will walk through a step-by-step Azure Red Hat OpenShift (ARO) deployment.

      How could this work for your business?

      Come speak to us and we will walk you through exactly how it works.

      Get in Touch.

      Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

        Kubernetes: The Simple Way?

        This is the first blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

        In this blog post I am comparing OpenShift with vanilla Kubernetes and showing the benefits of enterprise grade solutions in container orchestration.

        The second blog post will introduce some of the OpenShift basic concepts and architecture components.

        The third blog post is about how to deploy the ARO solution in Azure.

        The last blog post covers how to use the ARO/OpenShift solution to host applications.


        As microservices architectures become more and more common in IT, enterprise companies are beginning to look at their benefits. Adopting them raises important strategic questions around how to do it securely, and with the right amount of investment from an infrastructure and people perspective.

        The early adopters have been using containerisation (mainly Docker) for a while now, and it soon became clear they required something to orchestrate and manage these containers. For a short period of time there were several competing orchestration engines, for example Mesos, Docker Swarm and Kubernetes. As of now, it's clear Kubernetes has won the race for the title of most used: the technology is rapidly becoming an industry standard.

        Kubernetes container orchestration is fascinating, but as with every fascinating new cutting-edge technology comes a steep learning curve and heavy investment for IT organisations. Operations, developer and security teams all need to educate themselves on the topic; a keen understanding of the technology is vital. This is a huge investment from an enterprise company's point of view, and usually one of the reasons that large companies carry technical debt and can find themselves playing catch-up with leaner, more agile startup companies. IT leaders need to be sure that a technology is mature enough, has official training and the right amount of community-based knowledge, and that other enterprise companies have adopted it.

        Kubernetes helps companies to provide better service for their end users. It allows IT to react faster to changes in business focus, implement new features, and have higher availability and reliability of services. To achieve all this, Kubernetes comes with solutions such as self-healing, node-pools, readiness and liveness probes, autoscaling etc. To be able to implement and understand all these, as alluded to earlier, the learning curve is steep, as this technology differs greatly from traditional IT.

        Operation teams need to understand:

        • how to design, deploy and operate Kubernetes clusters;
        • the basic components, including master nodes running the API server, scheduler, controller manager and etcd, and worker nodes running the kubelet and kube-proxy;
        • general concepts such as Kubernetes networking, Pods, and objects such as Deployments, ReplicaSets and ingress/egress;
        • how to implement better observability with dashboards, how to monitor, implement logging, etc.

        Developer teams need to understand:

        • how to separate their monoliths into microservices. 
        • how to rewrite applications to communicate through APIs. 
        • how to use SaaS solutions from external providers as components of their application. 
        • how to containerize their application code with all its required libraries and dependencies, and how to write dockerfiles to achieve this. 
        • last but not least, the obvious one not mentioned yet: the use of container registries.

        Security teams need to understand:

        • how to secure Kubernetes clusters.
        • how to secure the containers running on those.
        • how to keep containers, the code and all the components updated to have a secure environment. 
        • how to separate access, implement RBAC.
        • how to secure environments to adhere to regulations, etc.

        Unfortunately, with vanilla Kubernetes, many of these are not included out of the box. Most of the previously mentioned challenges need to be tackled with additional, third party components or require manual configuration. For many enterprise companies, this will require additional agreements, with (perhaps) multiple vendors, to achieve the desired state of the infrastructure. As a result, support processes can be made more complex when issues arise, which can create additional frustration when a quick resolution is required.

        Managed Kubernetes

        So how is it possible to make this simpler, how can Kubernetes be “simple”?

        Hyperscalers such as Azure, AWS and GCP are already offering “managed” Kubernetes solutions, AKS in Azure, EKS in AWS and GKE in GCP. These products provide a quick and easy solution for Operations to deploy and manage Kubernetes infrastructures. The master-node components are managed and operated by the Hyperscalers, in Azure AKS these are provided free of charge (correct at the time of writing this article 29.04.2020), and Operations only need to focus on the management of the worker-nodes, the deployments and network access. All these solutions are packaged with a cloud-centric monitoring solution and can rely on other PaaS/SaaS solutions from the cloud vendor to implement CI/CD, logging, better security. Unfortunately, certain other components, for example cluster external access with ingress, better observability with dashboards and autoscaling, all require a greater level of understanding around Kubernetes concepts and third-party solutions.

        From a development team perspective, managed Kubernetes brings the same challenges: developers still need to understand how to create and secure containers, how to use container registries, and how to write Kubernetes configuration YAML for their deployments.
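
        Even a minimal deployment requires hand-written YAML of the kind sketched below; the application name, image reference and port are hypothetical.

```yaml
# A minimal, hypothetical Deployment that developers have to
# write by hand on a managed Kubernetes service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0  # placeholder image reference
        ports:
        - containerPort: 8080
```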

        At many enterprises, security teams have already been looking into how to secure cloud-based workloads, Kubernetes infrastructure is no different and should be considered just another service. From a container security perspective, managed Kubernetes services bring the same challenges as vanilla Kubernetes (base images coming from untrusted sources, developer code and securing related libraries, etc.). All these vulnerabilities can be remediated with the same third-party tools and proper governance models, but can still require the involvement of additional third-parties.

        As highlighted, managed Kubernetes is certainly one of the ways forward. It allows companies a certain level of freedom to choose components, provided they have a good base understanding of Kubernetes. A managed Kubernetes service takes over some of the operational burden, allowing companies to focus on delivering more value to their customers rather than spending time on operational challenges.

        What if a company has no Kubernetes knowledge? What if their main focus is on development, and they don’t want to deal with complex support processes involving many third-parties? What can an enterprise company do if they want a turn-key solution, which allows their staff to easily and quickly build infrastructure for containerised workloads, on-premises or in the cloud?


        Let’s make it even more simple!

        If there is a demand in IT, then there must be a solution somewhere! The solution we are talking about in this context is OpenShift from Red Hat. Red Hat needs little introduction: a large enterprise, recently acquired by IBM, that has been a prominent member of the open source community since 1993. From their inception, Red Hat has focused on Linux/Unix operating systems and grown into a multinational company, able to offer enterprise-ready solutions across the whole IT landscape: middleware, databases, operating systems, container orchestration and more.

        OpenShift was released in 2011, originally built on custom-developed container and container orchestration technologies. From version 3, it adopted Docker as the container technology and Kubernetes as the container orchestration technology.

        OpenShift is a turn-key solution provided by Red Hat. It is a platform-as-a-service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. OpenShift comes with built-in components such as dashboards, a container registry, Red Hat Service Mesh, templates, etc.

        Figure 1 – Red Hat OpenShift dashboards

        It allows developer teams to concentrate on their primary task of developing code, by providing capabilities such as source-to-image and preset templates, negating the need for developers to write any Kubernetes- or Docker-related code to deploy their applications.

        As a well-tested and supported product from Red Hat, it provides the key assurance of security for enterprises.


        OpenShift uses upstream Kubernetes as a basis and modifies some of its basic components to provide an enterprise-grade service. It uses the same master/worker concept to provide HA for each component.

        Application architecture: 

        OpenShift uses the notion of “projects” to provide isolation and distinction between admin and user access. Apps run in containers, which are built by the platform after commits to repositories.

        Kubernetes vs OpenShift

        To choose between Kubernetes and OpenShift a CTO/CIO should consider the following aspects.

        They should consider Kubernetes if:

        • Their company or companies are already using a mature Kubernetes platform or have existing knowledge of the Kubernetes product portfolio.
        • There is a requirement for Middleware, which is better suited to Kubernetes.
        • Utilising the latest open source technology is valued at the company.
        • Their company or companies have a preference or requirement, to keep CI/CD outside of the cluster.

        They should consider OpenShift if:

        • Their company or companies have existing Red Hat subscriptions and investment in OpenShift.
        • Red Hat based middleware is used or preferred.
        • Their company or companies value security-hardened, pre-integrated and/or tested open source solutions.
        • A user-friendly turn-key solution with limited admin overhead is preferred.
        • Kubernetes environments in multi/hybrid cloud scenarios need to be managed.
        • Built-in CI/CD features are expected.

        Developers and Operations are always looking at IT solutions from different, sometimes closed-minded, points of view. 

        Developers within any Kubernetes environment need to learn how to containerize applications, work with image registries and deploy applications onto Kubernetes platforms. Operations are often more focused on observability, monitoring and logging capabilities. 

        These processes can be complex with a vanilla Kubernetes implementation. In comparison, OpenShift comes with solutions such as Source-to-image, templates and built-in CI/CD to help developers to focus on business goals. It also comes with integrated logging, monitoring and dashboard solutions with automated installation.

        Figure 2 – Kubernetes vs Red Hat OpenShift

        Even though a turn-key solution sounds impressive, it shouldn't automatically be considered the more straightforward and easy way to start your journey. It's because of this that Red Hat and Microsoft have partnered to support and provide OpenShift in Azure as a service. There are two options: self-managed OpenShift on Azure, and the fully managed Azure Red Hat OpenShift service.

        OpenShift on Azure

        There is a choice when it comes to choosing a container platform and a cloud solution to run the mission critical systems that power a business. With Red Hat OpenShift and Microsoft Azure, companies can quickly deploy a containerized, hybrid environment, to meet digital business needs.

        Primary Capabilities:

        • Supported, integrated, and automated architecture, with a validated cluster deployment.
        • Seamless Kubernetes deployment on the Azure public cloud.
        • Fully scalable, global and enterprise-grade public cloud with access to Azure Marketplace.

        Key benefits:

        • Accelerated time to market on a best-of-breed platform.
        • Consistent experience across your hybrid cloud.
        • Scalable, reliable and supported hybrid environment, with a certified ecosystem of proven ISV solutions.

        Challenges Addressed:

        • Keeping up with the ever-changing set of open source projects.
        • Servicing the increased needs of your app development teams.
        • Managing a diverse, complex non-compliant development, security and operations environment.


        • Joint development and engineering.
        • Quick issue resolution via co-located support.
        • Enhanced security for containers, network, storage, users, APIs and the cluster.
        • Containers with cloud-based consumption model and integrated billing.

        Having a unified solution is critical when working seamlessly across on-premises and cloud deployments, OpenShift can be easily deployed to any location. Red Hat and Microsoft are building on shared open source Linux technologies to ensure OpenShift success. 

        The co-located support engineers can resolve issues faster and more easily than a disjointed model where you do not know who owns the issue. Integrated support goes beyond break/fix and provides a set of best practices.

        Customers want reliability, dependability and flexibility in a supported, sustained engineering lifecycle; this can be easily achieved with Microsoft Azure and Red Hat's tested and trusted subscription model.

        Azure Red Hat OpenShift (ARO)

        When resources are constrained and skilled talent is scarce, businesses look to run containers in the cloud with minimal maintenance effort. Azure Red Hat OpenShift (ARO) lets customers gain all the benefits of a container platform without the need to deploy and manage the environment themselves, shifting the focus from infrastructure management to application development that delivers business outcomes.

        Primary Capabilities:

        • Fully managed Red Hat OpenShift on Azure.
        • Jointly engineered, developed and supported by Microsoft and Red Hat. 
        • Access to hundreds of managed Azure services, like Azure Database for MySQL, Azure Cosmos DB, and Azure Cache for Redis, to develop apps.

        Key Benefits

        • The ability to focus on application development, not on container platform management.
        • The value of containers without deploying and managing the environment and platform yourself.
        • Reduced operational overhead. 

        Challenges Addressed

        • Finding the expertise and resources to build custom solutions.
        • Maintaining data sovereignty in hybrid environments.
        • Ensuring security and compliance across complex infrastructure environments.


        • Fully managed container offering.
        • Jointly engineered and supported with a 99.9% uptime SLA.
        • Containers with cloud-based consumption and the built-in ability to scale as needed.
        • The ability to leverage existing Azure commitments.

        Companies choosing ARO can implement a container orchestration platform with OpenShift on Azure without needing to hire and retain talent, or budget for new operations people to manage new platforms, allowing developers to concentrate on business innovation rather than running infrastructure.

        Eliminate the operational complexity of deploying and managing an enterprise container platform at scale, while ensuring guaranteed uptime and availability with a defined SLA, security and compliance.

        Reduce operational costs by only paying for what you need, when you need it.

        Maintain a single agreement with Red Hat and Microsoft, requiring no separate contract or subscription while gaining joint support and security from Microsoft and Red Hat.

        Customer stories OpenShift on Azure

        Multinational Airline Technical Support Company


        A Multinational Airline Technical Support company developed its digital software as a service (SaaS) platform for maintenance, repair and overhaul operations using Red Hat Linux and other open-source technologies. The company wanted to move the solution to the cloud.


        The company chose to migrate the solution to Microsoft Azure for its robust and flexible infrastructure capabilities, its network of global data centers and its support for open-source solutions.


        The company can run its open-source technology stack easily on Azure, helping the company provide airlines with solutions that cut costs, optimize operations and improve safety. The stage is set for more exciting future developments.

        Customer stories ARO

        Midmarket Insurance Company


        A Midmarket Insurance Company was lacking in-house skills to effectively run or manage OpenShift themselves.


        ARO gave the company the ability to quickly deploy an OpenShift cluster on Azure, in specific regions, for quick consumption, lowering the time to drive value. 


        The Insurance Company was able to focus its efforts on business outcomes and end user benefits through application development and integration.
