Getting started with Azure Red Hat OpenShift (ARO)


This is the third blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of the OpenShift Container Platform.

The first blog post compared OpenShift with Kubernetes and showed the benefits of an enterprise-grade solution for container orchestration.

The second blog post introduced some of OpenShift's basic concepts and architecture components.

This third blog post shows how to deploy the ARO solution in Azure.

The last blog post will cover how to use the ARO/OpenShift solution to host applications.

ARO cluster deployment

Until recently, ARO was a preview feature that had to be enabled explicitly; this is no longer necessary.

Prerequisites

I recommend upgrading to the latest Azure CLI. I am using Ubuntu 18.04 as the host for interacting with the Azure API. Use the following command to upgrade only azure-cli before you start.

sudo apt-get install --only-upgrade azure-cli -y

After upgrading the Azure CLI, log in to your subscription:

az login
az account set -s $subscriptionId

Resource groups

Before we start with the cluster deployment, it is important to mention that the user performing the deployment needs permission to create resource groups in the subscription. Similarly to AKS, an ARO deployment creates a separate resource group for the cluster resources, so it uses two resource groups in total: one for the managed application and one for the cluster resources. You cannot simply deploy ARO into a single existing resource group.

I am using a new resource group for all my ARO-related base resources, such as the VNet and the managed application.

az group create --name dk-os-weu-sbx --location westeurope

VNet and subnets

I am creating a new VNet for this deployment; however, it is also possible to use an existing VNet with two new subnets for the master and worker nodes.

az network vnet create \
  -g "dk-os-weu-sbx" \
  -n vnet-aro-weu-sbx \
  --address-prefixes 10.0.0.0/16 \
  >/dev/null 

After the VNet, I'm creating my two subnets: one for the workers and one for the masters. For the workers it is important to allocate a large enough address range to allow for autoscaling. I am also enabling the service endpoint for the Microsoft Container Registry so the nodes can reach it over the Azure backbone.

az network vnet subnet create \
    -g "dk-os-weu-sbx" \
    --vnet-name vnet-aro-weu-sbx \
    -n "snet-aro-weu-master-sbx" \
    --address-prefixes 10.0.1.0/24 \
    --service-endpoints Microsoft.ContainerRegistry \
    >/dev/null 
az network vnet subnet create \
    -g "dk-os-weu-sbx" \
    --vnet-name vnet-aro-weu-sbx \
    -n "snet-aro-weu-worker-sbx" \
    --address-prefixes 10.0.2.0/23 \
    --service-endpoints Microsoft.ContainerRegistry \
    >/dev/null

After creating the subnets, we need to disable the private link service network policies on them; this is required so that the ARO service can connect to and manage the cluster.

az network vnet subnet update \
  -g "dk-os-weu-sbx" \
  --vnet-name vnet-aro-weu-sbx \
  -n "snet-aro-weu-master-sbx" \
  --disable-private-link-service-network-policies true \
    >/dev/null
az network vnet subnet update \
  -g "dk-os-weu-sbx" \
  --vnet-name vnet-aro-weu-sbx \
  -n "snet-aro-weu-worker-sbx" \
  --disable-private-link-service-network-policies true \
    >/dev/null
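To double-check that the policy change took effect, you can query the subnets; this is just a convenience check, shown here for the master subnet:

```shell
# Confirm the private link service network policies are disabled on the
# master subnet; repeat with the worker subnet name for the other one.
policyState=$(az network vnet subnet show \
  -g "dk-os-weu-sbx" \
  --vnet-name vnet-aro-weu-sbx \
  -n "snet-aro-weu-master-sbx" \
  --query privateLinkServiceNetworkPolicies -o tsv)
echo "master subnet private link policies: $policyState"
```

The query should report Disabled for both subnets before you continue.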

Azure resource providers

To be able to deploy an ARO cluster, several Azure resource providers need to be registered first.

az provider register -n "Microsoft.RedHatOpenShift"  
az provider register -n "Microsoft.Authorization" 
az provider register -n "Microsoft.Network" 
az provider register -n "Microsoft.Compute"
az provider register -n "Microsoft.ContainerRegistry"
az provider register -n "Microsoft.ContainerService"
az provider register -n "Microsoft.KeyVault"
az provider register -n "Microsoft.Solutions"
az provider register -n "Microsoft.Storage"

Red Hat pull secret

In order to get access to Red Hat provided content, an image pull secret needs to be acquired from Red Hat and saved locally; I am saving it as pull-secret.txt, which is the file referenced later in the create command.
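The pull secret can be downloaded from the Red Hat OpenShift Cluster Manager portal after logging in with a Red Hat account. Assuming you saved it as pull-secret.txt, a quick jq check confirms the file is usable:

```shell
# Sanity-check the downloaded pull secret: it should be a JSON document
# with a non-empty "auths" map (pull-secret.txt is my file name convention).
if jq -e '.auths | length > 0' pull-secret.txt >/dev/null 2>&1; then
  echo "pull-secret.txt looks valid"
else
  echo "pull-secret.txt is missing or malformed" >&2
fi
```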

Cluster deployment

Create the cluster with the following command, replacing the relevant parameters with your own values.

az aro create \
  -g "dk-os-weu-sbx" \
  -n "aroncweutest" \
  --vnet vnet-aro-weu-sbx \
  --master-subnet "snet-aro-weu-master-sbx" \
  --worker-subnet "snet-aro-weu-worker-sbx" \
  --cluster-resource-group "aro-aronctest" \
  --location "westeurope" \
  --pull-secret @pull-secret.txt

If you are planning to use a custom company domain, use the --domain parameter to define it. The full list of parameters can be found in the Azure CLI documentation.

Deployment takes about 30-40 minutes, and the output contains the relevant connection information:

apiserverProfile contains the API URL necessary to log in to the cluster.

consoleProfile contains the URL for the web UI.

The command deploys a cluster with 3 masters and 3 workers. The masters will be D8s_v3 and the workers D4s_v3 machines, deployed across three different Availability Zones within the region. If these default sizes do not fit your purpose, they can be overridden in the create command with the --master-vm-size and --worker-vm-size parameters.
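With explicit sizing, the create command could look like this (a sketch; the VM sizes are illustrative, and --worker-count sets the initial number of workers):

```shell
# Sketch of a create command with explicit node sizes and worker count
# (the values shown are illustrative, not recommendations).
az aro create \
  -g "dk-os-weu-sbx" \
  -n "aroncweutest" \
  --vnet vnet-aro-weu-sbx \
  --master-subnet "snet-aro-weu-master-sbx" \
  --worker-subnet "snet-aro-weu-worker-sbx" \
  --master-vm-size Standard_D8s_v3 \
  --worker-vm-size Standard_D4s_v3 \
  --worker-count 4 \
  --pull-secret @pull-secret.txt
```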

If a deployment fails or the cluster needs to be removed for any other reason, use the following command to delete it. The complete removal of a successfully deployed cluster can take about 30-40 minutes.

az aro delete -g "dk-os-weu-sbx" -n "aroncweutest"

Connecting to a cluster

After a successful deployment, the cluster admin credentials can be retrieved with the following command.

az aro list-credentials --name "aroncweutest" --resource-group "dk-os-weu-sbx"

The command will return the following JSON (the values shown here are examples):

{ 
  "kubeadminPassword": "vJiK7-I9MZ7-RKrPP-9V5Gi", 
  "kubeadminUsername": "kubeadmin" 
}

To log in to the cluster web UI, open the URL provided by the consoleProfile.url property and enter the credentials.

After a successful login, the dashboard shows the initial cluster health.

To log in to the API through the CLI, download the oc binary and log in against the URL from the apiserverProfile.url property:

oc login <apiserverProfile.url>

Enter the kubeadmin credentials when prompted, and you can start using the oc command to manage the cluster.
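Once logged in, a couple of quick commands confirm that everything works (assuming oc is on your PATH):

```shell
# Verify CLI access to the cluster after logging in.
oc whoami      # prints the logged-in user, e.g. kubeadmin
oc get nodes   # should list the three master and three worker nodes
```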

Azure AD Integration

With Azure Active Directory (AAD) integration through OAuth, companies can leverage their existing team structures and groups from Active Directory to separate responsibilities and control access to the OpenShift cluster.

To start, create a few environment variables to support the implementation:

domain=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query clusterProfile.domain -o tsv)
location=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query location -o tsv)
apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query apiserverProfile.url -o tsv)
webConsole=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query consoleProfile.url -o tsv)
oauthCallbackURL=https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD

appName=app-dk-os-eus-sbx
appSecret="SOMESTRONGPASSWORD"
tenantId=YOUR-TENANT-ID
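Before continuing, it is worth verifying that the lookups succeeded; an empty variable usually means a typo in the resource group or cluster name. A small convenience check:

```shell
# Print every variable used below; an empty value means one of the
# az queries above failed and should be re-run.
for var in domain location apiServer webConsole oauthCallbackURL; do
  eval "val=\${$var}"
  if [ -n "$val" ]; then
    echo "$var=$val"
  else
    echo "WARNING: $var is empty" >&2
  fi
done
```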

An Azure AD Application needs to be created to integrate OpenShift authentication with Azure AD.

az ad app create \
  --display-name $appName \
  --reply-urls $oauthCallbackURL \
  --password $appSecret \
  --query appId -o tsv

Create a new environment variable with the application id.

appId=$(az ad app list --display-name $appName | jq -r '.[] | "\(.appId)"')

The application needs the Azure Active Directory Graph User.Read permission as a delegated scope.

az ad app permission add \
  --api 00000002-0000-0000-c000-000000000000 \
  --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
  --id $appId

Create optional claims to use e-mail with a UPN fallback for authentication.

cat > manifest.json << EOF
[{
  "name": "upn",
  "source": null,
  "essential": false,
  "additionalProperties": []
},
{
  "name": "email",
  "source": null,
  "essential": false,
  "additionalProperties": []
}]
EOF

Configure the optional claims for the application:

az ad app update \
  --set optionalClaims.idToken=@manifest.json \
  --id $appId

Configure an OpenShift OpenID authentication secret.

oc create secret generic openid-client-secret-azuread \
  --namespace openshift-config \
  --from-literal=clientSecret=$appSecret

Create an OpenShift OAuth resource object which connects the cluster with AAD. Write it to oidc.yaml with a heredoc so the shell substitutes $appId and $tenantId:

cat > oidc.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: $appId
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/$tenantId
EOF

Apply the YAML file to create the resource (you need to be logged in as the kubeadmin user).

The reply URL in the AAD application needs to point to the oauthCallbackURL; this can be changed through the Azure portal.

oc apply -f oidc.yaml
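The reply URL change mentioned above can also be made from the CLI instead of the portal (note that older az releases use --reply-urls, while newer ones renamed it; check your version):

```shell
# Point the AAD application's reply URL at the cluster's OAuth callback.
az ad app update --id "$appId" --reply-urls "$oauthCallbackURL"
```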

After a few minutes you should be able to log in on the OpenShift web UI with any AAD user: choose the AAD option on the login page, enter the AAD credentials, and start to work.

In my next blog post I am going to continue with some basic application implementations on OpenShift ARO.

How could this work for your business? 

Come speak to us and we will walk you through exactly how it works.

Ilja Summala
CTO
Ilja’s passion and tech knowledge help customers transform how they manage infrastructure and develop apps in cloud.