An introduction to OpenShift

This is the second blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of the OpenShift container platform.

In the first blog post I compared OpenShift with vanilla Kubernetes and showed the benefits of enterprise-grade solutions in container orchestration.

This, the second blog post, introduces some of the basic OpenShift concepts and architecture components.

The third blog post is about how to deploy the ARO solution in Azure.

The last blog post covers how to use the ARO/OpenShift solution to host applications.

OpenShift Architecture

OpenShift is a turn-key, enterprise-grade, secure and reliable containerisation solution built on open source Kubernetes, adding components that provide out-of-the-box self-service, dashboards, CI/CD automation, a container image registry, multilingual support and other enterprise-grade Kubernetes extensions.

The following diagram depicts the architecture of the OpenShift Container Platform, highlighting in green the components added or modified by Red Hat.

Figure 1 – OpenShift Container Platform architecture (green components are new or modified)

RHEL CoreOS – The base operating system is Red Hat Enterprise Linux CoreOS. CoreOS is a lightweight RHEL variant providing essential OS features; it combines the ease of over-the-air updates from Container Linux with the Red Hat Enterprise Linux kernel for container hosts.

CRI-O – CRI-O is a lightweight Docker alternative. It's an implementation of the Kubernetes Container Runtime Interface, enabling the use of Open Container Initiative (OCI) compatible runtimes. CRI-O supports OCI container images from any container registry.

Kubernetes – Kubernetes is the de facto, industry-standard container orchestration engine, managing several hosts (masters and workers) to run containers. Kubernetes resources define how applications are built, operated, managed, etc.

etcd – etcd is a distributed key-value database, storing cluster configuration and Kubernetes object state information.

OpenShift Kubernetes Extensions – OpenShift Kubernetes Extensions are Custom Resource Definitions (CRDs) stored in the Kubernetes etcd database, providing additional functionality compared to a vanilla Kubernetes deployment.

Containerized Services – Most internal features run as containers on the Kubernetes environment, fulfilling base infrastructure functions such as networking, authentication, etc.

Runtimes and xPaaS – These are ready-to-use base container images and templates for developers: a set of base images for JBoss middleware products such as JBoss EAP and ActiveMQ, and for other languages and databases (Java, Node.js, PHP, MongoDB, MySQL, etc.).

DevOps Tools – The REST API provides the main point of interaction with the platform. The web UI, CLI or other third-party CI/CD tools can connect to this API and allow end users to interact with the platform.
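To make the API-first design concrete, here is a minimal Python sketch of talking to that REST API directly. The cluster URL, token and CA path are placeholders (a token can be obtained with oc whoami -t); the projects endpoint is part of the standard OpenShift API group.

```python
import requests

API = "https://api.my-cluster.example.com:6443"   # placeholder cluster API URL
TOKEN = "<token from `oc whoami -t`>"             # placeholder bearer token

# Everything the web UI and CLI do ends up as REST calls like this one:
# list the projects visible to the authenticated user.
resp = requests.get(
    f"{API}/apis/project.openshift.io/v1/projects",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify="/etc/ssl/cluster-ca.crt",  # placeholder path to the cluster CA bundle
)
resp.raise_for_status()
for project in resp.json()["items"]:
    print(project["metadata"]["name"])
```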

With the architecture components described in the previous section, the OpenShift platform provides automated development workflows, allowing developers to concentrate on business outcomes rather than learning about Kubernetes or containerization in detail.

Main OpenShift components

OpenShift Nodes

Similarly to vanilla Kubernetes, OpenShift distinguishes between two node types: cluster masters and cluster workers.

Cluster Masters

Cluster masters run the services required to control the OpenShift cluster, such as the API Server, etcd and the Controller Manager Server.

The API Server validates and configures Kubernetes objects.

The etcd database stores the object configuration information and state.

The Controller Manager Server watches the etcd database for changes and enforces those through the API server on the Kubernetes objects.

Kubelet is the service which manages requests related to local containers on the masters.

CRI-O and the Kubelet run as systemd-managed services.
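The watch-based control loop described above is easy to illustrate with the official Kubernetes Python client. This is a toy observer rather than OpenShift's actual controller code: it streams pod events from the API server, which is how controllers follow etcd-backed state without polling.

```python
from kubernetes import client, config, watch

config.load_kube_config()  # use load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# Controllers don't poll: they stream change events from the API server
# (backed by etcd) and react to each one.
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name)
```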

Cluster Workers

Cluster workers run three main services: the Kubelet, kube-proxy and the CRI-O container runtime. Workers are grouped into MachineSet custom resources.

Kubelet is the service which accepts the requests coming from the Controller Manager Server, implementing changes in resources and deploying or destroying resources as requested.

Kube-proxy manages communication to the Pods and across worker nodes.

CRI-O is the container runtime.

As in vanilla Kubernetes, the smallest object in an OpenShift cluster is the Pod.

MachineSets are custom resources grouping nodes (such as worker nodes) to manage autoscaling and the running of Kubernetes compute resources (Pods).

High Availability is built into the platform by running control plane services on multiple masters and running application resources in ReplicaSets behind Services on worker nodes.

Operators

Operators are the preferred method of managing services on the OpenShift control plane. Operators integrate with Kubernetes APIs and CLI tools, performing health checks, managing updates, and ensuring that the service/application remains in a specified state.
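At its core, an operator is a reconcile loop: read the desired state declared in a custom resource, compare it with the actual state, and correct any drift. Below is a deliberately simplified Python sketch; the example.com/v1 webapps CRD is hypothetical, and it assumes each custom resource is paired with a Deployment of the same name.

```python
import time
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()
apps = client.AppsV1Api()

# Hypothetical CRD coordinates, for illustration only.
GROUP, VERSION, PLURAL = "example.com", "v1", "webapps"

while True:
    for cr in crds.list_cluster_custom_object(GROUP, VERSION, PLURAL)["items"]:
        ns, name = cr["metadata"]["namespace"], cr["metadata"]["name"]
        desired = cr["spec"]["replicas"]
        dep = apps.read_namespaced_deployment(name, ns)
        if dep.spec.replicas != desired:
            # Drift detected: drive the actual state back to the desired state.
            dep.spec.replicas = desired
            apps.patch_namespaced_deployment(name, ns, dep)
    time.sleep(10)  # real operators watch for events instead of sleeping
```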

Platform operators

Operators include critical networking, monitoring and credential services. Platform operators are responsible for managing services related to the entire OpenShift platform, and provide an API that allows administrators to configure these components.

Application operators

Application-related operators are managed by the Operator Lifecycle Manager (OLM). These operators are either Red Hat operators or certified operators from third parties, and can be used to manage specific application workloads on the clusters.

Projects

Projects are custom resources used in OpenShift to group Kubernetes resources and to provide access for users based on these groupings. Projects can also receive quotas to limit the available resources, number of pods, volumes etc. A project allows a team to organize and manage their workloads in isolation from other teams.
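Because a project is backed by a regular Kubernetes namespace, attaching a quota to it can be sketched with the standard ResourceQuota API. The namespace name and the limits below are made up for illustration.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Cap what the "team-a" project may consume (names and numbers are examples).
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "pods": "20",
        "requests.cpu": "4",
        "requests.memory": "8Gi",
        "persistentvolumeclaims": "5",
    }),
)
v1.create_namespaced_resource_quota(namespace="team-a", body=quota)
```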

Networking

OpenShift uses Service, Ingress and Route resources to manage network communication between pods and route traffic to the pods from cluster external sources.

A Service resource exposes a single IP address and load-balances traffic between the pods sitting behind it within the cluster.

A Route resource provides a DNS record, making the service available to cluster-external sources.

The Ingress Operator implements an ingress controller API and enables external access to services running on the OpenShift Container Platform.
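Routes are OpenShift custom resources, so they can be created through the generic custom-objects API. A sketch, assuming a Service named frontend already exists in a hypothetical team-a project; the hostname is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# Expose the "frontend" Service at a DNS name, with TLS terminated at the edge.
route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "frontend"},
    "spec": {
        "host": "frontend.apps.my-cluster.example.com",  # placeholder hostname
        "to": {"kind": "Service", "name": "frontend"},
        "tls": {"termination": "edge"},
    },
}
crds.create_namespaced_custom_object(
    "route.openshift.io", "v1", "team-a", "routes", route)
```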

Service Mesh

OpenShift Service Mesh provides operational control over the service mesh functionality and a way to connect, secure and monitor microservice applications running on the platform. It is based on the Istio project, transparently using a mesh of Envoy proxies to provide discovery, load balancing, service-to-service authentication, failure recovery, metrics and monitoring. The solution also provides A/B testing, canary releases, rate limiting, access control and end-to-end authentication.
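As a taste of the canary-release capability, here is a sketch of a plain Istio VirtualService splitting traffic 90/10 between two versions of a hypothetical reviews service. It assumes DestinationRule subsets v1 and v2 are defined elsewhere; OpenShift Service Mesh layers its own control resources on top of the Istio ones.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# Send 90% of traffic to v1 and 10% to the v2 canary of the "reviews" service.
vs = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{"route": [
            {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
            {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
        ]}],
    },
}
crds.create_namespaced_custom_object(
    "networking.istio.io", "v1beta1", "team-a", "virtualservices", vs)
```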

Logging

An integrated Elasticsearch, Fluentd and Kibana (EFK) stack provides the cluster-wide logging functionality. Fluentd is deployed to each node, collecting all node and container logs and writing them to Elasticsearch. Kibana is the visualization tool, where developers and administrators can create dashboards.

Monitoring

OpenShift has an integrated pre-installed monitoring solution based on the wider Prometheus ecosystem. It monitors cluster components and alerts cluster administrators about issues. It uses Grafana for visualization with dashboards.

Metering

Metering focuses on in-cluster metric data, using Prometheus as the default source of information. Metering enables users to report on namespaces, pods and other Kubernetes resources, and allows the generation of reports with periodic ETL jobs using SQL queries.

Serverless

OpenShift Serverless uses Kubernetes-native APIs, as well as familiar languages and frameworks, to deploy applications and container workloads. OpenShift Serverless is based on the open source Knative project, providing portability and consistency across hybrid and multi-cloud environments.

Container-native virtualization

Container-native virtualization allows administrators and developers to run and manage virtual machine workloads alongside container workloads. It allows the platform to create and manage Linux and Windows virtual machines, and to import and clone existing virtual machines. It also provides live migration of virtual machines between nodes.

Automation, CI/CD

OpenShift comes with integrated features such as Source-to-Image (S2I) and image streams to help developers execute changes on their applications much quicker than in a vanilla Kubernetes environment.

Docker build

The Docker build image build strategy allows developers with Docker containerisation knowledge to define their own Dockerfile-based image builds. It expects a repository with a Dockerfile and all required artefacts.

Source-to-Image

Source-to-Image can pull code from a repository, detect the necessary runtime, and build and start a base image required to run that specific code in a Pod. If the image builds successfully, it is uploaded to the OpenShift internal image registry and the Pod can be deployed on the platform. External tools can be used to implement some of the CI features and extend the OpenShift CI/CD functionality, for example with tests.
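Under the hood, an S2I build is driven by a BuildConfig resource. Here is a hedged sketch of one, created through the custom-objects API: the Git repository URL and project name are placeholders, and the builder image is assumed to be one of the platform's standard image stream tags.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# S2I build: turn a Git repository into a runnable image without a Dockerfile.
bc = {
    "apiVersion": "build.openshift.io/v1",
    "kind": "BuildConfig",
    "metadata": {"name": "my-app"},
    "spec": {
        # Placeholder repository containing the application source.
        "source": {"git": {"uri": "https://github.com/example/my-app.git"}},
        "strategy": {"sourceStrategy": {
            "from": {"kind": "ImageStreamTag", "name": "python:3.8"}}},
        "output": {"to": {"kind": "ImageStreamTag", "name": "my-app:latest"}},
    },
}
crds.create_namespaced_custom_object(
    "build.openshift.io", "v1", "team-a", "buildconfigs", bc)
```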

Image streams

Image streams can be used to detect changes in application code or source images, and to force a Pod rebuild/redeploy action to implement the changes. An image stream groups container images marked by tags and can manage the related container lifecycle accordingly. Image streams can automatically update a deployment when a new base image has been released onto the platform.

OpenShift Pipelines

With OpenShift Pipelines, developers and cluster administrators can automate the processes of building, testing and deploying application code to the platform. With pipelines it is possible to minimize human error through a consistent process. A pipeline could include compiling code, unit tests, code analysis, security scanning, installer creation, container build and deployment. Tekton-based pipeline definitions use Kubernetes CRDs (Custom Resource Definitions) and the control plane to run pipeline tasks, and can be integrated with Jenkins, Knative and others. In OpenShift Pipelines each pipeline step runs in its own container, allowing steps to scale independently.
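A minimal Tekton Task gives a feel for how pipeline steps map to containers. This sketch assumes OpenShift Pipelines (Tekton) is installed; the test step and image are illustrative, and a real pipeline would chain several such tasks together.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# Each step of a Tekton task runs in its own container inside the task's pod.
task = {
    "apiVersion": "tekton.dev/v1beta1",
    "kind": "Task",
    "metadata": {"name": "run-tests"},
    "spec": {"steps": [{
        "name": "pytest",
        "image": "python:3.8",
        "script": "pip install -r requirements.txt && pytest",
    }]},
}
crds.create_namespaced_custom_object(
    "tekton.dev", "v1beta1", "team-a", "tasks", task)
```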

These are the main components and features of OpenShift which help developers and cluster administrators deliver value to their company and users much faster and more easily.

In the next blog post, I will walk through a step-by-step Azure Red Hat OpenShift (ARO) deployment.

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.

How can you maximize the value from your data?

Your market is changing in faster and less predictable ways than ever before. Enterprises of all sizes suffer from data quality and access issues, producing a material loss in the value that this data should be creating.

Diverse and disjointed approaches add to the problem, requiring massive and repetitive efforts to produce quite simple yet highly valuable insights.

Overall, costs get out of control, benefits are reduced and your business suffers.

Yet you and your business deserve so much more from your data. You deserve Data Estate Modernization. With harmonized cloud tech capabilities from Azure it’s possible to keep up-to-date and deal with the changes, and in some cases be ready before they hit.

So how does this work?

At Nordcloud we will make this simple for you, to efficiently use and maximize the value of your data, using Microsoft Azure. Data capabilities now need to be fitter than ever before to support an optimised business.

With Nordcloud and Microsoft Azure you gain a number of benefits, including:

  • Rapid time to value by automating manual tasks and embracing the reuse of intellectual capital
  • Data quality continually improved across the entire organization
  • Bypass the bottlenecks so common in IT processes
  • Drive successful service deliveries with less complexity
  • Ensure security and compliance
  • Create a culture within your company of embracing data and the value that it can bring

Overall, it's important to introduce a bit of joined-up thinking when it comes to your data estate, and Data Estate Modernization enables you to do just that.

How can you get started?

Getting started is easier than businesses think, and a great place to begin is cost optimization, which sets the stage for the overall journey towards new revenues and making your business data-driven.

But how can you get more out of your data estate, to deliver more with less investment?

Tech harmonization using modern tooling from Azure simplifies the generation of key insights and enables your teams to share, consume and process data from across your organization.

Platform simplification and automation drive higher efficiency and reduce the reliance on repetitive mundane tasks, making data democratization a part of your company’s DNA.

How can you get started with Data Estate Modernisation? 

Sign up to one of our upcoming free webinars to learn more.

Introducing Google Coral Edge TPU – a New Machine Learning ASIC from Google

The Google Coral Edge TPU is a new machine learning ASIC from Google. It performs fast TensorFlow Lite model inferencing with low power usage. We take a quick look at the Coral Dev Board, which includes the TPU chip and is available in online stores now.

Photo by Gravitylink

Overview

Google Coral is a general-purpose machine learning platform for edge applications. It can execute TensorFlow Lite models that have been trained in the cloud. It’s based on Mendel Linux, Google’s own flavor of Debian.

Object detection is a typical application for Google Coral. If you have a pre-trained machine learning model that detects objects in video streams, you can deploy your model to the Coral Edge TPU and use a local video camera as the input. The TPU will start detecting objects locally, without having to stream the video to the cloud.

The Coral Edge TPU chip is available in several packages. You probably want to buy the standalone Dev Board which includes the System-on-Module (SoM) and is easy to use for development. Alternatively you can buy a separate TPU accelerator device which connects to a PC through a USB, PCIe or M.2 connector. A System-on-Module is also available separately for integrating into custom hardware.

Comparing with AWS DeepLens

Google Coral is in many ways similar to AWS DeepLens. The main difference from a developer's perspective is that DeepLens integrates with the AWS cloud. You manage your DeepLens devices and deploy your machine learning models using the AWS Console.

Google Coral, on the other hand, is a standalone edge device that doesn’t need a connection to the Google Cloud. In fact, setting up the development board requires performing some very low level operations like connecting a USB serial port and installing firmware.

DeepLens devices are physically consumer-grade plastic boxes and they include fixed video cameras. DeepLens is intended to be used by developers at an office, not integrated into custom products.

Google Coral's System-on-Module, in contrast, packs the entire system in a 40×48 mm module. That includes all the processing units, networking features, connectors, 1GB of RAM and an 8GB eMMC where the operating system is installed. If you want to build a custom hardware solution, you can build it around the Coral SoM.

The Coral Development Board

To get started with Google Coral, you should buy a Dev Board for about $150. The board is similar to Raspberry Pi devices. Once you have installed the board, it only requires a power source and a WiFi connection to operate.

Here are a couple of hints for installing the board for the first time.

  • Carefully read the instructions at https://coral.ai/docs/dev-board/get-started/. They take you through all the details of how to use the three different USB ports on the device and how to install the firmware.
  • You can use a Mac or a Linux computer but Windows won’t work. The firmware installation is based on a bash script and it also requires some special serial port drivers. They might work in Windows Subsystem for Linux, but using a Mac or a Linux PC is much easier.
  • If the USB port doesn’t seem to work, check that you aren’t using a charge-only USB cable. With a proper cable the virtual serial port device will appear on your computer.
  • The MDT tool (Mendel Development Tool) didn't work for us. Instead, we had to use the serial port to log in to the Linux system and set up SSH manually.
  • The default username/password of Mendel Linux is mendel/mendel. You can use those credentials to log in through the serial port, but the password doesn't work through SSH. You'll need to add your public key to .ssh/authorized_keys.
  • You can set up a WiFi network so you won't need an ethernet cable. The getting started guide has instructions for this.

Once you have a working development board, you might want to take a look at Model Play (https://model.gravitylink.com/). It’s an Android application that lets you deploy machine learning models from the cloud to the Coral development board.

Model Play has a separate server installation guide at https://model.gravitylink.com/doc/guide.html. The server must be installed on the Coral development board before you can connect your smartphone to it. You also need to know the local IP address of the development board on your network.

Running Machine Learning Models

Let’s assume you now have a working Coral development board. You can connect to it from your computer with SSH and from your smartphone with the Model Play application.

The getting started guide has instructions for trying out the built-in demonstration application called edgetpu_demo. This application will work without a video camera. It uses a recorded video stream to perform real-time object recognition to detect cars in the video. You can see the output in your web browser.

You can also try out some TensorFlow Lite models through the SSH connection. If you have your own models, check out the documentation on how to make them compatible with the Coral Edge TPU at https://coral.ai/docs/edgetpu/models-intro/.
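For a feel of what inference looks like in code, here is a minimal sketch using the tflite_runtime package with the Edge TPU delegate, following the pattern in the Coral documentation. The model filename is a placeholder for a model compiled for the Edge TPU, and the dummy input stands in for real camera frames.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a model compiled for the Edge TPU (filename is a placeholder).
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one dummy frame of the expected shape and dtype, then read the result.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```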

If you just want to play around with existing models, the Model Play application makes it very easy. Pick one of the provided models and tap the Free button to download it to your device. Then tap the Run button to execute it.

Connecting a Video Camera and Sensors

If you buy the Coral development board, make sure to also get the Video Camera and Sensor accessories for about $50 extra. They will let you apply your machine learning models to something more interesting than static video files.

Photo by Gravitylink

Alternatively you can also use a USB UVC compatible camera. Check the instructions at https://coral.ai/docs/dev-board/camera/#connect-a-usb-camera for details. You can use an HDMI monitor to view the output.

Future of the Edge

Google has partnered with Gravitylink for Coral product distribution. They also make the Model Play application that offers the Coral demos mentioned in this article. Gravitylink is trying to make machine learning fun and easy with simple user interfaces and a directory of pre-trained models.

Once you start developing more serious edge computing applications, you will need to think about issues like remote management and application deployment. At this point it is still unclear whether Google will integrate Coral and Mendel Linux to the Google Cloud Platform. This would involve device authentication, operating system updates and application deployments.

If you start building on Coral right now, you’ll most likely need a custom management solution. We at Nordcloud develop cloud-based management solutions for technologies like AWS Greengrass, AWS IoT and Docker. Feel free to contact us if you need a hand.

Nordcloud celebrates top spot worldwide for cloud services in the Magic Quadrant

Gartner has named Nordcloud the top cloud-native Managed Service Provider (MSP) for the execution of cloud Professional and Managed Services in its 2020 Magic Quadrant.

Gartner has just published its 2020 Magic Quadrant for Professional and Managed Cloud Services. Nordcloud has been placed in the top position among hyperscaler-only MSPs and fourth in the world overall for cloud service execution. This is an outstanding achievement. The success is the result of four ingredients – our customers' commitment to their cloud journey and ours, our people's Nordcloudian cloud passion and skill, the ongoing strategic support from our hyperscale partners Amazon AWS, Microsoft Azure and Google Cloud, and our unwavering ambition to enable the power of the cloud through tools and best practice.

Nordcloud's speed and skill in the cloud journey is at the centre of the award. The research shows that Nordcloud's ability to use highly technical resources and a differentiated, agile and flexible way of working (Gartner calls this mode 2) delivers the pace and agility needed for cloud adoption that legacy MSPs simply cannot match.

Our clients have clearly benefited from quicker and more robust business and technology outcomes. It is our cloud-native style and mindset which has given us an edge in the way we work, in a sustainably differentiating way. Put simply, we want customers to get the best out of public cloud without delay, in every case.

Rest assured, going forward, we will continue to challenge the traditional methods and models, bringing new styles of speed and value to our clients. Our clients deserve this given the success they have helped us achieve so far.

Want to embark on a cloud journey in an accelerated, cloud-native powered way?

Please don’t hesitate to contact our team.

Nordcloud is a scaling cloud-native Managed Services Provider that enables organisations to build their digital ambitions by leveraging the power of the public cloud. Nordcloud has 10 hubs in Europe and over 500 employees. Nordcloud powers organisations from mid-sized to large corporates.

What it’s like to be a new hire during Covid-19

We've all been there before: the thrill of getting that call when your future manager makes the offer. You are so excited, counting the days until you finally get to meet your new colleagues in person, get familiarized with the new office culture and perhaps even find a few new local favorite lunch spots.

None of that became a reality for me when I joined Nordcloud in the middle of the COVID-19 outbreak.

My name is Marta Clements and I thought I would share with you my story of what it has been like to be a new hire at Nordcloud in these very unusual and challenging times.

Where are you from and how did you end up in Nordcloud?

I was born and raised in Barcelona with an American mother and a Spanish father. From an early age I was exposed to a very international upbringing, so it is not surprising that I followed in my parents' footsteps, wanting to experience different cultures. One opportunity led to the next and, fast forward a few countries, a few jobs, a husband and a 6-year-old son, here I am in Stockholm working as a Talent Acquisition Partner for a Finnish company.

My decision to join Nordcloud was super easy. Who wouldn’t want to join a super talented and diverse group of people working on cutting edge technology in a true collaborative spirit?

What is your core competence? Please tell us about your role.

I joined the Talent Acquisition team and as the role suggests my core competence is identifying the best talent to support the growth of the business. What I like the most about my role is that it doesn’t matter how many interviews I’ve conducted or been part of, I am yet to find two identical interviews.

Coming from a previous commercial background, I also get a kick when an offer has been made and a candidate says yes. It’s exciting to think one can have a positive impact in somebody’s life and equally find people that have a positive impact on the business and help us grow.

What has it been like to join Nordcloud during Covid -19?

Remote working has a new meaning for all of us now, but even more so when your first week in your new job also becomes remote. So much has happened in this last month but when I joined Nordcloud it was still early days of the virus outbreak in Sweden. 

Prior to joining I had received a very comprehensive welcome pack with all the information I needed during my first weeks. Included there was also a section explaining Nordcloud’s Covid-19 policy which urged all employees to work from home.

My new laptop was being delivered to the office, so one could say that my first day was still relatively normal as I got to meet two colleagues in person. In fact they are still the only colleagues I have physically met and it’s been almost a month! As the laptop delivery was experiencing delays, I also got to lock up the office and set the alarm on my first day. Now that’s what I call trusting your employees! Not bad at all, I told myself. I’m going to be doing just fine. 

The original plan for my induction at Nordcloud would have been categorized as a dream for any new hire: My line manager from Finland together with one of my colleagues from the Polish office were going to fly to Stockholm and help me get acclimatized to Nordcloud during my first week. Unfortunately that warm welcome never took place in person, but it’s amazing how fast we have adapted to our new reality and made the most of video conferencing. Who is to say that one cannot get to know your colleagues this way?

What has it been like to follow Nordcloud’s induction program remotely?

At Nordcloud it is common practice to group new hires together and have them start on the same day every month. This means we have people joining from very different backgrounds across the various European office locations. As such, the general induction sessions are usually conducted remotely and you can immediately sense that Nordcloud has a good grasp of it. The sessions are so well organized and prepared. During these sessions you also get to meet some of the leadership such as our CEO Jan Kritz, who goes over “Nordcloud in a nutshell”. Even though we were all remote, Jan still urged every single one of us to introduce ourselves to the group. He also made a point of switching on our cameras to make it more personal.

In addition to the group induction program, my team had also put together an induction plan relevant to my role. My entire first 3 weeks had been planned and thought through. They had even put together a presentation to outline the plan! That's when you realize how much energy and effort they have put in to make you feel welcome and, most importantly, to help you be successful.

Remote or not, I can’t imagine how one can feel more welcomed.

What can you share about Nordcloud’s culture so far?

Prior to Covid-19, if you had asked me if one could grasp a company’s culture through videoconferencing and remote working, I would have probably been inclined to say that you couldn’t.

To my surprise: You absolutely can! As I mentioned earlier, I have only physically met two Nordcloudians and yet I feel as if I have a good understanding of the Nordcloud culture.

If I had to summarize it in 3 words I would say: Collaborative, transparent and positive problem solvers.

Yet there is so much more to the culture than that, but hopefully you will discover it yourself one day.

Remote coffee breaks definitely take the prize. I have never been invited to so many remote coffee breaks. If I wanted to I could attend a coffee break every day.  We even have an app that matches your availability with 3 other random colleagues across Europe and you get to have a remote break together. What a fantastic way to get to meet new people while keeping some mental sanity during these Covid-19 times.

What is the most useful thing you have learned at Nordcloud?

Having grown up in Spain, I’m always amazed how organized and structured conversations can be when you are engaging with Nordic people.

In Spain we usually interrupt each other and somehow the conversations have a different tempo. I was pleasantly surprised how friendly and chatty my team members are. But one thing is clear to me, there is no interrupting here!

This skill is even more necessary when dealing with video conferencing.

What do you do outside of work?

When we moved to Stockholm from London I wanted to experience the best that Sweden has to offer. To me that meant living in the suburbs close to nature but also close to the city. I can’t imagine a more ideal upbringing when you have a young family. Near our home there is a fantastic bike path surrounded by the beautiful landscape of the sea. We love going there together as a family and having a picnic. I can now officially say that my 6 year old is faster than me on the bike.

I look forward to meeting my colleagues soon!

Interested in joining Nordcloud? Have a look at our open positions and get in touch!

A Recruiter’s Perspective on Remote Work

In the time of the coronavirus you can read a lot about switching to remote ways of working. At Nordcloud we have this in our DNA. However, this article is about something slightly different, although there are some general guidelines here.

Here is the story from Wladyslaw who started supporting recruitment in Scandinavia fully remotely… from our office in Poznań, Poland.

When I came to Nordcloud a year ago, I took on the challenge of supporting recruitment in Scandinavia. Back then, I already had a couple years of experience working as a recruiter and sourcer. Before that, I also worked as an Executive Assistant in two consulting companies, where I remotely supported managers and partners from German-speaking countries.

Needless to say, I felt quite ready for this. Everything seemed quite familiar to me. I knew how to work with remote colleagues, I was really into agile ways of working and I also had a plan in my head on how to do it. I have also been very interested in this region, since I studied Scandinavian politics & culture during my studies.

Now, after one year, I must admit that I had to revise a lot of my assumptions about remote recruitment. Was it challenging? Sometimes yes. Was it worth taking on this challenge? Definitely yes!

The current situation is forcing us to rethink our ways of working in many industries. Working as a remote recruiter in the IT industry is probably easier, because people you work with are a bit more used to it. I can imagine that it’s not always easy. People really appreciate personal contact a lot and it’s much easier to give a good impression if you meet someone face-to-face.

However, it’s not impossible. You can still do it successfully. The most important thing one needs to take into consideration is not to take anything for granted. When you meet someone in the conference room or in the lobby, interaction starts in a natural way. How many times have you been reminded of telling something important to your colleague when you met them in the kitchen? In a virtual world you need to create those situations proactively. 

So, what are the key takeaways from this past year? I would mention 4 of them.

  1. Building good relationships with people

Considering the bigger picture, I think that the cornerstone of every effective remote cooperation is building good collaboration with your working peers. It is of course harder to do when you don't meet these people every day in the office, but it's not impossible. An important source of inspiration for me was the fact that I immediately started working in a recruitment team that was already spread across many locations. Some of the colleagues I closely cooperate with work in Helsinki, London & Stockholm. When I was in doubt, I could reflect on how well we work together.

The biggest challenge with working in such a setup is that it seems counterintuitive. There’s no doubt about the fact that people are social animals – we like to interact with each other and we enjoy personal contact. Therefore, you need to make this as close to the real situation as possible. You need to set up regular catch-ups, ideally with video connections. 

However, it’s not always possible. You might be afraid of using instant messaging (like Slack, Teams etc.) but sometimes this is the only way of doing it. As much as recruitment is important for hiring managers, tech recruiters and other colleagues, you also need to remember that they often have a lot of other stuff to do. Your priorities are not their priorities. And this is normal. Accept it and carry on. Which leads us to the second point, which is…

  2. Persistence is the key!

You would probably agree on how annoying it seems when your partner or friend doesn’t want to respond to this super important message you sent them 20 minutes ago. You check your WhatsApp, SMS, Messenger or whatever and nothing is there. The same goes for your virtual workplace. How many times have you had an ideal candidate with two other offers on the table, but your business partners are away in client workshops and it’s super hard to get hold of them? 

Be clear about your goals. And remember that people expect you to be responsible for delivering them. As cruel as it sounds, no one will care that it was too hard to reach someone in charge of decision-making. It’s you who needs to push through. Even if you don’t see it now, this is how you build a strong credibility in front of your remote colleagues, peers and hiring managers. It will pay off one day. Trust me.

  3. Remembering that this is just a substitute for personal contact

Let's admit it – nothing will replace eye-to-eye contact. I know it sounds brutal, and for some it can even seem contradictory to what I just wrote. But I also believe that staying as true to reality as possible can help you avoid a lot of disappointment. This is how the world works today, and the sooner you adapt to it the better. It's irrelevant whether you perceive it as something positive or negative. This is reality coming true. According to Forbes.com, 50% of the U.S. workforce will soon be remote. The same will happen in other countries in the coming decades.

The current worldwide coronavirus crisis can even accelerate that, now that many companies have realized in practice how it works, and their legacy culture is not impeding it. 

This is a great opportunity for you to be ready when the future knocks on your door!

However, it really helps a lot if you can meet your colleagues in person once in a while. How often would it need to be? It’s really a very individual thing, but from my experience I would say that meeting at least once a year is essential.

Does it mean that if you have never met your colleagues, you will never build a successful and effective team? Of course not, it is totally possible! Then you just need to make this extra effort to make it work, but trust me, it’s really worth it! There are companies that already operate fully remotely. I think that the most famous one currently is Buffer, a company that has created a software application designed to manage accounts in social networks. But how to plan it in the most effective way? This leads me to the last point…

  4. Try agile!

Agile is a great idea that was created at the beginning of the century to improve the quality and delivery of software products. I'm pretty sure you've heard about it already, since it has also become very popular outside of the IT realm nowadays. Although agile offers some concrete tools and methodologies to use (SCRUM being the most famous one), it's not about the tools you use but how you approach daily work. This is an amazing way of working in remote teams, because it forces you to think about how you can work smarter every day. At Nordcloud we started implementing agile in HR by working in SCRUM. We also use tools like Trello and Slack to make our daily communication more effective. If you are an HR person looking for more specific guidelines, I recommend you visit the Agile HR Manifesto website, which was compiled by a team led by Pia-Maria Thoren.

To sum up

Remote work is constantly gaining in popularity and will do so even more in the future. If you work as an HR person, you may have already come across this way of working. It may even be that your employer allows you to work from home once a week. Maybe you have this business partner, hiring manager etc. that you struggle to work effectively with.

Regardless of where you are now and what you do, this will become an even bigger part of your reality much sooner than you think. I hope you find some thoughts in this article useful on your path to success in this rapidly changing world. 

Interested in joining Nordcloud? Have a look at our open positions and get in touch!

The Business Case for Multisourcing

The top IT leader challenge is managing within budget constraints, according to a recent Gartner survey. While dealing with that challenge, IT leaders also have to help build the business strategy (53%) and drive innovation (40%).

Outsourcing has been the obvious way to square this circle – offloading low-value tasks so the business can focus on high-value opportunities. But traditional outsourcing models haven’t been meeting business needs. One study found that 60% of IT outsourcing projects fail to meet their pre-defined targets. That’s why multisourcing is increasingly popular. Instead of outsourcing to a single service provider, companies are using best-in-breed vendors for different elements of their IT landscape. 

Here’s why IT multisourcing is the future – both for cost savings and value delivery.

Single provider outsourcing models are no longer cost-effective

The business case for outsourcing to a single provider was based on economies of scale and convenience (which delivers cost benefits through improved operational efficiency). 

The reality, however, is that the single provider model hasn’t lived up to the savings expectations – and, in many cases, has created unnecessary cost.

Inflexible consumption models, contract lock-in, low agility and increased technical debt all lead to real costs within the business – which easily outweigh the savings from economies of scale and convenience.

Businesses need more agility and flexibility to meet market needs

With rapid changes to the technology landscape and customer expectations, IT has become intrinsic to value delivery. But it only delivers that value when you can flex consumption and services based on market needs. Traditional single provider processes aren’t designed to deliver this flexibility. 

When you multisource, you get a team of best-in-breed providers who use their domain expertise and tooling to fine-tune each service area. You can tweak your team at any time as required, ensuring your IT backbone is always working agilely to maximise your competitive advantage.

You then reap the following business benefits:

  • You’re on the front foot because IT auto-scales – you can bolt capacity/capabilities in and out as required, so your organisation is on the front foot (whether you’re launching new products/services or reacting to factors outside your control)
  • You only pay for what you use – you don’t need to lock in your system design or service requirements upfront. Instead, you pay based on your footprint on a monthly basis
  • You can react quickly to capitalise on opportunities – because you’re not tied into rigid SLAs and don’t have to waste time renegotiating changes

A leading cloud specialist I know summed it up nicely:

“What the business wanted was to say, ‘I need a new VM tomorrow.’ But because of how the multi-layer contract was set up, it would routinely take 3 weeks to get it online. This was because SLAs were based on static metrics like number of VMs. We therefore had to renegotiate the support side each time we needed more capacity. We then had a time-consuming discussion of which VMs would get the support coverage, which required impact assessments and risk analysis. This drastically reduced efficiency and evolution velocity, which had real and opportunity costs.”

Sourcing models need to deliver short-term savings and long-term value

Yes, it's conceptually easier to offload everything onto a single provider. And yes, having an ecosystem of best-in-breed providers takes time to set up initially. But, when set up in the right way, multisourcing helps you maximise both short-term savings and long-term value. The best way to save money on an IT support arrangement is to introduce healthy co-opetition between a partner network rather than locking yourself into a multi-year, multi-layer contract with a monopoly provider. That way:

  • Partners are continually incentivised to keep costs down – so you maximise ROI and minimise TCO 
  • Your cost model is based on actual consumption at any given point – so you’re never paying for unused capacity in preparation for a future peak
  • You can easily extricate yourself if requirements change – without managing layers of unaligned disconnect clauses and minimum payment notice periods 

The agility and expertise you get from working with domain specialists also helps you maximise value ongoing:

  • You’re hiring the right people into your IT team instead of hiring someone else’s IT department to work for your business – which means you’re building exactly the right team for delivering value in your context
  • Your staff have more time to focus on areas that drive value to the business – because you have experts managing each stream and time isn’t wasted negotiating complex changes 
  • The business can leverage opportunities more easily and quickly – because experts help you drive innovation and optimise/scale each element of your tech stack

The right multisourcing operating model delivers greater, more sustainable value

The key to success with multisourcing is to establish the right operating model, not just for now, but for your desired state. 

With the right governance, tools and methodologies, your ecosystem operates as a slick, API-driven machine. You have partners, not vendors – who are aligned to a common purpose. You essentially have a bespoke, expert IT team adhering to the same standards, supporting each other and solving issues collaboratively. What does this best-practice operating model look like – and how do you implement it so you maximise the benefits of multisourcing?

Nordcloud can help you define this through its advisory practice. How could this work for your business?

Click here to book in a session with our cloud advisors.

Kubernetes: The Simple Way?

This is the first blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of the OpenShift container platform.

In this blog post I compare OpenShift with vanilla Kubernetes and show the benefits of enterprise-grade solutions in container orchestration.

The second blog post will introduce some of the basic OpenShift concepts and architecture components.

The third blog post is about how to deploy the ARO solution in Azure.

The last blog post covers how to use the ARO/OpenShift solution to host applications.

Introduction

As microservices architecture becomes more and more common in IT, enterprise companies are beginning to look at its benefits. Adopting it raises important strategic questions around how to do so securely, with the right amount of investment from an infrastructure and people perspective.

The early adopters have been using containerisation (mainly Docker) for a while now, and it soon became clear they required something to orchestrate and manage these containers. For a short period of time there were several competing orchestration engines, for example Mesos, Docker Swarm and Kubernetes. By now, it's clear Kubernetes has won the race for the title of most used: the technology is rapidly becoming an industry standard.

Kubernetes container orchestration is fascinating, but as with every fascinating new cutting-edge technology comes a steep learning curve and heavy investment for IT organisations. Operations, developer and security teams all need to educate themselves in the topic; a keen understanding of the technology is vital. This is a huge investment from an enterprise company's point of view. Usually this is one of the reasons that large companies carry technical debt and can find themselves playing catch-up with leaner, more agile startup companies. IT leaders need to be sure that a technology is mature enough, has official training and the right amount of community-based knowledge, and that other enterprise companies have adopted it.

Kubernetes helps companies to provide better service for their end users. It allows IT to react faster to changes in business focus, implement new features, and have higher availability and reliability of services. To achieve all this, Kubernetes comes with solutions such as self-healing, node-pools, readiness and liveness probes, autoscaling etc. To be able to implement and understand all these, as alluded to earlier, the learning curve is steep, as this technology differs greatly from traditional IT.
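To make probes and self-healing less abstract, here is a sketch of a Deployment carrying liveness and readiness probes, built with the official Kubernetes Python client. The image name and /healthz endpoint are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=5, period_seconds=10)

container = client.V1Container(
    name="web", image="example/web:1.0",  # placeholder image
    liveness_probe=probe,    # failing pods get restarted (self-healing)
    readiness_probe=probe)   # unready pods receive no traffic

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes continually reconciles back to 3 pods
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]))))

apps.create_namespaced_deployment(namespace="default", body=deployment)
```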

Operation teams need to understand:

  • how to design/deploy/operate Kubernetes clusters;
  • the basic components, including master nodes (scheduler, etcd, kube-controller-manager, API server) and worker nodes (kubelet, kube-proxy);
  • general concepts such as Kubernetes networking, Pods, and objects such as deployments, replicas, ingress/egress;
  • how to implement better observability with dashboards, how to monitor, implement logging, etc.

Developer teams need to understand:

  • how to separate their monoliths into microservices. 
  • how to rewrite applications to communicate through APIs. 
  • how to use SaaS solutions from external providers as components of their application. 
  • how to containerize their application code with all its required libraries and dependencies, and how to write dockerfiles to achieve this. 
  • last but not least, the obvious one not mentioned yet: the use of container registries.

Security teams need to understand:

  • how to secure Kubernetes clusters.
  • how to secure the containers running on those.
  • how to keep containers, the code and all the components updated to have a secure environment. 
  • how to separate access and implement RBAC (see the sketch after this list).
  • how to secure environments to adhere to regulations, etc.
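As a taste of what implementing RBAC involves, here is a sketch using the Kubernetes Python client: a namespaced read-only Role bound to a group. All names are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A namespaced Role granting read-only access to pods (names are examples).
rbac.create_namespaced_role(namespace="team-a", body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader"},
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
})

# Bind the Role to a group so only that team receives the access.
rbac.create_namespaced_role_binding(namespace="team-a", body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding"},
    "subjects": [{"kind": "Group", "name": "team-a-devs",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
})
```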

Unfortunately, with vanilla Kubernetes, many of these are not included out of the box. Most of the previously mentioned challenges need to be tackled with additional, third party components or require manual configuration. For many enterprise companies, this will require additional agreements, with (perhaps) multiple vendors, to achieve the desired state of the infrastructure. As a result, support processes can be made more complex when issues arise, which can create additional frustration when a quick resolution is required.

Managed Kubernetes

So how is it possible to make this simpler, how can Kubernetes be “simple”?

Hyperscalers such as Azure, AWS and GCP already offer "managed" Kubernetes solutions: AKS in Azure, EKS in AWS and GKE in GCP. These products provide a quick and easy solution for Operations to deploy and manage Kubernetes infrastructures. The master-node components are managed and operated by the hyperscalers (in Azure AKS these are provided free of charge, correct at the time of writing this article, 29.04.2020), and Operations only need to focus on the management of the worker nodes, the deployments and network access. All these solutions are packaged with a cloud-centric monitoring solution and can rely on other PaaS/SaaS solutions from the cloud vendor to implement CI/CD, logging and better security. Unfortunately, certain other components, for example cluster-external access with ingress, better observability with dashboards, and autoscaling, all require a greater level of understanding of Kubernetes concepts and third-party solutions.

From a development team perspective, managed Kubernetes has the same challenges, developers still need to understand how to create and secure containers, how to use container registries, or how to write Kubernetes configuration YAML(s) for their deployments.

At many enterprises, security teams have already been looking into how to secure cloud-based workloads, Kubernetes infrastructure is no different and should be considered just another service. From a container security perspective, managed Kubernetes services bring the same challenges as vanilla Kubernetes (base images coming from untrusted sources, developer code and securing related libraries, etc.). All these vulnerabilities can be remediated with the same third-party tools and proper governance models, but can still require the involvement of additional third-parties.

As highlighted, managed Kubernetes is certainly one of the ways forward, it allows companies to have a certain level of freedom to choose certain components, providing they have a good base understanding of Kubernetes. Managed Kubernetes Service takes over some of the operational burdens, allowing companies to focus on delivering more value to their customers, rather than spending time with operational challenges. 

What if a company has no Kubernetes knowledge? What if their main focus is on development, and they don’t want to deal with complex support processes involving many third-parties? What can an enterprise company do if they want a turn-key solution, which allows their staff to easily and quickly build infrastructure for containerised workloads, on-premises or in the cloud?

OpenShift

Let’s make it even more simple!

If there is a demand in IT, then there must be a solution somewhere! The solution we are talking about in this context is called OpenShift, from Red Hat. Red Hat needs little introduction: a large enterprise, recently acquired by IBM, that has been a prominent member of the open source community since 1993. From their inception, Red Hat have focused on the Linux/Unix operating systems and grew into a multinational company. They are able to offer enterprise-ready solutions across the whole IT landscape: middleware, database, operating system, container orchestration and others.

OpenShift was released in 2011, originally with custom-developed container and container orchestration technologies. From version 3, it has adopted Docker as the container technology and Kubernetes as the container orchestration technology.

OpenShift is a turn-key solution provided by Red Hat. It is a platform-as-a-service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. OpenShift comes with built-in components such as dashboards, a container registry, Red Hat Service Mesh, templates etc.

Figure 1 – Red Hat OpenShift dashboards (source: openshift.com)

It allows developer teams to concentrate on their primary task of developing code, by providing capabilities such as "source-to-image" and preset templates, negating the need for developers to write any Kubernetes or Docker related code to deploy their applications.

As a well-tested and supported product from Red Hat, it provides the key assurance of security for enterprises.

Architecture:

OpenShift uses upstream Kubernetes as its basis and modifies some of its basic components to provide an enterprise-grade service. It uses the same master/worker concept to provide HA for each component.

Application architecture: 

OpenShift uses the notion of “projects” to provide isolation and a distinction between admin and user access. Applications run in containers, which the platform builds automatically after commits to source repositories.

Kubernetes vs OpenShift

To choose between Kubernetes and OpenShift, a CTO/CIO should consider the following aspects.

Consider Kubernetes if:

  • The company is already using a mature Kubernetes platform or has existing knowledge of the Kubernetes product portfolio.
  • There is a requirement for middleware that is better suited to Kubernetes.
  • Utilising the latest open source technology is valued at the company.
  • The company has a preference or requirement to keep CI/CD outside of the cluster.

Consider OpenShift if:

  • The company has existing Red Hat subscriptions and investment in OpenShift.
  • Red Hat based middleware is used or preferred.
  • The company values security-hardened, pre-integrated and/or tested open source solutions.
  • A user-friendly turn-key solution with limited admin overhead is preferred.
  • Kubernetes environments in multi/hybrid cloud scenarios need to be managed.
  • Built-in CI/CD features are expected.

Developers and Operations are always looking at IT solutions from different, sometimes closed-minded, points of view. 

Developers within any Kubernetes environment need to learn how to containerize applications, work with image registries and deploy applications onto Kubernetes platforms. Operations are often more focused on observability, monitoring and logging capabilities. 

These processes can be complex with a vanilla Kubernetes implementation. In comparison, OpenShift comes with solutions such as Source-to-image, templates and built-in CI/CD to help developers to focus on business goals. It also comes with integrated logging, monitoring and dashboard solutions with automated installation.

Figure 2 – Kubernetes vs Red Hat OpenShift

Even though a turn-key solution sounds impressive, it shouldn’t automatically be considered the most straightforward and easy way to start your journey; getting started still takes expertise. It’s because of this that Red Hat and Microsoft have partnered to support and provide OpenShift in Azure as a service. There are two options: OpenShift on Azure, and the managed Azure Red Hat OpenShift (ARO) containerisation service.

OpenShift on Azure

There is a real choice to make when selecting a container platform and a cloud solution to run the mission-critical systems that power a business. With Red Hat OpenShift and Microsoft Azure, companies can quickly deploy a containerised, hybrid environment to meet digital business needs.

Primary Capabilities:

  • Supported, integrated, and automated architecture, with a validated cluster deployment.
  • Seamless Kubernetes deployment on the Azure public cloud.
  • Fully scalable, global and enterprise-grade public cloud with access to Azure Marketplace.

Key benefits:

  • Accelerated time to market on a best-of-breed platform.
  • Consistent experience across your hybrid cloud.
  • Scalable, reliable and supported hybrid environment, with a certified ecosystem of proven ISV solutions.

Challenges Addressed:

  • Keeping up with the ever-changing set of open source projects.
  • Servicing the increased needs of your app development teams.
  • Managing a diverse, complex and potentially non-compliant development, security and operations environment.

Differentiators:

  • Joint development and engineering.
  • Quick issue resolution via co-located support.
  • Enhanced security for containers, network, storage, users, APIs and the cluster.
  • Containers with cloud-based consumption model and integrated billing.

Having a unified solution is critical when working seamlessly across on-premises and cloud deployments, and OpenShift can be easily deployed to any location. Red Hat and Microsoft are building on shared open source Linux technologies to ensure OpenShift’s success.

Co-located support engineers can resolve issues faster and more easily than in a disjointed model where you do not know who owns the issue. Integrated support goes beyond break/fix and provides a set of best practices.

Customers want reliability, dependability and flexibility in a supported, sustained engineering lifecycle; this can be easily achieved with Microsoft Azure and Red Hat’s tested and trusted subscription model.

Azure Red Hat OpenShift (ARO)

When resources are constrained and skilled talent is scarce, businesses look to run containers in the cloud with minimal maintenance effort. Azure Red Hat OpenShift (ARO) lets customers gain all the benefits of a container platform without the need to deploy and manage the environment. This shifts the focus from infrastructure management to application development that delivers business outcomes.

Primary Capabilities:

  • Fully managed Red Hat OpenShift on Azure.
  • Jointly engineered, developed and supported by Microsoft and Red Hat. 
  • Access to hundreds of managed Azure services, like Azure Database for MySQL, Azure Cosmos DB, and Azure Cache for Redis, to develop apps.

Key Benefits

  • The ability to focus on application development, not on container platform management.
  • The value of containers without deploying and managing the environment and platform yourself.
  • Reduced operational overhead. 

Challenges Addressed

  • Finding the expertise and resources to build custom solutions.
  • Maintaining data sovereignty in hybrid environments.
  • Ensuring security and compliance across complex infrastructure environments.

Differentiators

  • Fully managed container offering.
  • Jointly engineered and supported with a 99.9% uptime SLA.
  • Containers with cloud-based consumption and the built-in ability to scale as needed.
  • The ability to leverage existing Azure commitments.

Companies choosing ARO can implement a container orchestration platform with OpenShift on Azure without the need to hire and retain talent, or maintain the budget for new operations staff, to manage new platforms. This allows developers to concentrate on business innovation rather than running infrastructure.

Eliminate the operational complexity of deploying and managing an enterprise container platform at scale, while ensuring guaranteed uptime and availability with a defined SLA, security and compliance.

Reduce operational costs by only paying for what you need, when you need it.

Maintain a single agreement with Red Hat and Microsoft, requiring no separate contract or subscription while gaining joint support and security from Microsoft and Red Hat.

Customer stories: OpenShift on Azure

Multinational Airline Technical Support Company

CHALLENGE

A multinational airline technical support company developed its digital software-as-a-service (SaaS) platform for maintenance, repair and overhaul operations using Red Hat Linux and other open-source technologies. The company wanted to move the solution to the cloud.

SOLUTION

The company chose to migrate the solution to Microsoft Azure for its robust and flexible infrastructure capabilities, its network of global data centers and its support for open-source solutions.

RESULTS

The company can run its open-source technology stack easily on Azure, helping it provide airlines with solutions that cut costs, optimise operations and improve safety. The stage is set for more exciting future developments.

Customer stories: ARO

Midmarket Insurance Company

CHALLENGE

A midmarket insurance company lacked the in-house skills to effectively run or manage OpenShift itself.

SOLUTION

ARO gave the company the ability to quickly deploy an OpenShift cluster on Azure, in specific regions, for quick consumption, reducing the time to value.

RESULTS

The insurance company was able to focus its efforts on business outcomes and end-user benefits through application development and integration.

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.

How can you architect Serverless SaaS Applications on AWS?

In my previous blog post I listed six key themes that separate successful SaaS vendors from the rest. In this post I dive more deeply into one of those themes: serverless and microservices.

One of the great innovations of public cloud computing in recent years has been the advent of serverless computing. Serverless computing allows you to focus on writing and deploying software components without needing to focus on the underlying infrastructure. The software components, often called functions, are executed in response to a defined set of events, and compute resources are consumed based on usage during execution. AWS Lambda [1] was the first publicly available serverless computing offering. AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python and Ruby runtimes. It also provides Lambda layers and custom runtimes, which allow you to author your functions in additional programming languages.
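To make this concrete, here is a minimal sketch of a Python Lambda function handling an API Gateway proxy event. The function name, query parameter and response shape are illustrative assumptions, not something prescribed in this post.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event.

    Lambda runs this code only when an event arrives, so you are
    billed for execution time rather than for idle infrastructure.
    """
    # Hypothetical query parameter; defaults to "world" if absent.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```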

In this blog post I discuss some of the key considerations you need to make when designing a serverless SaaS architecture and give an overview of an example serverless microservice architecture for SaaS applications on AWS.

Benefits of serverless

When working with SaaS providers or independent software vendors (ISVs) wanting to transform their product into SaaS, we typically advise them to design their application architecture using microservices and serverless capabilities. This offers a number of advantages:

  1. You focus on value-adding activities like writing code instead of designing and managing infrastructure
  2. You speed up development time and simplify development of new functionality by breaking it down into small pieces of functionality
  3. You optimise infrastructure cost by consuming only the computing resources required to run the code with 100ms billing – you don’t pay for the idle time
  4. You get the benefits of autoscaling of infrastructure resources

There are also drawbacks to going serverless. Testing, especially integration testing, becomes more difficult as the units of integration become smaller. It is also often difficult to identify security vulnerabilities with traditional security tooling, and debugging an application is harder in the serverless approach. Lastly, for many organisations moving to serverless requires a complete paradigm shift: you need to upskill your developers, architects and security teams to think and operate in the new environment.

Defining the microservices 

When designing the architecture, one of the first things you need to do is define the granularity of your microservices and functions. By making your functions too large, packing lots of functionality into the same function, you lose some of the flexibility and speed of development. It also makes your functions harder to debug and your software less fault tolerant. By making your functions small enough you enhance the fault tolerance of your application: you can design it so that if one function fails, the others mostly remain operational. On the other hand, by making your functions too small you increase complexity and make the overall architecture more difficult to understand and manage. There is a middle ground that depends a lot on the software and functionality you are building. For example, if you want to provide tiering of functionality for your customers, so that some functionality is available only for certain subscription tiers, you should decouple that functionality into separate microservices. Whatever granularity you choose, it is important to make sure the microservices are loosely coupled, so they can be developed and deployed independently of each other.

Adding tenant context

The second consideration is specific to the SaaS model. When building SaaS applications you need to handle tenant isolation, tenant management, and tenant metering and monitoring. You need to be able to identify and authenticate tenants and offer different tenants different sets of functionality based on their subscription tier. You should also make sure that performance is fairly distributed among tenants, and monitor usage to identify upsell opportunities and gain valuable insights into usage patterns. On AWS you can implement all of this with the help of Amazon Cognito [2]. You can use Cognito to manage user identities and to inject user context into the different layers of your application stack, as sketched below.
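As a rough sketch of what injecting user context can look like, the snippet below reads the tenant claims that an API Gateway Cognito user pool authorizer places into the Lambda event. The custom:tenant_id and custom:tier claim names are assumptions for this example; you would define them as custom attributes in your own Cognito user pool.

```python
def get_tenant_context(event):
    """Extract tenant context from the claims a Cognito user pool
    authorizer injects into an API Gateway proxy event."""
    claims = (
        event.get("requestContext", {})
             .get("authorizer", {})
             .get("claims", {})
    )
    return {
        # Hypothetical custom attributes configured in Cognito.
        "tenant_id": claims.get("custom:tenant_id"),
        "tier": claims.get("custom:tier", "standard"),
        "user": claims.get("cognito:username"),
    }
```

Every microservice can call a helper like this at the start of its handler, so tenant identity flows from the authentication layer down to the function and data layers.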

Example architecture 

Our simplified example is a serverless architecture for a SaaS application. It uses S3 buckets for static web content, API Gateway for the REST API, Cognito for user management and authentication, Lambda for the serverless microservices, Amazon Aurora Serverless for the SQL database and DynamoDB for the NoSQL database.

Each of the Lambda functions can itself trigger additional Lambdas, so it is easy to design even quite complex applications out of simple functions. However, we advise our customers to avoid so-called serverless monoliths and instead design their Lambda functions to be as independent as possible. The best practice is to adopt an event-driven approach where each Lambda function is independent of the others and is triggered by events. Lambda functions can then emit events, for example to Amazon SNS, to trigger other functions, as sketched below. You can also use AWS Step Functions to coordinate the Lambda functions [3].
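As a hedged illustration of this event-driven pattern, the snippet below publishes an event to an SNS topic with boto3; any downstream Lambda subscribed to the topic then reacts independently. The topic ARN, payload fields and function name are hypothetical.

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic; subscribe the downstream Lambda functions to it.
ORDER_EVENTS_TOPIC = "arn:aws:sns:eu-west-1:123456789012:order-events"

def emit_order_created(order_id, tenant_id):
    """Publish an event instead of invoking the next function directly,
    keeping the Lambda functions loosely coupled."""
    sns.publish(
        TopicArn=ORDER_EVENTS_TOPIC,
        Message=json.dumps({"order_id": order_id, "tenant_id": tenant_id}),
        MessageAttributes={
            "event_type": {"DataType": "String", "StringValue": "order_created"}
        },
    )
```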

There are a couple of strategies you can take when designing your Lambda functions:

  • Create one Lambda function per microservice, where each microservice is a unit that is able to work in isolation.
  • For each microservice, create one Lambda function that handles all of the HTTP methods (POST, GET, PUT, DELETE, etc.).
  • For each HTTP method, create a separate Lambda function.

By choosing one of the strategies above you limit the complexity of your architecture. However, if you choose to have one Lambda function per microservice, you need to make sure your microservices are quite granular. A sketch of the second strategy follows below.
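In this sketch of the second strategy, a single Lambda function serves a whole microservice by routing on the HTTP method of the incoming API Gateway proxy event. The service name, handler and stubs are assumptions made for illustration.

```python
import json

def users_service_handler(event, context):
    """One Lambda for a hypothetical 'users' microservice: dispatch
    the API Gateway proxy event based on its HTTP method."""
    routes = {
        "GET": get_user,
        "POST": create_user,
        "PUT": update_user,
        "DELETE": delete_user,
    }
    handler = routes.get(event.get("httpMethod", ""))
    if handler is None:
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    return handler(event)

# Illustrative stubs; real implementations would talk to a data store.
def get_user(event):
    return {"statusCode": 200, "body": json.dumps({"user": "example"})}

def create_user(event):
    return {"statusCode": 201, "body": json.dumps({"created": True})}

def update_user(event):
    return {"statusCode": 200, "body": json.dumps({"updated": True})}

def delete_user(event):
    return {"statusCode": 204, "body": ""}
```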

Our example architecture uses Cognito to make sure that each layer is aware of the tenant context. This allows you to offer the right functionality through the API and execute Lambda functions in a tenant-specific context, for example when accessing data.
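For instance, the tenant context can scope all data access. The sketch below queries a DynamoDB table whose partition key is tenant_id, so a function only ever reads the calling tenant's items; the table name and key design are hypothetical, not prescribed by this post.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
# Hypothetical table partitioned by tenant_id for tenant-scoped access.
orders_table = dynamodb.Table("orders")

def list_orders_for_tenant(tenant_id):
    """Return only the calling tenant's items by constraining the
    query to that tenant's partition key."""
    response = orders_table.query(
        KeyConditionExpression=Key("tenant_id").eq(tenant_id)
    )
    return response["Items"]
```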

Next steps

One of the most important factors in a successful SaaS architecture is how you enable multitenancy and implement tenant isolation. In our example above we implicitly assumed that the same instances of the APIs, functions and databases serve all tenants, with tenant isolation handled through the tenant context provided by Cognito. There are other ways to handle tenant isolation, but going through the different options deserves its own blog post. So stay tuned.

How can you maximise the value of SaaS for your business?

Join our webinar to find out how…

References:

[1] https://aws.amazon.com/lambda/

[2] https://aws.amazon.com/cognito/

[3] https://aws.amazon.com/step-functions/
