Four compelling reasons to use Azure Kubernetes Service (AKS)


BlogTech Community

Management overhead, inflexibility and lack of automation all stifle application development. Containers help by moving applications and their dependencies between environments, and Kubernetes orchestrates containerisation effectively.

But there’s another piece to the puzzle.

Azure Kubernetes Service (AKS) is the best way to simplify and streamline Kubernetes so you can scale your app development with real confidence and agility.

Read on to discover more key benefits and why AKS is the advanced technology tool you need to supercharge your IT department, drive business growth and give your company a competitive edge over its rivals.

Why worry about the complexity of container orchestration, when you can use AKS?

1. Accelerated app development

75 percent of developers’ time is typically spent on bug-fixing. AKS removes much of the time-sink (and headache) of debugging by handling the following aspects of your development infrastructure:

  • Auto upgrades
  • Patching
  • Self-healing

Through AKS, container orchestration is simplified, saving you time and enabling your developers to remain productive. It’s a way to breathe life into your application development by combatting one of developers’ biggest time-sinks.

2. Supports agile project management

As this PwC report shows, agile projects yield strong results and are typically 28 percent more successful than traditional projects.

This is another key benefit of AKS – it supports agile development practices such as continuous integration (CI), continuous delivery/continuous deployment (CD) and DevOps. It does this through integration with Azure DevOps, Azure Container Registry (ACR), Azure Active Directory and Azure Monitor. For example, a developer pushes a container image into a repository, the build lands in ACR, and AKS then launches the workload.
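As a sketch of that flow using the Azure CLI and kubectl – the registry, resource group, cluster and image names below are illustrative placeholders, not values from this post:

```shell
# Build the image from the current directory and push it to ACR
az acr build --registry myRegistry --image myapp:v1 .

# Fetch credentials so kubectl can talk to the AKS cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Launch the workload on AKS from the image stored in ACR
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:v1
```

In a real pipeline, the `az acr build` step would typically be triggered by Azure DevOps on each commit.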

3. Security and compliance done right

Cyber security must be a priority for all businesses moving forward. Last year, almost half of UK businesses suffered a cyber-attack and, according to IBM’s study, 60 percent of data breaches were caused by insiders. The threat is large, and it often comes from within.

AKS protects your business by enabling administrators to tailor access through Azure Active Directory (AD) user and group identities. When people only have the access they need, the threat from internal teams is greatly reduced.

You can also rest assured on compliance. AKS meets the regulatory requirements of System and Organisation Controls (SOC), and is compliant with ISO, HIPAA and HITRUST.

4. Use only the resources you need

AKS is a fully flexible system that adapts to use only the resources you need. Additional processing power is available via graphics processing units (GPUs), which accelerate processor-intensive operations such as scientific computations. If you need more resources, it’s as simple as clicking a button and letting the elasticity of Azure Container Instances do the rest.

When you only use the resources you need, your software (and your business) enjoys the following benefits:

  • Reduced cost – no extra GPUs need to be bought and integrated onsite.
  • Faster start-up compared to onsite hardware and software, which take time to set up.
  • Easier scaling – get more done now without worrying about how to manage resources.
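The elastic scaling described above can be sketched with the Azure CLI; the resource group and cluster names are placeholders, and the node counts shown are arbitrary examples:

```shell
# Manually scale the cluster out when demand grows
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 5

# Or let AKS adjust capacity automatically with the cluster autoscaler
az aks update --resource-group myResourceGroup --name myAKSCluster \
  --enable-cluster-autoscaler --min-count 1 --max-count 10
```

With the autoscaler enabled, you only pay for the nodes that are actually running.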

Scale at speed with AKS

The world of applications moves fast. For example, 6140 Android apps were released in the first quarter of 2018 alone. Ambitious companies can’t afford the risk of slowing down. Free up time and simplify container orchestration by implementing AKS, and take your software development to the next level.

To find out how we get things done, check out Nordcloud’s approach to DevOps and agile application delivery.

Feel free to contact us if you need help in planning or executing your container workload.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    How containerized applications boost business growth



    When it comes to applications and other software, businesses need to balance innovation, security and user experience against the cost of development.

    However, too many are stuck with legacy software that isn’t suitable for their needs and legacy development processes that increase the cost of building software without increasing its value.

    This is why more and more companies are embracing containerised applications to improve development productivity, time to market and innovation without sacrificing security or reliability.

    In this blog, we will not only explain what containers are, but show you how they can benefit your business.

    What are containers?

    Containers, in this context, are just another way to build and deploy applications. Instead of building monolithic applications that run in their entirety on servers, containerised applications are made up of Lego-like building blocks called containers. Each container is a standalone package of software that embodies an element of the overall application.

    Containerised applications take advantage of the scalability of cloud infrastructure, coping with peaks and troughs in demand. They also move quickly and reliably from one computing environment to another, for example moving from a development environment to a full-scale production environment.

    How do containerised applications benefit your business?

    Software is not simple. To work properly, it relies on everything doing its job at the right time. With multiple applications running on the same operating system, you may find that some don’t mesh together well. Containerisation reduces this risk by letting you build, test, update and integrate individual containers separately. In this way containerisation supports a DevOps approach to development.

    In the past, this was done through virtual machines. But they take up a lot of storage, aren’t so portable and are difficult to update. What’s more they don’t provide the same level of continuous integration and delivery of service that containers do.

    Here are three other ways that containerisation can benefit your business:

    1. Agility

    Containers make it easier to manage application operations and updates. For the best results, you need to work with a specialist IT partner to provide automated container orchestration. This means you can reap the benefits of your containerised applications quickly and effectively. While containerisation can sound complicated, with the right help, it can revolutionise the way you think about IT.

    2. Scalability

    You can meet growing demand and business need by scaling all or parts of your containerised applications almost instantly. You can do this with software solutions like OpenShift on Azure, a managed container orchestration service hosted on Azure that allows you to scale your application infrastructure efficiently.

    3. Portability

    One of the greatest benefits of containerisation is portability. You can reduce or avoid integration issues that slow your day-to-day business activities in a traditional computer environment. You can even stay productive if you move your application onto another device, cloud platform or operating system.

    Make your business more productive and scalable with OpenShift on Azure

    With OpenShift on Azure, you can manage and scale your containerised applications easily. This secure, managed service allows you to boost your DevOps ambitions by accelerating containerised application development. So, if you want to develop and scale your IT infrastructure with confidence, look no further than OpenShift on Azure.



      Microservices and containerisation: 4 things every IT manager needs to know



      The popularity of microservices and containerisation has exploded in recent years, with 60 percent of businesses already adopting the technology in one form or another. And the trend shows no signs of slowing down.

      In fact, since the take-off of Docker in 2013, containerisation and microservices have been reinventing the IT landscape, becoming one of the most sought-after tools for digital transformation.

      But, as with any new technology, it’s important to look beyond the hype.

      What are microservices and containerisation?

      Containerisation is a method of virtualisation that separates applications and services at the operating level. Unlike hypervisor virtualisation, these containers aren’t split from the rest of your architecture, but instead share the same operating system kernel.

      Microservices use containerisation to deliver smaller, single-function modules, which work in tandem to create more agile, scalable applications. Due to this approach, there is no need to build and deploy an entirely new software version every time you change or scale a specific function.

      Deploying containerisation in your business

      When it comes to making containerisation and microservices a business reality, there are a few key points every IT manager needs to know.

      1. Docker and Kubernetes are the market leaders

      Since its conception, Docker has become synonymous with the containerisation industry. As of 2018, more than half of IT leaders said they ran Docker container technology in their organisations.

      In second place was Kubernetes – the container orchestration platform. Together, these technologies are revolutionising microservices and overseeing its rise as a viable replacement for traditional, monolithic infrastructures.

      2. Containerisation is the natural successor to virtualisation

      No one can deny the impact that virtual machines have had on IT, but containerisation gives developers a new, more flexible, born-in-the-cloud and potentially more cost-effective way to build applications.

      This allows application developers to respond faster to changing market needs and growing demand.

      Containerisation builds on the foundation virtualisation has laid by further optimising the use of hardware resources. As a result, IT managers and developers can now make changes to isolated workloads and components without making significant changes to the application code.

      3. Portability and consistency are the main drivers

      Ever since it first arrived on the IT scene, containerisation has been integral to the DevOps movement. Its design makes it possible to move application components and workloads between a range of environments, from in-house servers to public cloud platforms.

      Remaining infrastructure agnostic gives microservices the edge over traditional application delivery methods, as there is little need for configuration or code changes when porting services. Software quality also becomes far more consistent when you use containerisation, ultimately leading to faster development cycles.

      4. Orchestration makes all the difference

      With a greater number of moving parts comes the potential for greater friction. While microservices are designed to streamline the delivery of applications and workloads, they still need some level of hands-on management.

      Often, organisations don’t see the full benefit of microservice adoption because they’re still running containers inside traditional VMs. This is like freeing a bird from its cage, but never letting it leave the house.

      To gain the most benefits from containerisation, your applications need the freedom to move around your entire estate – no matter how many environments it spans. This is where an orchestration tool, such as Kubernetes, becomes essential.

      Microservices are no small matter

      If you want to understand the true power of containerisation, look no further than Netflix. The company’s transition from a monolithic infrastructure to cloud microservices has become a core part of the recent technology canon.

      But they couldn’t have done it without the right tools and processes.

      In many cases, poorly implemented containerisation software can lead to more complexity and technical debt. Just as workforce expansion can result in increased HR involvement, transitioning to microservices requires the same level of professional support.




        Containers on AWS: a quick guide



        Containerisation allows development teams to move quickly and deploy more efficiently.


        Instead of virtualising the hardware stack (as you would with virtual machines), containers run on top of the OS kernel, virtualising at the OS level.





        Docker, founded in 2010 (as dotCloud), helped transform cloud containerisation when its container platform took off in 2013. This new way of architecting paved the way for the DevOps movement. But what made containers so popular? Thanks to huge improvements in virtualisation and the rapid growth of cloud computing, containers allow for isolated workloads based on an OS, exposing and accessing only what is necessary.

        Soon after, Amazon Elastic Container Service (ECS) was introduced on 13 November 2014 and became the primary way to run containers in the public cloud. ECS is a container management service that allows you to run Docker containers on a cluster.




        Google released Kubernetes in June 2014 and donated it to the Cloud Native Computing Foundation (CNCF) community the following year. Google Cloud Platform and Microsoft Azure were early adopters of Kubernetes, but GCP was the only public cloud provider with a working managed service, Google Kubernetes Engine (GKE). GKE launched in 2015, and Azure Kubernetes Service (AKS) was released in preview in autumn 2017.



        Amazon EKS

        Amazon Elastic Container Service for Kubernetes (EKS) is a fully managed service that makes it easy for you to run Kubernetes on AWS. EKS runs upstream Kubernetes, so you can connect to it with kubectl just like a self-managed cluster. AWS introduced EKS at re:Invent 2017 and integrates it with a growing number of AWS services.
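Because EKS runs upstream Kubernetes, connecting to a cluster takes only standard tooling. A minimal sketch, assuming the AWS CLI is configured and a cluster named "my-cluster" already exists (both names and region are placeholders):

```shell
# Write credentials for the EKS cluster into your kubeconfig
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Standard kubectl commands now work exactly as on a self-managed cluster
kubectl get nodes
```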



        AWS Fargate

        AWS has one service that neither GCP nor Azure has. AWS Fargate is a new service for running containers without needing to manage the underlying infrastructure. Fargate supports ECS and EKS, but is also often closely compared with Lambda. You pay per computing second used, without having to worry about EC2 instances.

        Managing Kubernetes can be complicated, and usually requires a deep understanding of how to schedule workloads and manage your masters, pods and services, with additional orchestration layered on top of virtualisation that has already been abstracted from you.

        Fargate takes all of this away by streamlining deployments. The game-changer is that you do not need to start with Fargate: you can use EKS or ECS, then migrate your workloads to Fargate when your program has matured further.
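Running a container on Fargate instead of self-managed EC2 capacity comes down to the launch type on an ECS task. A hedged sketch; the cluster name, task definition and subnet ID are hypothetical placeholders (Fargate tasks require awsvpc networking):

```shell
# Run an existing ECS task definition on Fargate-managed capacity;
# no EC2 instances to provision or patch
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-app:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}"
```

The same task definition can first be run with `--launch-type EC2`, which is what makes the later migration to Fargate straightforward.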





        KOPS has been the go-to method of deploying Kubernetes on AWS, provisioning clusters on EC2 instances. KOPS is an open-source project that makes running Kubernetes easy, and it provides a multitude of controls on deployments and good support for high availability.


        Containers are not just hype; they could be the future for at least the next few years. With AWS finally joining the Kubernetes club, and Fargate being a strong game-changer, anything is possible. However, there are still a lot of unanswered questions that we hope will be addressed.

        EKS and Fargate are currently limited to the Ohio and Virginia regions, but you should see a big push to use these services as more regions are rolled out.


        What do we do in the meantime? I’m reminded of this quote:


        “All we have to decide is what to do with the time that is given us.”


        Until then, I believe KOPS will be the best method to use.


        What containers do you use on AWS, and are you waiting to explore AWS EKS or Fargate? Let us know by contacting us here.

        Check out my previous blog post on container security here.



          Container security: How to differ from the traditional



          Containerisation in the industry is rapidly evolving


          No, not shipping containers, but cloud containers. Fortune 500 organisations use containers because they provide portability, simple scalability and isolation. Containers have long been Linux-based, but this has since changed: Microsoft now supports Windows-based containers on Windows Server 2016, running on Server Core or Nano Server. Yet even with so many organisations using containers, we still see a lot of them reverting to the security approaches they used for traditional VMs.


          If you already know anything about containers, then you probably know about Kubernetes, Docker, Mesos and CoreOS. Security measures still need to be carried out, however, so this is always a good topic for discussion.



          Hardened container image security

          Hardened container image security comes to mind first, because of how the image is deployed and whether there are any vulnerabilities in the base image. A best practice is to create a custom container image so that your organisation knows exactly what is being deployed.

          Developers or software vendors should know every library installed and the vulnerabilities of those libraries. There are a lot of them, but try to focus on the host OS, container dependencies, and most of all the application code. Application code is one of the biggest vulnerabilities, but practising DevOps can help prevent this. Reviewing your code for security vulnerabilities before committing it to production costs time, but can save you a lot of money if best practices are followed. It is also a good idea to keep an RSS feed of security blogs such as Google’s Project Zero, and to use fuzz testing to find vulnerabilities.

          Infrastructure security

          Infrastructure security is a broad subject because it means identity management, logging, networking, and encryption.

          Controlling access to resources should be at the top of everyone’s list. Following the best practice of least privilege is key. Role-Based Access Control (RBAC) is one of the most common methods used: it restricts system access to authorised users only. The traditional method was to grant access through a handful of broad security policies, but fine-tuned roles can now be used instead.
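In Kubernetes, those fine-tuned roles can be created directly with kubectl. A least-privilege sketch; the role, user and namespace names are placeholders:

```shell
# A role that can only read pods in the "dev" namespace
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods --namespace=dev

# Bind the role to a single user, so access stays fine-tuned
kubectl create rolebinding read-pods \
  --role=pod-reader --user=jane --namespace=dev
```

The user gets exactly the verbs listed and nothing else, which is the least-privilege principle in practice.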

          Logging across the infrastructure layers is a must-have best practice. Audit logging using cloud vendor services such as AWS CloudWatch, AWS CloudTrail, Azure OMS and Google Stackdriver will allow you to measure trends and find abnormal behaviour.

          Networking is commonly overlooked, because it is sometimes treated as a magic unicorn. Understanding how traffic flows in and out of the containers is where the need for security truly starts. Networking theory makes this complicated, but underlying tools like firewalls and proxies, and cloud-enabled services like Security Groups, can redirect or restrict traffic to the correct endpoints. With Kubernetes, private clusters can be used to send traffic securely.

          How does the container store secrets? This is a question your organisation should ask when encrypting data at rest and in transit.
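One common answer in Kubernetes is to keep sensitive values out of the image and manifest entirely and inject them as Secrets. A sketch with placeholder names and a dummy value:

```shell
# Store the credential in the cluster rather than in the container image
kubectl create secret generic db-credentials \
  --from-literal=password='S3cr3t!'

# Verify the secret exists without printing its value
kubectl get secret db-credentials
```

Pods then reference the secret as an environment variable or mounted volume, so the value never appears in source control.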


          Runtime security

          Runtime security is often overlooked, but making sure that a team can detect and respond to security threats while a container is running shouldn’t be. The team should monitor abnormal behaviours such as network calls, API calls and even login attempts. If a threat is detected, what are the mitigation steps for that pod? Isolating the container on a different network, restarting it, or stopping it until the threat can be identified are all ways to mitigate. Another overlooked area of runtime security is OS logging. Keeping the logs inside an encrypted, read-only directory will limit tampering, but of course someone will still have to sift through them looking for abnormal behaviour.

          Whenever security is discussed, a shared-responsibility model is commonly depicted. When it comes to security, it is ultimately the organisation’s responsibility to keep the application, data, identity and access control secured. Cloud providers do not prevent malicious attackers from attacking the application or the data. If untrusted libraries are used, or access is misconfigured inside or around the containers, then everything falls back on the organisation.

          Check out my blog post Containers on AWS: a quick guide.


            Market leaders always push the envelope



            In this blog post, I will be picking up on what my colleague Sandip discussed in his latest blog post, ‘Innovating by Making a Difference’. Based on that, I wanted to take the opportunity to talk about how Nordcloud Germany has managed to stay on top of the industry for the last year or two. It’s been about focussing on the right things at the right time. For example, we haven’t worked in the private cloud space, and we haven’t been involved in the SaaS world of productivity, collaboration or CRM. We have stayed focussed purely on the leading public cloud platforms, AWS, Azure and Google, to deliver full-stack consultancy and services.

            At Nordcloud, we’re able to keep our customers – not just ourselves – on top of the game by understanding everything we can, identifying what is most valuable for our customers and then adopting the latest services from each provider. These include, for example, services around containers (Kubernetes, for instance) and serverless (Lambda), as well as the Internet of Things and Machine Learning. Our work with companies of all industries and sizes is the foundation of being able to filter the different technologies for what matters most. In this sense, our customers are the ones who teach us how to help them best, and we can then pick the best technologies to do just that.

            We were recently screened by the leading cloud market analyst in Germany on how we deliver state-of-the-art managed cloud services. Check out CRISP’s perspective here (in German).

            We’re proud to be recognised as a leading provider in the Cloud consulting and service industry, who stands out amongst a vast number of peers in the market. If there is one thing we have realised throughout the years – both as a company and as individuals – it’s that you shouldn’t stop innovating and questioning. To stay on top, it’s not enough to just do the basics well. You have to keep going forward and step beyond your comfort zone at all times. At the same time, you shouldn’t be running after each new hype, but picking your game wisely and then building up expertise and concepts around that area.


              Throwback – AWS Tech Community Days Summit, Cologne



              For the first time ever the largest and most technical gathering of the German AWS community took place in Cologne. Cloud and software experts from all over Germany shared their experiences and best practices on how to use AWS in the best possible ways. Nordcloud, as an AWS Premier Partner, recognises the value and importance of this kind of event and participated with a team of experts, whilst supporting the event as a Gold Sponsor.

              The AWS tech community talks were categorised into four key areas: Cloud Software Architecture, DevOps, Big Data & AI (‘Streaming & Freeform’) and Containers (‘Kubernetes’). This enabled experts to participate by talking about any of the topics in the context of real-life projects or theoretical work. Four talks ran in parallel throughout the day, giving attendees the opportunity to switch between the tracks based on their interests. Between talks, there were opportunities to mingle with other attendees, which provided a great networking platform and a chance to discuss the challenges the speakers had covered.

              Lessons Learned

              We took some nuggets of information home with us from the event. Here are our lessons learned:

              Containers (Kubernetes)

              Although the talks were mostly from independent speakers from different companies, most of them were attempting to solve or were challenged with similar problems when using Kubernetes. The challenge was the difficulty in finding an efficient and highly available Kubernetes solution in AWS. We hope in the future AWS will address this during the upcoming re:Invent in Las Vegas this November. We will have our team on the ground there to make sure the word is spread quickly.

              Cloud Software Architecture

              In this track, we had the chance to hear about a variety of topics. One of the most exciting was about how to achieve more security with serverless, with plenty of discussion on how to properly secure today’s most commonly used serverless setup: API Gateway, Lambda, DynamoDB and S3. We had the chance to see best practices and real use cases from different industries.

              Streaming and Freeform

              In this track, we learned how to manage PostgreSQL RDS in order to host several customers without impacting query performance, whilst enabling blue-green deployments. We also discovered how to build a cost-effective and scalable infrastructure for handling near-real-time ingestion and analytics of large quantities of sensor data, based on the Lambda, S3, DynamoDB and Redshift services.

              Upcoming AWS Transformation Day in Cologne

              Looking for other ways to add value to your business? This AWS Transformation Day in Cologne brings together enterprises that have already harnessed the advantages of the cloud to share experiences and discuss with their peers how to leverage the cloud’s potential. Get more information and register for free here.
