Journey To CKA and CKAD


Life at Nordcloud

This article is about trusting yourself to accomplish new things and achieve your goals – specifically, Daniel’s journey to the CKA and CKAD certifications.

(Picture of Daniel in Yosemite National Park.) 

Last year I was working at Huawei in a position that, from the outside, must have looked interesting. However, I was not satisfied with it, and I had started to look for something that would suit me better from a technology point of view.

This is how I found Nordcloud and their UK based subsidiary Nordcloud LTD.

Nordcloud went through some serious expansion last year and are still hiring tens of people in several countries. We in the UK have a few open positions if anyone is interested.

I joined Nordcloud in January and I could not have made a better decision.

They provide me with just the right amount of hands-on tasks to keep me in the game, rather than letting me become a purely theoretical architect.

I always thought that without real hands-on experience you cannot call yourself a technical architect.

Everybody can talk about technology (we have seen it with several brain dumpers), but being able to talk about it and also implement it with proper design – that is where the real knowledge resides.

When I joined Nordcloud I was already into containerisation.

My friend, Vinayak Kumar, was an SRE at a company where he designed and managed several K8s clusters and a K8s-based environment spanning different regions of the world. The technology was just fascinating.

For me, the whole experience was comparable to when I first encountered VMware virtualisation back in 2007-2008. I instantly knew I had to work with this technology and become an expert in it.

Nordcloud LTD is not a huge consultancy yet; however, we are growing and contributing to group-level directives and solutions as well.

I have agreed with my lead, Harry Azariah, that I will pursue becoming a K8s expert – being an Azure Senior Architect working on AKS and focusing on all managed and unmanaged K8s solutions.

So my journey began…

I started to build my own clusters based on Kelsey Hightower’s and Ivan Fioravanti’s Kubernetes the Hard Way Git repositories.

I was watching tens, maybe hundreds, of hours of Kubernetes videos from Kelsey Hightower and others. Luckily, I already had some experience with Docker – I had built Docker Swarm demo environments in Azure a few years before, but still K8s was a bit of a new territory and a challenge. When I thought I had enough knowledge to ask relevant questions, I called my friend Vinay and he was kind enough to jump over from 120 miles away to have a session with me. Yeah, we could have done it online but it’s always good to see a friend!

Anyhow, after that session I knew a lot more and was sure this was the technology I wanted to focus on in the upcoming weeks, months, (years?! 😄)

Fortunately enough, we got a few leads at Nordcloud with K8s and AKS requirements, and I got the chance to put all I had learned into practice. This is when I realised I knew less than I thought. 😄

So I delved even deeper into the rabbit hole and started to work with Ingress controllers such as Nginx. In one of our projects (which I’m still working on) I had the opportunity to start working with the Istio service mesh. The whole experience was like a roller coaster ride. Just when I thought, yeah, I’m confident, something new came up. I think this is what got me excited about the whole K8s experience: a technology I knew little about that can constantly provide new challenges.
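To give a flavour of the resources involved, a minimal Nginx Ingress manifest from that era looks something like this (the host and backend service names are made up for illustration):

```yaml
# Minimal Ingress routing traffic for one host to one backend Service.
# extensions/v1beta1 was the current Ingress API version at the time;
# host, names and port are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-service
              servicePort: 80
```

The Nginx Ingress controller watches for resources like this and configures itself accordingly; the service mesh layer (Istio) then sits underneath with its own traffic-management primitives.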

About this time I decided I wanted to get certified.

With my lead Harry, I agreed that the CKA exam should be the first one to achieve. I jumped on Linux Academy and started the CKA course there. It’s an OK course; you get enough information to understand the requirements listed in the CKA exam curriculum. However, do not expect to be able to pass if you only go through this training.

You must do more – as a bare minimum I would recommend going through the K8s the Hard Way material at least 5 times if you are not managing real-world clusters on a day-to-day basis.

By this time I had already been working with AKS for 4 months, but that is a managed K8s solution, so you have almost zero tasks to manage the master nodes – and you can bet you will get some questions related to those.

11% Cluster Maintenance, 12% Installation, Configuration & Validation, 10% Troubleshooting – all of these can mean you will have to look at some master components.

So I went on a long journey searching for useful exam prep tests and K8s trainings, and found this link on the Kubernetes Slack.

It contains so much information that it is overkill for the exam in general, but some of it is worth going through.

With the CKA exam you can expect to use all of your 3 hours to answer the questions, and maybe get 10 minutes at the end to review what you have done. I used up my time completely, had about that 10 minutes to review, and left 1 question unanswered (worth 8%). I decided to go and check my solutions for the other questions rather than bother with that one.

The main reason you will use the full 3 hours is that you have to type a lot. Even if you know where to find templates in the documentation (which you are allowed to use), it is still a lot to do. Even if you use something like “kubectl run mypod --image=nginx --restart=Never --dry-run -o yaml > pod.yaml” to generate your base config, it’s a lot to achieve on the K8s resource side, not to mention the install/manage questions.

I definitely recommend enabling shell completion: echo "source <(kubectl completion bash)" >> ~/.bashrc

Personally, I did not use any aliases I had configured, though some people find that useful. I work with aliases in my day-to-day environments, but for the exam I did not find them useful (I configured a few though).
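For those who do want them, a minimal ~/.bashrc addition along the lines recommended in the kubectl documentation looks like this (the “k” alias is a common convention, not a requirement):

```shell
# Append kubectl bash completion plus a short "k" alias to ~/.bashrc.
# __start_kubectl is the completion entry point defined by the
# "kubectl completion bash" script.
cat >> ~/.bashrc <<'EOF'
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
EOF
```

The `complete` line makes tab completion work for the alias too, which is the main reason to set it up before exam day rather than during it.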

  • Know the documentation and where to find what.
  • Watch out, as some links navigate off the kubernetes.io domain and that is not allowed; but in general, if you know where to find things, or how to ask the right question when you get stuck, the documentation will be a life saver.
  • Build an AKS, EKS or GKE cluster and use that to prep with the Kubernetes resources (it’s faster to build than a K8s the Hard Way cluster and it does not depend on your setup).
  • Do deployments of objects until you feel bored with it, until you literally wake up at night and hear your thoughts going round: “apiVersion: v1 kind: Pod metadata: labels: app: someapp spec: containers:”

That is the time when you can feel confident about your knowledge… not joking…
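For reference, that late-night mantra corresponds to a minimal manifest like this (the name, label and image are just placeholders):

```yaml
# The pod skeleton worth knowing by heart for the exam;
# name, label and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: someapp
  labels:
    app: someapp
spec:
  containers:
    - name: someapp
      image: nginx
```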

  • Build a habit of using the commands which help you generate templates or create resources quickly. There is a really good cheat sheet from Denny Zhang to start with.
  • Do some tests in a practice environment. The exam environment is nothing too complex, but it’s a browser-based exam, not a basic SSH session from your tty client.
  • To get a look and feel, I tried this environment from Arush Salil.
Practical tips for the day of the exam:
  • Find a place with good WiFi Coverage and without any distractions.
    I sat in a phone booth at the office; however, my WiFi was awful when I shared my camera (I had not tested it properly before), so I had to find another place to do the exam from. Save yourself the 20 minutes of worry which I had…
  • Do some video calls with someone from the location you will do your exam from.

Luckily the proctor was reasonable enough to give me time to find another place.

  • I would not recommend doing it from home. I’ve heard horror stories from others about proctors asking to cover everything in the room and such.
  • Have a glass of water with you. As I have mentioned, you won’t have much time to leave the exam… No food, headset, papers, or other electronics are allowed on the desk or around you.
  • Your face and eyes must always be on the screen. I was asked several times to adjust my camera (Dell XPS 15) or my position because I was leaning too close to the screen… That was annoying – an external camera would probably have been better.

After passing the CKA:

I must say the CKAD was like a walk in the park. I went through the Linux Academy course just to have some training and then took the exam.

With the extensive preparation I had put in for the CKA (you need a lot, as it covers almost “everything”) and the Linux Academy course, I easily passed the CKAD. The exam is only 2 hours long and I finished it about 15 minutes early.

I can’t say that anybody who passes the CKA can easily pass the CKAD, but for me it was not a problem. However, it is worth mentioning that by this time I was already 5 months into an AKS project, working with probes, persistent storage and Deployments on an almost day-to-day basis.

So where to from here?

I’m definitely going to stick with this technology; it gives me the chills with all the challenges and new aspects it comes with. Nordcloud is a place that lets its employees flourish if they can and are willing to put in the additional effort.
There are some plans in my head: get to know other K8s distributions like OpenShift better (already studying), delve into EKS and GKE more to see how they really compare, and, in the long run, build a K8s practice at Nordcloud LTD. As far as I can see, my leads are partners in this.

What is the conclusion of all this?

I think for me it is: never be afraid of change. Admit to yourself what you think you need and want. I did this a bit more than 3 years ago, when I came to the UK after several years as a Solution/Enterprise Architect and took a Senior Consultant position, and again about 7 months ago, when I accepted Nordcloud’s offer of a hands-on Senior Architect position after a Product Owner/Architect role. I can clearly say it was totally worth it – my move 3 years ago and my decision this year; I was never happier than when I made these two decisions in my professional life.

There is really something in the saying attributed to Confucius: “He who says he can and he who says he can’t are both usually right…” If you want something, do it; you just need to put the required time and effort into it, and you can achieve anything.

Trust in yourself, and do not wait for others to make your life happen! Because when you trust in yourself, that is when magic happens in your life. 😊

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Istio webinar by Nordcloud



    What is Istio?

    Istio is an open platform-independent service mesh that provides traffic management, policy enforcement, and telemetry collection.

    Istio addresses the challenges developers and operators face on the transition from monolithic architecture to a distributed microservice architecture. To learn how, it helps to take a more detailed look at Istio’s service mesh.


    Join the Istio webinar by Nordcloud!

    The webinar will be hosted (in Polish) by one of our Cloud Architects – Piotr Kieszczyński.


    31.01.2019 18.00 CET


    Agenda:
    – What is Istio?
    – Why is it important?
    – Quick Demo
    – Q&A

    Webinar Connection Info:

    One tap mobile
    +48223987356,,723434349# Poland
    +48223073488,,723434349# Poland

    Dial by your location
    +48 22 398 7356 Poland
    +48 22 307 3488 Poland
    Meeting ID: 723 434 349


      Four compelling reasons to use Azure Kubernetes Service (AKS)



      Management overhead, inflexibility and lack of automation all stifle application development. Containers help by moving applications and their dependencies between environments, and Kubernetes orchestrates containerisation effectively.

      But there’s another piece to the puzzle.

      Azure Kubernetes Service (AKS) is the best way to simplify and streamline Kubernetes so you can scale your app development with real confidence and agility.

      Read on to discover more key benefits and why AKS is the advanced technology tool you need to supercharge your IT department, drive business growth and give your company a competitive edge over its rivals.

      Why worry about the complexity of container orchestration, when you can use AKS?

      1. Accelerated app development

      75 percent of a developer’s time is typically spent on bug-fixing. AKS removes much of the time-sink (and headache) of debugging by handling the following aspects of your development infrastructure:

      • Auto upgrades
      • Patching
      • Self-healing

      Through AKS, container orchestration is simplified, saving you time and keeping your developers productive. It’s a way to breathe life into your application development by combatting one of developers’ biggest time-sinks.

      2. Supports agile project management

      As this PwC report shows, agile projects yield strong results and are typically 28 percent more successful than traditional projects.

      This is another key benefit of AKS – it supports agile development practices such as continuous integration (CI), continuous delivery/continuous deployment (CD) and DevOps. This is done through integration with Azure DevOps, ACR, Azure Active Directory and Monitoring. An example of this is a developer who puts a container into a repository, moves the builds into Azure Container Registry (ACR), and then uses AKS to launch the workload.

      3. Security and compliance done right

      Cyber security must be a priority for all businesses moving forward. Last year, almost half of UK businesses suffered a cyber-attack and, according to IBM’s study, 60 percent of data breaches were caused by insiders. The threat is large, and it often comes from within.

      AKS protects your business by enabling administrators to tailor access using Azure Active Directory (AD) user and group identities. When people only have the access they need, the threat from internal teams is greatly reduced.

      You can also rest assured that AKS is totally compliant. AKS meets the regulatory requirements of System and Organisation Controls (SOC), as well as being compliant with ISO, HIPAA and HITRUST.

      4. Use only the resources you need

      AKS is a fully flexible system that adapts to use only the resources you need. Additional processing power for processor-intensive operations, such as scientific computations, is available via graphics processing units (GPUs). If you need more resources, it’s as simple as clicking a button and letting the elasticity of Azure Container Instances do the rest.

      When you only use the resources you need, your software (and your business) enjoys the following benefits:

      • Reduced cost – no extra GPUs need to be bought and integrated onsite.
      • Faster start-up speed compared to onsite hardware and software, which take time to set up.
      • Easier scaling – get more done now without worrying about how to manage resources.

      Scale at speed with AKS

      The world of applications moves fast. For example, 6140 Android apps were released in the first quarter of 2018 alone. Ambitious companies can’t afford the risk of slowing down. Free up time and simplify containerisation by implementing AKS, and take your software development to the next level.

      To find out how we get things done, check out Nordcloud’s approach to DevOps and agile application delivery.

      Feel free to contact us if you need help in planning or executing your container workload.


        How containerized applications boost business growth



        When it comes to applications and other software, businesses need to balance innovation, security and user experience against the cost of development.

        However, too many are stuck with legacy software that isn’t suitable for their needs and legacy development processes that increase the cost of building software without increasing its value.

        This is why more and more companies are embracing containerised applications to improve development productivity, time to market and innovation without sacrificing security or reliability.

        In this blog, we will not only explain what containers are, but show you how they can benefit your business.

        What are containers?

        Containers, in this context, are just another way to build and deploy applications. Instead of building monolithic applications that run in their entirety on servers, containerised applications are made up of Lego-like building blocks called containers. Each container is a standalone package of software that embodies an element of the overall application.

        Containerised applications take advantage of the scalability of cloud infrastructure, coping with peaks and troughs in demand. They also move quickly and reliably from one computing environment to another, for example moving from a development environment to a full-scale production environment.

        How containerised applications benefit your business

        Software is not simple. To work properly, it relies on everything doing its job at the right time. With multiple applications running on the same operating system, you may find that some don’t mesh together well. Containerisation reduces this risk by letting you build, test, update and integrate individual containers separately. In this way containerisation supports a DevOps approach to development.

        In the past, this was done through virtual machines. But they take up a lot of storage, aren’t so portable and are difficult to update. What’s more they don’t provide the same level of continuous integration and delivery of service that containers do.

        Here are three other ways that containerisation can benefit your business:

        1. Agility

        Containers make it easier to manage application operations and updates. For the best results, you need to work with a specialist IT partner to provide automated container orchestration. This means you can reap the benefits of your containerised applications quickly and effectively. While containerisation can sound complicated, with the right help, it can revolutionise the way you think about IT.

        2. Scalability

        You can meet growing demand and business need by scaling all or parts of your containerised applications almost instantly. You can do this by using software solutions like OpenShift on Azure, a managed container orchestration service hosted on Azure. It allows you to scale your application infrastructure efficiently.

        3. Portability

        One of the greatest benefits of containerisation is portability. You can reduce or avoid integration issues that slow your day-to-day business activities in a traditional computer environment. You can even stay productive if you move your application onto another device, cloud platform or operating system.

        Make your business more productive and scalable with OpenShift on Azure

        With OpenShift on Azure, you can manage and scale your containerised applications easily, boosting your DevOps ambitions by accelerating containerised application development. So, if you want to develop and scale your IT infrastructure with confidence, look no further than OpenShift on Azure.

        Feel free to contact us if you need help in planning or executing your container workload.


          Microservices and containerisation: 4 things every IT manager needs to know



          The popularity of microservices and containerisation has exploded in recent years, with 60 percent of businesses already adopting the technology in one form or another. And the trend shows no signs of slowing down.

          In fact, since the take-off of Docker in 2013, containerisation and microservices have been reinventing the IT landscape, becoming one of the most sought-after tools for digital transformation.

          But, as with any new technology, it’s important to look beyond the hype.

          What are microservices and containerisation?

          Containerisation is a method of virtualisation that separates applications and services at the operating-system level. Unlike hypervisor virtualisation, these containers aren’t split from the rest of your architecture; instead, they share the same operating system kernel.

          Microservices use containerisation to deliver smaller, single-function modules, which work in tandem to create more agile, scalable applications. Due to this approach, there is no need to build and deploy an entirely new software version every time you change or scale a specific function.

          Deploying containerisation in your business

          When it comes to making containerisation and microservices a business reality, there are a few key points every IT manager needs to know.

          1. Docker and Kubernetes are the market leaders

          Since its conception, Docker has become synonymous with the containerisation industry. As of 2018, more than half of IT leaders said they ran Docker container technology in their organisations.

          In second place was Kubernetes – the container orchestration platform. Together, these technologies are revolutionising microservices and overseeing its rise as a viable replacement for traditional, monolithic infrastructures.

          2. Containerisation is the natural successor to virtualisation

          No one can deny the impact that virtual machines have had on IT but containerisation gives developers a new, more flexible, born-in-the-cloud and potentially more cost-effective way to build applications.

          This allows application developers to respond faster to changing market needs and growing demand.

          Containerisation builds on the foundation virtualisation has laid by further optimising the use of hardware resources. As a result, IT managers and developers can now make changes to isolated workloads and components without making significant changes to the application code.

          3. Portability and consistency are the main drivers

          Ever since it first arrived on the IT scene, containerisation has been integral to the DevOps movement. Its design makes it possible to move application components and workloads between a range of environments, from in-house servers to public cloud platforms.

          Remaining infrastructure agnostic gives microservices the edge over traditional application delivery methods, as there is little need for configuration or code changes when porting services. Software quality also becomes far more consistent when you use containerisation, ultimately leading to faster development cycles.

          4. Orchestration makes all the difference

          With a greater number of moving parts comes the potential for greater friction. While microservices are designed to streamline the delivery of applications and workloads, they still need some level of hands-on management.

          Often, organisations don’t see the full benefit of microservice adoption because they’re still running containers inside traditional VMs. This is like freeing a bird from its cage, but never letting it leave the house.

          To gain the most benefits from containerisation, your applications need the freedom to move around your entire estate – no matter how many environments it spans. This is where an orchestration tool, such as Kubernetes, becomes essential.

          Microservices are no small matter

          If you want to understand the true power of containerisation, look no further than Netflix. The company’s transition from a monolithic infrastructure to cloud microservices has become a core part of the recent technology canon.

          But they couldn’t have done it without the right tools and processes.

          In many cases, poorly implemented containerisation software can lead to more complexity and technical debt. Just as workforce expansion can result in increased HR involvement, transitioning to microservices requires the same level of professional support.

          To find out how we get things done, check out Nordcloud’s approach to DevOps and agile application delivery.

          Feel free to contact us if you need help in planning or executing your container workload.


            Cloud computing news #10: Serverless, next-level cloud tech



            This week we focus on serverless computing which continues to grow and enables agility, speed of innovation and lower cost to organizations.

            Serverless Computing Spurs Business Innovation

            According to Digitalist Magazine, serverless computing is outpacing conventional patterns of emerging technology adoption. Organizations across the globe see technology-driven innovation as essential to compete. Serverless computing promises to enable faster innovation at a lower cost and simplify the creation of responsive business processes.

            But what does “serverless computing” mean and how can companies benefit from it?

            1. Innovate faster and at a lower cost: Serverless computing is a cloud execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. This means that developers are able to focus on coding instead of managing deployment and runtime environments. Also, pricing is based on the actual amount of resources consumed by an application. Thus, with serverless computing, an organization can innovate faster and at a lower cost. Serverless computing eliminates the risk and cost of overprovisioning, as it can scale resources dynamically with no up-front capacity planning required.
            2. Enable responsive business processes: Serverless function services – function as a service (FaaS) – can automatically activate and run application logic that carries out simple tasks in response to specific events. If the task triggered by an incoming event involves data management, developers can leverage serverless backends as a service (BaaS) for data caching, persistence, and analytics via standard APIs. With this event-driven application infrastructure in place, an organization can decide at any moment to execute a new task in response to a given event.

            Organizations also need the flexibility to develop and deploy their innovations where it makes the most sense for their business. Platforms that rely on open standards, deploy on all the major hyperscale public clouds, and offer portability between the hyperscaler IaaS foundations are really the ideal choice for serverless environments.

            Read more in Digitalist Magazine

            Nordcloud tech blog: Developing serverless cloud components

            A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

            A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

            Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

            Check out our tech blog that takes a look at some new technologies for developing cloud components

            Nordcloud Case study: Developing on AWS services using a serverless architecture for Kemppi 

            Nordcloud helped Kemppi build the initial architecture based on AWS IoT Core, API Gateway, Lambda and other AWS services. We also designed and developed the initial Angular.js based user interface for the solution.

            Developing on AWS services using a serverless architecture enabled Kemppi to develop the solution in half the time and cost compared to traditional, infrastructure-based architectures. The serverless expertise of Nordcloud was key to enabling a seamless ramp-up of development capabilities in the Kemppi development teams.

            Read more on our case study here

            Serverless at Nordcloud

            Nordcloud has a long track record with serverless, being among the first companies to adopt services such as AWS Lambda and API Gateway for production projects, already in 2015. Since then, Nordcloud has executed over 20 customer projects using serverless technologies for several use cases such as web applications, IoT solutions, data platforms and cloud infrastructure monitoring and automation.

            Nordcloud is an AWS Lambda, API Gateway and DynamoDB partner, a Serverless Framework partner, and a contributor to the serverless community via open source projects, events and initiatives such as the Serverless Finland meetup.

            How can we help you take your business to the next level with serverless?


              AWS Fargate – Bringing Serverless to Microservice



              Microservices architecture

              Microservices architecture has been a key focus for a lot of organisations in the past few years. Organisations around the world are changing from the traditional monolithic architecture to a faster time-to-market, automated, and deployable microservices architecture. The microservices approach has a number of benefits, but the two that come up the most are how the software is deployed and how it is managed throughout its lifecycle.

              Pokémon Go & Kubernetes

              Let’s look at a real-world scenario: Pokémon Go. We wouldn’t have Pokémon Go if it wasn’t for Niantic Labs and Google’s Kubernetes. Those of you who played this once-addictive game back in the summer of 2016 know all about the technical issues they had. It was the microservice approach of using Kubernetes that allowed Pokémon Go to fix technical issues in a matter of hours rather than weeks. This was because each microservice could be updated with a new patch, and thousands of containers could be created within seconds during peak times.

              With a microservice architecture, using a popular container engine like Docker together with container orchestration software like Kubernetes (K8s), everything in the web server is broken down into individual APIs. This gives microservices more agility, flexible scaling, and the freedom to pick the programming language or version used for each API, instead of one for all of them.

              It can be defined in more ways than one, but it is commonly used to deploy well-defined APIs and to help streamline delivery and deployment.


              Serverless the next big thing

              Some experts believe that serverless will be the next big thing. Serverless doesn’t mean there are no servers, but it does mean that management and capacity planning are hidden from the DevOps teams. Maybe you have heard about FaaS (Functions as a Service) or AWS Lambda. FaaS is not for everyone, but what if we could bring some of the serverless architecture along to the microservice architecture?


              AWS Fargate

              This is why, back in November at AWS re:Invent 2017 (see the deep dive here), AWS announced a new service called AWS Fargate. AWS Fargate is a container service that allows you to provision containers without needing to worry about the underlying infrastructure (VM/container/node instances). AWS Fargate will work with ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). It is currently only available in us-east-1, in preview mode.

              AWS Fargate simplifies the complex management of microservices by allowing developers to focus on the main task of creating APIs. You will still need to think about the memory and CPU required for your APIs or application, but the beauty of AWS Fargate is that you never have to worry about provisioning servers or clusters, because AWS Fargate will autoscale for you. This is where microservices and serverless meet.
