Why not Both: Learning AWS and Azure Fundamentals Hands-On

CATEGORIES

Blog, Tech Community

The two biggest cloud vendors each have their own entry-level certification: AWS Certified Cloud Practitioner and AZ-900 (Microsoft Certified: Azure Fundamentals). These certifications give you a solid grounding in the basics of AWS or Azure. Both providers recommend that you have basic IT knowledge and at least 6 months of experience working with the provider in question before starting the process.

I’ll admit it, the task may sound daunting. Who has a minute to spare, let alone 6-12 months, watching videos that demand really close attention, or reading a stack of dry white papers, each 300+ pages long? But what if I told you about another way: learning by doing.

I didn’t want to just pass the exam. I wanted to learn new technologies surrounding the cloud and expand my understanding of what the cloud actually was.

In this post, I’ll give you an overview of both of these certifications and explain how I skipped reading hundreds of pages of manuals and learned by building my own solutions hands-on.

Further down, I have included in detail the curriculum you need to study to pass each exam.

AWS Cloud Practitioner


I have previously written about my Cloud Practitioner journey, but in that post I didn’t exactly touch on how I prepared for the examination. So here we go.

I fired up my desktop computer and created an AWS account first. Then I went over to Qwiklabs and worked through real-life scenarios like “Introduction to Amazon Virtual Private Cloud“, “Creating an Amazon Virtual Private Cloud with AWS CloudFormation“, and “Introduction to AWS Identity and Access Management (IAM)“. How do I create a basic website, or a blog like the one you are reading right now? From there I started to understand the basics of AWS Cloud architectural principles.
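To give you a flavour of what the CloudFormation lab involves, here is a minimal sketch of a template that creates a VPC with a single public subnet. The CIDR ranges and resource names are my own illustrative choices, not the lab's:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal VPC with one public subnet (illustrative sketch only)

Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
```

Deploying even a toy template like this makes the relationship between a VPC and its subnets much more concrete than reading about it.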

Then comes the wallet hit. Oops, I didn’t realize how quickly AWS resources start to add up.

Lucky for me, it was only $5, but that was money I didn’t want to spend. I navigated over to the Simple Monthly Calculator provided by AWS and started to build my solutions there before implementing them in my Free Tier account. Sometimes it would quote a scary number, but remember: it is a monthly calculator, not an estimate of 1-2 days of usage. Now that I understood the billing, account management, and pricing models, it was time for me to learn how to create a support ticket. Sorry AWS, but I had to learn.
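To make the monthly-versus-actual distinction concrete, here is a quick back-of-the-envelope calculation. The hourly rate is an illustrative assumption, not a current AWS price:

```python
# Rough comparison: what a monthly calculator shows vs. what a short
# hands-on experiment actually costs. The hourly rate is illustrative.
HOURLY_RATE = 0.0116  # e.g. a small on-demand instance, USD/hour (assumed)

def monthly_estimate(hourly_rate: float, hours_per_month: int = 730) -> float:
    """What a monthly calculator reports: the instance running 24/7."""
    return hourly_rate * hours_per_month

def actual_cost(hourly_rate: float, hours_used: float) -> float:
    """What you actually pay for a short lab session."""
    return hourly_rate * hours_used

full_month = monthly_estimate(HOURLY_RATE)          # ~8.47 USD
two_lab_days = actual_cost(HOURLY_RATE, 2 * 8)      # two 8-hour days, well under $1
print(f"Monthly estimate: ${full_month:.2f}, actual: ${two_lab_days:.2f}")
```

The calculator's figure is an upper bound for always-on usage; short experiments cost a small fraction of it.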

I wanted to see if I could get a soft limit of 5 VPCs increased to 6. So, I wrote a nice mail to AWS Support:

Hi AWS Support Team,
Can you increase the VPC limit to 6. Just testing out how to increase AWS limits.
Thanks 😀

To my surprise, it was done by the next business day, and that was on the Basic support plan. Kudos to AWS.

 

Azure Fundamentals


With the AWS Certified Cloud Practitioner exam passed, it was time for me to learn more about Azure. By the time I got to the Azure exam in January 2019, Microsoft had already released AZ-900 (Microsoft Certified: Azure Fundamentals). As you might have figured out by now, I do not like reading white papers; I would rather work things out on my own. Once again, I sat down at my desktop and got down to business.

Microsoft had launched a new learning platform called Microsoft Learn. If you have ever used the Microsoft Virtual Academy, it is essentially an updated version of that. The first things I started with were Introduction to Azure and Azure Fundamentals. Both courses took me just about 1-2 hours to complete.

What I really liked about Microsoft Learn was the hands-on learning. It allowed me to put my reading to practical use. I finished up my AZ-900 training with Architecting Great Solutions on Azure and Manage Resources in Azure. Both courses took me about 1-3 days to finish, and I also went back to redo several of the modules.

AWS vs Azure: Certification Comparison


Here you can see the similarities and differences between the two certifications.

Exam Objectives

AWS:
  • Define what the AWS Cloud is and the basic global infrastructure
  • Describe basic AWS Cloud architectural principles
  • Describe the AWS Cloud value proposition
  • Describe key services on the AWS platform and their common use cases (for example, compute and analytics)
  • Describe the basic security and compliance aspects of the AWS platform and the shared security model
  • Define the billing, account management, and pricing models
  • Identify sources of documentation or technical assistance (for example, whitepapers or support tickets)
  • Describe basic/core characteristics of deploying and operating in the AWS Cloud

Azure:
  • Describe the benefits and considerations of using cloud services
  • Describe the differences between Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS)
  • Describe the differences between Public, Private and Hybrid cloud models
  • Understand the core Azure architectural components
  • Describe some of the core products available in Azure
  • Describe some of the solutions available on Azure
  • Understand Azure management tools
  • Understand securing network connectivity in Azure
  • Describe core Azure Identity services
  • Describe security tools and features of Azure
  • Describe Azure governance methodologies
  • Understand monitoring and reporting options in Azure
  • Understand privacy, compliance and data protection standards in Azure

Main Subject Areas

AWS:
  1. Domain 1: Cloud Concepts (28%)
  2. Domain 2: Security (24%)
  3. Domain 3: Technology (36%)
  4. Domain 4: Billing and Pricing (12%)

Azure:
  1. Understand Cloud Concepts (15-20%)
  2. Understand Core Azure Services (30-35%)
  3. Understand Security, Privacy, Compliance, and Trust (25-30%)
  4. Understand Azure Pricing and Support (25-30%)

Preparation Notes

AWS:
  • Create an AWS account
  • Explore using the Free Tier access
  • Read AWS whitepapers
  • Third-party training material (YouTube, Linux Academy, Pluralsight)

Azure:
  • Create an Azure account
  • Set up a Free Tier subscription
  • Microsoft Learn
  • Third-party training material (YouTube, Linux Academy, Pluralsight)

Exam Process

AWS:
  • Most testing centers allow you to start the exam up to 1 hour early, but do consult with your exam center of choice.
  • Currently, the exam is 65 multiple-choice questions that need to be completed within 90 minutes. It requires a score of 700 to pass.

Azure:
  • Most testing centers allow you to start the exam up to 1 hour early, but do consult with your exam center of choice.
  • Currently, the exam consists of 51 multiple-choice, drag-and-drop, and drop-down-list questions. These need to be completed within 60 minutes, and the exam requires a score of 700 to pass.
  • If English is not the native language in your country, you can ask for extra time. To do this, request test accommodations from Pearson VUE or Certiport.

Pass/Fail

AWS:
  • Amazon scores on a scale of 100-1000 and, much like the US education system, curves the results.
  • Do not expect to see the exam results immediately after finishing, though you might notice a message either congratulating you for passing or thanking you for participating. If you are like most of us, you may never see that message, because you got up and out of the testing center or simply didn’t register what you just read. You will receive a separate email within 5-7 business days. Once you get the results back, you will not be able to see what the grade curve was, but for this exam anything above 700 is passing.

Azure:
  • Microsoft scores on a scale of 1-1000 and, much like the US education system, curves the results.
  • The actual percentage of questions that you must answer correctly varies from exam to exam and may be more or less than 70%, depending on the input provided by the subject-matter experts and the difficulty of the questions delivered.
  • You get your results as soon as you finish the exam, along with a printed bar graph showing what you did well on and what you need to work on. Do note that even if the bar graph shows everything above 70%, that does not necessarily mean you passed. The reason is that the passing score is 700, and this is a scaled score.
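To illustrate why a 70% raw score does not map directly onto the 700 passing mark, here is a toy linear scaling model. The real scaling formulas are proprietary and vary per exam form; every number below is purely illustrative:

```python
def scale_score(raw_correct: int, total: int,
                lo: int = 100, hi: int = 1000) -> float:
    """Map a raw score linearly onto a 100-1000 scale.

    Real exams use proprietary, per-form scaling, so this is only a
    toy model of why raw percentage and scaled score differ.
    """
    fraction = raw_correct / total
    return lo + fraction * (hi - lo)

# 45 of 65 questions correct (about 69%) under this toy model:
print(scale_score(45, 65))  # ≈ 723, above 700 in this toy model
print(scale_score(43, 65))  # ≈ 695, just below 700
```

The point is that the scaled passing mark of 700 does not correspond to a fixed raw percentage; the mapping shifts with the difficulty of the question set you are given.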

I would say that doing the AWS certification first definitely helped me pass the Azure one.

Best of luck passing the AWS Certified Cloud Practitioner and AZ-900 (Microsoft Certified: Azure Fundamentals) exams!

Blog

Starter for 10: Meet Jonna Iljin, Nordcloud’s Head of Design

When people start working with Nordcloud, they generally comment on 2 things. First, how friendly and knowledgeable everyone is. Second,...

Blog

Building better SaaS products with UX Writing (Part 3)

UX writers are not omniscient, and it’s best for them to resist the temptation to work in isolation, just as...

Blog

Building better SaaS products with UX Writing (Part 2)

The main purpose of UX writing is to ensure that the people who use any software have a positive experience.

Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








    Employee Training Is Essential For Organisation’s Success


    Bridge the digital transformation skills gap with cloud training.

    Talent Shortage Is A Top Emerging Risk Facing Organisations

     
According to a 2017 report by Global Knowledge, a massive 68% of IT decision-makers reported a skills shortage in their teams.

Gartner research from 2018 indicated that companies need to shift from external hiring strategies towards training their current workforces. Statistics clearly show the importance and benefits of employee development: a more competitive workforce, increased employee retention, and higher employee engagement.

    “Organizations face huge challenges from the pace of business change, accelerating privacy regulations and the digitalization of their industries,” said Matt Shinkman, managing vice president and risk practice leader at Gartner. 

The use of cloud has grown rapidly over the last 4-8 years. Compliance, automation, improved security, infrastructure as code, better DevOps practices, and developing cloud-native applications are just some of the common reasons corporations want to move to the cloud.

Adopting a new cloud environment involves extensive change, from skills to processes to technology. With the pace of technology change, it is hard for companies to keep up, and even harder for the average employee.

    Skills Development: Hard And Soft Skills

    Skills development is a process where we turn from a beginner to a junior, and ultimately a senior. Skills development comes down to two key factors: identifying the skills gaps and developing those skills.

    When developing skills, it is commonly broken down into hard and soft skills:

    • Hard skills are specific to a task and tend to be knowledge-based. Having skills in programming, software, or even another language is classified as a hard skill.
    • Soft skills are what we as humans have been learning from our first breath. Personality, leadership, time and stress management, decision making, ability to deal with adversity, and most of all networking.

Hard skills tend to get the most attention, but having a smart co-worker does not help the corporation if that employee is unwilling to share information or can't handle stress. Therefore it is best to help employees build both kinds of skills.

    Bridge The Skills Gap With Cloud Training

According to a recent study, U.S. training expenditures in 2017 increased 32.5 percent to $90.6 billion. We at Nordcloud have also seen a drastic increase in our cloud training business: during the last year, participation numbers grew by 66%.

As an example, last year we arranged 101 AWS courses with an excellent average participant satisfaction score of 4.50/5.

“Excellent course, materials and instructor! Gave a good overview of AWS. There were also some interesting questions asked, the answers were good and the instructor seemed very educated on the topic. All in all, a good training!”

“The pace was very high which was great for me. I really enjoyed the presentation and the trainer was very knowledgeable. I learned a lot about the basic AWS technologies and all the technical questions were well answered. Great instructor, any question and the answer comes out like from a machine gun. Very much knowledge of the topic in her.”

As a cloud-native company, Nordcloud knows the public cloud well, and we understand what it takes to succeed there. We provide cloud training for both individuals and companies.

    Check our Cloud Training here – and sign up!

















        Cloud-Native Development (AWS CodeStar with AWS Cloud9)


Back in April 2017, AWS released CodeStar, a cloud service designed to make it easier to develop, build, and deploy applications on AWS by simplifying the setup of your entire development project. That sounds to me a lot like a DevOps dream.

AWS CodeStar is best thought of as a bundle of services: under the hood it uses CodeCommit, CodeBuild, CodePipeline, CodeDeploy, Lambda, EC2, Elastic Beanstalk, CloudFormation, and Cloud9.

        Let me guide you step by step on setting up AWS CodeStar.

        CodeStar Workflow

The first thing you are presented with is a set of templates to help you get started with a web application or web service. The templates cover popular programming languages like C#, Go, HTML5, Java, Node.js, PHP, Python, and Ruby. Each language can be launched onto different AWS services such as EC2, Elastic Beanstalk, and AWS Lambda. For this article, I decided to create an HTML5 project, for which EC2 is the only AWS service offered.

[Screenshot: CodeStar workflow]

Once you have decided on your programming language and AWS service, you are presented with the Project details screen. Here you can pick your project name and the repository you would like to use. I decided to go with AWS CodeCommit.

[Screenshot: CodeStar project details]

The repository choice is one limitation that I noticed right away. You only get two options, AWS CodeCommit or GitHub. What about Bitbucket, GitLab, or other Git repositories? That is where setup becomes a little more difficult, because it requires wiring a GitHub webhook through an Amazon API Gateway to an AWS Lambda function that pushes your code to Amazon S3.

[Screenshot: GitHub integration via API Gateway and Lambda]
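To sketch what that webhook hook-up might look like, here is a minimal Lambda handler in Python. The payload fields follow GitHub's push-event format; the actual S3 upload step is left as a comment, since it needs boto3 and a bucket of your own:

```python
import json

def parse_push_event(body: str) -> dict:
    """Extract the repo name and commit id from a GitHub push webhook payload."""
    payload = json.loads(body)
    return {
        "repo": payload["repository"]["name"],
        "commit": payload["head_commit"]["id"],
    }

def lambda_handler(event, context):
    # API Gateway passes the webhook body through as a string.
    info = parse_push_event(event["body"])
    # Here you would fetch the commit archive from GitHub and upload it
    # to S3 with boto3 (e.g. s3.put_object(Bucket=..., Key=..., Body=...)),
    # so CodePipeline can pick it up as a source.
    return {"statusCode": 200, "body": json.dumps(info)}
```

This is only a sketch of the glue logic; in practice you would also verify the webhook signature before trusting the payload.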

Now that we have named the project and picked the repository, we are presented with a pipeline review. AWS CodeCommit will be the source, Build and Test currently have nothing configured, Deploy uses AWS CodeDeploy, and Monitoring uses Amazon CloudWatch.

[Screenshot: CodeStar pipeline overview]

Since I picked EC2, I am able to choose the EC2 configuration: instance type, VPC, and subnet. What I do not like about this is the security side of it: CodeStar assumes that I have already configured the VPC and subnet.

[Screenshot: CodeStar EC2 configuration]

Once you have configured these settings, you are either taken to the next project-creation step or, if an EC2 instance is needed, asked for an Amazon EC2 key pair. This could be another limitation, because you have to have one already created; then again, you can create one in another window and refresh.

[Screenshot: Amazon EC2 key pair prompt]

Maybe you have already been shown this screen, but after the steps above you are presented with connectivity options. You are allowed to use tools beyond the ones listed; these are just the most commonly used, alongside the AWS Cloud9 IDE. If you picked GitHub, it asks you for GitHub integration, but since I picked CodeCommit it provides me with an HTTPS/SSH connection string. HTTPS has been known to have issues pushing to Git, while SSH hasn't; it really depends on your preference and the size of the push.

[Screenshot: CodeStar connection options]

I decided to pick the AWS Cloud9 IDE. Depending on the tool you pick, CodeStar explains how to link your CodeCommit repository to it. Once that is done, the creation of the CodeStar project is finished and the real fun starts.

You can skip the next section explaining AWS Cloud9 if you decide not to use that IDE, but it is something you should definitely consider.

        What is AWS Cloud9?

Back in 2016, AWS acquired a company called Cloud9, which focused on creating an integrated development environment where web and mobile developers could collaborate. It wasn't until re:Invent 2017 that AWS announced AWS Cloud9.

The one major downside of Cloud9 is that it requires an EC2 instance to run. Depending on your workload and how large an instance you create, AWS Cloud9 could cost you less than $2 per month, or considerably more.

[Screenshot: Cloud9 environment settings]

You can also go into the advanced settings and change the network, tags, and cost-saving settings. The cost-saving setting (hibernation) is a really cool feature, because it shuts down the AWS Cloud9 EC2 instance after a chosen idle interval, such as 30 minutes.
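As a rough sanity check on that sub-$2 figure, here is the arithmetic under some illustrative assumptions: a small-instance hourly price I made up for the example, a 30-minute hibernation window, and about 22 working days per month. Real prices vary by region and instance type:

```python
# Illustrative estimate of Cloud9's EC2 cost with auto-hibernation.
# The hourly rate is an assumption, not a quoted AWS price.
HOURLY_RATE = 0.0116   # small on-demand instance, USD/hour (illustrative)

def monthly_cloud9_cost(active_hours_per_day: float,
                        hibernate_after_min: int = 30,
                        days: int = 22) -> float:
    """Estimate monthly cost: you pay for active time plus the idle
    window before hibernation kicks in after each day's session."""
    idle_overhead_hours = hibernate_after_min / 60  # once per working day
    billable = (active_hours_per_day + idle_overhead_hours) * days
    return billable * HOURLY_RATE

print(f"${monthly_cloud9_cost(4):.2f}/month")  # ~4 hours of coding per day
```

With hibernation enabled, a few hours of daily use on a small instance does indeed land comfortably under $2 per month under these assumptions.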

[Screenshot: Cloud9 network and cost-saving settings]

For me, the key question when using Cloud9 comes down to what my production system is going to use. You will probably be running an EC2 environment based on an Amazon Machine Image (AMI) with the instance type of your choice. Most AMIs will have the AWS CLI, Git, Python, Java, or other language runtimes preinstalled.

        Limitations of AWS Cloud9

        • Limited Live debugging
        • No offline functionality
        • Limited integrations with other AWS Services within Cloud9 console

        Finishing up CodeStar

Now you have reached the final screen, or what is known in CodeStar as the dashboard. The dashboard lets you drag and drop its sections around. The sections are as follows: a project wiki, CloudWatch metrics, an API endpoint (not shown if you used EC2), and the Git history from CodeCommit or GitHub. If the project is integrated with JIRA or GitHub, you can also add an issue-tracking section.

Congratulations, you have now created your first CodeStar project.

        Limitations of CodeStar

I did have an issue moving my code from my do-it-yourself pipeline to CodeStar. The code wouldn't compile within CodeStar, but compiled just fine in CodeBuild. I'm not sure if this is an AMI issue; it would need deeper debugging.

        • Max of 10 projects per user
        • No Custom Project templates
        • No Integration with BitBucket
        • No API endpoint for EC2 instances
        • Slow when deploying

Thanks for reading this (extra long) post. I hope you come away knowing more about AWS CodeStar and AWS Cloud9. Stay tuned for the next article about setting up HTML5. If you’d like to know more about using CodeStar, please contact us here.









          Containers on AWS: a quick guide


          Containerisation allows development teams to move quickly and deploy more efficiently

           

          Instead of virtualising the hardware stack (as you would with virtual machines), containers run on top of the OS kernel, virtualising at the OS level.

Here are the most popular container technologies and services available:

           

          Docker

           

In 2013, a company called Docker (founded in 2010 as dotCloud) helped transform cloud containerisation. This new way of architecting paved the way for the DevOps movement. But what made containers so popular? Thanks to huge improvements in virtualisation and the rapid growth of cloud computing, containers allow for isolated workloads on a shared OS, exposing and accessing only what is necessary.

Within just a few years, Amazon Elastic Container Service (ECS) was introduced, on 13 November 2014, and became the primary way to run containers in the public cloud. ECS is a container management service that allows you to run Docker containers on a cluster.

           

           

          Kubernetes

Google released Kubernetes in June 2014 and handed it over to the Cloud Native Computing Foundation (CNCF) community the following year. Google Cloud Platform and Microsoft Azure were early adopters of Kubernetes, but GCP was for a long time the only public cloud provider with a working managed service, Google Kubernetes Engine (GKE). GKE was launched in 2015, while Azure Kubernetes Service (AKS) was released into preview in the autumn of 2017.

           

           

          Amazon EKS

Amazon Elastic Container Service for Kubernetes (EKS) is a fully managed service that makes it easy for you to run Kubernetes on AWS. EKS runs upstream Kubernetes, so you can connect to it with kubectl just like a self-managed cluster. AWS introduced EKS at re:Invent 2017 and integrates it with a growing list of AWS services.

           

           

          AWS Fargate

AWS has a service that neither GCP nor Azure has. AWS Fargate is a new service for running containers without needing to manage the underlying infrastructure. Fargate supports ECS and EKS, but it is also often compared with Lambda: you pay per second of compute used without having to worry about EC2 instances.

Managing Kubernetes can be complicated. It usually requires a deep understanding of scheduling, of managing your masters, pods, and services, and of additional orchestration on top of the virtualisation that has already been abstracted away from you.

Fargate takes all of this away by streamlining deployments. The game-changer is that you do not need to start with Fargate: you can begin with EKS or ECS and migrate your workloads to Fargate once your program has matured.
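To get a feel for per-second pricing, here is an illustrative Fargate-style cost calculation. The per-vCPU and per-GB rates below are made-up placeholders, not current AWS prices:

```python
# Illustrative Fargate-style billing: you pay for vCPU-seconds and
# GB-seconds of memory. The rates below are placeholders, not AWS prices.
VCPU_PER_SECOND = 0.0000125   # USD per vCPU-second (illustrative)
GB_PER_SECOND = 0.0000014     # USD per GB-second (illustrative)

def task_cost(vcpus: float, memory_gb: float, seconds: int) -> float:
    """Cost of one container task running for `seconds`."""
    return seconds * (vcpus * VCPU_PER_SECOND + memory_gb * GB_PER_SECOND)

# A 0.25 vCPU / 0.5 GB task that runs for 10 minutes:
print(f"${task_cost(0.25, 0.5, 600):.4f}")
```

The point of this model is that short-lived or bursty container workloads only pay for the seconds they actually run, with no idle EC2 instances to account for.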

           

           

          KOPS

           

KOPS was the go-to method for deploying Kubernetes on AWS, running the cluster on EC2 instances. It is an open-source project that makes running Kubernetes easy, providing a multitude of controls over deployments and good support for high availability.

           

Containers are not just hype; they could well be the standard for at least the next few years. With AWS finally joining the Kubernetes club, and Fargate being a strong game-changer, anything is possible. However, there are still a lot of unanswered questions that we hope will be addressed.

EKS and Fargate are currently limited to the Ohio and Virginia regions, but you should see a big push to use these services as they roll out to more regions.

           

          What do we do in the meantime? I’m reminded of this quote:

           

          “All we have to decide is what to do with the time that is given us.”
          Gandalf

           

          Until then, I believe KOPS will be the best method to use.

           

          What containers do you use on AWS and are you waiting to explore with AWS EKS or Fargate? Let us know by contacting us here.

Also check out my previous blog post on container security here.

           









            My journey to AWS Certification


Getting AWS Certified is only half the battle. A certification is much more than a piece of paper: it is an assurance that you have a baseline understanding of the product you are being certified in, and each certification is governed by strict requirements and procedures.

[Image: AWS Certified badge]

            Understanding the organisational challenges

Everything starts and ends with you, but of course having an organisation that values training and developing employees is a big plus. Organisations are constantly weighing options for transitioning into the cloud. Everyone has heard of the many enablers: faster time to market, infrastructure as code, DevOps, automation, a fresh start, and above all the ever-growing catalogue of cloud services.

Organisations have been overworking IT employees for a long time, and now they want them trained in a new mindset. This is easier said than done, because people usually resist change. Yet the organisation already has these invaluable resources in-house, with knowledge spanning networking, operating systems, databases, managed services, and much more.

The issue now is that some executives do not understand the key services provided by the different cloud platforms. They push employees to get the training, but don't provide the incentives.

Employees who are willing to continuously develop and improve themselves are becoming very valuable in the market, which increases competition. Every day, certified employees are contacted by HR or headhunters offering a 10%+ raise. Money is a big motivator for many employees, but so are the dedication and respect of their current organisation. Most people are juggling life and work responsibilities, yet some still find the motivation to add another commitment to an already hectic schedule, especially those taking technical courses to gain proficiency.

Offering competitive wages and keeping those wages fair amongst current and future employees should be something an organisation does automatically, but since that is usually not the case, providing incentives can be the motivation that keeps an employee committed to their current organisation. This can be anything from a pay raise, a one-time bonus, stocks, or a donation to a charity of choice, to something as simple as flexible working hours during the study period.

             

            How to become an AWS Partner

Certification is not just about showing your own growth in the industry; your organisation also needs it to become an APN Partner. Currently, there are three performance tiers (Standard, Advanced, Premier) based on training, customer engagements, and overall business investment, and getting certified helps your organisation look more mature than its competitors.

At re:Invent 2014, AWS announced a change to the APN Partner requirements for 2015, showing that AWS wants to help customers identify successful APN Partners. One way was to increase the number of certifications needed to reach each tier: the Premier tier required 8 Associate-level and 4 Professional-level certifications. Below are the current 2016 requirements:

As you can see from the 2016 APN Partner requirements, the Associate-level requirement has increased by 250% and the Professional-level by 200%.

            AWS Certification roadmap:

            There is a lot of material available to help you prepare for an AWS certificate, but the internet is also full of older material that can lead you down the wrong rabbit hole. The first place to start is by looking at the AWS Certification Roadmap:

*Note that you have to take an associate certification before you can take a professional one. The AWS Certified Solutions Architect – Associate exam is broken up into 5 domains.

            Each domain will challenge your understanding of AWS Services, AWS Best Practices, and most of all the Well-Architected Framework.

AWS recommends the three-day training course "Architecting on AWS" and the 4-hour "AWS Certification Exam Readiness Workshop". Before taking the certification exam, I joined a Nordcloud "Architecting on AWS" training course and used it as a refresher. Architecting on AWS is designed to teach solution architects how to optimise their designs, gain a deeper understanding of AWS services, and see how the numerous services fit together.

The key focus areas for Solutions Architect – Associate are high availability, the ins and outs of VPC, EC2, RDS, and the plentiful storage solutions.

My journey to AWS certification:

I am going to share how I prepared for the certification, because I believe it's best to hear it from the source. When I was new to AWS, I started with the AWS Accreditation courses "AWS TCO and Cloud Economics" and "AWS Technical Professional" (AWS Accreditations are only available to APN Partners). It took me just under a week to get accredited, following the curriculum in my spare time.

As I started to prepare for the Architect exam, a new exam called "Cloud Practitioner" came out, and I wanted to make sure I knew the basics of AWS and felt comfortable with how the exam worked. I changed gears and took some AWS training courses focussed on the Cloud Practitioner exam. This took me about 3 weeks.

            My cloud practitioner training path

            AWS Free Training Path:

            AWS Cloud Practitioner Essentials

            AWS Solutions Training for Partners – Best Practices: Well-Architected

            AWS Well-Architected Training

            Nordcloud Training

            AWS Technical Essentials Day

I picked a date and signed up for the exam. I also knew I would take the Architect exam about a month later, so I registered for that too. I didn't want to wait until I felt ready, because with the fear of failure, nobody ever feels ready to take a test!

Note: If you do not hold a passport from a country where English is the native language, you can request up to an additional 30 minutes.

             

            Non-English Speaker Steps:

            To request a 30-minute extension for your exam, please log into your AWS Certification Account (not the PSI account) and take the following steps:

            1. From the top navigation, click Upcoming Exams
            2. On the right, click the Request Exam Accommodations button
            3. Click the Request Accommodation button
            4. Select ESL +30 Minutes from the accommodation dropdown
            5. Click Create

            Now when you go to schedule your exam the time will be 30 minutes longer than normal. Note that you MUST request the accommodation BEFORE you schedule the exam.

             

My Solutions Architect – Associate training path:

             

            AWS Free Training Path:

            AWS Security Fundamentals

            Preview Course: Deep Dive into Amazon Elastic Block Store (EBS)

            Preview Course: Deep Dive into Elastic File System (EFS)

            Whitepapers

            Nordcloud Training:

            Security Operations on AWS

            Architecting on AWS

            QwikLabs

            Introduction to AWS Identity and Access Management (IAM)

            Introduction to Amazon Virtual Private Cloud (VPC)

            Introduction to AWS Lambda

            Introduction to Amazon DynamoDB

            Introduction to Amazon Route 53

            Challenge Lab

            Maintaining High Availability with Auto Scaling (for Linux)

            Working with Amazon Elastic Block Store (EBS)

             

            Days leading up to the Exam

Ever since my first tests in grade school, my mindset has been to understand things, not just memorise them. The same goes for AWS exams: I really want to understand how each service works and how each feature enhances the key services.

Does this hurt me on exams? Of course, as I can't remember the exact IOPS or throughput of the various EBS volume types. I do know that HDD EBS volumes are mainly used for big data or log processing, and that Provisioned IOPS SSD volumes allow for more than 10,000 IOPS and are mainly used for large databases.

Spot Instances cost less than regular On-Demand EC2 instances, but I do not know by exactly how much. I do know it depends on the region, and that they are primarily used for short bursts of intensive jobs such as CI/CD pipelines or batch processing.

            Exam Day!

            It was finally the exam day and I was very nervous. I didn’t want to let the organisation down or be a failure. I arrived at the testing centre about 20 minutes before and had a coffee to try to calm my nerves.

            Exam Tips!

Sometimes I didn't understand what a question was asking, so I just moved on and came back to it at the end of the exam. AWS exams let you mark questions, so you can identify the ones you skipped or that needed deeper thought.

Scenario-based questions are time-consuming, so I skipped them until later. I understand this could be risky if I didn't finish the exam in time, but spending too long trying to understand a harder question could make me miss out on easier ones later.

            Exam Results:

When I finished the Solutions Architect – Associate and Cloud Practitioner exams, the results came about 2 days later: PASSED.









              Cloud security: Don’t be a security idiot

              CATEGORIES

              Blog

The cloud has some great advantages: you can store large amounts of data and pay only for what you use, without buying it all upfront, and you can use hundreds of different services and APIs offered by a cloud provider.

We commonly hear that security is a major step when moving to the cloud, but we actually see quite the opposite. By the time a lift-and-shift or refactor approach is completed, the organisation has already invested so much that it needs the system up and running. Studies show that the movement to public cloud computing is not going to decrease anytime soon but will increase by 100 billion USD, and with this increase, be sure to expect growth not only in security breaches but in attacks as well.

               

              Cloud Security Breaches & Attacks

In today's digital world, data is the new currency. Attackers have had a massive impact on businesses with ransomware outbreaks like WannaCry and Petya, and with the increase in attacks and poor security standards, everyone and everything is vulnerable.

It might be easy to think we are all part of some Darwin experiment, because the same things keep happening across the industry. Budget cuts and time-to-market pressure both affect security. As a society, we have our security methods back to front and upside down, and we forget the internet is still relatively young.

We see it time and time again: organisations deploying without following security best practices. For example, in October 2017 Accenture left an S3 bucket open to the world. The public later discovered it, and the biggest issue was that the bucket contained a list of passwords and AWS KMS (Key Management Service) keys. It is unknown whether the keys were used maliciously, but Accenture is neither the first nor the last to let this slip.

Later in November, a programmer at DXC pushed code to GitHub without realising that it contained hard-coded AWS keys. It took 4 days before this was discovered, and in the meantime over 244 virtual machines were created, costing the company a whopping 64,000 USD.

Sometimes you can't control the security issues, but that doesn't mean you shouldn't worry about them. At the beginning of 2018, a chip security flaw known as Meltdown and Spectre was disclosed to the public by the Google Project Zero team. The flaw affected all Intel processors and attacked at the kernel level.

This meant that someone with the right knowledge could theoretically create a virtual machine on any public cloud and view data inside the kernel of all the virtual machines on that bare-metal server. Most companies patched this back in the autumn of 2017, but not everyone keeps the OS layer up to date with security patches.

UPDATE: Intel has since announced that not every CPU can be patched.
              UPDATE: New Variation

               

              Shared Responsibility

Cloud providers pay close attention to security risks, but they all use a shared-responsibility model. This means the customer is 100% accountable for security in the cloud, while the provider is responsible for security of the cloud. As the cloud provider doesn't know what workload is being run, it can't limit every security risk. What the provider guarantees is the security of its data centres and of the software that provides the APIs you use to create resources in the cloud.

              Most providers will explain to you (multiple times!) that there is a shared-responsibility model, the above diagram shows the most up-to-date version.

               

              Data Centre Security

              Another big question that is commonly asked is, “What makes the cloud provider data centre more secure than my own data centre?”. To answer this question we first need to find out what the current Data Centre Tier is and compare that to a cloud provider.

Data centres are often associated with a "tier", or level of service. The standard came into existence in 2005 from the Telecommunications Industry Association, while the 4-tier classification was developed by the Uptime Institute; both are maintained separately but have similar criteria. There are 4 tier rankings (I, II, III, or IV), and each tier reflects the physical, cooling and power infrastructure, the redundancy level, and the promised uptime.

               

              Tier I
              A Tier I data center is the simplest of the 4 tiers, offering little (if any) levels of redundancy, and not really aiming to promise a maximum level of uptime:

              • Single path for power and cooling to the server equipment, with no redundant components.
              • Typically lacks features seen in larger data centers, such as a backup cooling system or generator.

              Expected uptime levels of 99.671% (1,729 minutes of annual downtime)

              Tier II
              The next level up, a Tier II data center has more measures and infrastructure in place that ensure it is not as susceptible to unplanned downtime as a Tier 1 data center:

              • Will typically have a single path for both power and cooling, but will utilise some redundant components.
              • These data centers will have some backup elements, such as a backup cooling system and/or a generator.

              Expected uptime levels of 99.741% (1,361 minutes of annual downtime)

              Tier III
              In addition to meeting the requirements for both Tier I and Tier II, a Tier III data center is required to have a more sophisticated infrastructure that allows for greater redundancy and higher uptime:

              • Multiple power and cooling distribution paths to the server equipment. The equipment is served by one distribution path, but in the event that path fails, another takes over as a failover.
              • Multiple power sources for all IT equipment.
              • Specific procedures in place that allow for maintenance/updates to be done in the data center, without causing downtime.

              Expected uptime levels of 99.982% (95 minutes of annual downtime)

              Tier IV
              At the top level, a Tier IV ranking represents a data centre that has the infrastructure, capacity, and processes in place to provide a truly maximum level of uptime:

              • Fully meets all requirements for Tiers I, II, and III.
              • Infrastructure that is fully fault tolerant, meaning it can function as normal, even in the event of one or more equipment failures.
              • Redundancy in everything: Multiple cooling units, backup generators, power sources, chillers, etc. If one piece of equipment fails, another can start up and replace its output instantaneously.

              Expected uptime levels of 99.995% (26 minutes of annual downtime)
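The annual downtime quoted for each tier follows directly from its promised uptime percentage; a quick sketch of the arithmetic (rounded to whole minutes):

```python
# Convert a tier's promised uptime percentage into annual downtime minutes.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(uptime_percent):
    """Minutes per year a data centre may be down at the given uptime level."""
    return round((1 - uptime_percent / 100) * MINUTES_PER_YEAR)

# The four tier uptime levels described above:
for tier, uptime in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {uptime}% uptime -> ~{annual_downtime_minutes(uptime)} min of downtime/year")
```

Running this reproduces the figures quoted for each tier (1,729, 1,361, 95, and 26 minutes respectively).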

              Now that we understand the tier level, where does your data centre fit?

For AWS, Azure, and GCP, the tier classification doesn't really apply at such a scale, because none of them follows the TIA-942 or Uptime Institute standards. Each of their data centres would likely classify as Tier IV, but since you can build your cloud to your own criteria, or per application, it's difficult to put them in a box. Once you add the vast number of services, availability zones, and multi-region setups, you are well outside the scope of the Tier-X standards.

              Don’t be a Security Idiot!

When it comes to security in the cloud, it all comes down to the end user: anyone with an internet connection or an internet-enabled device. A good rule of thumb is to assume that anyone can be hacked and any device can be stolen. Everything stems from the organisation and should be looked at top-down; management must be on board with training and best practices when dealing with security.

              Most organisations do not have security policies in place, and the ones who do haven’t updated them for years. The IT world changes every few hours and someone is always willing to commit a crime against you or your organisation.


               

              Considerations

YOU ARE the first line of defence! Know whether your data is stored securely using encryption, and whether backups are kept offsite or in an isolated location.

              Common Sense

Complacency: Wireless devices are common now, but does your organisation have a policy covering them? All employees should have to review the security policy at least once a year.

Strong password policies: A typical password should be at least 16 characters long and include special characters, lowercase and capital letters, something like I<3Marino&MyDogs (a password like this would take years to crack with current technology). Suggestion: don't use this exact password!
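A policy like this can be checked mechanically; as a sketch, `meets_policy` below is a hypothetical helper, not part of any standard library:

```python
import re

# Minimal check for the policy described above: at least 16 characters,
# with lowercase, uppercase, and at least one special character.
def meets_policy(password):
    return (
        len(password) >= 16
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[^a-zA-Z0-9]", password) is not None
    )

print(meets_policy("I<3Marino&MyDogs"))  # True
print(meets_policy("password123"))       # False
```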

Multi-factor authentication: MFA combines "something you know" (like a password) with "something you have" (an object like a mobile phone). MFA has been around a long time: using a debit or credit card requires you to know the PIN code and to have the card. You don't want anyone taking your money, so why not use MFA for all your user data?
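The "something you have" factor is often a one-time code generated on your phone. As an illustration (not tied to any particular provider), a time-based one-time password (TOTP, RFC 6238) can be sketched with nothing but the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

In practice the shared secret is provisioned by the service (e.g. via a QR code) and verified server-side; the sketch only shows why a stolen password alone is not enough.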

              Security Patches: WannaCry is a perfect example of what happens when people don’t update security patches. Microsoft released a fix in March 2017, but still, 150 countries and thousands of businesses got hit by the attack in the Summer of 2017. This could all have been avoided if Security Patches were enforced. Always make sure your device is updated!

Surroundings: Situational awareness is key to staying safe; knowing what is going on around you helps you avoid social engineering. Maybe you are waiting for a meeting at a local coffee shop and decide to work a little beforehand. The first thing you do is connect to an open Wi-Fi network, then you check your email. The person behind you is watching what you are doing and has a keylogger running: they know which website you visited and what you typed. Keep your screensaver password-protected and set it to lock after a short period of inactivity.

Report incidents: You are checking your email and receive a zip file from a future client. You unzip it, see a .exe, and think nothing more of it. You open the .exe and find your computer infected with malware or ransomware. The first thing to do is disconnect from the internet or turn off your computer, then call or message IT from your mobile and explain what has happened.

              Education: The best way to prevent a security breach is to know what to look for and how to report incidents. Keep updated on new security trends and upcoming security vulnerabilities.

              Reporting: Who do you report to if you notice or come into contact with a security issue? Know who to send reports to, whether it is IT staff or an email dedicated to incidents…

              Encryption: Make sure that you are using HTTPS websites and that your data is encrypted both during transit and at rest.

Most of all, when it comes to public cloud security, you share responsibility with the platform. The cloud platform is responsible for the infrastructure and physical security; ultimately, YOU ARE responsible for securing everything else in the cloud.









                Container security: How to differ from the traditional

                CATEGORIES

                Blog

                Containerisation in the industry is rapidly evolving

                 

No, not shipping containers, but cloud containers. Fortune 500 organisations use containers because they provide portability, simple scalability, and isolation. Linux distros have long been the norm, but this has since changed: Microsoft now supports Windows-based containers with Windows Server 2016 running Windows Core or Nano. Yet even with many organisations using containers, we still see a lot of them reverting to the security approach used for traditional VMs.

                 

If you know anything about containers, you probably know about Kubernetes, Docker, Mesos, and CoreOS, but security measures still need to be carried out, so this is always a good topic for discussion.

                 

                 

                Hardened container image security

Hardened container image security comes to mind first, because of how the image is deployed and whether there are vulnerabilities in the base image. A best practice is to create a custom container image so that your organisation knows exactly what is being deployed.

Developers or software vendors should know every library installed and the vulnerabilities of those libraries. There are a lot of them, but try to focus on the host OS, container dependencies, and most of all the application code. Application code is one of the biggest vulnerabilities, but practising DevOps can help prevent this: reviewing your code for security vulnerabilities before committing it to production costs time, but can save you a lot of money if best practices are followed. It is also a good idea to keep an RSS feed on security blogs like the Google Project Zero team, and to use fuzz testing to find vulnerabilities.

                Infrastructure security

                Infrastructure security is a broad subject because it means identity management, logging, networking, and encryption.

Controlling access to resources should be at the top of everyone's list, and following the best practice of least privilege is key. Role-Based Access Control (RBAC) is one of the most common methods; it restricts system access to authorised users only. The traditional method was to grant access through broad security policies, but fine-tuned roles can now be used instead.
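As a concrete example of a fine-tuned role, a Kubernetes RBAC definition might look like the sketch below (the namespace, role, and user names are illustrative): a Role grants read-only access to pods in one namespace, and a RoleBinding attaches it to a single user.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # illustrative namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane               # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The point is the shape of the least-privilege grant: narrow resources, narrow verbs, one namespace, rather than a cluster-wide admin policy.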

Logging at the infrastructure layer is a must-have best practice. Audit logging using cloud vendor services such as AWS CloudWatch, AWS CloudTrail, Azure OMS, and Google Stackdriver will allow you to measure trends and find abnormal behaviour.

Networking is commonly overlooked because it is sometimes treated as a magic unicorn. Understanding how traffic flows in and out of the containers is where the need for security truly starts. Networking theory makes this complicated, but understanding the underlying tools like firewalls, proxies, and other cloud-enabled services like Security Groups lets you redirect or restrict traffic to the correct endpoints. With Kubernetes, private clusters can be used to keep traffic secure.

How does the container store secrets? This is a question your organisation should ask when encrypting data at rest and in transit across the stack.

                 

                Runtime security

Runtime security is often overlooked, but making sure a team can detect and respond to security threats while a container is running shouldn't be. Teams should monitor abnormal behaviour like network calls, API calls, and even login attempts. If a threat is detected, what are the mitigation steps for that pod? Isolating the container on a different network, restarting it, or stopping it until the threat is identified are all valid mitigations. Another overlooked aspect of runtime security is OS logging: keeping logs in an encrypted, read-only directory limits tampering, but of course someone still has to sift through them looking for abnormal behaviour.
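As a minimal sketch of that kind of monitoring, one could scan a batch of container log lines for repeated failed logins from the same source; the log format here is purely hypothetical, and a real pipeline would consume logs from your audit/logging service:

```python
from collections import Counter

# Flag sources with repeated failed logins in a batch of log lines.
# Lines are assumed to look like "LOGIN_FAILED user=bob src=10.0.0.7".
def failed_login_sources(log_lines, threshold=3):
    failures = Counter()
    for line in log_lines:
        if "LOGIN_FAILED" in line:
            src = line.split("src=")[-1].strip()
            failures[src] += 1
    return [src for src, count in failures.items() if count >= threshold]

logs = [
    "LOGIN_FAILED user=bob src=10.0.0.7",
    "LOGIN_OK user=alice src=10.0.0.8",
    "LOGIN_FAILED user=bob src=10.0.0.7",
    "LOGIN_FAILED user=root src=10.0.0.7",
]
print(failed_login_sources(logs))  # ['10.0.0.7']
```

A flagged source would then trigger one of the mitigations above: isolating, restarting, or stopping the pod.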

Whenever security is discussed, an image like the one shown above is commonly depicted. When it comes to security, it is ultimately the organisation's responsibility to keep the application, data, identity, and access control secured. Cloud providers do not prevent malicious attackers from attacking the application or the data; if untrusted libraries are used or access is misconfigured in or around the containers, everything falls back on the organisation.

                 Check also my blog post Containers on AWS: a quick guide









                  AWSome Day Oslo: We Had An Awesome day!

                  CATEGORIES

                  Blog

                  We know what you’re thinking, but that’s not a typo.

                  AWSome Days (a play on words reflecting Amazon Web Services), are hosted around the world and will take you through a step-by-step deep-dive into AWS core services such as Compute, Storage, Database, and Networking.

Nordcloud has been a proud sponsor since the first Nordic AWSome Day in Helsinki back in 2014, where we showcased our status as an AWS Authorized Training Partner and AWS Premier Consulting Partner, and our ongoing, dedicated partnership. Our strong collaboration with AWS goes back several years and has helped us accelerate cloud transformation for our customers, from migrating to multiple cloud technologies to assisting with cloud-based innovation.

                   

As an AWS APN Authorized Training Partner, we provide official AWS training covering the most up-to-date AWS services, with certified training engineers like Olle Sundqvist, Michaela Vikman, and Juho Jantunen teaching the next wave of cloud architects. We currently host the following training sessions: Technical Essentials, Architecting on AWS, SysOps on AWS, Developing on AWS, Security Operations on AWS, and DevOps Engineering on AWS. We always have public and dedicated training going on, so keep an eye on our scheduled courses.

                  We’re still running an amazing discount (AWSOME) on the courses at a huge 25% off until March 16th. Be sure to have a look at what’s on offer and don’t forget to register to get the discount.

Nordcloud helps organisations use cloud services from AWS and other cloud providers to improve their productivity and efficiency. We look forward to attending many more AWSome Days in the coming months and to continuing to grow our partnership with AWS, providing the best advantages for our customers!

                  Hope to see you all at the next Nordic AWSome Days event in Helsinki this week!

Finally, a big shout out to our Nintendo Switch winner Mehrdad and the two Raspberry Pi winners: Sturla and Leszek.









                    AWS Fargate – Bringing Serverless to Microservice

                    CATEGORIES

                    Blog

                    Microservices architecture

Microservices architecture has been a key focus for many organisations in the past few years. Around the world, organisations are moving from the traditional monolithic architecture to a faster time-to-market, automated, and independently deployable microservices architecture. The microservices approach has a number of benefits, but the two that come up the most are how software is deployed and how it is managed throughout its lifecycle.

                    Pokémon Go & Kubernetes

Let's look at a real-world scenario: Pokémon Go. We wouldn't have Pokémon Go if it wasn't for Niantic Labs and Google's Kubernetes. Those of you who played this once-addictive game back in the summer of 2016 know all about the technical issues it had. It was the microservices approach using Kubernetes that allowed Pokémon Go to fix technical issues in a matter of hours rather than weeks: each microservice could be updated with a new patch, and thousands of containers could be created within seconds during peak times.

In a microservice architecture, using a popular container engine like Docker together with container orchestration software like Kubernetes (K8s), everything in the web server is broken down into individual APIs. This gives microservices more agility, flexible scaling, and the freedom to pick the programming language or version for each individual API instead of one for all of them.

Microservices can be defined in more ways than one, but the approach is commonly used to deploy well-defined APIs and to streamline delivery and deployment.
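To make this decomposition concrete, here is a minimal sketch in Python. The service names and payload shapes are hypothetical, and in a real deployment each service would run as its own process (typically its own Docker container) rather than as functions in one file:

```python
# Minimal sketch: each "service" owns exactly one well-defined API.
# In a real microservice deployment these would be separate processes
# (typically separate Docker containers), not functions in one file.

import json

def catalog_service(request: dict) -> dict:
    """Hypothetical catalog microservice: owns /products only."""
    products = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]
    return {"status": 200, "body": json.dumps(products)}

def checkout_service(request: dict) -> dict:
    """Hypothetical checkout microservice: owns /checkout only."""
    total = sum(item["price"] for item in request.get("items", []))
    return {"status": 200, "body": json.dumps({"total": total})}

# Each route maps to its own service; replacing one implementation
# (or even its language/runtime) never touches the others.
ROUTES = {
    "/products": catalog_service,
    "/checkout": checkout_service,
}

def dispatch(path: str, request: dict) -> dict:
    handler = ROUTES.get(path)
    return handler(request) if handler else {"status": 404, "body": ""}
```

Because each route belongs to exactly one service, a team can patch or scale the checkout API without redeploying the catalog API, which is the agility the Pokémon Go story illustrates.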

                     

                    Serverless the next big thing

Some experts believe that serverless will be the next big thing. Serverless doesn’t mean there are no servers; it means that server management and capacity planning are hidden from the DevOps teams. Maybe you have heard of FaaS (Functions as a Service) or AWS Lambda. FaaS is not for everyone, but what if we could bring some of the serverless architecture along with the microservice architecture?
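As a taste of FaaS, here is roughly what an AWS Lambda function looks like in Python: the platform invokes a handler with an event and a context, and the servers behind it are invisible to you. The event payload below is an illustrative assumption, since real event shapes depend on whatever service triggers the function:

```python
# Minimal AWS Lambda-style handler in Python. The platform provisions
# and scales the compute; you supply only this function. The event
# payload shape here is illustrative, not a fixed AWS format.

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally we can exercise the handler directly (context is unused here).
result = lambda_handler({"name": "Fargate"}, None)
```

Nothing in the handler knows about instances or clusters, which is exactly the property Fargate brings to containers.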

                     

                    AWS Fargate

This is why, back in November at AWS re:Invent 2017 (see the deep dive here), AWS announced a new service called AWS Fargate. AWS Fargate is a container service that allows you to provision containers without the need to worry about the underlying infrastructure (VM/container/node instances). AWS Fargate will work with both ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). At the time of writing, it is only available in us-east-1, in preview mode.

AWS Fargate simplifies the complex management of microservices by allowing developers to focus on the main task of creating APIs. You still need to specify the memory and CPU required for your APIs or application, but the beauty of AWS Fargate is that you never have to worry about provisioning servers or clusters, because AWS Fargate scales for you. This is where microservices and serverless meet.
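In practice, launching a container on Fargate through ECS comes down to running a task with the `FARGATE` launch type; CPU and memory are declared in the task definition, and there are no instances to pick. A sketch of the request parameters, where the cluster name, task definition, and subnet ID are all hypothetical:

```python
# Sketch: parameters for launching a task on AWS Fargate via ECS.
# The cluster name, task definition, and subnet ID are hypothetical.
# With boto3 installed and AWS credentials configured, you would pass
# this dict to boto3.client("ecs").run_task(**params).

def fargate_run_task_params(cluster: str, task_def: str, subnets: list) -> dict:
    return {
        "cluster": cluster,
        "taskDefinition": task_def,   # CPU/memory live in the task definition
        "launchType": "FARGATE",      # no EC2 instances to provision or manage
        "networkConfiguration": {
            "awsvpcConfiguration": {"subnets": subnets},
        },
    }

params = fargate_run_task_params("demo-cluster", "my-api:1", ["subnet-0abc1234"])
```

Note what is absent: no instance type, no AMI, no cluster capacity. That is the "serverless containers" idea in a single API call.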
