Containers on AWS: a quick guide


Containerisation allows development teams to move quickly and deploy more efficiently

 

Instead of virtualising the hardware stack (as you would with virtual machines), containers run on top of the OS kernel, virtualising at the OS level.

Here are the most popular container technologies and services you will come across when running containers on AWS:

 

Docker

 

Docker, a company founded in 2010 (originally as dotCloud), helped transform cloud containerisation when it open-sourced its container engine in 2013. This new way of architecting paved the way for the DevOps movement. But what made containers so popular? Thanks to huge improvements in virtualisation and the rapid growth of cloud computing, containers allow workloads to be isolated on a shared OS, exposing and accessing only what is necessary.
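As a minimal illustration of that isolation (the image and port here are just examples, not from the original post), a container can be started so that only a single port is exposed to the host:

    # Run an nginx container, exposing only port 80 to the host
    docker run --rm -d -p 80:80 --name web nginx:latest
    docker ps            # the container shares the host kernel but runs in its own namespaces
    docker stop web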

Within just a few years, Amazon Elastic Container Service (ECS) was introduced on 13 November 2014 and became AWS's primary way to run containers in the public cloud. ECS is a container management service that allows you to run Docker containers on a cluster of EC2 instances.
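As a quick sketch of the workflow (the cluster and task-definition names are hypothetical), creating a cluster and launching a task looks like this with the AWS CLI:

    aws ecs create-cluster --cluster-name demo-cluster
    # assumes a task definition called "demo-task" has already been registered
    aws ecs run-task --cluster demo-cluster --task-definition demo-task --count 1
    aws ecs list-tasks --cluster demo-cluster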

 

 

Kubernetes

Google released Kubernetes in June 2014 and donated it to the Cloud Native Computing Foundation (CNCF) the following year. Google Cloud Platform and Microsoft Azure were early adopters of Kubernetes, but GCP was the first public cloud provider to offer a working managed service, Google Kubernetes Engine (GKE). GKE was launched in 2015, and Azure Kubernetes Service (AKS) was released in preview in the autumn of 2017.

 

 

Amazon EKS

Amazon Elastic Container Service for Kubernetes (EKS) is a fully managed service that makes it easy for you to run Kubernetes on AWS. EKS runs upstream Kubernetes, so you can connect to it with kubectl just like a self-managed Kubernetes cluster. AWS introduced EKS at re:Invent 2017, and it integrates with a growing number of AWS services.
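Because EKS exposes a standard Kubernetes API endpoint, connecting to it is the usual kubeconfig-plus-kubectl flow (the cluster name and region below are placeholders):

    aws eks update-kubeconfig --name demo-cluster --region us-east-1
    kubectl get nodes
    kubectl get pods --all-namespaces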

 

 

AWS Fargate

AWS also has a service that neither GCP nor Azure offers. AWS Fargate is a new service for running containers without needing to manage the underlying infrastructure. Fargate supports ECS and EKS, but it is also often compared with Lambda. You pay per second of compute used, without having to worry about the EC2 instances.

Managing Kubernetes can be complicated: it usually requires a deep understanding of scheduling, of managing your masters, pods, and services, and of the additional orchestration layered on top of virtualisation that has already been abstracted away from you.

Fargate takes all of this away by streamlining deployments. The game-changer is that you do not need to start with Fargate: you can use ECS or EKS and then migrate your workloads to Fargate when your program has matured further.
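Running a task on Fargate is then mostly a matter of changing the launch type and supplying networking details; here is a hedged sketch in which the cluster, task definition, and subnet are placeholders:

    # the task definition must declare FARGATE compatibility, CPU/memory, and awsvpc network mode
    aws ecs run-task \
      --cluster demo-cluster \
      --launch-type FARGATE \
      --task-definition demo-task \
      --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'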

 

 

KOPS

 

KOPS has been the go-to method of deploying Kubernetes on AWS, running the cluster itself on EC2 instances. KOPS (Kubernetes Operations) is an open-source project that makes running production-grade Kubernetes easy, and it provides a multitude of controls over deployments as well as good support for high availability.
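As a rough sketch (the state-store bucket, cluster name, and zones are placeholders), creating a highly available cluster with KOPS looks like this:

    export KOPS_STATE_STORE=s3://demo-kops-state-store
    kops create cluster \
      --name demo.k8s.example.com \
      --master-zones eu-west-1a,eu-west-1b,eu-west-1c \
      --zones eu-west-1a,eu-west-1b,eu-west-1c \
      --node-count 3 \
      --yes
    kops validate cluster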

 

Containers are not just hype; they could well be the future for at least the next few years. With AWS finally joining the Kubernetes club, and Fargate being a strong game-changer, anything is possible. However, there are still a lot of unanswered questions that we hope will be addressed.

EKS and Fargate are currently limited to the Ohio and Virginia regions, but you should see a big push to use these services as more regions are rolled out.

 

What do we do in the meantime? I’m reminded of this quote:

 

“All we have to decide is what to do with the time that is given us.”
Gandalf

 

Until then, I believe KOPS will be the best method to use.

 

What containers do you use on AWS, and are you waiting to explore AWS EKS or Fargate? Let us know by contacting us here.

Also check out my previous blog post on container security here.

 

    Container security: How to differ from the traditional


    Containerisation in the industry is rapidly evolving

     

    No, not shipping containers, but cloud containers. Fortune 500 organisations use containers because they provide portability, simple scalability, and isolation. For a long time containers were a Linux-only affair, but this has since changed: Microsoft now supports Windows-based containers on Windows Server 2016, running on Server Core or Nano Server. Yet even with so many organisations using containers, we still see a lot of them reverting to the security practices they used for traditional VMs.

     

    If you already know anything about containers, then you probably know about Kubernetes, Docker, Mesos, and CoreOS. Security measures still need to be applied to all of them, which makes this an evergreen topic for discussion.

     

     

    Hardened container image security

    Hardened container image security comes to mind first, because of how the image is deployed and whether there are any vulnerabilities in the base image. A best practice is to create a custom container image so that your organisation knows exactly what is being deployed.

    Developers or software vendors should know every library installed and the vulnerabilities of those libraries. There are a lot of them, but try to focus on the host OS, the container dependencies, and most of all the application code. Application code is one of the biggest sources of vulnerabilities, but practising DevOps can help prevent this. Reviewing your code for security vulnerabilities before promoting it to production costs time, but following these best practices can save you a lot of money. It is also a good idea to keep an RSS feed on security blogs such as the Google Project Zero team, and to use fuzz testing to find vulnerabilities.
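    The post does not prescribe specific tooling, but as a hedged sketch, an open-source scanner such as Trivy can be wired into the build to flag known CVEs in a custom image before it ships (the image name below is illustrative):

        # Build the custom image and scan it for known vulnerabilities before pushing
        docker build -t registry.example.com/my-app:1.0.0 .
        trivy image registry.example.com/my-app:1.0.0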

    Infrastructure security

    Infrastructure security is a broad subject because it spans identity management, logging, networking, and encryption.

    Controlling access to resources should be at the top of everyone’s list, and granting least privilege is the key best practice. Role-Based Access Control (RBAC) is one of the most common methods used: RBAC restricts system access to authorised users only. The traditional method was to grant access through a handful of broad security policies, but fine-tuned roles can now be used instead.
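    As a minimal sketch of fine-tuned roles in Kubernetes terms (the namespace and service-account names are hypothetical), a role can be limited to reading pods and bound to a single service account:

        # Allow read-only access to pods in one namespace only
        kubectl create role pod-reader --namespace payments \
          --verb=get --verb=list --verb=watch --resource=pods

        # Bind the role to a single service account rather than a broad group
        kubectl create rolebinding pod-reader-binding --namespace payments \
          --role=pod-reader --serviceaccount=payments:deployer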

    Logging at the infrastructure layer is a must-have best practice. Audit logging using cloud vendor services such as AWS CloudWatch, AWS CloudTrail, Azure OMS, and Google Stackdriver will allow you to measure trends and find abnormal behaviour.
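    As a small illustration (the event name is just an example, not from the original post), CloudTrail events can be queried from the CLI when hunting for abnormal behaviour:

        # List recent console logins recorded by CloudTrail
        aws cloudtrail lookup-events \
          --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
          --max-results 10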

    Networking is commonly overlooked, because it is sometimes treated as a magic unicorn. Understanding how traffic flows in and out of the containers is where the need for security truly starts. Networking theory makes this complicated, but understanding the underlying tools, such as firewalls, proxies, and other cloud-enabled services like Security Groups, lets you redirect or confine traffic to the correct endpoints. With Kubernetes, private clusters can be used to keep traffic off the public internet.

    How does the container store its secrets? This is a question your organisation should ask when encrypting data at rest and in transit.
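    As a hedged sketch in Kubernetes terms (the secret and file names are hypothetical), secrets should at least be kept out of images and shell history, and ideally encrypted at rest in the cluster store:

        # Create a secret from a file so the value never appears in the image or shell history
        kubectl create secret generic db-credentials --from-file=./db-password.txt

        # Note: by default the stored value is only base64-encoded, not encrypted;
        # enable encryption at rest (e.g. a KMS-backed provider) on the cluster to protect it
        kubectl get secret db-credentials -o yaml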

     

    Runtime security

    Runtime security is often overlooked, but making sure that a team can detect and respond to security threats while a container is running shouldn’t be. Teams should monitor abnormal behaviours such as unexpected network calls, API calls, and even login attempts. If a threat is detected, what are the mitigation steps for that pod? Isolating the container on a different network, restarting it, or stopping it until the threat can be identified are all ways to mitigate. Another overlooked part of runtime security is OS logging. Keeping the logs inside an encrypted, read-only directory will limit tampering, but of course someone will still have to sift through the logs looking for any abnormal behaviour.

    Whenever cloud security is discussed, a shared-responsibility diagram is commonly shown. When it comes to security, it is ultimately the organisation’s responsibility to keep the application, data, identity, and access control secured. Cloud providers do not prevent malicious attackers from attacking the application or the data. If untrusted libraries are used or access is misconfigured inside or around the containers, then everything falls back on the organisation.

     Check also my blog post Containers on AWS: a quick guide

      Persisting Docker Volumes in ECS using EFS


      Last week we faced a new challenge to persist our Docker Volume using EFS. Sounds easy, right? Well, it turned out to be a bit more challenging than expected and we were only able to find a few tips here and there. That is why we wrote this post so others may succeed faster.

      Before digging into the solution, let’s take a minute to describe our context and elaborate a bit more on the challenge.
      First of all, we believe in Infrastructure as Code, and therefore we use CloudFormation to be able to recreate our environments. Luckily, Amazon provides a working sample and we got EFS working quite easily. The next part was to get Docker to use a volume from EFS. We got lucky a second time, as Amazon provides another working sample.

      We managed to combine these resources and everything looked alright, but a closer look revealed that the changes did not persist. We found one explanation for why it didn’t work: EFS was being mounted after the Docker daemon had started, so the Docker volume pointed at the still-empty local directory instead of the EFS mount. In order to fix that we did two things: first we orchestrated the setup so that EFS is mounted before Docker and ECS start, and then we added EFS to fstab in order to auto-mount on reboot.

      The solution looks a bit like the following:

        
      EcsCluster:
          Type: AWS::ECS::Cluster
          Properties: {}
        LaunchConfiguration:
          Type: AWS::AutoScaling::LaunchConfiguration
          Metadata:
            AWS::CloudFormation::Init:
              configSets:
                MountConfig:
                - setup
                - mount
              setup:
                packages:
                  yum:
                    nfs-utils: []
                files:
                  "/home/ec2-user/post_nfsstat":
                    content: !Sub |
                      #!/bin/bash
      
                      INPUT="$(cat)"
                      CW_JSON_OPEN='{ "Namespace": "EFS", "MetricData": [ '
                      CW_JSON_CLOSE=' ] }'
                      CW_JSON_METRIC=''
                      METRIC_COUNTER=0
      
                      for COL in 1 2 3 4 5 6; do
      
                       COUNTER=0
                       METRIC_FIELD=$COL
                       DATA_FIELD=$(($COL+($COL-1)))
      
                       while read line; do
                         if [[ COUNTER -gt 0 ]]; then
      
                           LINE=`echo $line | tr -s ' ' `
                           AWS_COMMAND="aws cloudwatch put-metric-data --region ${AWS::Region}"
                           MOD=$(( $COUNTER % 2))
      
                           if [ $MOD -eq 1 ]; then
                             METRIC_NAME=`echo $LINE | cut -d ' ' -f $METRIC_FIELD`
                           else
                             METRIC_VALUE=`echo $LINE | cut -d ' ' -f $DATA_FIELD`
                           fi
      
                           if [[ -n "$METRIC_NAME" && -n "$METRIC_VALUE" ]]; then
                             INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
                             CW_JSON_METRIC="$CW_JSON_METRIC { \"MetricName\": \"$METRIC_NAME\", \"Dimensions\": [{\"Name\": \"InstanceId\", \"Value\": \"$INSTANCE_ID\"} ], \"Value\": $METRIC_VALUE },"
                             unset METRIC_NAME
                             unset METRIC_VALUE
      
                             METRIC_COUNTER=$((METRIC_COUNTER+1))
                             if [ $METRIC_COUNTER -eq 20 ]; then
                               # 20 is max metric collection size, so we have to submit here
                               aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"
      
                               # reset
                               METRIC_COUNTER=0
                               CW_JSON_METRIC=''
                             fi
                           fi
      
      
      
                           COUNTER=$((COUNTER+1))
                         fi
      
                         if [[ "$line" == "Client nfs v4:" ]]; then
                           # the next line is the good stuff
                           COUNTER=$((COUNTER+1))
                         fi
                       done <<< "$INPUT"
                      done
      
                      # submit whatever is left
                      aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"
                    mode: '000755'
                    owner: ec2-user
                    group: ec2-user
                  "/home/ec2-user/crontab":
                    content: "* * * * * /usr/sbin/nfsstat | /home/ec2-user/post_nfsstat\n"
                    owner: ec2-user
                    group: ec2-user
                commands:
                  01_createdir:
                    command: !Sub "mkdir -p /${MountPoint}"
              mount:
                commands:
                  01_mount:
                    command:
                      Fn::Join:
                        - ""
                        - - "mount -t nfs4 -o nfsvers=4.1 "
                          - Fn::ImportValue:
                              Ref: FileSystem
                          - ".efs."
                          - Ref: AWS::Region
                          - ".amazonaws.com:/ /"
                          - Ref: MountPoint
                  02_fstab:
                    command:
                      Fn::Join:
                        - ""
                        - - "echo \""
                          - Fn::ImportValue:
                              Ref: FileSystem
                          - ".efs."
                          - Ref: AWS::Region
                          - ".amazonaws.com:/ /"
                          - Ref: MountPoint
                          - " nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0\" >> /etc/fstab"
                  03_permissions:
                    command: !Sub "chown -R ec2-user:ec2-user /${MountPoint}"
                  04_restart_docker_and_ecs:
                    command: !Sub "service docker restart && start ecs"
          Properties:
            AssociatePublicIpAddress: true
            ImageId:
              Fn::FindInMap:
              - AWSRegionArch2AMI
              - Ref: AWS::Region
              - Fn::FindInMap:
                - AWSInstanceType2Arch
                - Ref: InstanceType
                - Arch
            InstanceType:
              Ref: InstanceType
            KeyName:
              Ref: KeyName
            SecurityGroups:
            - Fn::ImportValue:
                Ref: SecuritygrpEcsAgentPort
            - Ref: InstanceSecurityGroup
            IamInstanceProfile:
              Ref: CloudWatchPutMetricsInstanceProfile
            UserData:
              Fn::Base64: !Sub |
                #!/bin/bash -xe
                echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
                yum update -y
                yum install -y aws-cfn-bootstrap
                /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource LaunchConfiguration --configsets MountConfig --region ${AWS::Region}
                crontab /home/ec2-user/crontab
                /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource AutoScalingGroup --region ${AWS::Region}
          DependsOn:
          - EcsCluster
      
      
      Here is what we did compared to the original AWS-provided template:
      1. extracted the EFS FileSystem into another CloudFormation template and exported the EFS identifier so that we can use ImportValue
      2. added -p to the mkdir command just in case
      3. enhanced the mount command to use the imported filesystem reference
      4. added the mount to fstab so that we auto-mount on reboot
      5. recursively changed the ownership of the EFS mount
      6. restarted the Docker daemon so that it picks up the mounted EFS, and started ECS, as it does not automatically restart when the Docker daemon restarts
      7. added the ECS cluster info to the ECS configuration
      8. added the ECS agent security group so that port 51678, which the ECS agent uses, is open
      9. added yum update just in case
      10. included the launch configuration in the Auto Scaling group for the ECS cluster and added a DependsOn for the ECS cluster
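
      With EFS mounted on every container instance, a task can then persist data simply by mapping a path under that mount into the container. The following is a hedged sketch rather than part of our template; the family, image, and paths are placeholders, and the source path must match the MountPoint parameter used above:

        # /mnt/efs is a placeholder; it must match the MountPoint parameter in the template above
        aws ecs register-task-definition \
          --family demo-app \
          --volumes '[{"name":"efs-data","host":{"sourcePath":"/mnt/efs/demo-app"}}]' \
          --container-definitions '[{"name":"demo-app","image":"nginx:latest","memory":256,"mountPoints":[{"sourceVolume":"efs-data","containerPath":"/data"}]}]'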

      We were a bit surprised that EFS does not require an additional volume driver to function. It appears to work out-of-the-box and turned out to be quite straightforward. Thank you for reading and enjoy using EFS as a means to persist your Docker Volumes in your ECS cluster!
