A Glimpse into The Future – Latest from Nordcloud Engineering


1. AWS CDK – a Glimpse into The Future

Since the announcement of the AWS Cloud Development Kit (CDK) in August 2018, we have been excited about using it in production-ready environments. Since it’s still in developer preview we shouldn’t really use it for business-critical applications, but we can play with it in our own playground. Then a great opportunity appeared: we had to build a roadmapper tool for internal use. We used CDK to create a few buckets, a CDN and Cognito – and we liked it a lot! We encourage everyone to play with it and get familiar with it. It might be a great way to manage your cloud infrastructure in the future.
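
To give a flavour, here is roughly what that looks like in the CDK’s Python bindings (a minimal sketch using the API as it later stabilised; the stack and construct names here are made up):

from aws_cdk import core
import aws_cdk.aws_s3 as s3

class RoadmapperStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # One construct call replaces a whole block of CloudFormation YAML
        s3.Bucket(self, "AssetsBucket", versioned=True)

app = core.App()
RoadmapperStack(app, "roadmapper")
app.synth()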


2. Use Curl Seamlessly to Call AWS API Gateway with an AWS Cognito-based Authorizer

We use AWS serverless services a lot: almost all of our backends use Lambda and API Gateway. Last year we decided to switch all of our API authorizers to Cognito-based ones, and soon realised that there was a gap we needed to fill: how do you easily sign calls to API Gateway made from the CLI? Inspired by Amplify, we decided to create a CLI tool that takes care of signing in against a Cognito User Pool, persists tokens, and handles token rotation behind the scenes. We <3 Open Source, so we’ve decided to share the tool with the community. You can download it using npm; the source code and documentation are available on our GitHub.
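
Under the hood, the flow the tool automates looks roughly like this (a minimal boto3/requests sketch; the client ID, credentials and API URL are placeholders, and the real tool additionally persists and refreshes the tokens):

import boto3
import requests

CLIENT_ID = "your-user-pool-app-client-id"                                 # placeholder
API_URL = "https://abc123.execute-api.eu-west-1.amazonaws.com/prod/items"  # placeholder

idp = boto3.client("cognito-idp")

# Exchange user credentials for JWT tokens
resp = idp.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "user@example.com", "PASSWORD": "..."},
)
id_token = resp["AuthenticationResult"]["IdToken"]

# A Cognito User Pool authorizer expects the ID token in the Authorization header
print(requests.get(API_URL, headers={"Authorization": id_token}).text)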


3. Create beautiful PDFs with Golang and AWS Lambda

A huge part of our codebase is written in Golang. In this article, we describe how we used HTML templates to create PDF reports. Templates are mostly used for generating HTML documents that can be served by an HTTP server, but we used them to feed our PDF-printer Lambda.

We at the Nordcloud internal software development team like to share knowledge with other developers. At the beginning of 2019 we started providing insights into our tech stack on our Medium publication. To stay on top of the latest from Nordcloud Engineering, follow us on Medium!









    Lambda layers for Python runtime


    AWS Lambda

    AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events like DynamoDB streams, SNS notifications or HTTP triggers, without you provisioning or managing any infrastructure. Lambda takes care of most of the things required to run your code and provides high availability. It allows you to execute up to 1,000 parallel functions at once! Using AWS Lambda you can build applications like:

    • Web APIs
    • Data processing pipelines
    • IoT applications
    • Mobile backends
    • and many many more…

    Creating an AWS Lambda function is super simple: you just need to create a zip file with your code and dependencies, then upload it to an S3 bucket. There are also frameworks like Serverless and SAM that handle deploying AWS Lambda for you, so you don’t have to create and upload the zip file manually.

    There is, however, one problem.

    Say you have created a simple function which depends on a large number of other packages. AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

    AWS Lambda Layers

    Fast forward four years: at re:Invent 2018, AWS Lambda Layers were released. This feature allows you to centrally store and manage code and data shared across different functions, within a single AWS account or even across multiple accounts! It solves a number of issues:

    • You do not have to upload dependencies on every change of your code. Just create an additional layer with all the required packages.
    • You can create a custom runtime that supports any programming language.
    • You can adjust the default runtime by adding data your users need. For example, say there is a team of Cloud Architects that builds CloudFormation templates using the troposphere library. However, they are not developers and do not know how to manage Python dependencies… With an AWS Lambda layer you can create a custom environment with all the required packages so they can code straight in the AWS console.

    But how does the layer work?

    When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers. The order is really important, because layers with a higher order can override files from previously mounted layers. When using the Python runtime you do not need to do anything special in your code; just import the library in the standard way. But how will my Python code know where to find the layer’s contents?

    That’s super simple: /opt/bin is added to the $PATH environment variable. To check this, let’s create a very simple Python function:

    
    import os

    def lambda_handler(event, context):
        # Return the $PATH variable visible inside the Lambda container
        path = os.popen("echo $PATH").read()
        return {'path': path}
    

    The response is:

    {
        "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
    }
    


    Existing pre-defined layers

    Lambda Layers were released together with a single, publicly accessible layer for data processing, containing two libraries: NumPy and SciPy. Once you have created your Lambda, you can click `Add a layer` in the Lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer, you can use these libraries in your code. Let’s do a simple test:

    
    import numpy as np
    import json
    
    
    def lambda_handler(event, context):
        matrix = np.random.randint(6, size=(2, 2))
        
        return {
            'matrix': json.dumps(matrix.tolist())
        }
    

    The function response is:

    {
      "matrix": "[[2, 1], [4, 2]]"
    }
    


    As you can see it works without any effort.

    What’s inside?

    Now let’s check what is in the pre-defined layer. To inspect the mounted layer’s content, I prepared a simple script:

    
    import os
    def lambda_handler(event, context):
        directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
        return {
            'directories': directories
        }
    

    In the function response you will receive the list of directories that exist in the /opt directory:

    
    {
      "directories": [
        "/opt/python",
        "/opt/python/lib",
        "/opt/python/lib/python3.6",
        "/opt/python/lib/python3.6/site-packages",
        "/opt/python/lib/python3.6/site-packages/numpy",
        "/opt/python/lib/python3.6/site-packages/numpy-1.15.4.dist-info",
        "/opt/python/lib/python3.6/site-packages/scipy",
        "/opt/python/lib/python3.6/site-packages/scipy-1.1.0.dist-info"
      ]
    }
    

    OK, so it contains Python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.

    Create your own layer!

    Our use case is to create an environment for our Cloud Architects to easily build CloudFormation templates using the troposphere and awacs libraries. The steps are as follows.

    Create a virtual env and install dependencies

    To manage the Python dependencies we will use pipenv.

    Let’s create a new virtual environment and install all the required libraries in it:

    
    pipenv --python 3.6
    pipenv shell
    pipenv install troposphere
    pipenv install awacs
    

    It should result in the following Pipfile:

    
    [[source]]
    url = "https://pypi.org/simple"
    verify_ssl = true
    name = "pypi"
    [packages]
    troposphere = "*"
    awacs = "*"
    [dev-packages]
    [requires]
    python_version = "3.6"
    

    Build a deployment package

    All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using the ls command:

    ls $VIRTUAL_ENV
    

    Now let’s prepare a simple script that creates a zipped deployment package:

    
    PY_DIR='build/python/lib/python3.6/site-packages'
    mkdir -p $PY_DIR                                        # Create temporary build directory
    pipenv lock -r > requirements.txt                       # Generate requirements file
    pip install -r requirements.txt --no-deps -t $PY_DIR    # Install packages into the target directory
    cd build
    zip -r ../tropo_layer.zip .                             # Zip files
    cd ..
    rm -r build                                             # Remove temporary directory
    
    

    When you execute this script, it will create a zipped package that you can upload as a Lambda layer.

    Create a layer and a test AWS function

    You can create a custom layer and an AWS Lambda function by clicking around in the AWS console. However, real experts use the CLI (Lambda Layers is a new feature, so you have to update your awscli to the latest version).

    To publish a new Lambda layer you can use the following command (my zip file is named tropo_layer.zip):

    
    aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
    

    In the response, you should receive the layer ARN and some other data:

    
    {
        "Content": {
            "CodeSize": 14909144,
            "CodeSha256": "qUz...",
            "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
        },
        "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
        "Version": 1,
        "Description": "",
        "CreatedDate": "2018-12-01T22:07:32.626+0000",
        "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
    }
    

    The next step is to create the AWS Lambda function. Your Lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

    from troposphere import Ref, Template
    import troposphere.ec2 as ec2
    import json
    def lambda_handler(event, context):
        t = Template()
        instance = ec2.Instance("myinstance")
        instance.ImageId = "ami-951945d0"
        instance.InstanceType = "t1.micro"
        t.add_resource(instance)
        return {"data": json.loads(t.to_json())}
    

    Now we have to create a zipped package that contains only our function:

    
    zip tropo_lambda.zip handler.py
    

    And create a new Lambda using this file (I used an IAM role that already exists in my account; if you do not have a role you can use, you will have to create one before creating the Lambda):

    
    aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
      --handler handler.lambda_handler \
      --role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
      --zip-file fileb://tropo_lambda.zip
    

    In the response, you should get the details of the newly created Lambda:

    
    {
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "CodeSha256": "l...",
        "FunctionName": "tropo_function_test",
        "CodeSize": 356,
        "RevisionId": "...",
        "MemorySize": 128,
        "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
        "Version": "$LATEST",
        "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
        "Timeout": 3,
        "LastModified": "2018-12-01T22:22:43.665+0000",
        "Handler": "handler.lambda_handler",
        "Runtime": "python3.6",
        "Description": ""
    }
    

    Now let’s try to invoke our function:

    
    aws lambda invoke --function-name tropo_function_test --payload '{}' output
    cat output
    {"errorMessage": "Unable to import module 'handler'"}
    
    

    Oh no… It doesn’t work. In CloudWatch you can find the detailed log message: `Unable to import module ‘handler’: No module named ‘troposphere’`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:

    
    aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1
    

    When you invoke the Lambda again, you should get the correct response:

    
    {
      "data": {
        "Resources": {
          "myinstance": {
            "Properties": {
              "ImageId": "ami-951945d0",
              "InstanceType": "t1.micro"
            },
            "Type": "AWS::EC2::Instance"
          }
        }
      }
    }
    

    Add a local library to your layer

    We already know how to create a custom layer with Python dependencies, but what if we want to include our local code? The simplest solution is to manually copy your local files into the /python/lib/python3.6/site-packages directory of the layer package.

    First, let’s prepare the test module that will be pushed to the layer:

    
    $ find local_module
    local_module
    local_module/__init__.py
    local_module/echo.py
    $ cat local_module/echo.py
    def echo_hello():
        return "hello world!"
    

    To manually copy your local module to the correct path, you just need to add the following line to the previously used script (before zipping the package):

    
    cp -r local_module 'build/python/lib/python3.6/site-packages'
    

    This works; however, we strongly advise transforming your local library into a pip module and installing it in the standard way.

    Update Lambda layer

    To update a Lambda layer, you run the same command you used to create the layer:

    
    aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
    

    The request should return LayerVersionArn with incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

    Now update the Lambda configuration with the new layer version:

    aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2
    
    

    Now you should be able to import local_module in your code and use the echo_hello function.
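
    For example, a function using the layer could now look like this:

    from local_module import echo

    def lambda_handler(event, context):
        # echo_hello comes from the local_module we pushed to the layer
        return {"message": echo.echo_hello()}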


    Serverless framework Layers support

    Serverless is a framework that helps you build applications based on AWS Lambda. It already supports deploying and using Lambda Layers. The configuration is really simple: in the serverless.yml file you provide the path to the layer’s location on your disk (it has to be a path to a directory – you cannot use a zipped package; zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda layer or deploy it together with your application.

    We’ll show the second approach. However, if you want to benefit from all the advantages of Lambda Layers, you should deploy the layer separately.

    
    service: tropoLayer
    package:
      individually: true
    provider:
      name: aws
      runtime: python3.6
    layers:
      tropoLayer:
        path: build             # Build directory contains all python dependencies
        compatibleRuntimes:     # supported runtime
          - python3.6
    functions:
      tropo_test:
        handler: handler.lambda_handler
        package:
          exclude:
           - node_modules/**
           - build/**
        layers:
          - {Ref: TropoLayerLambdaLayer}   # Ref to the created layer. You have to append the
                                           # 'LambdaLayer' suffix to the layer name to make it work
    

    I used the following script to create a build directory with all the Python dependencies:

    
    PY_DIR='build/python/lib/python3.6/site-packages'
    mkdir -p $PY_DIR                                              #Create temporary build directory
    pipenv lock -r > requirements.txt                             #Generate requirements file
    pip install -r requirements.txt -t $PY_DIR                    # Install packages into the target directory
    

    This example packages the Lambda layer and your Lambda handler individually. The funny thing is that you have to convert your Lambda layer name to TitleCase and add the `LambdaLayer` suffix if you want to refer to that resource.

    Deploy your lambda together with the layer, and test if it works:

    
    sls deploy -v --region eu-central-1
    sls invoke -f tropo_test --region eu-central-1
    

    Summary

    It was a lot of fun to test Lambda Layers and investigate how the feature technically works. We will surely use it in our projects.

    In my opinion, AWS Lambda Layers is a really great feature that solves a lot of common issues in the serverless world. Of course, it is not suitable for every use case. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because you do not need to manage additional layers.

    Read more on AWS Lambda in our blog!

    Notes from AWS re:Invent 2018 – Lambda@edge optimisation

    Running AWS Lambda@Edge code in edge locations

    Amazon SQS as a Lambda event source









      Notes from AWS Chalk session at AWS re:Invent 2018 – Lambda@Edge optimisations


      Lambda@Edge makes it possible to run Lambda code in Edge locations to modify viewer/origin requests. This can be used to modify HTTP headers, change content based on user-agent and more. We’ve written about it previously, so feel free to read this blog post if you want an introduction: https://nordcloud.com/aws-lambdaedge-running-lambda-code-in-edge-locations/

      There are quite a few limitations for Lambda@Edge, which depend on which request event you are responding to. For example, the maximum size of a response generated by the Lambda function differs depending on whether it is a viewer or an origin response (40 KB vs 1 MB). The function itself also has limits, such as a maximum of 3 GB of memory allocation and a 50 MB zipped deployment package size.

      This means that most use cases need optimisation. First things first: evaluate whether you really need to use Lambda@Edge. CloudFront currently has a lot of functionality you can take advantage of before trying to reinvent the wheel – caching depending on device, selecting which headers to base caching on, regional blocks with WAF, etc. Even your origin can sometimes handle header rewrites and other header manipulation, which means there is no need to spend the time building it yourself. So you should only use Lambda@Edge if you know that CloudFront can’t do it and that there will be a benefit to rendering or serving your content at the edge.

      Optimise before the function

      If you’ve decided to use Lambda@Edge, you should first look into the optimisations you can do before the function is invoked by the event. CloudFront does a lot of optimisation for you. It groups requests, so that if several concurrent requests would fetch the same object it sends only one GET to the origin instead of all of them. Note that CloudFront is a multi-layered CDN: on a miss in a specific region it will try to fetch the cache from the closest CloudFront location, so there is no need to build multi-region caching yourself. Another thing to look at in CloudFront is the origin paths the event reacts upon – perhaps the function only needs to react on a very specific HTTP path. If possible, it is also always better to let the function react on origin events instead of viewer events, which reduces the number of events to react upon and gives you higher limits for function size, response time and resource allocation.

      Coding optimisations

      When writing the function you should utilise global variables as much as possible, since they are re-used between invocations and cached on the workers for a couple of hours. Small things, such as keeping TCP sockets reusable or perhaps using UDP instead of TCP, can make a difference, especially since Lambda@Edge is invoked synchronously.
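
      To illustrate the pattern (shown here in Python for an ordinary Lambda, since Lambda@Edge functions at the time were written in Node.js; the bucket name is a placeholder, but the idea is identical):

      import boto3

      # Module-level objects are created once per execution environment and
      # re-used across warm invocations, so expensive setup is paid only on cold start.
      s3 = boto3.client("s3")
      CACHE = {}

      def lambda_handler(event, context):
          key = event.get("key", "index.html")
          if key not in CACHE:  # warm invocations skip the fetch entirely
              CACHE[key] = s3.get_object(Bucket="my-content-bucket", Key=key)["Body"].read()
          return {"size": len(CACHE[key])}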

      Deployment testing

      When deploying the function, look at minimising the code with tools such as browserify. Also note that Lambda@Edge can be deployed with different memory allocations, so make sure you test which size gives you the best bang for the buck – sometimes raising the memory allocation from 128 MB to 256 MB gives you much faster responses without costing that much more.

      S3 performance

      If you are fetching content from S3, try using S3 Select to get just what you need from a subset of an object’s data by using simple SQL expressions. Even better, try to use content cached in CloudFront instead of fetching it from S3 or other origins. This makes a lot of sense, especially if the data can be cached.
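
      A minimal boto3 sketch of S3 Select (the bucket, key and SQL expression here are made-up examples):

      import boto3

      s3 = boto3.client("s3")

      resp = s3.select_object_content(
          Bucket="my-data-bucket",                 # placeholder
          Key="events/2018-12-01.csv",             # placeholder
          ExpressionType="SQL",
          Expression="SELECT s.user_id FROM S3Object s WHERE s.country = 'FI'",
          InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
          OutputSerialization={"JSON": {}},
      )

      # The response is an event stream; only 'Records' events carry data
      for event in resp["Payload"]:
          if "Records" in event:
              print(event["Records"]["Payload"].decode())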

      Last but not least: remove the function when it is not in use. Don’t keep running Lambda@Edge if you no longer need it.

      If you’d like to learn more about moving your business to the Cloud, please contact us here.









        Cloud computing news #10: Serverless, next-level cloud tech


        This week we focus on serverless computing, which continues to grow and enables agility, speed of innovation and lower cost for organizations.

        Serverless Computing Spurs Business Innovation

        According to Digitalist Magazine, serverless computing is outpacing conventional patterns of emerging technology adoption. Organizations across the globe see technology-driven innovation as essential to compete. Serverless computing promises to enable faster innovation at a lower cost and simplify the creation of responsive business processes.

        But what does “serverless computing” mean and how can companies benefit from it?

        1. Innovate faster and at a lower cost: serverless is a cloud computing execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. This means that developers are able to focus on coding instead of managing deployment and runtime environments. Also, pricing is based on the actual amount of resources consumed by an application. Thus, with serverless computing, an organization can innovate faster and at a lower cost. Serverless computing eliminates the risk and cost of overprovisioning, as it can scale resources dynamically with no up-front capacity planning required.
        2. Enable responsive business processes: serverless function services – function as a service (FaaS) – can automatically activate and run application logic that carries out simple tasks in response to specific events. If the task triggered by an incoming event involves data management, developers can leverage serverless backends as a service (BaaS) for data caching, persistence, and analytics services via standard APIs. With this event-driven application infrastructure in place, an organization can decide at any moment to execute a new task in response to a given event.

        Organizations also need the flexibility to develop and deploy their innovations where it makes the most sense for their business. Platforms that rely on open standards, deploy on all the major hyperscale public clouds, and offer portability between the hyperscalers’ IaaS foundations are the ideal choice for serverless environments.

        Read more in Digitalist Magazine

        Nordcloud tech blog: Developing serverless cloud components

        A cloud component contains both your code and the platform configuration necessary to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform which services it depends on.

        A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

        Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work implementing the same features in every project.

        Check out our tech blog post that takes a look at some new technologies for developing cloud components.

        Nordcloud case study: Developing on AWS services using a serverless architecture for Kemppi

        Nordcloud helped Kemppi build the initial architecture based on AWS IoT Core, API Gateway, Lambda and other AWS services. We also designed and developed the initial Angular.js based user interface for the solution.

        Developing on AWS services using a serverless architecture enabled Kemppi to develop the solution in half the time and cost compared to traditional, infrastructure-based architectures. The serverless expertise of Nordcloud was key to enabling a seamless ramp-up of development capabilities in the Kemppi development teams.

        Read more on our case study here

        Serverless at Nordcloud

        Nordcloud has a long track record with serverless, having been among the first companies to adopt services such as AWS Lambda and API Gateway for production projects as early as 2015. Since then, Nordcloud has executed over 20 customer projects using serverless technologies for use cases such as web applications, IoT solutions, data platforms and cloud infrastructure monitoring or automation.

        Nordcloud is an AWS Lambda, API Gateway and DynamoDB partner, a Serverless Framework partner, and a contributor to the serverless community via open source projects, events and initiatives such as the Serverless Finland meetup.

        How can we help you take your business to the next level with serverless?









          Leveraging AWS Greengrass for Edge IoT Solutions


          There is a growing demand for intelligent edge solutions that not only collect data, but also control on-premise equipment at industrial customer sites. Historically such solutions have often been based on low-level custom firmware that has required technical specialists to develop and maintain.

          AWS Greengrass has significantly lowered the barrier to edge IoT development by extending familiar cloud technologies to the edge. Cloud architects and cloud application developers can use their existing knowledge of serverless development and the programming languages they already master. In many cases the exact same code can run both in the cloud and at the edge as a Greengrass Lambda application. This has proven very useful for use cases like KPI algorithms and diagnostic logic that need to be executed both centrally in the cloud and in a distributed fashion on the equipment located at the edge.

          Building blocks for IoT

          It’s important to keep in mind that Amazon usually offers the building blocks for making applications, not the actual end-user applications. This also applies to Greengrass and AWS IoT in general. You get an extensive set of features for building IoT applications, but you still need to put them together into an application that solves the business case requirements. Amazon calls this eliminating the “undifferentiated heavy lifting”. Application developers don’t have to deal with low level issues like scaling databases or designing communication protocols which have already been solved in general. Instead they can focus on implementing the business-specific features and logic relevant to the use case.

          In fact, as the AWS IoT platform has evolved in recent years, the need for custom databases has been almost completely eliminated. AWS IoT Device Management provides a flexible way to organize IoT devices into groups and hierarchies. Custom metadata can be attached to the devices, enabling indexing and searching. You no longer start a project by designing database tables from scratch; instead, you first look at what AWS IoT already offers out of the box.

          The same principle applies to business logic. In many cases there is no need to write custom code, because AWS IoT’s MQTT based messaging platform offers simpler ways to filter, route and process data. This is particularly important for datalake solutions, because the amount of data processed can be quite large. If you can completely omit custom code, you don’t have to worry about scaling it. The best datalake solutions simply connect a few services like AWS IoT, Kinesis Firehose and Amazon S3 together, and the data is automatically collected into S3 buckets regardless of its size and bandwidth.
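
          As a sketch of how little glue is needed, a single IoT topic rule can forward every message matching a topic filter straight into a Kinesis Firehose delivery stream that lands in S3 (the rule name, topic filter, role and stream below are placeholders):

          import boto3

          iot = boto3.client("iot")

          iot.create_topic_rule(
              ruleName="telemetry_to_datalake",  # placeholder
              topicRulePayload={
                  "sql": "SELECT * FROM 'factory/+/telemetry'",  # placeholder topic filter
                  "actions": [{
                      "firehose": {
                          "roleArn": "arn:aws:iam::123456789012:role/iot-firehose-role",  # placeholder
                          "deliveryStreamName": "datalake-stream",                        # placeholder
                      }
                  }],
              },
          )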

          Business logic at the Edge

          In the case of Greengrass edge solutions you still usually need Lambda functions to implement business logic. Greengrass contains functionality for topic-based MQTT routing, but to process the contents of MQTT messages, some code is needed. However, the implementation can be just a few lines of code to execute the required algorithm as a Lambda function. Developers don’t have to worry about building containers, opening network connections or configuring security settings. Greengrass takes care of all the details of deploying the Lambda function.
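
          Such a function can indeed be just a few lines. Here is a hedged sketch using the greengrasssdk (the topic names and threshold are made up):

          import json
          import greengrasssdk

          # Publishes are routed through the Greengrass core's local MQTT broker
          client = greengrasssdk.client("iot-data")

          def lambda_handler(event, context):
              reading = event.get("temperature", 0)
              result = {"alarm": reading > 80}  # the business-specific logic
              client.publish(topic="equipment/alarms", payload=json.dumps(result))
              return result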

          It’s worth noting though that larger customers usually prefer to build a customized management system on top of AWS IoT and Greengrass. There are lots of exposed details and moving parts when dealing with “raw” AWS IoT devices and Greengrass deployments. When a lightweight business-specific management layer is built on top of them, end-users can deal with familiar concepts and ignore most unnecessary details. Power users can still access the underlying technologies simply by using the AWS Console.









            Cloud Computing News #4: IoT in the Cloud


            This week we focus on IoT in the cloud.

            AWS IoT platform is great for startups

            IoT for all lists 7 reasons why startup companies like iRobot, GoPro, and Under Armour have chosen the AWS IoT platform:

            1. Starting with AWS IoT is easy: the AWS IoT platform connects IoT devices to the cloud and allows them to securely interact with each other and various IoT applications.
            2. High IoT security: Amazon doesn’t spare resources to protect its customers’ data, devices, and communication.
            3. AWS cherishes and cultivates startup culture: AWS has helped multiple IoT startups get off the ground, and startups are a valuable category of Amazon’s target audience.
            4. The serverless approach and AWS Lambda are right for startups: startups can reduce the cost of building prototypes and add agility to the development process, as well as build a highly customizable, flexible and largely automated serverless back end.
            5. AWS IoT Analytics paired with AI and Machine Learning: AWS IoT Analytics and Amazon Kinesis Analytics answer the high demand for data-analytics capabilities in IoT.
            6. Amazon partners with a broad network of IoT device manufacturers, IoT device startups, and IoT software providers.
            7. The range of AWS products and services: the top cloud services provider has a range of solutions tailored for major customer categories, including startups.

            Read more in IoT for all

            IoT – 5 predictions for 2019 and their impact

            Forbes makes five IoT predictions for 2019:

            1. Growth across the board: IoT market and connectivity statistics show numbers mostly in the billions (check the article below)
            2. Manufacturing and healthcare – deeper penetration: Market analysts predict the number of connected devices in the manufacturing industry will double between 2017 and 2020.
            3. Increased security at all endpoints: an increase in endpoint security solutions to prevent data loss and give insights into network health and threat protection.
            4. Smart areas or smart neighborhoods in cities: smart sensors around the neighborhood will record everything from walking routes, shared car use, sewage flow, and temperature, 24/7.
            5. Smart cars – increased market penetration for IoT: Diagnostic information, connected apps, voice search, current traffic information, and more to come.

            Read more on these predictions in Forbes

            IoT is growing at an exponential rate

            According to Forbes, IoT is one of the most-researched emerging markets globally. The magazine lists 10 charts showing the explosive growth of IoT adoption and the IoT market.

            Here below are a few teasers; check all the charts in Forbes.

            1. According to Statista, by 2020, Discrete Manufacturing, Transportation & Logistics and Utilities industries are projected to spend $40B each on IoT platforms, systems, and services.
            2. McKinsey predicts the IoT market will be worth $581B for ICT-based spend alone by 2020, growing at a Compound Annual Growth Rate (CAGR) between 7 and 15%.
            3. Smart Cities (23%), Connected Industry (17%) and Connected Buildings (12%) are the top three IoT projects in progress (IoT Analytics).
            4. GE found that Industrial Internet of Things (IIoT) applications are relied on by 64% of power and energy (utilities) companies to succeed with their digital transformation initiatives.
            5. Industrial products lead all industries in IoT adoption at 45% with an additional 22% planning in 12 months, according to Forrester.
            6. Harley Davidson reduced its build-to-order cycle by a factor of 36 and grew overall profitability by 3% to 4% by shifting production to a fully IoT-enabled plant, according to Deloitte.

            Philips is tapping into the IoT market with AWS

            According to NetworkWorld, IDC forecasts the IoT market will reach $1.29 trillion by 2020. Philips is turning toothbrushes and MRI machines into IoT devices to tap into this market, to keep patients healthier and the machines running more smoothly.

            “We’re transforming from mainly a device-focused business to a health technology company focused on the health continuum of care and service”, says Dale Wiggins, VP and General Manager of the Philips HealthSuite Digital Platform. “By connecting our devices and modalities in the hospital or consumer environment, it provides more data that can be used to benefit our customers.”

            Philips relies on a combination of AWS services and tools, including the company’s IoT platform, Amazon CloudWatch and AWS CloudFormation. Philips uses predictive algorithms and data-analysis tools to monitor activity, identify trends and report abnormal behavior.

            Read more in NetworkWorld

            DATA-DRIVEN SOLUTIONS AT NORDCLOUD

            Our data-driven solutions will make an impact on your business, with better control and valuable business insight from IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level?









              Two new Amazon CloudFront Edge Locations in the Nordics


              Amazon just announced really exciting news for current and future Nordic customers.

              Amazon CloudFront coverage in the Nordic countries has developed very rapidly. In 2016 Amazon had one CloudFront edge location in Stockholm; today it has a total of six edge locations in the Nordic region – three in Stockholm and one each in Copenhagen, Helsinki and Oslo.

              The current Amazon CloudFront edge location coverage, together with the expected Sweden Region launch later this year, makes Amazon CloudFront hard to pass up when considering a solution for content delivery and DDoS protection in the region.

              FASTER CONTENT DELIVERY AND GREATER BANDWIDTH

              The new Amazon CloudFront edge locations mean faster content delivery and greater bandwidth in the whole region, with the AWS hallmark ease of use and pay-as-you-go pricing. The locations bring the AWS private data-transfer backbone closer to customers for improved security. In addition, Amazon CloudFront plays a crucial role in the delivery of managed DDoS protection with AWS Shield Advanced.

              “Our Nordic customers have benefited from Amazon CloudFront’s ease of use and pay-as-you-go model for years. The speed of launching new edge locations is really remarkable, as the service performance improves without any action or additional cost to our customers. As for any other Amazon CloudFront location, we are able to offer preferential pricing for significant customer use,” says Jaakko Kontiainen, Nordcloud Alliance Lead for AWS.

              AMAZON CLOUDFRONT

              Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS – both through physical locations that are directly connected to the AWS global infrastructure, and through software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and AWS Lambda to run custom code close to your viewers.

              KEY BENEFITS OF CLOUDFRONT

              Some of the key benefits include extensive integration with other AWS services, ease of use, a cost-effective pay-as-you-go pricing scheme, and the growing global distribution network exemplified by the latest edge locations. For more information, please go to: https://aws.amazon.com/cloudfront/details/

              Nordcloud is an AWS Premier Consulting Partner and participates in the Amazon Cloudfront Service Delivery Program recognising our experience in delivering solutions using Amazon CloudFront. If you would like to talk to us further about Amazon CloudFront, other Amazon Web Service offerings, or migrating your business onto the AWS Cloud, please contact us here.









                Amazon SQS as a Lambda event source = all the fun!


                What is Amazon SQS and Lambda and why should I care?

                Amazon Simple Queue Service (Amazon SQS) is a distributed, fully managed message queueing service which was released as one of the first AWS services. It allows you to decouple your application into components which communicate using asynchronous messages. Using a simple, programmatic API you can get started and poll for messages that can be sent from many different sources. It acts as a buffer for your workers, greatly reducing the time spent on a synchronous call by a user – meaning you can send a response and do the work later.

                In November 2014, Amazon released AWS Lambda, which is one of the most recognisable services in cloud computing and, in my opinion, the best available implementation of the serverless paradigm. It runs code in response to certain events, e.g. a file uploaded to S3 or just an HTTP request. You don’t need to provision any compute resources.

                But what if you want to connect these two services and make SQS messages trigger Lambda functions? We’ve been waiting for this feature for a very long time, and were tired of creating custom containers with pollers or using SNS as a bad alternative.

                In Nordcloud R&D, we are partial to serverless and event-driven paradigms; however, sometimes our Lambda functions call each other asynchronously and grow huge, rapidly exceeding concurrency limits and throwing exceptions all over the place. Using SQS to trigger Lambda functions acts like a buffer. We know that Lambda has a maximum time limit of 5 minutes, so we can use all the good things that come with SQS – visibility timeouts, at-least-once delivery, dead-letter queues and so on. Now it’s possible to avoid provisioning any containers or EC2 instances (just serverless code) and let Amazon handle everything for us.

                But before you start using SQS as your event source for Lambda functions, you should know how it’s implemented and what to expect.

                How is it implemented?

                When working with SQS directly, you need to wait for messages to be received, process them and delete them from the queue. If you don’t delete a message, it will come back after the specified VisibilityTimeout, because SQS assumes the processing failed and makes it available for consumption again, so you won’t lose any messages. This process is not applicable when using SQS as an event source for Lambda, as you don’t touch the SQS part at all!
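
                For reference, this is the classic polling loop you no longer have to write (a boto3 sketch; the queue URL is a placeholder and process() is a stand-in for real work):

                import boto3

                sqs = boto3.client("sqs")
                QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/my-queue"  # placeholder

                def process(body):
                    print("processing:", body)  # stand-in for real work

                while True:
                    resp = sqs.receive_message(
                        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
                    )
                    for msg in resp.get("Messages", []):
                        process(msg["Body"])
                        # Delete explicitly, or the message comes back after VisibilityTimeout
                        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])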

                Lambda polls for messages internally, then calls your function and, if it completes successfully, deletes the message on your behalf. Make sure that your code throws exceptions if you want to process the message again. Equally important, you need to return a success code so you won’t get into an endless loop of duplicated messages. Remember that you are billed for every API call made by the internal poller.
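
                In practice the handler boils down to something like this (a sketch; the validity check is made up – raise to get the message redelivered, return normally to acknowledge the batch):

                import json

                def lambda_handler(event, context):
                    for record in event["Records"]:  # SQS delivers up to 10 messages per invocation
                        payload = json.loads(record["body"])
                        if not payload.get("ok"):  # made-up failure condition
                            raise ValueError("processing failed - the message returns to the queue")
                    return "done"  # success: Lambda deletes the whole batch on your behalf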

                Another thing to note is that Lambda is invoked synchronously. There are no retries, and the dead-letter queue on the Lambda side has no use. Everything is handled by Amazon SQS, so find the optimal settings for VisibilityTimeout and maxReceiveCount, and definitely configure a DLQ policy. Even though it shouldn’t be a problem, please refrain from setting the VisibilityTimeout equal to the function timeout, as the polling mechanism consumes some additional time that is counted towards the processing state.
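
                Setting those queue attributes is straightforward with boto3 (the queue URL and DLQ ARN are placeholders):

                import json
                import boto3

                sqs = boto3.client("sqs")

                sqs.set_queue_attributes(
                    QueueUrl="https://sqs.eu-central-1.amazonaws.com/123456789012/my-queue",  # placeholder
                    Attributes={
                        "VisibilityTimeout": "360",  # comfortably above the function timeout
                        "RedrivePolicy": json.dumps({
                            "deadLetterTargetArn": "arn:aws:sqs:eu-central-1:123456789012:my-dlq",  # placeholder
                            "maxReceiveCount": "5",
                        }),
                    },
                )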

                You are also limited by the function-level concurrent execution limit, which defaults to a shared pool of unreserved concurrency (1,000 per region). You can lower that by setting the reserved concurrent executions parameter to a subset of your account’s limit. However, this subtracts that number from your shared pool, and it may affect other functions! Plus, if your Lambda is VPC-enabled, then Amazon EC2 limits apply (think ENIs).
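
                Reserving a slice of the pool for one function is a single call (the function name is a placeholder):

                import boto3

                lambda_client = boto3.client("lambda")

                # Caps this function at 50 concurrent executions and subtracts those 50
                # from the account's shared unreserved pool.
                lambda_client.put_function_concurrency(
                    FunctionName="my-queue-worker",  # placeholder
                    ReservedConcurrentExecutions=50,
                )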

                If you like taking Amazon SQS up a level like us, you’ll notice that the number of messages in flight begins to rise. That’s your Lambda gradually scaling out in response to the queue size, eventually hitting the concurrency limit. Those messages will be consumed, and the synchronous invocation will fail with an exception. That’s when your Amazon SQS retry policy comes in handy. Although it is not confirmed anywhere, this behaviour may lead to starvation of certain messages, but you should already be prepared for that!

                One more thing from our R&D division: what happens if you add one queue as an event source for two different functions? That’s right – it will act as a load balancer.


                Does it really work?

                We ran some tests with the following assumptions:

                • all messages were available in the queue before enabling the Lambda trigger
                • SQS visibility timeout is set to 1h
                • all test cases are in separate environments and time
                • Lambda does nothing, just sleeps for some specified amount of time

                This is what we got:

                Normal use case

                1,000 messages, sleep for 3 seconds – nothing really interesting; it works as well as we expected, consuming our messages pretty quickly. CloudWatch didn’t even register the scaling process.


                Normal use case, heavy load

                Again a 3-second sleep, but 10,000 messages. This is over our concurrency limit, but the scale-out process took longer than executing the first Lambdas, so it didn’t throttle. It took a little bit longer to consume all of our messages.


                Long-running lambdas

                Let’s get back to 1,000 messages, but with 240 seconds of sleep. Now AWS is handling the scale-out process for its internal workers. You’ll notice we managed to get about 550 concurrent Lambdas running. Good news!


                Hitting the concurrency limit

                Again, 240 seconds of sleep but let’s push it to the limit: 10000 messages, concurrency limit set to 1000.

                What happened? Again, AWS reacts to the number of messages available in Amazon SQS, so it scales the internal workers up to a certain point, where the concurrency limit is reached. Of course, in the world of distributed computing and eventual consistency, there is no way it can predict how many Lambdas it can run, so we finally see it throttle. Throttled Lambdas return exceptions to the AWS workers – that’s the signal to stop – but it keeps trying, because perhaps that’s not our global limit and other functions are just taking our pool. What is important is that AWS won’t retry the function execution; the message comes back to the queue after the defined VisibilityTimeout. You’ll see some invocations after 23:30 (yes, we can’t sleep).

                The same thing happens when you set your own reserved concurrency pool. We ran the same test for a maximum of 50 concurrent executions. Based on the throttling, it was too low.

                Multiple Lambda workers

                This is simply awesome! Amazon SQS gives you the possibility to subscribe multiple functions to one queue! We sent 10,000 messages to a queue set as an event source for 4 different functions. You’ll notice that every Lambda was executed about 2,500 times: this setup behaves like a load balancer. However, it’s not possible to subscribe Lambdas from different regions and create a global load balancer.

                We had so much fun trying out this feature. Amazon SQS as an event source for Lambda allows you to easily process messages without using containers or EC2 instances. When it was released, we were thinking about the design of a new project; it matched our requirements perfectly and we are already using it! But remember that these are Lambda workers, and the solution is not suitable for heavy-load processing, because you are limited by the 5-minute timeout, memory constraints, and the concurrency limit. Do you need to queue lots of short tasks? Maybe you need some kind of buffer to securely execute asynchronous calls? Give it a try, it’s awesome!

                If you’d like to find out more about this service, get in contact with one of our experts here.









                  AWS Fargate – Bringing Serverless to Microservices


                  Microservices architecture

                  Microservices architecture has been a key focus for a lot of organisations in the past few years. Organisations around the world are changing from the traditional monolithic architecture to a faster time-to-market, automated, and deployable microservices architecture. The microservices approach has a number of benefits, but the two that come up most often are how the software is deployed and how it is managed throughout its lifecycle.

                  Pokémon Go & Kubernetes

                  Let’s look at a real-world scenario: Pokémon Go. We wouldn’t have Pokémon Go if it wasn’t for Niantic Labs and Google’s Kubernetes. Those of you who played this once-addictive game back in the summer of 2016 know all about the technical issues they had. It was the microservice approach of using Kubernetes that allowed Pokémon Go to fix technical issues in a matter of hours rather than weeks, because each microservice could be updated with a new patch, and thousands of containers could be created within seconds during peak times.

                  With microservices, a popular container engine like Docker is typically paired with container orchestration software like Kubernetes (K8s), and everything in the web server is broken down into individual APIs. This gives microservices more agility, flexible scaling, and the freedom to pick the programming language or version used for a single API instead of all of them.

                  Microservices architecture can be defined in more ways than one, but it is commonly used to deploy well-defined APIs and to help streamline delivery and deployment.

                  Serverless the next big thing

                  Some experts believe that serverless will be the next big thing. Serverless doesn’t mean there are no servers; it means that the management and capacity planning are hidden from the DevOps teams. Maybe you have heard about FaaS (Functions as a Service) or AWS Lambda. FaaS is not for everyone, but what if we could bring some of the serverless architecture along with the microservices architecture?

                  AWS Fargate

                  This is why, back in November at AWS re:Invent 2017 (see the deep dive here), AWS announced a new service called AWS Fargate. AWS Fargate is a container service that allows you to provision containers without the need to worry about the underlying infrastructure (VM/container/node instances). AWS Fargate works with ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service), and is currently only available in us-east-1, in preview mode.

                  AWS Fargate simplifies the complex management of microservices by allowing developers to focus on the main task of creating APIs. You will still need to think about the memory and CPU required for your APIs or application, but the beauty of AWS Fargate is that you never have to worry about provisioning servers or clusters, because AWS Fargate autoscales for you. This is where microservices and serverless meet.
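
                  To give a feel for the model, launching a container on Fargate boils down to a single API call once a task definition exists (a boto3 sketch; the cluster, task definition and subnet are placeholders):

                  import boto3

                  ecs = boto3.client("ecs")

                  ecs.run_task(
                      cluster="my-cluster",          # placeholder
                      taskDefinition="my-api:1",     # placeholder - defines image, CPU and memory
                      launchType="FARGATE",          # no EC2 instances or nodes to manage
                      count=1,
                      networkConfiguration={
                          "awsvpcConfiguration": {
                              "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                              "assignPublicIp": "ENABLED",
                          }
                      },
                  )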









                    Market leaders always push the envelope


                    In this blog post, I will pick up on what my colleague Sandip discussed in his latest blog post, ‘Innovating by Making a Difference’. Building on that, I want to take the opportunity to talk about how Nordcloud Germany has managed to stay on top of the industry for the last year or two. It’s been about focussing on the right things at the right time. For example, we haven’t worked in the private cloud space, and we haven’t been involved in the SaaS world of productivity, collaboration or CRM. We have stayed focussed purely on the leading public cloud platforms – AWS, Azure and Google – to deliver full-stack consultancy and services.

                    At Nordcloud, we’re able to keep our customers – not just ourselves – on top of the game by understanding everything we can, identifying what is most valuable to our customers, and then adopting the latest services of each of the providers. These are, for example, services around containers (Kubernetes, for instance), serverless (Lambda), and also the Internet of Things and machine learning. Our work with companies of all industries and sizes is the foundation of being able to filter the different technologies for what matters most. In this sense, our customers are the ones who teach us how to help them best, and we can then pick the best technologies to do just that.

                    We were recently screened by the leading cloud market analyst in Germany on how we deliver state-of-the-art managed cloud services. Check out CRISP’s perspective here (in German).

                    We’re proud to be recognised as a leading provider in the cloud consulting and services industry that stands out amongst a vast number of peers in the market. If there is one thing we have realised throughout the years – both as a company and as individuals – it’s that you shouldn’t stop innovating and questioning. To stay on top, it’s not enough to just do the basics well. You have to keep moving forward and step beyond your comfort zone at all times. At the same time, you shouldn’t chase every new hype; pick your game wisely and then build up expertise and concepts around that area.
