Monitoring your home temperature – Part 3: Visualizing your data with Power BI

Finally, we have our infrastructure up and running, and we can start visualizing the data with Power BI.


Just log in to your Power BI account:

https://app.powerbi.com/

Select My Workspace and click Datasets + dataflows. Click the three dots next to your dataset and choose Create Report.

Let’s create line charts for temperature, humidity and air pressure, each on its own page. I’ll show how to configure the temperature chart first.

Go to the Visualizations pane and choose Line chart:

Stretch the chart to fill the page, then go to the Fields pane and expand your table. Drag ‘EventEnqueuedUtcTime’ to Axis and ‘temperature’ to Values:

You should already see a graph of your temperature. You can rename Axis and Values to friendlier names like Time and Temperature (Celsius), change the color of the graph and so on.

Add a filter next to the graph to show data from the past 7 days:

The end result should be something like this:

Temperature

Mine has been customised for the Finnish language, and it also has an average temperature line. Just for demo purposes, I placed my RuuviTag outside, so there are some swings in temperature (in case you are wondering why my floor temperature is 7 degrees at night). You can add humidity and air pressure to the same page with the same method, or create separate pages for them. To keep things clear, I have a separate page for each:

Air pressure
Humidity

I also added one page for minimum, maximum and average values:

So go ahead and customise the report however you like, and leave a comment if you have good suggestions for customisation. Remember to save your report via File and Save. If you want to share it, you can make it available to anyone with Publish to web.

After this, you can scale up your environment by adding more RuuviTags. This is the end of the series. Thanks for reading, and let me know if you have any questions!


Get to know the whole project by reading parts 1 and 2 of the series here and here!

This blog text was originally published on Senior Cloud Architect Sami Lahti’s personal Tech Life blog.


    Monitoring your home temperature – Part 2: Setting up Azure

    Now we need to set up the Azure side, and for that you need an Azure subscription. If you don’t have one yet, you can get a new subscription with 170€ of credits for 30 days with this link:

    https://azure.microsoft.com/en-us/free/

    As this is a demo environment, we won’t focus heavily on security. You need some basic understanding of how to deploy resources; I’ll guide you on the configuration side. By the way, if you want to learn the basics of Azure, there is a nice learning path at Microsoft Learn:

    https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/

    Here you can see my demo environment:

    The IoT Hub receives the temperature messages from the Raspberry Pi Zero. The Stream Analytics job pushes those messages to Power BI. The Automation Account starts and stops the Stream Analytics job so that it won’t consume too many credits.

    Normally we would deploy all of this with ARM templates, but to make it easier to present and follow, let’s do it from the Portal. If you like, you can follow the naming conventions from the CAF:

    https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging

    Start by deploying the IoT Hub. Use a globally unique name and public endpoints (you can restrict access to your home IP address if you like) and choose the F1: Free tier. With the free tier you can send 8,000 messages per day, so if you have five Raspberry Pis, each of them can send one message per minute (5 × 1,440 minutes = 7,200 messages per day). That’s usually enough for home use.

    After you have created the IoT Hub, you need to create an IoT Device under it. I used the same name as my Raspberry Pi’s hostname:

    Then you need to copy your Primary Connection String from your IoT Device:
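    If you prefer the command line, the same two steps can also be done with the Azure CLI. This is only a sketch under a few assumptions: it requires the azure-iot CLI extension, the exact command names can vary between CLI versions, and the hub and device names below are placeholders.

    # Create the IoT device and print its connection string (azure-iot extension required)
    az extension add --name azure-iot
    az iot hub device-identity create --hub-name YourIoTHub --device-id YourIotDevice
    az iot hub device-identity show-connection-string --hub-name YourIoTHub --device-id YourIotDevice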

    After copying that string, you need to create an environment variable on your Raspberry Pi. You can use a script to add it automatically after every boot.

    Here is an example of how you create the environment variable:

    export IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"
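    If you want the variable to survive reboots, one option is to append that export line to the pi user's ~/.profile, as in the sketch below (the connection string is a placeholder; ~/.profile is just one of several places it could live). Note that cron does not read ~/.profile, so if you run the sending script from crontab, define the variable at the top of the crontab instead.

    # Append the export to ~/.profile so login shells get it after every boot (placeholder values)
    echo 'export IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"' >> ~/.profile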

    The last thing to do in the IoT Hub is to add a consumer group. You can add it under Built-in endpoints and Events: just add a custom name under $Default. Here you can see that I added ‘ruuvitagcg’:

    Next, you want to create a Stream Analytics job. You only need one streaming unit, and the hosting environment should be Cloud. There is no free tier for this, and it costs some money to keep it running. Luckily, we can turn it off whenever we don’t use it. I use an Automation Account to start it a few minutes before I receive a message and to stop it a few minutes after. There is a minor cost for the Automation Account too, but without it the total cost would be much higher. I receive a message only once per hour, so Stream Analytics runs only 96 minutes per day instead of 1,440. The total monthly cost is something like 4€; normally it would be almost 70€.

    Here are my Automation Account runbooks. The first one starts the Stream Analytics job and the second one stops it:

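    # Runbook 1: start the Stream Analytics job a few minutes before a message is expected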
    $connectionName = "RuuvitagConnection"
    $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName         
    
    Connect-AzAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
    
    Start-AzStreamAnalyticsJob -ResourceGroupName "YourRG" -Name "YourStreamAnalytics" -OutputStartMode "JobStartTime"
    
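    # Runbook 2: stop the Stream Analytics job a few minutes after the message has arrived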
    $connectionName = "RuuvitagConnection"
    $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName         
    
    Connect-AzAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
    
    Stop-AzStreamAnalyticsJob -ResourceGroupName "YourRG" -Name "YourStreamAnalytics"

    Next, head to Stream Analytics Job and click Inputs.

    Click Add stream input. Above is my configuration; you just need to use the consumer group you configured earlier. For Endpoint, choose Messaging, and use service as the Shared access policy name (it is created by default with a new IoT Hub).

    Now move on to Outputs and click Add. Choose Power BI and click Authorize; if you don’t have Power BI yet, you can sign up.

    Fill in the dataset name and table name. For Authentication mode, we need to use User token if you have the free Power BI version (the v2 upgrade is not yet possible in the free version).

    Then create the query and click Test query; you should see some results. A simple pass-through query that selects everything from your IoT Hub input into your Power BI output (SELECT * INTO [your-output-alias] FROM [your-input-alias]) is enough here:

    Now the only thing left to do is to visualize our data with Power BI. We will cover that part in the next post, but the infrastructure side is ready to rock. Grab a cup of coffee and pat yourself on the back 🙂


    Get to know the whole project by reading parts 1 and 3 of the series here and here!

    This blog text was originally published on Senior Cloud Architect Sami Lahti’s personal Tech Life blog. Follow the blog to stay up to date with Sami’s writings!

      Monitoring your home temperature – Part 1: Setting up RuuviTags and Raspberry Pi

      We moved to a new house three years ago, and since then we have had issues with floor temperatures. It’s hard to maintain a steady temperature in all the rooms, so I wanted to build a solution that keeps track of it and shows the temperature trend for a whole week. That way, I know exactly what’s happening.

      I will show you how to set up your own environment using the same method.

      First, you need temperature sensors, and for those I recommend RuuviTags. You can find detailed information on their website, but in short they are very handy Bluetooth beacons with a battery that lasts multiple years. They measure temperature, movement, humidity and air pressure. There is also a mobile app for them, but it only shows the current status, so it didn’t fit my purpose.

      So, I needed to push the data somewhere, and the obvious choice was Azure. I will write more about the Azure side of things in the next part of the blog, but in this part we will set up a single RuuviTag and a Raspberry Pi for sending data. You can add more RuuviTags later, like I did, once everything is working as expected.

      First, I recommend updating the RuuviTag to the latest firmware. This page has instructions for it:

      https://lab.ruuvi.com/dfu/

      Updating firmware to latest with nRF Connect

      I used an iOS app called nRF Connect for it, and it went quite smoothly. You can check that your RuuviTag still works after the update with the Ruuvi Station app:

      iOS or Android

      You will also need a Raspberry Pi for sending the data to the cloud. I recommend the Raspberry Pi Zero W, because we only need WiFi and Bluetooth for this. I have mine plugged in in my kitchen at the moment; it just needs to be within range of the RuuviTag’s Bluetooth signal (and of course WiFi).

      Raspberry Pi Zero W with clear plastic case

      Mine has a clear acrylic case for protection, a power supply and a memory card. Data is not saved to the Raspberry Pi’s memory card, so there is no need for a bigger card than usual. I installed Raspbian Buster because, at the time, there were issues with the Azure IoT Hub Python modules on the latest Raspbian image.

      Here is a page with instructions on how to install a Raspbian image to an SD card:

      https://www.raspberrypi.org/documentation/installation/installing-images/

      After you have installed the image, boot up your Raspberry Pi and do the basic housekeeping: update it, change the password and so on.
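      On Raspbian, those basics might look something like the sketch below (just an example; raspi-config also lets you change the password and other settings):

      # Update the package lists and installed packages
      sudo apt-get update && sudo apt-get upgrade -y
      # Change the default password for the pi user
      passwd
      # Optional: other settings (locale, WiFi country, SSH, ...)
      sudo raspi-config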

      You can find out how to configure your Raspberry Pi here:

      https://www.raspberrypi.org/documentation/configuration/

      After you have done all that, you need to set up the Python script and modules. Two modules are needed: the RuuviTag sensor module (ruuvitag_sensor) and the Azure IoT Hub client (azure-iot-device). You also need the MAC address of your RuuviTag; you can find it in the Ruuvi Station app under your RuuviTag’s settings.
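      A minimal sketch of the installation with pip for Python 3 (the ruuvitag_sensor module also relies on the bluez tools on Raspbian, as described in its documentation):

      # Bluetooth tools used by ruuvitag_sensor
      sudo apt-get install -y bluez bluez-hcidump
      # The two Python modules
      pip3 install ruuvitag_sensor azure-iot-device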

      Then you need to create a Python script that reads the temperature data; here is my script:

      import asyncio
      import os
      from azure.iot.device.aio import IoTHubDeviceClient
      from ruuvitag_sensor.ruuvitag import RuuviTag
      
      #Replace 'xx:xx:xx:xx:xx:xx' with your RuuviTags MAC
      sensor = RuuviTag('xx:xx:xx:xx:xx:xx')
      state = sensor.update()
      state = str(sensor.state)
      
      
      async def main():
          # Fetch the connection string from an environment variable
          conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
          
          # Create instance of the device client using the connection string
          device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
      
          # Send a single message
          try:
            print("Sending message...")
            await device_client.send_message(state)
            print("Message successfully sent!")
          
          except Exception:
            print("Message sending failed!")
      
          # finally, disconnect
          await device_client.disconnect()
      
      
      if __name__ == "__main__":
          asyncio.run(main())

      The code gets the current readings from the RuuviTag and sends them to Azure IoT Hub. You can see that it needs the IOTHUB_DEVICE_CONNECTION_STRING environment variable; you don’t have that yet, so we will set it up later.

      You can run this script from crontab, for example every 15 minutes or whatever suits your needs, as in the sketch below.
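      A minimal crontab sketch, assuming the script is saved as /home/pi/ruuvitag_azure.py (a placeholder path) and with the connection string as a placeholder. The variable is defined in the crontab itself because cron does not read your shell profile:

      # Edit the pi user's crontab with: crontab -e
      IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"
      # Run the sender script every 15 minutes
      */15 * * * * python3 /home/pi/ruuvitag_azure.py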

      Next time, we will set up the Azure side…

      Links:

      https://lab.ruuvi.com/dfu/ (RuuviTag firmware)

      https://github.com/ttu/ruuvitag-sensor (RuuviTag module)

      https://github.com/Azure/azure-iot-sdk-python (Azure IoT module)

      https://www.raspberrypi.org/ (Raspberry Pi)


      Get to know the whole project by reading parts 2 and 3 of the series here and here.

      This blog text was originally published on Senior Cloud Architect Sami Lahti’s personal Tech Life blog. Follow the blog to stay up to date with Sami’s writings!


        Lambda layers for Python runtime

        AWS Lambda

        AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events like DynamoDB, SNS or HTTP triggers without provisioning or managing any infrastructure. Lambda takes care of most of the things required to run your code and provides high availability. It allows you to execute up to 1,000 parallel functions at once! Using AWS Lambda you can build applications like:

        • Web APIs
        • Data processing pipelines
        • IoT applications
        • Mobile backends
        • and many many more…

        Creating an AWS Lambda function is super simple: you just need to create a zip file with your code and its dependencies and upload it to an S3 bucket. There are also frameworks like Serverless or SAM that handle deploying AWS Lambda for you, so you don’t have to create and upload the zip file manually. A minimal version of that workflow is sketched below.
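        Just as an illustration of that "zip everything together" flow (the function name, bucket-free upload and file names are placeholders, and this assumes the function already exists):

        # Install dependencies next to the handler and zip everything together
        pip install -r requirements.txt -t build/
        cp handler.py build/
        (cd build && zip -r ../function.zip .)
        # Upload the package directly to the existing function
        aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip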

        There is, however, one problem.

        You have created a simple function which depends on a large number of other packages, and AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

        AWS Lambda Layers

        Fast forward four years: at re:Invent 2018, AWS Lambda Layers were released. This feature allows you to centrally store and manage code and data that is shared across different functions, in a single AWS account or even across multiple accounts! It solves a number of issues, for example:

        • You do not have to upload dependencies on every change of your code. Just create an additional layer with all the required packages.
        • You can create a custom runtime that supports any programming language.
        • You can adjust the default runtime by adding data required by your team. For example, imagine a team of cloud architects who build CloudFormation templates using the troposphere library. However, they are not developers and do not know how to manage Python dependencies… With an AWS Lambda layer you can create a custom environment with all the required packages so that they can code directly in the AWS console.

        But how does the layer work?

        When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers, and the order is really important, because layers with a higher order can override files from previously mounted layers. When using the Python runtime you do not need to do anything extra in your code; just import the library in the standard way. But how will my Python code know where to find the data?

        That’s super simple: /opt/bin is added to the $PATH environment variable. To check this, let’s create a very simple Python function:

        
        import os
        def lambda_handler(event, context):
            path = os.popen("echo $PATH").read()
            return {'path': path}
        

        The response is:

         
        {
            "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
        }
        

         

        Existing pre-defined layers

        AWS Lambda Layers were released together with a single, publicly accessible layer for data processing containing two libraries: NumPy and SciPy. Once you have created your Lambda you can click `Add a layer` in the Lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer you can use these libraries in your code. Let’s do a simple test:

        
        import numpy as np
        import json
        
        
        def lambda_handler(event, context):
            matrix = np.random.randint(6, size=(2, 2))
            
            return {
                'matrix': json.dumps(matrix.tolist())
            }
        

        The function response is:

        {
          "matrix": "[[2, 1], [4, 2]]"
        }
        

         

        As you can see it works without any effort.

        What’s inside?

        Now let’s check what is in the pre-defined layer. To check the mounted layer’s content, I prepared a simple script:

        
        import os
        def lambda_handler(event, context):
            directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
            return {
                'directories': directories
            }
        

        In the function response you will receive the list of directories that exist in the /opt directory:

        
        {
          "directories": [
            "/opt/python",
            "/opt/python/lib",
            "/opt/python/lib/python3.6",
            "/opt/python/lib/python3.6/site-packages",
            "/opt/python/lib/python3.6/site-packages/numpy",
            "/opt/python/lib/python3.6/site-packages/numpy-1.15.4.dist-info",
            "/opt/python/lib/python3.6/site-packages/scipy",
            "/opt/python/lib/python3.6/site-packages/scipy-1.1.0.dist-info"
          ]
        }
        

        OK, so it contains Python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.

        Create Your own layer!

        Our use case is to create an environment for our cloud architects to easily build CloudFormation templates using the troposphere and awacs libraries. The steps are described below.

        Create a virtual env and install dependencies

        To manage the Python dependencies we will use pipenv.

        Let’s create a new virtual environment and install all the required libraries there:

        
        pipenv --python 3.6
        pipenv shell
        pipenv install troposphere
        pipenv install awacs
        

        It should result in the following Pipfile:

        
        [[source]]
        url = "https://pypi.org/simple"
        verify_ssl = true
        name = "pypi"
        [packages]
        troposphere = "*"
        awacs = "*"
        [dev-packages]
        [requires]
        python_version = "3.6"
        

        Build a deployment package

        All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using the ls command:

         
        ls $VIRTUAL_ENV
        

        Now let’s prepare a simple script that creates a zipped deployment package:

        
        PY_DIR='build/python/lib/python3.6/site-packages'
        mkdir -p $PY_DIR                                              #Create temporary build directory
        pipenv lock -r > requirements.txt                             #Generate requirements file
        pip install -r requirements.txt --no-deps -t $PY_DIR     #Install packages into the target directory
        cd build
        zip -r ../tropo_layer.zip .                                  #Zip files
        cd ..
        rm -r build                                                   #Remove temporary directory
        
        

        When you execute this script, it creates a zipped package that you can upload as a Lambda layer.

         

        Create a layer and a test AWS function

        You can create a custom layer and an AWS Lambda function by clicking around in the AWS console. However, real experts use the CLI (Lambda Layers are a new feature, so you have to update your awscli to the latest version). One way to do that is sketched below.
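        If you installed the AWS CLI with pip, the upgrade could look like this (adjust to however you installed the CLI):

        # Upgrade the CLI and confirm the installed version
        pip install --upgrade awscli
        aws --version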

        To publish a new Lambda layer you can use the following command (my zip file is named tropo_layer.zip):

        
        aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
        

        As the response, you should receive the layer ARN and some other data:

        
        {
            "Content": {
                "CodeSize": 14909144,
                "CodeSha256": "qUz...",
                "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
            },
            "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
            "Version": 1,
            "Description": "",
            "CreatedDate": "2018-12-01T22:07:32.626+0000",
            "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
        }
        

        The next step is to create the AWS Lambda function. Your Lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

         
        from troposphere import Ref, Template
        import troposphere.ec2 as ec2
        import json
        def lambda_handler(event, context):
            t = Template()
            instance = ec2.Instance("myinstance")
            instance.ImageId = "ami-951945d0"
            instance.InstanceType = "t1.micro"
            t.add_resource(instance)
            return {"data": json.loads(t.to_json())}
        

        Now we have to create a zipped package that contains only our function:

        
        zip tropo_lambda.zip handler.py
        

        And create a new Lambda using this file (I used an IAM role that already exists in my account; if you do not have any role that you can use, you have to create one before creating the Lambda):

        
        aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
        --handler handler.lambda_handler \
        --role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
        --zip-file fileb://tropo_lambda.zip
        

        In the response, you should get the newly created lambda details:

        
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "CodeSha256": "l...",
            "FunctionName": "tropo_function_test",
            "CodeSize": 356,
            "RevisionId": "...",
            "MemorySize": 128,
            "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
            "Version": "$LATEST",
            "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
            "Timeout": 3,
            "LastModified": "2018-12-01T22:22:43.665+0000",
            "Handler": "handler.lambda_handler",
            "Runtime": "python3.6",
            "Description": ""
        }
        

        Now let’s try to invoke our function:

        
        aws lambda invoke --function-name tropo_function_test --payload '{}' output
        cat output
        {"errorMessage": "Unable to import module 'handler'"}
        
        

        Oh no… It doesn’t work. In CloudWatch you can find the detailed log message: `Unable to import module ‘handler’: No module named ‘troposphere’`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:

        
        aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1
        

        When you invoke the Lambda again, you should get the correct response:

        
        {
          "data": {
            "Resources": {
              "myinstance": {
                "Properties": {
                  "ImageId": "ami-951945d0",
                  "InstanceType": "t1.micro"
                },
                "Type": "AWS::EC2::Instance"
              }
            }
          }
        }
        

        Add a local library to your layer

        We already know how to create a custom layer with Python dependencies, but what if we want to include our own local code? The simplest solution is to manually copy your local files into the python/lib/python3.6/site-packages directory of the layer package.

        First, let’s prepare the test module that will be pushed to the layer:

        
        $ find local_module
        local_module
        local_module/__init__.py
        local_module/echo.py
        $ cat local_module/echo.py
        def echo_hello():
            return "hello world!"
        

        To manually copy your local module to the correct path, you just need to add the following line to the previously used script (before zipping the package):

        
        cp -r local_module 'build/python/lib/python3.6/site-packages'
        

        This works; however, we strongly advise turning your local library into a pip module and installing it in the standard way.

        Update Lambda layer

        To update a Lambda layer, you run the same command you used to create a new layer:

        
        aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
        

        The request should return LayerVersionArn with incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

        Now update lambda configuration with the new layer version:

         
        aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2
        
        

        Now you should be able to import local_module in your code and use the echo_hello function.

         

        Serverless framework Layers support

        Serverless is a framework that helps you build applications based on AWS Lambda. It already supports deploying and using Lambda Layers. The configuration is really simple: in the serverless.yml file, you provide the path to the layer location on your disk (it has to be a path to a directory; you cannot use a zipped package, as zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda layer, or deploy it together with your application.

        We’ll show the second approach. However, if you want to benefit from all the advantages of Lambda Layers, you should deploy them separately.

        
        service: tropoLayer
        package:
          individually: true
        provider:
          name: aws
          runtime: python3.6
        layers:
          tropoLayer:
            path: build             # Build directory contains all python dependencies
            compatibleRuntimes:     # supported runtime
              - python3.6
        functions:
          tropo_test:
            handler: handler.lambda_handler
            package:
              exclude:
               - node_modules/**
               - build/**
            layers:
              - {Ref: TropoLayerLambdaLayer } # Ref to the created layer. You have to append the 'LambdaLayer' string to the end of the layer name to make it work
        

        I used the following script to create a build directory with all the Python dependencies:

        
        PY_DIR='build/python/lib/python3.6/site-packages'
        mkdir -p $PY_DIR                                              #Create temporary build directory
        pipenv lock -r > requirements.txt                             #Generate requirements file
        pip install -r requirements.txt -t $PY_DIR                   #Install packages into the target directory
        

        This example packages the Lambda layer with its dependencies and your Lambda handler individually. The funny thing is that you have to convert your layer name to TitleCase and add the `LambdaLayer` suffix if you want to refer to that resource.

        Deploy your lambda together with the layer, and test if it works:

        
        sls deploy -v --region eu-central-1
        sls invoke -f tropo_test --region eu-central-1
        

        Summary

        It was a lot of fun to test Lambda Layers and investigate how they work technically. We will surely use them in our projects.

        In my opinion, AWS Lambda Layers are a really great feature that solves a lot of common issues in the serverless world. Of course, they are not suitable for all use cases. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because then you do not need to manage additional layers.

        Read more on AWS Lambda in our blog!

        Notes from AWS re:Invent 2018 – Lambda@edge optimisation

        Running AWS Lambda@Edge code in edge locations

        Amazon SQS as a Lambda event source
