Counting Faces with AWS DeepLens and IoT Analytics


It’s pretty easy to detect faces with AWS DeepLens. Amazon provides a pre-trained machine learning model for face detection so you won’t have to deal with any low-level algorithms or training data. You just deploy the ML model and a Lambda function to your DeepLens device and it starts automatically sending data to the cloud.

In the cloud you can leverage AWS IoT and IoT Analytics to collect and process the data received from DeepLens. No programming is needed. All you need to do is orchestrate the services to work together and enter one SQL query that calculates daily averages of the faces seen.

Connecting DeepLens to the cloud

We’ll assume that you have been able to obtain a DeepLens device. They are currently only being sold in the US, so if you live in another country, you may need to get creative.

Before you can do anything with your DeepLens, you must connect it to the Amazon cloud. You can do this by opening the DeepLens service in AWS Console and following the instructions to register your device. We won’t go through the details here since AWS already provides pretty good setup instructions.

Deploying a DeepLens project

To deploy a machine learning application on DeepLens, you need to create a project. Amazon provides a sample project template for face detection. When you create a DeepLens project based on this template, AWS automatically creates a Lambda function and attaches the pre-trained face detection machine learning model to the project.

The default face detection model is based on MXNet. You can also import your own machine learning models developed with TensorFlow, Caffe and other deep learning frameworks. You’ll be able to train these models with the AWS SageMaker service or using a custom solution. For now, you can just stick with the pre-trained model to get your first application running.

Once the project has been created, you can deploy it to your DeepLens device.  DeepLens can run only one project at a time, so your device will be dedicated to running just one machine learning model and Lambda function continuously.

After a successful deployment, you will start receiving AWS IoT MQTT messages from the device. The sample application sends messages continuously, even if no faces are detected.

You probably want to optimize the Lambda function by adding an “if” clause so it only sends messages when one or more faces are actually detected. Otherwise you’ll be sending empty data every second. This is fairly easy to change in the Python code; a rough sketch follows below.
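The sketch below only illustrates the idea: the helper name, topic variable and detection output format are assumptions, not the exact names used in Amazon’s sample code.

import os
import json
import greengrasssdk

# Greengrass provides a local client for publishing MQTT messages to the cloud
client = greengrasssdk.client('iot-data')
iot_topic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])

def publish_detections(detections, threshold=0.5):
    # Keep only detections above the confidence threshold
    faces = [d for d in detections if d.get('prob', 0) > threshold]
    # Only publish a message when at least one face was actually detected
    if faces:
        client.publish(topic=iot_topic, payload=json.dumps({'faces': len(faces)}))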

At this point, take note of your DeepLens infer topic. You can find the topic by going to the DeepLens Console and finding the Project Output view under your Device. Use the Copy button to copy it to your clipboard.

Setting up AWS IoT Analytics

You can now set up AWS IoT Analytics to process your application data. Keep in mind that because DeepLens currently only works in the Northern Virginia region (us-east-1), you also need to create your AWS IoT Analytics resources in this region.

First you’ll need to create a Channel. You can choose any Channel ID and keep most of the settings at their defaults.

When you’re asked for the IoT Core topic filter, paste the topic you copied earlier from the Project Output view. Also, use the Create new IAM role button to automatically create the necessary role for this application.

Next you’ll create a Pipeline. Select the previously created Channel and choose Actions / Create a pipeline from this channel.

AWS Console will ask you to select some Attributes for the pipeline, but you can ignore them for now and leave the Pipeline activities empty. These activities can be used to preprocess messages before they enter the Data Store. For now, we just want the messages to be passed through as they are.

At the end of the pipeline creation, you’ll be asked to create a Data Store to use as the pipeline’s output. Go ahead and create it with the default settings and choose any name for it.

Once the Pipeline and the Data Store have been created, you will have a fully functional AWS IoT Analytics application. The Channel will start receiving incoming DeepLens messages from the IoT topic and sending them through the Pipeline to the Data Store.

The Data Store is basically a database that you can query using SQL. We will get back to that in a moment.
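If you prefer to script this setup instead of clicking through the console, the same three resources can also be created with the AWS SDK. This is a minimal boto3 sketch, assuming the Data Store name used later in this article; the channel and pipeline names are illustrative, and the IAM role and IoT rule wiring are left out:

import boto3

iota = boto3.client('iotanalytics', region_name='us-east-1')

iota.create_channel(channelName='deeplensfaces_channel')
iota.create_datastore(datastoreName='deeplensfaces')
iota.create_pipeline(
    pipelineName='deeplensfaces_pipeline',
    pipelineActivities=[
        # Pass messages straight from the channel to the data store
        {'channel': {'name': 'source', 'channelName': 'deeplensfaces_channel',
                     'next': 'store'}},
        {'datastore': {'name': 'store', 'datastoreName': 'deeplensfaces'}},
    ],
)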

Reviewing the auto-created AWS IoT Rule

At this point it’s a good idea to take a look at the AWS IoT Rule that AWS IoT Analytics created automatically for the Channel you created.

You will find IoT Rules in the AWS IoT Core Console under the Act tab. The rule will have one automatically created IoT Action, which forwards all messages to the IoT Analytics Channel you created.

Querying data with AWS IoT Analytics

You can now proceed to create a Data Set in IoT Analytics. The Data Set will execute a SQL query over the data in the Data Store you created earlier.

Find your way to the Analyze / Data sets section in the IoT Analytics Console. Select Create and then Create SQL.

The console will ask you to enter an ID for the Data Set. You’ll also need to select the Data Store you created earlier to use as the data source.

The console will then ask you to enter this SQL query:

SELECT DATE_TRUNC('day', __dt) AS Day, COUNT(*) AS Faces
FROM deeplensfaces
GROUP BY DATE_TRUNC('day', __dt)
ORDER BY DATE_TRUNC('day', __dt) DESC

Note that “deeplensfaces” is the name of the Data Store you created earlier. Make sure you use the same name consistently. Our screenshots may have different identifiers.

The Data selection window can be left at None.

Use the Frequency setting to set up a schedule for your SQL query. Select Daily so that the SQL query will run automatically every day and replace the previous results in the Data Set.

Finally, use Actions / Run Now to execute the query. You will see a preview of the current face count results, aggregated as daily total sums. These results will be automatically updated every day according to the schedule you defined.

Accessing the Data Set from applications

Congratulations! You now have IoT Analytics all set up and it will automatically refresh the face counts every day.

To access the face counts from your own applications, you can write a Lambda function and use the AWS SDK to retrieve the current Data Set content. This example uses Node.js:

const AWS = require('aws-sdk')
const iotanalytics = new AWS.IoTAnalytics()

iotanalytics.getDatasetContent({
  datasetName: 'deeplensfaces',
}).promise().then(function (response) {
  // Each entry in response.entries carries a signed dataURI
  // pointing to the CSV results; download and process it here
})

The response entries contain signed dataURI links that point to the actual results in CSV format, stored in S3. Once you download the content, you can do whatever you wish with the CSV data.
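The same lookup can also be done in Python. Below is a minimal sketch, assuming the Data Set is named deeplensfaces and already has at least one content entry; the column names simply follow the aliases used in the SQL query:

import csv
import io
import urllib.request

import boto3

# Fetch the latest Data Set content and parse the CSV results
client = boto3.client('iotanalytics', region_name='us-east-1')
content = client.get_dataset_content(datasetName='deeplensfaces')

# Each entry contains a pre-signed URL pointing to the CSV file in S3
data_uri = content['entries'][0]['dataURI']
with urllib.request.urlopen(data_uri) as response:
    rows = list(csv.DictReader(io.TextIOWrapper(response, encoding='utf-8')))

for row in rows:
    print(row)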

Conclusion

This has been a brief look at how to use DeepLens and IoT Analytics to count the number of faces detected by the DeepLens camera.

There’s still room for improvement. Amazon’s default face detection model detects faces in every video frame, but it doesn’t keep track of whether the same face has already been seen in previous frames.

It gets a little more complicated to enhance the system to detect individual persons, or to keep track of faces entering and exiting frames. We’ll leave all that as an exercise for now.

If you’d like some help in developing machine learning applications, please feel free to contact us.


    Lambda layers for Python runtime


    AWS Lambda

    AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events such as DynamoDB streams, SNS notifications or HTTP requests, without provisioning or managing any infrastructure. Lambda takes care of most of the things required to run your code and provides high availability. It allows you to execute up to 1000 parallel function invocations at once! Using AWS Lambda you can build applications like:

    • Web APIs
    • Data processing pipelines
    • IoT applications
    • Mobile backends
    • and many many more…

    Creating an AWS Lambda function is super simple: you just need to create a zip file with your code and dependencies, then upload it to an S3 bucket. There are also frameworks like Serverless or SAM that handle deploying AWS Lambda for you, so you don’t have to manually create and upload the zip file.

    There is, however, one problem.

    You have created a simple function which depends on a large number of other packages. AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

    AWS Lambda Layers

    Fast forward four years to re:Invent 2018, where AWS Lambda Layers were released. This feature allows you to centrally store and manage data that is shared across different functions, within a single AWS account or even across multiple accounts! It solves a number of issues:

    • You do not have to upload dependencies on every change to your code. Just create an additional layer with all required packages.
    • You can create a custom runtime that supports any programming language.
    • You can adjust the default runtime by adding data required by your employees. For example, say there is a team of Cloud Architects that builds CloudFormation templates using the troposphere library, but they are not developers and do not know how to manage Python dependencies… With an AWS Lambda layer you can create a custom environment with everything they need, so they can code directly in the AWS console.

    But how does the layer work?

    When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers. The order is really important, because layers with a higher order can override files from previously mounted layers. When using the Python runtime you do not need to do anything extra in your code; just import the library in the standard way. But how will my Python code know where to find the data?

    That’s super simple: /opt/bin is added to the $PATH environment variable. To check this, let’s create a very simple Python function:

    
    import os
    def lambda_handler(event, context):
        path = os.popen("echo $PATH").read()
        return {'path': path}
    

    The response is:

     
    {
        "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
    }
    

     

    Existing pre-defined layers

    AWS Lambda Layers were released together with a single, publicly accessible layer for data processing containing 2 libraries: NumPy and SciPy. Once you have created your Lambda you can click `Add a layer` in the Lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer you can use these libraries in your code. Let’s do a simple test:

    
    import numpy as np
    import json
    
    
    def lambda_handler(event, context):
        matrix = np.random.randint(6, size=(2, 2))
        
        return {
            'matrix': json.dumps(matrix.tolist())
        }
    

    The function response is:

    {
      "matrix": "[[2, 1], [4, 2]]"
    }
    

     

    As you can see it works without any effort.

    What’s inside?

    Now let’s check what is in the pre-defined layer. To check the mounted layer content I prepared a simple script:

    
    import os
    def lambda_handler(event, context):
        directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
        return {
            'directories': directories
        }
    

    In the function response you will receive the list of directories that exist in the /opt directory:

    
    {
      "directories": [
        "/opt/python",
        "/opt/python/lib",
        "/opt/python/lib/python3.6",
        "/opt/python/lib/python3.6/site-packages",
        "/opt/python/lib/python3.6/site-packages/numpy",
        "/opt/python/lib/python3.6/site-packages/numpy-1.15.4.dist-info",
        "/opt/python/lib/python3.6/site-packages/scipy",
        "/opt/python/lib/python3.6/site-packages/scipy-1.1.0.dist-info"
      ]
    }
    

    Ok, so it contains python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.

    Create Your own layer!

    Our use case is to create an environment for our Cloud Architects to easily build CloudFormation templates using the troposphere and awacs libraries. The steps are described below.

    Create a virtual env and install dependencies

    To manage the python dependencies we will use pipenv.

    Let’s create a new virtual environment and install there all required libraries:

    
    pipenv --python 3.6
    pipenv shell
    pipenv install troposphere
    pipenv install awacs
    

    It should result in the following Pipfile:

    
    [[source]]
    url = "https://pypi.org/simple"
    verify_ssl = true
    name = "pypi"
    [packages]
    troposphere = "*"
    awacs = "*"
    [dev-packages]
    [requires]
    python_version = "3.6"
    

    Build a deployment package

    All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using the ls command:

     
    ls $VIRTUAL_ENV
    

    Now let’s prepare a simple script that creates a zipped deployment package:

    
    PY_DIR='build/python/lib/python3.6/site-packages'
    mkdir -p $PY_DIR                                              #Create temporary build directory
    pipenv lock -r > requirements.txt                             #Generate requirements file
    pip install -r requirements.txt --no-deps -t $PY_DIR     #Install packages into the target directory
    cd build
    zip -r ../tropo_layer.zip .                                  #Zip files
    cd ..
    rm -r build                                                   #Remove temporary directory
    
    

    When you execute this script it will create a zipped package that you can upload to AWS Layer.

     

    Create a layer and a test AWS function

    You can create a custom layer and an AWS Lambda function by clicking around in the AWS console. However, real experts use the CLI. (Lambda Layers are a new feature, so you have to update your awscli to the latest version.)

    To publish a new Lambda Layer you can use the following command (my zip file is named tropo_layer.zip):

    
    aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
    

    In the response, you should receive the layer ARN and some other data:

    
    {
        "Content": {
            "CodeSize": 14909144,
            "CodeSha256": "qUz...",
            "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
        },
        "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
        "Version": 1,
        "Description": "",
        "CreatedDate": "2018-12-01T22:07:32.626+0000",
        "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
    }
    

    The next step is to create the AWS Lambda function. Your Lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

     
    from troposphere import Ref, Template
    import troposphere.ec2 as ec2
    import json
    def lambda_handler(event, context):
        t = Template()
        instance = ec2.Instance("myinstance")
        instance.ImageId = "ami-951945d0"
        instance.InstanceType = "t1.micro"
        t.add_resource(instance)
        return {"data": json.loads(t.to_json())}
    

    Now we have to create a zipped package that contains only our function:

    
    zip tropo_lambda.zip handler.py
    

    And create a new Lambda using this file (I used an IAM role that already exists in my account; if you do not have a suitable role, you have to create one before creating the Lambda function):

    
    aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
    --handler handler.lambda_handler \
    --role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
    --zip-file fileb://tropo_lambda.zip
    

    In the response, you should get the newly created lambda details:

    
    {
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "CodeSha256": "l...",
        "FunctionName": "tropo_function_test",
        "CodeSize": 356,
        "RevisionId": "...",
        "MemorySize": 128,
        "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
        "Version": "$LATEST",
        "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
        "Timeout": 3,
        "LastModified": "2018-12-01T22:22:43.665+0000",
        "Handler": "handler.lambda_handler",
        "Runtime": "python3.6",
        "Description": ""
    }
    

    Now let’s try to invoke our function:

    
    aws lambda invoke --function-name tropo_function_test --payload '{}' output
    cat output
    {"errorMessage": "Unable to import module 'handler'"}
    
    

    Oh no… It doesn’t work. In CloudWatch you can find a detailed log message: `Unable to import module 'handler': No module named 'troposphere'`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:

    
    aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1
    

    When you invoke the Lambda again, you should get the correct response:

    
    {
      "data": {
        "Resources": {
          "myinstance": {
            "Properties": {
              "ImageId": "ami-951945d0",
              "InstanceType": "t1.micro"
            },
            "Type": "AWS::EC2::Instance"
          }
        }
      }
    }
    

    Add a local library to your layer

    We already know how to create a custom layer with python dependencies, but what if we want to include our local code? The simplest solution is to manually copy your local files to the /python/lib/python3.6/site-packages directory.

    First, let’s prepare the test module that will be pushed to the layer:

    
    $ find local_module
    local_module
    local_module/__init__.py
    local_module/echo.py
    $ cat local_module/echo.py
    def echo_hello():
        return "hello world!"
    

    To manually copy your local module to the correct path you just need to add the following line to the previously used script (before zipping the package):

    
    cp -r local_module 'build/python/lib/python3.6/site-packages'
    

    This works; however, we strongly advise transforming your local library into a pip module and installing it in the standard way.

    Update Lambda layer

    To update the Lambda layer you have to run the same command you used before to create the layer:

    
    aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
    

    The request should return a LayerVersionArn with an incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

    Now update the Lambda configuration with the new layer version:

     
    aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2
    
    

    Now you should be able to import local_module in your code and use the echo_hello function.
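    For example, a test handler along these lines should now work once the new layer version is attached; it simply calls the module we packaged above:

    from local_module import echo

    def lambda_handler(event, context):
        # echo_hello() is provided by the local module shipped inside the layer
        return {'message': echo.echo_hello()}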

     

    Serverless framework Layers support

    Serverless is a framework that helps you build applications based on the AWS Lambda service. It already supports deploying and using Lambda Layers. The configuration is really simple – in the serverless.yml file, you provide the path to the layer location on your disk (it has to be a path to a directory – you cannot use a zipped package; zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda Layer or deploy it together with your application.

    We’ll show the second approach here. However, if you want to benefit from all the advantages of Lambda Layers, you should deploy the layer separately.

    
    service: tropoLayer
    package:
      individually: true
    provider:
      name: aws
      runtime: python3.6
    layers:
      tropoLayer:
        path: build             # Build directory contains all python dependencies
        compatibleRuntimes:     # supported runtime
          - python3.6
    functions:
      tropo_test:
        handler: handler.lambda_handler
        package:
          exclude:
           - node_modules/**
           - build/**
        layers:
          - {Ref: TropoLayerLambdaLayer } # Ref to the created layer. You have to append the 'LambdaLayer' suffix to the layer name to make it work
    

    I used the following script to create a build directory with all the python dependencies:

    
    PY_DIR='build/python/lib/python3.6/site-packages'
    mkdir -p $PY_DIR                                              #Create temporary build directory
    pipenv lock -r > requirements.txt                             #Generate requirements file
    pip install -r requirements.txt -t $PY_DIR                   #Install packages into the target directory
    

    This example packages the Lambda Layer (with its dependencies) and your Lambda handler individually. The funny thing is that you have to convert your Lambda layer name to TitleCase and add the `LambdaLayer` suffix if you want to refer to that resource.

    Deploy your lambda together with the layer, and test if it works:

    
    sls deploy -v --region eu-central-1
    sls invoke -f tropo_test --region eu-central-1
    

    Summary

    It was a lot of fun to test Lambda Layers and investigate how they work technically. We will surely use them in our projects.

    In my opinion, AWS Lambda Layers is a really great feature that solves a lot of common issues in the serverless world. Of course, it is not suitable for every use case. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because then you do not need to manage additional layers.

    Read more on AWS Lambda in our blog!

    Notes from AWS re:Invent 2018 – Lambda@edge optimisation

    Running AWS Lambda@Edge code in edge locations

    Amazon SQS as a Lambda event source


      What is Amazon FreeRTOS and why should you care?


      At Nordcloud, we’ve been working with AWS IoT since Amazon released it

      We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.

       

      In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions

      First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


      What are microcontrollers?

      If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

      Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud based and edge based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out-of-the-box when you build your application on top of Amazon FreeRTOS.

      Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

      Use cases and scenarios

      Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

      Edge computing scenarios might require a more powerful, Linux based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communications protocols to your system.

      In general, microcontroller based applications have the benefit of being much more simple than Linux based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than a full operating system installation.

      How to try it out

      If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

      At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

      • STM32L4 Discovery Kit
      • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

      ST provides their own SystemWorkBench Ac6 IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

      Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

      Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.

      Conclusion

      We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

      If you need any help in planning and implementing your IoT solutions, feel free to contact us.


        Leveraging AWS Greengrass for Edge IoT Solutions


        There is a growing demand for intelligent edge solutions that not only collect data, but also control on-premise equipment at industrial customer sites. Historically such solutions have often been based on low-level custom firmware that has required technical specialists to develop and maintain.

        AWS Greengrass has significantly lowered the barrier for edge IoT development by extending familiar cloud technologies to the edge. Cloud architects and cloud application developers can use their existing knowledge of serverless development and programming languages they already master. In many cases the same exact code can be run both in the cloud and at the edge as a Greengrass Lambda application. This has proven very useful for use cases like KPI algorithms and diagnostic logic that need to be executed both centrally in the cloud and in distributed fashion on the equipment located at the edge.

        Building blocks for IoT

        It’s important to keep in mind that Amazon usually offers the building blocks for making applications, not the actual end-user applications. This also applies to Greengrass and AWS IoT in general. You get an extensive set of features for building IoT applications, but you still need to put them together into an application that solves the business case requirements. Amazon calls this eliminating the “undifferentiated heavy lifting”. Application developers don’t have to deal with low level issues like scaling databases or designing communication protocols which have already been solved in general. Instead they can focus on implementing the business-specific features and logic relevant to the use case.

        In fact, as the AWS IoT platform has evolved in recent years, the need for custom databases has been almost completely eliminated. AWS IoT Device Management provides a flexible way to organize IoT devices into groups and hierarchies. Custom metadata can be attached to the devices, enabling indexing and searching. You no longer start a project by designing database tables from scratch, but instead you first look at what AWS IoT already offers you out-of-the-box.

        The same principle applies to business logic. In many cases there is no need to write custom code, because AWS IoT’s MQTT based messaging platform offers simpler ways to filter, route and process data. This is particularly important for datalake solutions, because the amount of data processed can be quite large. If you can completely omit custom code, you don’t have to worry about scaling it. The best datalake solutions simply connect a few services like AWS IoT, Kinesis Firehose and Amazon S3 together, and the data is automatically collected into S3 buckets regardless of its size and bandwidth.

        Business logic at the Edge

        In the case of Greengrass edge solutions you still usually need Lambda functions to implement business logic. Greengrass contains functionality for topic-based MQTT routing, but to process the contents of MQTT messages, some code is needed. However, the implementation can be just a few lines of code to execute the required algorithm as a Lambda function. Developers don’t have to worry about building containers, opening network connections or configuring security settings. Greengrass takes care of all the details of deploying the Lambda function.
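        As an illustration, such an edge function can be very small. The sketch below is only an example under assumed names: the alert topic and the threshold are invented, and the Greengrass subscriptions that route MQTT messages to and from the function are configured separately in the Greengrass group.

        import json
        import greengrasssdk

        # Greengrass provides a local 'iot-data' client for publishing MQTT messages
        client = greengrasssdk.client('iot-data')

        def lambda_handler(event, context):
            # 'event' is the payload of the MQTT message routed to this function
            temperature = event.get('temperature')
            if temperature is not None and temperature > 80:
                client.publish(
                    topic='alerts/overheat',
                    payload=json.dumps({'temperature': temperature}))
            return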

        It’s worth noting though that larger customers usually prefer to build a customized management system on top of AWS IoT and Greengrass. There are lots of exposed details and moving parts when dealing with “raw” AWS IoT devices and Greengrass deployments. When a lightweight business-specific management layer is built on top of them, end-users can deal with familiar concepts and ignore most unnecessary details. Power users can still access the underlying technologies simply by using the AWS Console.


          Cloud Computing News #4: IoT in the Cloud


          This week we focus on IoT in the cloud.

           

          AWS IOT platform is great for startups

          IoT for all lists 7 reasons why startup companies like iRobot, GoPro, and Under Armour have chosen the AWS IoT platform:

          1. Starting with AWS IoT is easy: The AWS IoT platform connects IoT devices to the cloud and allows them to securely interact with each other and various IoT applications.
          2. High IoT security: Amazon doesn’t spare resources to protect its customers’ data, devices, and communication.
          3. AWS cherishes and cultivates startup culture: AWS has helped multiple IoT startups get off the ground, and startups are a valuable category of Amazon’s target audience.
          4. Serverless approach and AWS Lambda are right for startups: startups can reduce the cost of building prototypes and add agility to the development process, as well as build a highly customizable and flexible serverless back end that is highly automated.
          5. AWS IoT Analytics paired with AI and Machine Learning: AWS IoT Analytics and Amazon Kinesis Analytics answer the high demand for data analytics capabilities in IoT.
          6. Amazon partners with a broad network of IoT device manufacturers, IoT device startups, and IoT software providers.
          7. The range of AWS products and services: the top provider of cloud services has a range of solutions tailored for major customer categories, including startups.

          Read more in IoT for all

           

          IoT – 5 predictions for 2019 and their impact

          Forbes makes five IoT predictions for 2019:

          1. Growth across the board: IoT market and connectivity statistics show numbers mostly in the billions (check the article below)
          2. Manufacturing and healthcare – deeper penetration: Market analysts predict the number of connected devices in the manufacturing industry will double between 2017 and 2020.
          3. Increased security at all end points: Increase in end point security solutions to prevent data loss and give insights into network health and threat protection.
          4. Smart areas or smart neighborhoods in cities: Smart sensors around the neighborhood will record everything from walking routes and shared car use to sewage flow and temperature, 24/7.
          5. Smart cars – increased market penetration for IoT: Diagnostic information, connected apps, voice search, current traffic information, and more to come.

          Read more on these predictions in Forbes

           

          IoT is growing at an exponential rate

          According to Forbes IoT is one of the most-researched emerging markets globally. The magazine lists 10 charts on the explosive growth of IoT adoption and market.

          Here below a few teasers, check all charts in Forbes.

          1. According to Statista, by 2020, Discrete Manufacturing, Transportation & Logistics and Utilities industries are projected to spend $40B each on IoT platforms, systems, and services.
          2. McKinsey predicts the IoT market will be worth $581B for ICT-based spend alone by 2020, growing at a Compound Annual Growth Rate (CAGR) between 7 and 15%.
          3. Smart Cities (23%), Connected Industry (17%) and Connected Buildings (12%) are the top three IoT projects in progress (IoT Analytics).
          4. GE found that Industrial Internet of Things (IIoT) applications are relied on by 64% of power and energy (utilities) companies to succeed with their digital transformation initiatives.
          5. Industrial products lead all industries in IoT adoption at 45% with an additional 22% planning in 12 months, according to Forrester.
          6. Harley Davidson reduced its build-to-order cycle by a factor of 36 and grew overall profitability by 3% to 4% by shifting production to a fully IoT-enabled plant according to Deloitte.

           

          Philips is tapping into the IoT market with AWS

          According to the NetworkWorld, IDC forecasts the IoT market will reach $1.29 trillion by 2020. Philips is turning toothbrushes and MRI machines into IoT devices to tap this market and to keep patients more healthy and the machines running more smoothly.

          “We’re transforming from mainly a device-focused business to a health technology company focused on the health continuum of care and service”, says Dale Wiggins, VP and General Manager of the Philips HealthSuite Digital Platform. “By connecting our devices and modalities in the hospital or consumer environment, it provides more data that can be used to benefit our customers.”

          Philips relies on a combination of AWS services and tools, including the company’s IoT platform, Amazon CloudWatch and CloudFormation. Philips uses predictive algorithms and data analysis tools to monitor activity, identify trends and report abnormal behavior.

          Read more in NetworkWorld

           

          DATA DRIVEN SOLUTIONS AT NORDCLOUD

          Our data-driven solutions make an impact on your business, giving you better control and valuable business insight through IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level?

           

           

           


            How Amazon’s IoT platform controls things without servers


            Amazon’s IoT platform is a framework for connecting smart devices to the cloud. It aims to make the basic processes of collecting data and controlling devices as simple as possible. AWS IoT is a fully managed service, which means the customer doesn’t have to worry about configuring servers or updating operating systems. The platform simply exposes a set of APIs and automatically scales from a single device to millions of devices.

            I recently wrote an article (in Finnish) in my personal blog about using AWS IoT for home automation. AWS IoT is not exactly designed for this purpose, but if you are tech savvy enough, it can be used for it. The pricing is currently set at $5 per million messages, which lasts a long time when you’re only dealing with a couple of devices sending occasional messages.

            The home automation experiment provides a convenient context for discussing the basic concepts of AWS IoT. In the next few sections, I will refer to the elements of a simple home system that detects human presence in rooms and turns on the lights if it happens at a certain time of the day. All the devices are connected to the Amazon cloud via public Internet.

            Device Registration

            The first step in most IoT projects is to register the devices (also called “things”) into a centrally managed database. AWS IoT provides this database for free and lets you add any number of devices in it. The registration is important because each device also gets its own SSL/TLS certificate and private key, which are used for authentication and encryption. The devices can only be connected to AWS IoT by using their certificates and private keys.

            The AWS IoT device registry also works as a simple asset management database. It lets you attach attributes to devices and maintain information such as customer IDs. The device registry can later be queried based on these attribute values. For example, you can find all devices belonging to a specific customer ID. The attributes are optional, so they can just be ignored if they’re not needed.
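            As a small illustration, such a registry query can be made with the AWS SDK; the attribute name and value below are only examples:

            import boto3

            # Find all registered things that carry a given customer ID attribute
            iot = boto3.client('iot')
            things = iot.list_things(attributeName='customerId', attributeValue='customer-123')

            for thing in things['things']:
                print(thing['thingName'], thing.get('attributes', {}))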

            In the home automation experiment, two devices were added to the registry: A wireless human presence detector and a Philips Hue light control bridge.

            Data Collection

            Almost any IoT scenario involves collecting device data. Amazon provides the AWS IoT Device SDK for connecting devices to the IoT platform. The SDK is typically used to develop a small application that runs on the device (or on a gateway connected to the device) and transmits data to the cloud.

            There are two ways to deliver data to the AWS IoT platform. The first one is to send raw MQTT messages, which are usually small JSON objects. You can then set up AWS IoT rules to forward these messages to other Amazon cloud services for further processing. In the home automation scenario, a rule specifies that all messages received under the topic “presence-detected” should be forwarded to an Amazon Lambda microservice, which then decides what to do with the information.

            The other way is to use Thing Shadows, which are built into the AWS IoT platform. Every registered device has a “shadow” which contains its latest reported state. The state is stored as a JSON document, which can contain 8 kilobytes worth of fields and values. This makes it easy and cost-effective to store the current state of any device in the cloud, without requiring an external database. For instance, a device equipped with a thermometer might regularly report its current state as a JSON object that looks like this: {“temperature”:22}.

            Moreover, it’s important to understand that Thing Shadows cannot be used as a general-purpose database. You can only look up a single Thing Shadow at a time, and it will only contain the current state. You will need a separate database if you want to analyze historical time series data. However, keep in mind that Amazon offers a wide range of databases you can easily connect to AWS IoT by forwarding Thing Shadow updates to services like DynamoDB or Kinesis. This seamless integration between all Amazon cloud services is one of the key advantages of AWS IoT.

            Data Analysis and Decision Making

            Since Amazon already offers a wide range of data analysis services, the AWS IoT platform itself doesn’t include any new tools for analyzing data. Existing analysis services include products like Redshift, Elastic MapReduce, Amazon Machine Learning and various others. Device data is typically collected into S3 buckets using Kinesis Firehose and then processed by these services.

            Device data can also be forwarded to Amazon Lambda microservices for real-time decision making. A JavaScript function will be executed every time a data point is received. This is suitable for the home automation scenario, where a single IoT message is sent whenever presence is detected in a room. The JavaScript function considers various factors, such as the current time of day, and decides whether to turn the lights on.

            In addition to existing solutions, Amazon has announced an upcoming product called Kinesis Analytics. It will enable real-time analytics of streaming IoT data, similar to Apache Storm. This means that data can be analyzed on-the-fly without storing it in a database. For instance, you could maintain a rolling average of values and react to it instead of individual data points.

            Device Control

            The AWS IoT platform can control devices in the same two ways it collects data. The first way is to send raw MQTT messages directly to devices. Devices will react to the messages when they receive them. The problem with this approach is that devices might sometimes have network or electricity issues, which may cause the loss of some control messages.

            Thing Shadows provide a more reliable way to have devices enter a desired state. A Thing Shadow will remember the new desired state and keep retrying until the device has acknowledged it.

            In the home automation scenario, when presence is detected, the desired state of a lamp is set to {“light”:true}. When the lamp receives this desired state, it turns on the light and reports its current state back to AWS IoT as {“light”:true}. Once the reported state is the same as the desired state, the Thing Shadow of the lamp is known to be in sync.
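            A minimal sketch of how a cloud-side application could request that state change through the shadow API looks like this (the thing name is illustrative):

            import json
            import boto3

            # Set the desired state of the lamp's Thing Shadow; the device will
            # receive a delta and report back once it has turned the light on
            iot_data = boto3.client('iot-data')
            iot_data.update_thing_shadow(
                thingName='livingroom-lamp',
                payload=json.dumps({'state': {'desired': {'light': True}}}).encode('utf-8'))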

            User Interfaces and Data Visualization

            You may use the AWS IoT Console to manually control devices by modifying their desired state. The console will show the current state and update it on the screen as it changes. This is, of course, a very low-level way to control lighting since you need to log in as a cloud administrator and then manually edit the JSON documents.

            Then again, a better way is to build a web application that integrates to AWS IoT and offers a friendly user interface for controlling things. AWS provides rich infrastructure options for developing integrated mobile and web applications. Amazon API Gateway and Lambda are typically used to build a backend API that lets applications access IoT data. The data itself may be stored in a database like DynamoDB or Postgres. The access can be limited to authenticated users only using Amazon Cognito or a custom IAM solution.

            For data visualization purposes, Amazon has recently announced an upcoming product called Amazon QuickSight, which will integrate with other Amazon services and databases. There are also many third-party solutions available through the AWS Marketplace. If none of these options fits the use case well, a custom solution can always be developed as part of a web application.

            My Findings

            AWS IoT is a fast and easy way to get started on the Internet of Things. All the scenarios discussed in this article are based on managed cloud services. This means that you never have to maintain your own servers or worry about scaling.

            For small-scale projects the operating costs are negligible. For larger scale projects, the costs will depend on the amount and frequency of the data being transferred. There are no fixed monthly or hourly fees, which makes personal experimentation at home very convenient.


              ARCHITECTURE FOR THE 21ST CENTURY GENERATION – THIS TIME, RE:IMAGINED


              Much like the last 5 years at re:Invent, we were treated on the Thursday to a keynote by Werner Vogels, speaking at the MGM Grand Garden Arena. It’s a huge space and the production values that AWS brings to their keynotes (coupled with the 16,800 capacity) made for an electric start to the morning.

              Vogels started the keynote by reflecting on the keynotes he has delivered over the last 5 years. During his first ever keynote back in 2012, Vogels discussed 21st-century architecture. He provided 4 guiding commandments: Controllable, Resilient, Adaptive, and Data Driven. He returned to this theme by calling this particular keynote ’21st Century Architectures, re:Imagined’

               

              It was made clear from the start that, unlike previous years, there would be relatively few announcements. He was true to his word, and instead focussed on just a few key themes. Vogels took time to thank AWS’s customers, reflecting that in the beginning, they knew they had to be collaborative to succeed. They wanted to build a collection of ‘nimble’ tools which could be assembled to build what customers needed. AWS listen to customer feedback, launching services that are rock solid, then working with customers to set the roadmap and development priorities.

              AWS want to help you build services for the future, and a lot of the announcements this week are enabled by developments in technology that have come about in the last 2-3 years.

               

              Voice As A Control System

              One of the themes Vogels spoke about was IoT and allowing whole environments to become accessible. Every device has the ability to become an input or output device, but with so many out there, it’s good to consider how we interact with all of them and their systems. Vogels believes that digital interfaces of the future will be human-centric, and the things that we as humans use to communicate will become the inputs to systems. The first of these will be the voice as it’s the most natural and easiest interaction.

              Once you can use your voice to control systems, Vogels suggested, people won’t look back: from surgeons operating theatre equipment to simply controlling the lighting or heating in your house, it will unlock digital systems for everyone.

              To demonstrate this point, Vogels talked about the International Rice Research Institute who provide rice farmers advice on how much and which fertiliser to put on their crops based on their years of research. Consumption of this information was very low until they invested in a voice interface. Farmers can call, select from one of 27 dialects, and provide information on their land and crop conditions. They then use voice recognition and machine learning to read back to the farmer which fertilizer they need.

              This was building up to the announcement of Alexa for Business, a service that ‘makes it easier for you to introduce Alexa to your organization, providing the tools you need to set-up and manage Alexa enabled devices, enroll users, and assign skills at scale’.

               

              Ensure You Are Well Architected

              The next theme of the keynote was architecture. Typically, systems have three planes: Admin, Control, and Data. (Vogels suggested architecture that extensive was difficult to visualise on marketing slides!) The AWS Well Architected Framework was launched two years ago and has grown from a single document to five pillars across five documents with two ‘lenses’. It guides the user on how to architect for specific use cases, (currently HPC and Serverless). The framework is included in AWS certifications and AWS regularly run boot camps and ‘Well Architected Reviews’ for its customers.

               

              Dance Like No One Is Watching, Encrypt Like Everyone Is

              This particular section had a strong focus on security and availability. On security, Vogels recapped everything you need to ensure you are doing, from implementing a strong identity foundation to automating security best practices. The need to encrypt everything was also highlighted, and security has become a problem for all. Developers are now seen as the new security team, and nothing can be overlooked: for example, ensuring the security of the CI/CD pipeline itself, as well as ensuring security within the pipeline.

              Development has also changed over time, meaning you need to be more security aware. It’s more collaborative, there are more languages, and more services and teams are combining. To help out, AWS have launched Cloud9, a cloud-based IDE including a code editor, debugger, and terminal, pre-packaged with essential tools for popular languages (JavaScript, PHP, Python), to allow you to write, run and debug your code so you don’t need to set up your development environments to start new projects.

               

              Everything Will Fail. All The Time

              Availability, reliability, and resilience were discussed, from the basics (hard dependencies reduce availability, redundant dependencies increase availability) to the best practices of Distributed Systems, through to deployment automation and testing. Nora Jones (Netflix) gave the example of using Chaos Engineering and how they do this at Netflix.

              Vogels highlighted that highly available systems cost more, so it becomes a business decision: you can easily run something in a single availability zone, but only achieve 99% uptime. If you want to increase this you need to distribute your services across multiple availability zones or even regions. DynamoDB Global Tables, for example, help you to do this, becoming the ultimate tool in reliability design. Although this has little to do with AWS (and more to do with decisions made within organisations), AWS can make this much easier for you. This brings us nicely onto the final part of the keynote – letting AWS do the ‘heavy lifting’ through its managed services.

Gall's Law says, "A complex system that works is invariably found to have evolved from a simple system that worked." AWS allows you to keep your systems simple by providing nimble services which you can assemble to build what you need. If you run your own RDBMS, you have to take care of both the control and data planes; if you run it on AWS, AWS manages the control plane for you. AWS managed services are designed so that AWS looks after the complex, hard-to-manage moving parts, making things simpler for you. This was demonstrated by Abby Fuller speaking about containers on AWS and how AWS Fargate can make your environment much simpler. AWS will continue to release managed services over the next year.

               

              Serverless

Serverless was something that couldn't possibly be left out of this keynote, being the ultimate AWS managed service: there is no server management, scaling is flexible, availability is high, and there is no idle capacity. This led into the final set of (Lambda) product announcements.

In addition, the AWS Serverless Application Repository was announced, allowing users to discover collections of serverless apps and deploy them into their own accounts in a few clicks. You can also publish your own apps to share with the community, making it easy to consume third-party Lambda functions and apply them to your environments.

               

If you would like to understand how Nordcloud can help you take advantage of AWS managed services, discuss whether your environment is well architected, or talk through any of the other releases made this week, please get in touch.

               









                Day 2 at Re:Invent – Builders & Musicians Come Together

                CATEGORIES

                Blog

When Werner Vogels makes bold statements, expectations are set high. So when Vogels tweeted 15 minutes before the start of re:Invent's day 2 keynote, we had to wonder what was coming.

                And how right we were. The close to 3 hours spent in the Venetian hotel in Las Vegas was an experience in itself.

Andy Jassy opened the keynote with a long list of customers and partners, alongside the latest business figures. AWS are currently running at an $18 billion annual run rate with an incredible 42% year-on-year growth. With millions of active customers – defined as accounts that have used AWS in the last 30 days – the platform is by far the most used on the planet.

According to Gartner's 2016 Worldwide Market Segment Share analysis, the company (successfully led by Jassy) achieved a 44.1% market share in 2016, up from 39% in 2015 – more than everyone else combined. That dominance was easy to believe as AWS unveiled an entire catalogue of new services throughout the keynote. The general stance Jassy took this year was that AWS are trying to give their customers exactly what they have asked for in new products. The mission of AWS is nothing short of fixing the IT industry in favour of end-users and customers.

First on stage was a live 'house' band, performing a segment of 'Everything Is Everything' by Lauryn Hill, with its chorus line 'after winter must come spring'. Presumably, AWS was suggesting that the world of IT is still stuck in a kind of eternal 'winter'. The message here was that AWS will not stop building out their portfolio, and that they want to offer all the tools their 'builders' and customers need.

                AWS used Jassy’s keynote for some big announcements (of course, set to music), with themes across the following areas:

                • Compute
                • Database
                • Data Analytics
                • Machine Learning and
                • IoT

                The Compute Revolution Goes On

Starting in the compute services area, an overview of the vast number of compute instance types and families was shown, with special emphasis on the Elastic GPU options. A few announcements had also been made on the Tuesday night, including Bare Metal Instances and Streamlined Access to Spot Capacity & Hibernation, making it easier to get savings of up to 90% off normal pricing. There were also M5 instances, which offer better price performance than their predecessors, and H1 instances, offering fast and dense storage for big data applications.

However, with the arrival of Kubernetes in the industry, it was the release of the Elastic Container Service for Kubernetes (EKS) that was the most eagerly anticipated. Not only have AWS recognised that their customers want Kubernetes on AWS, they also realise that there is a lot of manual labour involved in maintaining and managing the servers that run ECS & EKS.

To solve this particular problem, AWS announced AWS Fargate, a fully managed service for both ECS & EKS, meaning no more server management and therefore a better ROI when running containers on the platform. Fargate is available for ECS now and will be available for EKS in early 2018.

Having started with servers and containers, Jassy then moved on to the next logical evolution of infrastructure services: serverless. With usage growing 300%, it's fair to say that if you're not running something on Lambda yet, you will be soon. Jassy reiterated that AWS are building services that integrate with the rest of the AWS platform to ensure that builders don't have to compromise. They want to make progress and get things done fast. Ultimately, this is what AWS compute will mean to the world: faster results. Look out for a dedicated EKS blog post coming soon!

                Database Freedom

The next section of the keynote must have had some of AWS's lawyers on the edge of their seats, and also the founder of a certain database vendor… AWS seem to have a clear goal of putting an end to the historically painful 'lock-in' some customers experience, referring frequently to 'database freedom'. There are a lot of cool things happening with databases at the moment, and many of the great services and solutions shown at re:Invent are built on AWS database services. Of all of these, Aurora is growing the fastest – in fact, it is the fastest-growing service in the entire history of AWS.

People love Aurora because it can scale out to millions of reads per second, autoscale new read replicas, and recover seamlessly from read replica failures. People want to be able to write faster too, which is why AWS launched a new Aurora feature, Aurora Multi-Master. This allows for zero application downtime due to any write node failure (previously, AWS suggested failover took around 30 seconds), and zero downtime due to an availability zone failure. During 2018 AWS will also introduce multi-region masters, allowing customers to easily scale their applications across regions while keeping a single, consistent data source.

Last, and certainly not least, was the announcement of Aurora Serverless, an on-demand, auto-scaling, serverless version of Aurora where users pay by the second – an unbelievably powerful option for many use cases.

Finally, Jassy turned his focus to DynamoDB, which scaled to ~12.9 million requests per second at its peak during the last Amazon Prime Day. Just let that sink in for a moment! DynamoDB is used by a huge number of major global companies to power mission-critical workloads of all kinds; from our perspective, the reason is that it is very easy to access and use as a service. What was announced today was a new feature, DynamoDB Global Tables, which enables users to build high-performance, globally distributed applications.
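As a rough illustration (not from the keynote itself), the sketch below shows how the Global Tables API can be driven from Boto3. The table name 'orders' and the regions are invented examples, and both regional tables would already need to exist with DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled, which Global Tables requires.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Link two existing regional tables into one global table.
response = dynamodb.create_global_table(
    GlobalTableName="orders",                     # hypothetical table name
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
print(response["GlobalTableDescription"]["GlobalTableStatus"])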

The final database feature released for DynamoDB was managed backup & restore: on-demand backups and point-in-time recovery (over the past 35 days) allow backups of hundreds of terabytes to be taken, with no interruption, for data archival or regulatory requirements.
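Purely for illustration, here is a small Boto3 sketch of what using those backup features might look like; the 'orders' table name is again a hypothetical example.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Take an on-demand backup of the (hypothetical) 'orders' table.
backup = dynamodb.create_backup(TableName="orders", BackupName="orders-2017-11-29")
print(backup["BackupDetails"]["BackupArn"])

# Enable point-in-time recovery (restorable up to 35 days back).
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)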

                Jassy wrapped up the database section of his keynote by announcing Amazon Neptune, a fully managed graph database which will make it easy to build and run applications that work with highly connected data sets.

                Analytics

Next, Jassy turned to analytics, commenting that people want to use S3 as their data lake. Athena allows for easy querying of structured data within S3; however, most analytics jobs involve processing only a subset of the data stored within S3 objects, and Athena requires the whole object to be processed. To ease the pain, AWS released S3 Select, allowing applications (including Athena) to retrieve a subset of data from an S3 object using simple SQL expressions. AWS claim drastic performance increases – possibly up to 400%.
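To make the idea concrete, here is a hedged Boto3 sketch of an S3 Select call; the bucket, key and column names are invented for the example.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Pull only the matching rows out of a (hypothetical) CSV object
# instead of downloading and scanning the whole file.
response = s3.select_object_content(
    Bucket="my-data-lake",                        # hypothetical bucket
    Key="logs/2017/11/events.csv",                # hypothetical key
    ExpressionType="SQL",
    Expression="SELECT s.user_id, s.status FROM s3object s WHERE s.status = 'ERROR'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The result arrives as an event stream; 'Records' events carry the data.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))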

Many of our customers are required by regulation to store logs for up to 7 years and as such ship them to Glacier to reduce the cost of storage. This becomes problematic if you need to query that data, though. "How great would it be if this could become part of your data lake?" Jassy asked, before announcing Glacier Select, which allows queries to be run directly on data stored in Glacier, extending your data lake into Glacier while reducing your storage costs.

                Machine Learning

The house band introduced machine learning with Eric Clapton's 'Let It Rain'. Dr Matt Wood made an appearance and highlighted how important machine learning is to Amazon itself: the company uses it heavily, from personal recommendations on Amazon.com to fulfilment automation and inventory management in its warehouses.

Jassy highlighted that AWS only invests in building technology that its customers need (and remember, Amazon.com is a customer!), not because it is cool or funky. He described three tiers of machine learning: Frameworks and Interfaces, Platform Services, and Application Services.

At the Frameworks and Interfaces tier, emphasis was placed on the broad range of frameworks that can be used on AWS, recognising that one shoe does not fit every foot and the best results come from using the correct tool for the job. Moving to the Platform Services tier, Jassy highlighted that most companies do not yet have expert machine learning practitioners – it is, after all, a complex beast. To make this easier for developers, Amazon SageMaker was announced: a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale.

Also at the Platform Services tier, AWS launched DeepLens, a deep-learning-enabled wireless video camera designed to help developers grow their machine learning skills. It integrates directly with SageMaker, giving developers an end-to-end solution to learn, develop and test machine learning applications. DeepLens will ship in early 2018 and will be available on Amazon.com for $249.

The machine learning announcements did not stop there, with a further set of launches as Jassy moved into the Application Services tier.

                IoT

Finally, Jassy turned to IoT, identifying five 'frontiers', each with its own release, either available now or coming in early 2018:

1. Getting into the game – AWS IoT 1-Click (in preview) will make it easy for simple devices to trigger AWS Lambda functions that execute a specific action.
2. Device Management – AWS IoT Device Management will provide fleet management of connected devices, including onboarding, organisation, monitoring and remote management throughout a device's lifetime.
3. IoT Security – AWS IoT Device Defender (early 2018) will provide security management for your fleet of IoT devices, including auditing to ensure the fleet meets best practice.
4. IoT Analytics – AWS IoT Analytics makes it easy to cleanse, process, enrich, store, and analyse IoT data at scale.
5. Smaller Devices – Amazon FreeRTOS, an operating system for microcontrollers.

Over the coming days and weeks, the Nordcloud team will be diving deeper into these new announcements (including our first thoughts after getting our hands on the new releases) and publishing our views on how they can benefit you.

It should be noted that, compared to previous years, AWS are announcing more outside the keynotes, in sessions and on their Twitch channel, so there are many new releases which are not getting the attention they deserve. Examples include T2 Unlimited, Inter-Region VPC Peering and Launch Templates for EC2 – as always, the best place to keep up to date is the AWS 'What's New' page.

                If you would like to discuss how any of today’s announcements could benefit your business, please get in touch.









                  Keeping up with the latest skills: AWS IoT, Polly, and Rekognition

                  CATEGORIES

                  Blog

Recently, I secured a number of AWS IoT Buttons for our office to play with, and wanted to see how easy they would be to set up and use in various mock-up applications. In the spirit of playing around with the buttons and keeping up my technical skills on the AWS platform, I decided to build a small proof-of-concept project around them, using some old Android devices I had lying around and various AWS services such as image recognition.

The concept I finally settled on is a remote surveillance camera solution which can be triggered remotely with the AWS IoT Button, and which performs simple image recognition, labelling the image content with gender, approximate age, mood, and other parameters. The solution updates a "monitoring" website where the latest surveillance image is shown and the recognised characteristics are spoken out loud for the viewer, removing the need to read the monitor in detail.

                  For building the actual solution I selected the following tools and technologies together with the AWS platform:

                  • Android tablet – I like to repurpose and recycle old and unused items, so I decided to use a decommissioned tablet as the IoT device which will act as the camera module for the system. Android devices are, in my opinion, one of the best toys to have lying around for building solutions requiring mobile, IoT, or embedded components. The platform is quite easy to use and easy to write applications in.
• NodeRed – Since I didn't want to spend too much time configuring and setting up IoT libraries and frameworks on the Android devices, I decided to use NodeRed to provide the MQTT protocol support, as it offers easy-to-use programming tools for quick IoT PoCs. Running NodeRed requires SSH access to the device, which I established using Termux, along with associated modules for controlling the camera and so on.
                  • The AWS IoT Button – This was an obvious choice as it was one of the technology components I wanted to test and one that also made me start working with the project in the first place.

As the main idea was to build something around the AWS IoT Button and see how easy it is to set up and use, the AWS platform was the natural choice of IoT "backend". For the rest of the solution (as I didn't want to start maintaining or setting up servers myself), I decided to use as many AWS platform services as possible. I ended up working with the following AWS services:

                  AWS IoT

The AWS IoT platform handles message brokering, connectivity, and overall management of the IoT solution.

                  AWS IAM

                  The requirement here was to configure the various access roles and rights for all the architectural components in a secure way.

                  AWS S3

Two distinct S3 buckets were used: one for uploading the images taken by the camera, and one for hosting the "monitoring" website.

                  AWS Lambda

Lambda functions were used to perform the required calculations and actions in a "serverless" fashion, removing the need to maintain infrastructure components.

                  AWS Polly

                  Text-to-speech service used for creating the audio-streams required by the solution.

                  AWS Rekognition

Image recognition service used for analysing and labelling the images.

                  AWS CloudWatch and logs

                  Used for monitoring and debugging the solution during the project.

                  AWS CloudFormation

                  Used for creating the resources, functions, roles etc. in the solution.

                  Python/Boto3

I chose Python as the programming language, as the Boto3 libraries provide easy APIs for the AWS services. Python was used to write all the Lambda functions that perform the processing required by the overall solution.

                  How everything was brought together

After registering the AWS IoT Button (which was easily done with the AWS Android app) and the Android devices with the AWS IoT framework, and provisioning their security credentials, they were ready to be used in the solution. The architectural idea: pressing the button triggers a Lambda function which does a few checks on the "upload" S3 bucket and creates a temporary signed URL for it. The function then uses an AWS IoT topic to notify the Android device that an image capture has been requested. The Android device takes a picture of whatever is standing in front of the camera and uploads it securely to the "upload" S3 bucket using the temporary upload URL it received in the MQTT message.
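As a rough sketch of that flow (not the author's actual code), the button-triggered Lambda below generates a presigned upload URL and publishes it to an IoT topic; the bucket name, topic name and object key are hypothetical.

import json
import boto3

s3 = boto3.client("s3")
iot = boto3.client("iot-data")

UPLOAD_BUCKET = "camera-upload-bucket"   # assumption, not the original name
CAPTURE_TOPIC = "camera/capture"         # assumption, not the original topic

def lambda_handler(event, context):
    key = "captures/latest.jpg"
    # Presigned PUT URL, valid for a couple of minutes only.
    upload_url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": UPLOAD_BUCKET, "Key": key},
        ExpiresIn=120,
    )
    # Tell the Android device to take a picture and where to upload it.
    iot.publish(
        topic=CAPTURE_TOPIC,
        qos=1,
        payload=json.dumps({"upload_url": upload_url, "key": key}),
    )
    return {"status": "capture requested"}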

Whenever a new image is uploaded to the S3 bucket, it triggers another serverless action in the background. This Lambda function takes the image and runs it through AWS Rekognition for image recognition. The recognised labels and objects are then run through AWS Polly to create the required audio stream. Once the new content has been created, the Lambda function uploads it to the other S3 bucket, which hosts the website that shows and plays the content for whoever is watching the "monitoring" page. Separating the S3 buckets provides an added security measure (a DMZ of sorts), safeguarding the website from potentially harmful content which could, in theory, be uploaded to the upload bucket if an attacker somehow acquired the temporary upload URL.
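Again purely as an illustration of the flow described above (not the original code), the second Lambda might look roughly like this; the website bucket name, object keys, Polly voice and label settings are assumptions.

import json
import boto3

rekognition = boto3.client("rekognition")
polly = boto3.client("polly")
s3 = boto3.client("s3")

WEBSITE_BUCKET = "camera-website-bucket"   # assumption, not the original name

def lambda_handler(event, context):
    # The S3 event tells us which object was just uploaded.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Label whatever is in the picture.
    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=70,
    )["Labels"]
    text = "I can see " + ", ".join(label["Name"] for label in labels)

    # Turn the description into an audio stream.
    speech = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")

    # Publish the image, the audio and the label data to the website bucket.
    s3.copy_object(Bucket=WEBSITE_BUCKET, Key="latest.jpg",
                   CopySource={"Bucket": bucket, "Key": key})
    s3.put_object(Bucket=WEBSITE_BUCKET, Key="latest.mp3",
                  Body=speech["AudioStream"].read(), ContentType="audio/mpeg")
    s3.put_object(Bucket=WEBSITE_BUCKET, Key="latest.json",
                  Body=json.dumps(labels), ContentType="application/json")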

The whole solution is secured with AWS IAM, giving each component only the privileges it needs to perform its actions on exactly the resources it uses.

Enabling CloudWatch monitoring and logging is a good idea for debugging the solution, at least during the development phase. It allowed me to catch typing errors in the granular IAM policies attached to the Lambda functions' IAM roles during set-up.
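For completeness, a tiny sketch of the kind of logging that ends up in a Lambda function's CloudWatch Logs group; the handler body is a placeholder.

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Everything logged here is written to the function's CloudWatch Logs group.
    logger.info("Received event: %s", event)
    try:
        ...  # the actual processing would go here
    except Exception:
        logger.exception("Processing failed")
        raise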

                  My findings

This was a quick and fun project to work on and provided some insight into using the AWS IoT Button and Android devices as part of the AWS IoT ecosystem. The devices themselves were rather easy to register and get working in the set-up. Of course, in a large-scale real-world environment, the set-up, certificate creation, and installation of the IoT devices would need to be automated to make it feasible. Incorporating small Lambda functions with image recognition and text-to-speech was quite straightforward and made for a good learning platform for these technologies.

If I were applying the project to a customer situation, I would improve it by adding image transcoding for different screen sizes, creating a proper web service with a searchable UI, a proper picture database/index, and so on. All in all, I can highly recommend playing around with the IoT framework, the IoT Button, and NodeRed on Android. Creating these kinds of small side projects is the perfect way for people in our business to keep improving our skills and know-how across the ever-expanding technology selection in modern IT environments.

Nordcloud offers deep-dive workshops which help identify the opportunities that impact your business and shape data-driven solutions that take your business to the next level. Contact us for more information.
