Introducing Google Coral Edge TPU – a New Machine Learning ASIC from Google

The Google Coral Edge TPU is a new machine learning ASIC from Google. It performs fast TensorFlow Lite model inference with low power consumption. We take a quick look at the Coral Dev Board, which includes the TPU chip and is available in online stores now.

Photo by Gravitylink


Google Coral is a general-purpose machine learning platform for edge applications. It can execute TensorFlow Lite models that have been trained in the cloud. It’s based on Mendel Linux, Google’s own flavor of Debian.

Object detection is a typical application for Google Coral. If you have a pre-trained machine learning model that detects objects in video streams, you can deploy your model to the Coral Edge TPU and use a local video camera as the input. The TPU will start detecting objects locally, without having to stream the video to the cloud.

The Coral Edge TPU chip is available in several packages. You probably want to buy the standalone Dev Board which includes the System-on-Module (SoM) and is easy to use for development. Alternatively you can buy a separate TPU accelerator device which connects to a PC through a USB, PCIe or M.2 connector. A System-on-Module is also available separately for integrating into custom hardware.

Comparing with AWS DeepLens

Google Coral is in many ways similar to AWS DeepLens. The main difference from a developer’s perspective is that DeepLens integrates with the AWS cloud. You manage your DeepLens devices and deploy your machine learning models using the AWS Console.

Google Coral, on the other hand, is a standalone edge device that doesn’t need a connection to the Google Cloud. In fact, setting up the development board requires performing some very low level operations like connecting a USB serial port and installing firmware.

DeepLens devices are physically consumer-grade plastic boxes and they include fixed video cameras. DeepLens is intended to be used by developers at an office, not integrated into custom products.

Google Coral’s System-on-Module, in contrast, packs the entire system into a 40×48 mm module. That includes all the processing units, networking features, connectors, 1GB of RAM and an 8GB eMMC where the operating system is installed. If you want to build a custom hardware solution, you can build it around the Coral SoM.

The Coral Development Board

To get started with Google Coral, you should buy a Dev Board for about $150. The board is similar to Raspberry Pi devices. Once you have set it up, it only requires a power source and a WiFi connection to operate.

Here are a couple of hints for installing the board for the first time.

  • Carefully read the official getting started instructions. They take you through all the details of how to use the three different USB ports on the device and how to install the firmware.
  • You can use a Mac or a Linux computer, but Windows won’t work. The firmware installation is based on a bash script and also requires some special serial port drivers. They might work in Windows Subsystem for Linux, but using a Mac or a Linux PC is much easier.
  • If the USB port doesn’t seem to work, check that you aren’t using a charge-only USB cable. With a proper cable the virtual serial port device will appear on your computer.
  • The MDT tool (Mendel Development Tool) didn’t work for us. Instead, we had to use the serial port to log in to the Linux system and set up SSH manually.
  • The default username/password of Mendel Linux is mendel/mendel. You can use those credentials to log in through the serial port, but the password doesn’t work through SSH. You’ll need to add your public key to .ssh/authorized_keys.
  • You can set up a WiFi network so you won’t need an ethernet cable. The getting started guide has instructions for this.

Once you have a working development board, you might want to take a look at Model Play. It’s an Android application that lets you deploy machine learning models from the cloud to the Coral development board.

Model Play has a separate server installation guide. The server must be installed on the Coral development board before you can connect your smartphone to it. You also need to know the local IP address of the development board on your network.

Running Machine Learning Models

Let’s assume you now have a working Coral development board. You can connect to it from your computer with SSH and from your smartphone with the Model Play application.

The getting started guide has instructions for trying out the built-in demonstration application called edgetpu_demo. It works without a video camera: it performs real-time object recognition on a recorded video stream, detecting cars in the video. You can see the output in your web browser.

You can also try out some TensorFlow Lite models through the SSH connection. If you have your own models, check out Coral’s documentation on how to make them compatible with the Edge TPU.

If you just want to play around with existing models, the Model Play application makes it very easy. Pick one of the provided models and tap the Free button to download it to your device. Then tap the Run button to execute it.

Connecting a Video Camera and Sensors

If you buy the Coral development board, make sure to also get the Video Camera and Sensor accessories for about $50 extra. They will let you apply your machine learning models to something more interesting than static video files.

Photo by Gravitylink

Alternatively, you can use a USB UVC compatible camera. Check the Coral documentation for details. You can use an HDMI monitor to view the output.

Future of the Edge

Google has partnered with Gravitylink for Coral product distribution. They also make the Model Play application that offers the Coral demos mentioned in this article. Gravitylink is trying to make machine learning fun and easy with simple user interfaces and a directory of pre-trained models.

Once you start developing more serious edge computing applications, you will need to think about issues like remote management and application deployment. At this point it is still unclear whether Google will integrate Coral and Mendel Linux into the Google Cloud Platform. This would involve device authentication, operating system updates and application deployments.

If you start building on Coral right now, you’ll most likely need a custom management solution. We at Nordcloud develop cloud-based management solutions for technologies like AWS Greengrass, AWS IoT and Docker. Feel free to contact us if you need a hand.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Problems with DynamoDB Single Table Design




    DynamoDB is Amazon’s managed NoSQL database service. DynamoDB provides a simple, schemaless database structure and very high scalability based on partitioning. It also offers an online management console, which lets you query and edit data and makes the overall developer experience very convenient.

    There are two main approaches to designing DynamoDB databases. Multi Table Design stores each database entity in a separate table. Single Table Design stores all entities in one big common table.

    This article focuses mostly on the development experience of creating DynamoDB applications. If you’re working with a large scale project, performance and scalability may be more important aspects for you. However, you can’t completely ignore the developer experience. If you apply Single Table Design, the developer experience will be more cumbersome and less intuitive than with Multi Table Design.

    Multi Table Design Overview

    DynamoDB is based on individual tables that have no relationships between each other. Despite this limitation, we tend to use them in the same way as SQL database tables. We name a DynamoDB table after a database entity, and then store instances of that entity in that table. Each entity gets its own table.

    We can call this approach Multi Table Design, because an application usually requires multiple entities. It’s the default way most of us create DynamoDB applications.

    Let’s say we have the entities User, Drive, Folder and File. We would typically then have four DynamoDB tables as shown in the database layout below.

    The headers are field names, and the numbers are field values organized into table rows. For simplicity, we’re only dealing with numeric identifiers.

    Users table:
    UserId(PK)
    1

    Drives table:
    UserId(PK)  DriveId(SK)
    1           1
    1           2

    Folders table:
    UserId(PK)  FolderId(SK)  ParentDriveId
    1           1             1
    1           2             2

    Files table:
    UserId(PK)  FileId(SK)    ParentFolderId
    1           1             1
    1           2             2
    1           3             2

    Note: PK means Partition Key and SK means Sort Key. Together they are the table’s unique primary key.

    It’s pretty easy to understand the structure of this database. Everything is partitioned by UserId. Underneath each User there are Drives which may contain Folders. Folders may contain Files.

    The main limitation of Multi Table Design is that you can only retrieve data from one table in one query. If you want to retrieve a User and all their Drives, Folders and Files, you need to make four separate queries. This is particularly inefficient in use cases where you cannot make all the queries in parallel. You need to first look up some data in one table, so that you can find the related data in another table.
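    To make the cost concrete, here is a minimal in-memory sketch, with plain Python lists standing in for the Drives, Folders and Files tables (the data is hypothetical, mirroring the layout above). Each list comprehension stands in for one DynamoDB Query against one table:

```python
# In-memory stand-ins for three of the four tables (hypothetical data).
drives  = [{"UserId": 1, "DriveId": 1},
           {"UserId": 1, "DriveId": 2}]
folders = [{"UserId": 1, "FolderId": 1, "ParentDriveId": 1},
           {"UserId": 1, "FolderId": 2, "ParentDriveId": 2}]
files   = [{"UserId": 1, "FileId": 1, "ParentFolderId": 1},
           {"UserId": 1, "FileId": 2, "ParentFolderId": 2},
           {"UserId": 1, "FileId": 3, "ParentFolderId": 2}]

def load_user_tree(user_id):
    # Three separate "queries" (plus one for the User itself) are needed
    # to assemble a single logical read of the user's hierarchy.
    return {
        "drives":  [d for d in drives  if d["UserId"] == user_id],
        "folders": [f for f in folders if f["UserId"] == user_id],
        "files":   [f for f in files   if f["UserId"] == user_id],
    }

tree = load_user_tree(1)
```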

    Single Table Design Overview

    Single Table Design is the opposite of Multi Table Design. Amazon has advocated this design pattern in various technical presentations. For example, see DAT401 Advanced Design Patterns for DynamoDB by Rick Houlihan.

    The basic idea is to store all database entities in a single table. You can do this because of DynamoDB’s schemaless design. You can then make queries that retrieve several kinds of entities at the same time, because they are all in the same table.

    The primary key usually contains the entity type as part of it. The table might thus contain an entity called “User-1” and an entity called “Folder-1”. The first one is a User with identifier “1”. The second one is a Folder with identifier “1”. They are separate because of the entity prefix, and can be stored in the same table.
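    The prefixing itself is trivial; a one-line Python sketch shows the idea:

```python
def entity_key(entity_type, entity_id):
    # "User-1" and "Folder-1" stay distinct even though both use id 1,
    # so different entity types can share one table.
    return f"{entity_type}-{entity_id}"

user_key = entity_key("User", 1)      # "User-1"
folder_key = entity_key("Folder", 1)  # "Folder-1"
```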

    Let’s say we have the entities User, Drive, Folder and File that make up a hierarchy. A table containing a bunch of these entities might look like this:

    PK        SK         HierarchyId
    User-1    User-1     User-1/
    User-1    Drive-1    User-1/Drive-1/
    User-1    Folder-1   User-1/Drive-1/Folder-1/
    User-1    File-1     User-1/Drive-1/Folder-1/File-1/
    User-1    Folder-2   User-1/Drive-1/Folder-2/
    User-1    File-2     User-1/Drive-1/Folder-2/File-2/
    User-1    File-3     User-1/Drive-1/Folder-2/File-3/

    Note: PK means Partition Key and SK means Sort Key. Together they are the table’s unique primary key. We’ll explain HierarchyId in just a moment.

    As you can see, all items are in the same table. The partition key is always User-1, so that all of User-1’s data resides in the same partition.

    Advantages of Single Table Design

    The main advantage that you get from Single Table Design is the ability to retrieve a hierarchy of entities with a single query. You can achieve this by using Secondary Indexes. A Secondary index provides a way to query the items in a table in a specific order.

    Let’s say we create a Secondary Index where the partition key is PK and the sort key is HierarchyId. It’s now possible to query all the items whose PK is “User-1” and that have a HierarchyId beginning with “User-1/Drive-1/”. We get all the folders and files that the user has stored on Drive-1, and also the Drive-1 entity itself, as the result.
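    In code, that query is a single KeyConditionExpression with a begins_with condition on the index’s sort key. The sketch below only builds the Query parameters as a plain dict (the table and index names are hypothetical):

```python
def drive_contents_query(user_key, drive_key):
    """Build DynamoDB Query parameters for fetching a drive and
    everything below it via the hierarchy index."""
    prefix = f"{user_key}/{drive_key}/"
    return {
        "TableName": "AppTable",        # hypothetical table name
        "IndexName": "HierarchyIndex",  # hypothetical GSI name
        "KeyConditionExpression":
            "PK = :pk AND begins_with(HierarchyId, :prefix)",
        "ExpressionAttributeValues": {
            ":pk": {"S": user_key},
            ":prefix": {"S": prefix},
        },
    }

params = drive_contents_query("User-1", "Drive-1")
```

The same dict could be passed to a DynamoDB client’s query call; one request returns the Drive-1 item and all folders and files beneath it.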

    The same would have been possible with Multi Table Design, just not as efficiently. We would have defined similar Secondary Indexes to implement the relationships. Then we would have separately queried the user’s drives from the Drives table, folders from the Folders table, and files from the Files table, and combined all the results.

    Single Table Design can also handle other kinds of access patterns more efficiently than Multi Table Design. Check the YouTube video mentioned in the beginning of this article to learn more about them.

    Complexity of Single Table Design

    Why would we not always use Single Table Design when creating DynamoDB based applications? Do we lose something significant by applying it to every use case?

    The answer is yes. We lose simplicity in database design. When using Single Table Design, the application becomes more complicated and unintuitive to develop. As we add new features and access patterns over time, the complexity keeps growing.

    Just managing one huge DynamoDB table is complicated in itself. We have to remember to include the “User-” entity prefix in all queries when working with AWS Console. Simple table scans aren’t possible without specifying a prefix.

    We also need to manually maintain the HierarchyId composite key whenever we create or update entities. It’s easy to cause weird bugs by forgetting to update HierarchyId in some edge case or when editing the database manually.
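    A helper like the following (a sketch; the function name is ours) has to run on every create and update, because DynamoDB will not derive the composite path for you:

```python
def child_hierarchy_id(parent_hierarchy_id, entity_key):
    # Must be applied on every create/update; one code path that
    # forgets it silently breaks all prefix queries for that item.
    return f"{parent_hierarchy_id}{entity_key}/"

user_path = child_hierarchy_id("", "User-1")              # "User-1/"
drive_path = child_hierarchy_id(user_path, "Drive-1")     # "User-1/Drive-1/"
folder_path = child_hierarchy_id(drive_path, "Folder-1")  # "User-1/Drive-1/Folder-1/"
```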

    As we start adding sorting and filtering capabilities to our database queries, things get even more complicated.

    Things Get More Complicated

    Now, let’s allow sorting files by their creation date. Extending our example, we might have a table design like this:

    PK      SK        HierarchyId                      CreatedAt
    User-1  User-1    User-1/                          2019-07-01
    User-1  Drive-1   User-1/Drive-1/                  2019-07-02
    User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03
    User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04
    User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05
    User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06
    User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07

    How do we retrieve the contents of Folder-2 ordered by the CreatedAt field? We add a Global Secondary Index for this access pattern, which will consist of GSI1PK and GSI1SK:

    PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK
    User-1  User-1    User-1/                          2019-07-01  User-1/           ~
    User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02
    User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~
    User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04
    User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~
    User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06
    User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07

    We’ll get to the semantics of GSI1PK and GSI1SK in just a moment.

    But why did we call these fields GSI1PK and GSI1SK instead of something meaningful? Because they will contain different kinds of values depending on the entity stored in each database item. GSI1PK and GSI1SK will be calculated differently depending on whether the item is a User, Drive, Folder or File.

    Overloading Names Adds Cognitive Load

    Since it’s not possible to give GSI keys sensible names, we just call them GSI1PK and GSI1SK. Generic field names like these add cognitive load, because the fields are no longer self-explanatory. Developers need to check the documentation to find out what exactly GSI1PK and GSI1SK mean for a particular entity.

    So, why is the GSI1PK field not the same as HierarchyId? Because in DynamoDB you cannot query for a range of partition key values. You have to query for one specific partition key. In this use case, we can query for GSI1PK = “User-1/” to get items under a user, and for GSI1PK = “User-1/Folder-1/” to get items under a user’s folder.

    What about the tilde (~) characters in some GSI1SK values? They implement reverse date sorting in a way that also allows pagination. The tilde is the last printable character in the ASCII character set and will sort after all other characters. It’s a nice hack, but it also adds even more cognitive load to understanding what’s happening.

    When we query for GSI1PK = “User-1/Folder-2/” and sort the results by GSI1SK in descending key order, the first result is Folder-2 (because ~ comes after all other keys) and the following results are File-3 and File-2 in descending date order. Assuming there are lots of files, we could continue this query using the LastEvaluatedKey feature of DynamoDB and retrieve more pages. The parent object (Folder-2) always appears in the first page of items.
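    The trick can be demonstrated without DynamoDB at all: sorting the string keys in Python reproduces the order the index returns (data taken from the Folder-2 rows of the table above):

```python
# Items sharing GSI1PK = "User-1/Folder-2/".
items = [
    {"SK": "Folder-2", "GSI1SK": "~"},
    {"SK": "File-2", "GSI1SK": "2019-07-06"},
    {"SK": "File-3", "GSI1SK": "2019-07-07"},
]

# Descending sort on the sort key, as DynamoDB does with
# ScanIndexForward=False: "~" (ASCII 126) sorts after any date string,
# so the parent folder comes first, then files in reverse date order.
page = sorted(items, key=lambda item: item["GSI1SK"], reverse=True)
```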

    Overloaded GSI Keys Can’t Overlap

    You may have noticed that we can now also query a user’s drives in creation date order. The GSI1PK and GSI1SK fields apply to this relationship as well. This works because the relationship between the User and Drive entities does not overlap with the relationship between the Folder and File entities.

    But what happens if we need to query all the Folders under a Drive? Let’s say the results must, again, be in creation date order.

    We can’t use the GSI1 index for this query because the GSI1PK and GSI1SK fields already have different semantics. We already use those keys to retrieve items under Users or Folders.

    So, we’ll create a new Global Secondary Index called GSI2, where GSI2PK and GSI2SK define a new relationship. The fields are shown in the table below:

    PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK      GSI2PK           GSI2SK
    User-1  User-1    User-1/                          2019-07-01  User-1/           ~
    User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02  User-1/Drive-1/  ~
    User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~           User-1/Drive-1/  2019-07-03
    User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04  User-1/Drive-1/  2019-07-04
    User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~           User-1/Drive-1/  2019-07-05
    User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06
    User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07

    Note: Please scroll the table horizontally if necessary.

    Using this new index we can query for GSI2PK = “User-1/Drive-1/” and sort the results by GSI2SK to get the folders in creation date order. Drive-1 has a tilde (~) as the sort key to ensure it comes as the first result on the first page of the query.

    Now It Gets Really Complicated

    At this point it’s becoming increasingly complicated to keep track of all those GSI fields. Can you still remember what exactly GSI1PK and GSI2SK mean? The cognitive load increases because you’re dealing with abstract identifiers instead of meaningful field names.

    The bad news is that it only gets worse. As we add more entities and access patterns, we have to add more Global Secondary Indexes. Each of them will have a different meaning in different situations. Your documentation becomes very important. Developers need to check it all the time to find out what each GSI means.

    Let’s add a new Status field to Files and Folders. We will now allow querying for Files and Folders based on their Status, which may be VISIBLE, HIDDEN or DELETED. The results must be sorted by creation time.

    We end up with a design that requires three new Global Secondary Indexes. GSI3 will contain files that have a VISIBLE status. GSI4 will contain files that have a HIDDEN status. GSI5 will contain files that have a DELETED status. Here’s what the table will look like:

    PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK      GSI2PK           GSI2SK      Status    GSI3PK                    GSI3SK      GSI4PK                   GSI4SK      GSI5PK                     GSI5SK
    User-1  User-1    User-1/                          2019-07-01  User-1/           ~
    User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02  User-1/Drive-1/  ~
    User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~           User-1/Drive-1/  2019-07-03  VISIBLE   User-1/Folder-1/VISIBLE/  ~           User-1/Folder-1/HIDDEN/  ~           User-1/Folder-1/DELETED/   ~
    User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04  User-1/Drive-1/  2019-07-04  VISIBLE   User-1/Folder-1/VISIBLE/  2019-07-04  User-1/Folder-1/HIDDEN/              User-1/Folder-1/DELETED/
    User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~           User-1/Drive-1/  2019-07-05  VISIBLE   User-1/Folder-2/VISIBLE/  ~           User-1/Folder-2/HIDDEN/  ~           User-1/Folder-2/DELETED/   ~
    User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06                               HIDDEN    User-1/Folder-2/VISIBLE/              User-1/Folder-2/HIDDEN/  2019-07-06  User-1/Folder-2/DELETED/
    User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07                               DELETED   User-1/Folder-2/VISIBLE/              User-1/Folder-2/HIDDEN/              User-1/Folder-2/DELETED/   2019-07-07

    Note: Please scroll the table horizontally if necessary.

    You may think this is getting a bit too complicated. It’s complicated because we still want to be able to retrieve both a parent item and its children in just one query.

    For example, let’s say we want to retrieve all VISIBLE files in Folder-1. We query for GSI3PK = “User-1/Folder-1/VISIBLE/” and again sort the results in descending order as earlier. We get back Folder-1 as the first result and File-1 as the second result. Pagination will also work if there are more results. If there are no VISIBLE files under the folder, we only get a single result, the folder.
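    The key generation behind this pattern can be sketched as a small Python function (the item shape and field names are our assumptions, not the article’s code). Folders get “~” so they lead the descending page; files get a sort key only for their own status, which keeps each index sparse:

```python
def gsi3_keys(item):
    """Keys for the hypothetical VISIBLE-status index (GSI3).
    group_path is a folder's own path for folders, and the parent
    folder's path for files -- matching the table above."""
    pk = item["group_path"] + "VISIBLE/"
    if item["type"] == "Folder":
        return pk, "~"          # folders lead the descending page
    if item["status"] == "VISIBLE":
        return pk, item["created_at"]
    return pk, None             # absent sort key keeps the index sparse

file_keys = gsi3_keys({"type": "File", "group_path": "User-1/Folder-1/",
                       "status": "VISIBLE", "created_at": "2019-07-04"})
folder_keys = gsi3_keys({"type": "Folder", "group_path": "User-1/Folder-1/"})
```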

    That’s nice. But can you now figure out how to retrieve all DELETED files in Folder-2? Which GSI will you use and what do you query for? You probably need to stop your development work for a while and spend some time reading the documentation.

    The Complexity Multiplies

    Let’s say we need to add a new Status value called ARCHIVED. This will involve creating yet another GSI and adding application code in all the places where Files or Folders are created or updated. The new code needs to make sure that GSI6PK and GSI6SK are generated correctly.

    That’s a lot of development and testing work. It will happen every time we add a new Status value or some other way to perform conditional queries.

    Later we might also want to add new sort fields called ModifiedAt and ArchivedAt. Each new sort field will require its own set of Global Secondary Indexes. We have to create a new GSI for every possible Status value and sort key combination, so we end up with quite a lot of them. In fact, our application will now have GSI1-GSI18, and developers will need to understand what GSI1PK-GSI18PK and GSI1SK-GSI18SK mean.
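    One plausible accounting for that count (an assumption reconstructing the arithmetic, not a design prescription): each sort field needs the two relationship indexes plus one index per status value.

```python
sort_fields = ["CreatedAt", "ModifiedAt", "ArchivedAt"]
statuses = ["VISIBLE", "HIDDEN", "DELETED", "ARCHIVED"]

# Per sort field: hierarchy index + drive-children index + one per status.
per_sort_field = 2 + len(statuses)
total_gsis = len(sort_fields) * per_sort_field  # 3 * 6 = 18
```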

    In fairness, this complexity is not unique to Single Table Design. We would have similar challenges when applying Multi Table Design and implementing many different ways to query data.

    What’s different in Multi Table Design is that each entity will live in its own table where the field names don’t have to be overloaded. If you add a feature that involves Folders, you only need to deal with the Folders table. Indexes and keys will have semantically meaningful names like “UserId-Status-CreatedAt-index”. Developers can understand them intuitively without referring to documentation all the time.

    Looking for a Compromise

    We can make compromises between Single Table Design and Multi Table Design to reduce complexity. Here are some suggestions.

    First of all, you should think of Single Table Design as an optimization that you might be applying prematurely. If you design all new applications from scratch using Single Table Design, you’re basically optimizing before knowing the real problems and bottlenecks.

    You should also consider whether the database entities will truly benefit from Single Table Design or not. If the use case involves retrieving a deep hierarchy of entities, it makes sense to combine those entities into a single table. Other entities can still live in their own tables.

    In many real-life use cases the only benefit from Single Table Design is the ability to retrieve a parent entity and its children using a single DynamoDB query. In such cases the benefit is pretty small. You could just as well make two parallel requests. Retrieve the parent using GetItem and the children using a Query. In an API based web application the user interface can perform these requests in parallel and combine the results in the frontend.
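    As a sketch, the two parallel requests could look like this in Multi Table Design. The bodies are plain parameter dicts in the shape DynamoDB’s GetItem and Query expect; the table and index names are hypothetical, following the “UserId-ParentFolderId-index” naming convention mentioned above:

```python
def parent_and_children_requests(user_id, folder_id):
    """Build the two requests a frontend could issue in parallel
    instead of one overloaded single-table query."""
    get_parent = {
        "TableName": "Folders",
        "Key": {"UserId": {"N": str(user_id)},
                "FolderId": {"N": str(folder_id)}},
    }
    query_children = {
        "TableName": "Files",
        "IndexName": "UserId-ParentFolderId-index",  # hypothetical GSI
        "KeyConditionExpression": "UserId = :u AND ParentFolderId = :f",
        "ExpressionAttributeValues": {
            ":u": {"N": str(user_id)},
            ":f": {"N": str(folder_id)},
        },
    }
    return get_parent, query_children

get_parent, query_children = parent_and_children_requests(1, 2)
```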

    Many of the design patterns related to Single Table Design also apply to Multi Table Design. For instance, overloaded composite keys and secondary indexes are sometimes quite helpful in modeling hierarchies and relationships. You can use them in Multi Table Design without paying the full price of complexity that Single Table Design would add.

    In summary, you should use your judgment case by case. Don’t make a blanket policy to design every application using either Single Table Design or Multi Table Design. Learn the design patterns and apply them where they make sense.


      Counting Faces with AWS DeepLens and IoT Analytics



      It’s pretty easy to detect faces with AWS DeepLens. Amazon provides a pre-trained machine learning model for face detection so you won’t have to deal with any low-level algorithms or training data. You just deploy the ML model and a Lambda function to your DeepLens device and it starts automatically sending data to the cloud.

      In the cloud you can leverage AWS IoT and IoT Analytics to collect and process the data received from DeepLens. No programming is needed. All you need to do is orchestrate the services to work together and enter one SQL query that calculates daily averages of the faces seen.

      Connecting DeepLens to the cloud

      We’ll assume that you have been able to obtain a DeepLens device. They are currently only being sold in the US, so if you live in another country, you may need to get creative.

      Before you can do anything with your DeepLens, you must connect it to the Amazon cloud. You can do this by opening the DeepLens service in AWS Console and following the instructions to register your device. We won’t go through the details here since AWS already provides pretty good setup instructions.

      Deploying a DeepLens project

      To deploy a machine learning application on DeepLens, you need to create a project. Amazon provides a sample project template for face detection. When you create a DeepLens project based on this template, AWS automatically creates a Lambda function and attaches the pre-trained face detection machine learning model to the project.

      The default face detection model is based on MXNet. You can also import your own machine learning models developed with TensorFlow, Caffe and other deep learning frameworks. You’ll be able to train these models with the AWS SageMaker service or using a custom solution. For now, you can just stick with the pre-trained model to get your first application running.

      Once the project has been created, you can deploy it to your DeepLens device. DeepLens can run only one project at a time, so your device will be dedicated to running just one machine learning model and Lambda function continuously.

      After a successful deployment, you will start receiving AWS IoT MQTT messages from the device. The sample application sends messages continuously, even if no faces are detected.

      You probably want to optimize the Lambda function by adding an “if” clause to only send messages when one or more faces are actually detected. Otherwise you’ll be sending empty data every second. This is fairly easy to change in the Python code, so we’ll leave it as an exercise for the reader.
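      A hedged sketch of that filter in Python (the detection format, a list of dicts with a “prob” confidence score, is our assumption; adapt it to the model’s actual output):

```python
def should_publish(detections, threshold=0.5):
    """Return True only when at least one detection clears the
    confidence threshold, so empty frames skip the MQTT publish."""
    faces = [d for d in detections if d.get("prob", 0.0) >= threshold]
    return len(faces) > 0

publish_empty = should_publish([])                       # False
publish_face = should_publish([{"prob": 0.91}])          # True
publish_weak = should_publish([{"prob": 0.12}])          # False
```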

      At this point, take note of your DeepLens infer topic. You can find the topic by going to the DeepLens Console and finding the Project Output view under your Device. Use the Copy button to copy it to your clipboard.

      Setting up AWS IoT Analytics

      You can now set up AWS IoT Analytics to process your application data. Keep in mind that because DeepLens currently only works in the North Virginia region (us-east-1), you also need to create your AWS IoT Analytics resources in this region.

      First you’ll need to create a Channel. You can choose any Channel ID and keep most of the settings at their defaults.

      When you’re asked for the IoT Core topic filter, paste the topic you copied earlier from the Project Output view. Also, use the Create new IAM role button to automatically create the necessary role for this application.

      Next you’ll create a Pipeline. Select the previously created Channel and choose Actions / Create a pipeline from this channel.

      AWS Console will ask you to select some Attributes for the pipeline, but you can ignore them for now and leave the Pipeline activities empty. These activities can be used to preprocess messages before they enter the Data Store. For now, we just want messages to pass through as they are.

      At the end of the pipeline creation, you’ll be asked to create a Data Store to use as the pipeline’s output. Go ahead and create it with the default settings and choose any name for it.

      Once the Pipeline and the Data Store have been created, you will have a fully functional AWS IoT Analytics application. The Channel will start receiving incoming DeepLens messages from the IoT topic and sending them through the Pipeline to the Data Store.

      The Data Store is basically a database that you can query using SQL. We will get back to that in a moment.

      Reviewing the auto-created AWS IoT Rule

      At this point it’s a good idea to take a look at the AWS IoT Rule that AWS IoT Analytics created automatically for the Channel you created.

      You will find IoT Rules in the AWS IoT Core Console under the Act tab. The rule will have one automatically created IoT Action, which forwards all messages to the IoT Analytics Channel you created.

      Querying data with AWS IoT Analytics

      You can now proceed to create a Data Set in IoT Analytics. The Data Set will execute a SQL query over the data in the Data Store you created earlier.

      Find your way to the Analyze / Data sets section in the IoT Analytics Console. Select Create and then Create SQL.

      The console will ask you to enter an ID for the Data Set. You’ll also need to select the Data Store you created earlier to use as the data source.

      The console will then ask you to enter this SQL query:

      SELECT DATE_TRUNC('day', __dt) AS Day, COUNT(*) AS Faces
      FROM deeplensfaces
      GROUP BY DATE_TRUNC('day', __dt)
      ORDER BY DATE_TRUNC('day', __dt) DESC

      Note that “deeplensfaces” is the name of the Data Store you created earlier. Make sure you use the same name consistently.

      The Data selection window can be left at None.

      Use the Frequency setting to set up a schedule for your SQL query. Select Daily so that the SQL query runs automatically every day and replaces the previous results in the Data Set.

      Finally, use Actions / Run Now to execute the query. You will see a preview of the current face count results, aggregated as daily total sums. These results will be automatically updated every day according to the schedule you defined.

      Accessing the Data Set from applications

      Congratulations! You now have IoT Analytics all set up and it will automatically refresh the face counts every day.

      To access the face counts from your own applications, you can write a Lambda function and use the AWS SDK to retrieve the current Data Set content. This example uses Node.js:

      const AWS = require('aws-sdk')
      const iotanalytics = new AWS.IoTAnalytics()
      iotanalytics.getDatasetContent({
        datasetName: 'deeplensfaces',
      }).promise().then(function (response) {
        // Download the results from response.entries[0].dataURI
      })

      The response contains a signed dataURI that points to an S3 object holding the actual results in CSV format. Once you download the content, you can do whatever you wish with the CSV data.
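Once you have downloaded the CSV content, parsing it into JavaScript objects takes only a few lines. Here's a minimal sketch, assuming the simple unquoted header-plus-rows format of the Day/Faces query above (real-world CSV with quoted or escaped fields would need a proper parser):

```javascript
// Minimal CSV parser for the Data Set results. Assumes no quoted or
// escaped fields, which holds for the simple Day/Faces result set.
function parseDatasetCsv (csv) {
  const lines = csv.trim().split('\n')
  const headers = lines[0].split(',')
  return lines.slice(1).map(function (line) {
    const values = line.split(',')
    const row = {}
    headers.forEach(function (header, i) {
      row[header] = values[i]
    })
    return row
  })
}

// Example: two days of face counts
const rows = parseDatasetCsv('day,faces\n2019-06-01,42\n2019-06-02,17')
// rows[0] → { day: '2019-06-01', faces: '42' }
```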


      This has been a brief look at how to use DeepLens and IoT Analytics to count the number of faces detected by the DeepLens camera.

      There’s still room for improvement. Amazon’s default face detection model detects faces in every video frame, but it doesn’t keep track of whether the same face has already been seen in previous frames.

      It gets a little more complicated to enhance the system to detect individual persons, or to keep track of faces entering and exiting frames. We’ll leave all that as an exercise for now.

      If you’d like some help in developing machine learning applications, please feel free to contact us.

      Get in Touch.

      Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

        What is Amazon FreeRTOS and why should you care?



        At Nordcloud, we’ve been working with AWS IoT since Amazon released it.

        We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.


        In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions.

        First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


        What are microcontrollers?

        If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

        Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud based and edge based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out-of-the-box when you build your application on top of Amazon FreeRTOS.

        Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

        Use cases and scenarios

        Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

        Edge computing scenarios might require a more powerful, Linux based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communications protocols to your system.

        In general, microcontroller-based applications have the benefit of being much simpler than Linux-based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than that of a full operating system installation.

        How to try it out

        If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

        At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

        • STM32L4 Discovery Kit
        • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

        ST provides its own Ac6 System Workbench IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

        Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

        Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.


        We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

        If you need any help in planning and implementing your IoT solutions, feel free to contact us.


          Developing Serverless Cloud Components



          A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

          A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

          Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

          In the following sections we’ll take a look at some new technologies for developing cloud components.

          AWS CDK

          AWS CDK, short for Cloud Development Kit, is Amazon’s new framework for defining AWS cloud infrastructure with code. It currently supports TypeScript, JavaScript and Java with more language support coming later.

          When developing with AWS CDK, you use code to define both infrastructure and business logic. These codebases are separate. You define your component’s deployment logic in one script file, and your Lambda function code in another script file. These files don’t have to be written in the same programming language.

          AWS CDK includes the AWS Construct Library, which provides a selection of predefined cloud components to be used in applications. It covers a large portion of Amazon’s AWS cloud services, although not all of them.

          These predefined constructs are the smallest building blocks available in AWS CDK. For instance, you can use the AWS DynamoDB construct to create a database table. The deployment process translates this construct into a CloudFormation resource, and CloudFormation creates the actual table.

          The real power of AWS CDK comes from the ability to combine the smaller constructs into larger reusable components. You can define an entire microservice, including all the cloud resources it needs, and use it as a component in a larger application.

          This modularity can also help standardize multi-team deployments. When everybody delivers their service as an AWS CDK construct, it’s straightforward to put all the services together without spending lots of time writing custom deployment scripts.

          AWS CDK may become very important for cloud application development if third parties start publishing their own Construct Libraries online. There could eventually be a very large selection of reusable cloud components available in an easily distributable and deployable format. Right now the framework is still pending a 1.0 release before freezing its APIs.

          Serverless Components

          Serverless Components is an ambitious new project by the makers of the hugely popular Serverless Framework. It aims to offer a cloud-agnostic way of developing reusable cloud components. These components can be assembled into applications or into higher order components.

          The basic idea of Serverless Components is similar to AWS CDK. But while CDK uses a programming language to define components, Serverless has chosen a declarative YAML syntax instead. This results in simpler component definitions but you also lose a lot of flexibility. To remedy this, Serverless Components lets you add custom JavaScript files to perform additional deployment operations.
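As a rough illustration of the declarative approach, a component definition might look something like the sketch below. The component types and input names here are purely illustrative; the actual syntax depends on the registry version you are using.

```yaml
# serverless.yml – illustrative sketch only; real component types and
# input names vary by Serverless Components registry version.
type: my-face-counter-app

components:
  facesTable:
    type: aws-dynamodb
    inputs:
      name: deeplensfaces
  countApi:
    type: aws-lambda
    inputs:
      handler: index.handler
      root: ./code
```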

          The Serverless Components project has its own component registry. The registry includes some basic components for Amazon AWS, Google Cloud, Netlify and GitHub. Unlike in some other projects, developers are writing these components manually instead of auto-generating them from service definitions. It will probably take a while before all cloud features are supported.

          One controversial design decision of Serverless Components is to bypass the AWS CloudFormation stack management service. The tool creates components directly on AWS and other cloud platforms. It writes their state to a local state.json file, which developers must share.

          This approach offers speed, flexibility and multi-cloud support, but also requires Serverless Components to handle deployments flawlessly in every situation. Enterprise AWS users will probably be wary of adopting a solution that bypasses CloudFormation entirely.


          Pulumi

          Pulumi is a cloud component startup offering a SaaS service subscription combined with an open source framework. Essentially Pulumi aims to replace AWS CloudFormation and other cloud deployment tools with its own stack management solution. Pulumi’s cloud service deploys the actual cloud applications to Amazon AWS, Microsoft Azure, Google Cloud, Kubernetes or OpenStack.

          Pulumi supports a higher level of abstraction than the other component technologies discussed here. When you implement a serverless service using Pulumi’s JavaScript syntax, the code gets translated to a format suitable for the platform you are deploying on. You write your business logic as JavaScript handler functions for Express API endpoints. Pulumi’s tool extracts those handlers from the source code and deploys them as AWS Lambda functions, Azure Functions or Google Cloud Functions.

          Writing completely cloud-agnostic code is challenging even with Pulumi’s framework. For certain things it offers cloud-agnostic abstractions like the cloud.Table component. When you use cloud.Table, your code automatically adapts to use either DynamoDB or Azure Table Storage depending on which cloud platform you deploy it on.

          For many other things you have to write cloud-specific code. Or, you can write your own abstraction layer to complement Pulumi’s framework. Such abstraction layers tend to add complexity to applications, making it harder for developers to understand what the code is actually doing.

          Ultimately it’s up to you to decide whether you want to commit to developing everything on top of an abstraction layer which everybody must learn. Also, as with Serverless Components, you can’t use AWS CloudFormation to manage your Pulumi-based stacks.


          The main issue to consider in choosing a cloud component technology is whether you need multi-cloud support or not. Single-cloud development is arguably more productive and lets developers leverage higher level cloud services. On the other hand this results in increased vendor lock-in, which may or may not be a problem.

          For developers focusing on Amazon AWS, the AWS CDK is a fairly obvious choice. AWS CDK is likely to become a de-facto standard way of packaging AWS-based cloud components. As serverless applications get more and more popular, AWS CDK fills some important blank spots in the CloudFormation deployment process and in the reusability of components. And since AWS CDK still uses CloudFormation under the hood, adopters will be familiar with the underlying technology.

          Developers that truly require multi-cloud will have to consider whether it’s acceptable to rely on Pulumi’s third party SaaS service for deployments. If the SaaS service goes down, deployed applications will keep working but you can’t update them. This is probably not a big problem for short periods of time. It will be more problematic if Pulumi ever shuts down the service permanently. For projects where this is not an issue, Pulumi may offer a very compelling multi-cloud scenario.

          Multi-cloud developers who want to contribute to open source may want to check out the Serverless Components project. It’s too early to recommend the project for production use, but it may have an interesting future ahead. The project may attract many existing users if its developers can provide a clear migration path from Serverless Framework.

          If you would like more information on how Nordcloud can help you with serverless technologies, contact us here.


            How Amazon’s IoT platform controls things without servers



            Amazon’s IoT platform is a framework for connecting smart devices to the cloud. It aims to make the basic processes of collecting data and controlling devices as simple as possible. AWS IoT is a fully managed service, which means the customer doesn’t have to worry about configuring servers or updating operating systems. The platform simply exposes a set of APIs and automatically scales from a single device to millions of devices.

            I recently wrote an article (in Finnish) in my personal blog about using AWS IoT for home automation. AWS IoT is not exactly designed for this purpose, but if you are tech savvy enough, it can be used for it. The pricing is currently set at $5 per million messages, which lasts a long time when you’re only dealing with a couple of devices sending occasional messages.

            The home automation experiment provides a convenient context for discussing the basic concepts of AWS IoT. In the next few sections, I will refer to the elements of a simple home system that detects human presence in rooms and turns on the lights if it happens at a certain time of the day. All the devices are connected to the Amazon cloud via public Internet.

            Device Registration

            The first step in most IoT projects is to register the devices (also called “things”) into a centrally managed database. AWS IoT provides this database for free and lets you add any number of devices in it. The registration is important because each device also gets its own SSL/TLS certificate and private key, which are used for authentication and encryption. The devices can only be connected to AWS IoT by using their certificates and private keys.

            The AWS IoT device registry also works as a simple asset management database. It lets you attach attributes to devices and maintain information such as customer IDs. The device registry can later be queried based on these attribute values. For example, you can find all devices belonging to a specific customer ID. The attributes are optional, so they can just be ignored if they’re not needed.

            In the home automation experiment, two devices were added to the registry: A wireless human presence detector and a Philips Hue light control bridge.

            Data Collection

            Almost any IoT scenario involves collecting device data. Amazon provides the AWS IoT Device SDK for connecting devices to the IoT platform. The SDK is typically used to develop a small application that runs on the device (or on a gateway connected to the device) and transmits data to the cloud.

            There are two ways to deliver data to the AWS IoT platform. The first one is to send raw MQTT messages, which are usually small JSON objects. You can then setup AWS IoT rules to forward these messages to other Amazon cloud services for further processing. In the home automation scenario, a rule specifies that all messages received under the topic “presence-detected” should be forwarded to an Amazon Lambda microservice, which then decides what to do with the information.

            The other way is to use Thing Shadows, which are built into the AWS IoT platform. Every registered device has a “shadow” which contains its latest reported state. The state is stored as a JSON document, which can contain 8 kilobytes worth of fields and values. This makes it easy and cost-effective to store the current state of any device in the cloud, without requiring an external database. For instance, a device equipped with a thermometer might regularly report its current state as a JSON object that looks like this: {“temperature”:22}.
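Note that a device doesn’t publish the bare values on their own: shadow updates sent to the update topic ($aws/things/&lt;thingName&gt;/shadow/update) are wrapped in a state document. A small helper for building a reported-state update could look like this:

```javascript
// Builds the JSON document a device publishes to its shadow's update
// topic to report its current state.
function buildReportedUpdate (values) {
  return JSON.stringify({ state: { reported: values } })
}

const payload = buildReportedUpdate({ temperature: 22 })
// payload → '{"state":{"reported":{"temperature":22}}}'
```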

            It’s important to understand that Thing Shadows cannot be used as a general-purpose database. You can only look up a single Thing Shadow at a time, and it will only contain the current state. You will need a separate database if you want to analyze historical time series data. However, Amazon offers a wide range of databases that you can easily connect to AWS IoT by forwarding Thing Shadow updates to services like DynamoDB or Kinesis. This seamless integration between Amazon cloud services is one of the key advantages of AWS IoT.

            Data Analysis and Decision Making

            Since Amazon already offers a wide range of data analysis services, the AWS IoT platform itself doesn’t include any new tools for analyzing data. Existing analysis services include products like Redshift, Elastic MapReduce, Amazon Machine Learning and various others. Device data is typically collected into S3 buckets using Kinesis Firehose and then processed by these services.

            Device data can also be forwarded to Amazon Lambda microservices for real-time decision making. A JavaScript function will be executed every time a data point is received. This is suitable for the home automation scenario, where a single IoT message is sent whenever presence is detected in a room. The JavaScript function considers various factors, such as the current time of day, and decides whether to turn the lights on.

            In addition to existing solutions, Amazon has announced an upcoming product called Kinesis Analytics. It will enable real-time analytics of streaming IoT data, similar to Apache Storm. This means that data can be analyzed on-the-fly without storing it in a database. For instance, you could maintain a rolling average of values and react to it instead of individual data points.
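To illustrate the rolling-average idea in plain JavaScript (the window size here is arbitrary):

```javascript
// Returns a function that maintains a rolling average over the
// last `size` data points.
function createRollingAverage (size) {
  const window = []
  return function add (value) {
    window.push(value)
    if (window.length > size) window.shift()
    return window.reduce(function (a, b) { return a + b }, 0) / window.length
  }
}

const avg = createRollingAverage(3)
avg(10) // → 10
avg(20) // → 15
avg(30) // → 20
avg(40) // → 30 (the value 10 dropped out of the window)
```

A stream processor would react whenever this average crosses a threshold, rather than reacting to each individual data point.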

            Device Control

            The AWS IoT platform can control devices in the same two ways it collects data. The first way is to send raw MQTT messages directly to devices. Devices will react to the messages when they receive them. The problem with this approach is that devices might sometimes have network or electricity issues, which may cause the loss of some control messages.

            Thing Shadows provide a more reliable way to have devices enter a desired state. A Thing Shadow will remember the new desired state and keep retrying until the device has acknowledged it.

            In the home automation scenario, when presence is detected, the desired state of a lamp is set to {“light”:true}. When the lamp receives this desired state, it turns on the light and reports its current state back to AWS IoT as {“light”:true}. Once the reported state is the same as the desired state, the Thing Shadow of the lamp is known to be in sync.
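This desired/reported comparison is simple to express in code. A minimal sketch, assuming flat one-level state documents like the {"light":true} example above:

```javascript
// Returns true when every desired field matches the reported state.
// Assumes flat state documents such as {"light": true}.
function isShadowInSync (desired, reported) {
  return Object.keys(desired).every(function (key) {
    return reported[key] === desired[key]
  })
}

isShadowInSync({ light: true }, { light: true })   // → true
isShadowInSync({ light: true }, { light: false })  // → false
```

AWS IoT performs an equivalent comparison itself and publishes a delta document for any fields that still differ, so device code normally only needs to act on the delta.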

            User Interfaces and Data Visualization

            You may use the AWS IoT Console to manually control devices by modifying their desired state. The console will show the current state and update it on the screen as it changes. This is, of course, a very low-level way to control lighting since you need to log in as a cloud administrator and then manually edit the JSON documents.

            Then again, a better way is to build a web application that integrates to AWS IoT and offers a friendly user interface for controlling things. AWS provides rich infrastructure options for developing integrated mobile and web applications. Amazon API Gateway and Lambda are typically used to build a backend API that lets applications access IoT data. The data itself may be stored in a database like DynamoDB or Postgres. The access can be limited to authenticated users only using Amazon Cognito or a custom IAM solution.

            For data visualization purposes, Amazon has recently announced an upcoming product called Amazon QuickSight, which will integrate with other Amazon services and databases. There are also many third-party solutions available through the AWS Marketplace. If none of these options fits the use case well, a custom solution can always be developed as part of a web application.

            My Findings

            AWS IoT is a fast and easy way to get started on the Internet of Things. All the scenarios discussed in this article are based on managed cloud services. This means that you never have to maintain your own servers or worry about scaling.

            For small-scale projects the operating costs are negligible. For larger scale projects, the costs will depend on the amount and frequency of the data being transferred. There are no fixed monthly or hourly fees, which makes personal experimentation at home very convenient.
