Monitoring your home temperature – Part 3: Visualizing your data with Power BI

Finally, we have our infrastructure up and running, and we can start visualizing our data with Power BI.

No.. not like this 🙂

Just log in to your Power BI account:

Select My Workspace and click Datasets + dataflows. Click the three dots on your dataset and choose Create Report.

Let’s create line charts for temperature, humidity and air pressure, each on a separate page. I’ll show how to configure the temperature chart first.

Go to Visualizations pane and choose Line chart:

Stretch the chart to fill your page, then go under Fields and expand your table. Drag ‘EventEnqueuedUtcTime’ to Axis and ‘temperature’ to Values:

You should already see a graph of your temperature. You can rename Axis and Values to friendlier names like Time and Temperature (Celsius), change the color of your graph, etc.

Add a filter just next to the graph to show data from the past 7 days:

The end result should be something like this:


Mine has been customised for the Finnish language. It also has an average temperature line. Just for demo purposes, I placed my RuuviTag outside, so there are some changes in temperature (in case you wonder why my floor temperature is 7 degrees at night). You can add the humidity and the air pressure to the same page with the same method, or you can create separate pages for them. To make it clearer, I have a separate page for each of them:

Air pressure

I also added one page for minimum, maximum and average values:

So go ahead and customize it however you like, and leave a comment if you have good suggestions for customization. Remember to save your report via File and Save. If you want to share it, you have the option to publish it to anyone with Publish to web.

After this, you can scale up your environment by adding more RuuviTags. This is the end of the series. Thanks for reading, and let me know if you have any questions!

Get to know the whole project by reading parts 1 and 2 of the series here and here!

This blog text was originally published in Senior Cloud Architect Sami Lahti’s personal Tech Life blog.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Monitoring your home temperature – Part 2: Setting up Azure

    Now we need to set up the Azure side, and for that you need an Azure subscription. You can get a new subscription with 170€ of credits for 30 days with this link, if you don’t have one yet:

    As this is a demo environment, we won’t focus heavily on security. You need some basic understanding of how to deploy resources; I’ll guide you on the configuration side. By the way, if you want to learn the basics of Azure, there is a nice learning path at Microsoft Learn:

    Here you can see my demo environment:

    The IoT Hub is for receiving temperature messages from the Raspberry Pi Zero. The Stream Analytics job is for pushing those messages to Power BI. The Automation Account is for starting and shutting down the Stream Analytics job, so it won’t consume too many credits.

    Normally we would deploy this with ARM templates, but to make it easier to present and follow, let’s do it from the Portal. And if you like, you can follow the naming convention from the CAF:

    Start by deploying the IoT Hub. Use a globally unique name and public endpoints (you can restrict access to your home IP address if you like) and choose F1: Free Tier. With the Free Tier you can send 8,000 messages per day, so if you have five Raspberry Pis, each of them can send one message per minute. Usually enough for home use.

    After you have created the IoT Hub, you need to create an IoT Device under it. I used the same hostname as my Raspberry Pi:

    Then you need to copy your Primary Connection String from your IoT Device:

    After copying that string, you need to create an environment variable on your Raspberry Pi. You can use a script to automatically add it after every boot.

    Here is an example how you create an environment variable:

    export IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub.azure-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"
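To make the variable survive reboots, you could for example put the export into a small profile script (the file name is just an example; replace the values with your own connection string):

```
# /etc/profile.d/iothub.sh – loaded for login shells
export IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub.azure-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"
```

Note that cron does not read profile scripts, so a cron-driven setup needs the variable defined in the crontab itself or inside the script it runs.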

    The last thing to do in the IoT Hub is to add a consumer group. You can add it under Built-in Endpoints and Events. Just add a custom name under $Default. Here you can see that I added ‘ruuvitagcg’:

    Next, you want to create a Stream Analytics job. For that you need only one streaming unit, and the hosting environment should be Cloud. There is no free tier for this, and it costs some money to keep it running. Luckily, we can turn it off whenever we don’t use it. I used the Automation Account to start it a few minutes before I receive a message and turn it off a few minutes after. There is a minor cost for the Automation Account too, but without it the total cost would be much higher. I receive a message only once per hour, so Stream Analytics runs only 96 minutes every day instead of 1,440 minutes. The total monthly cost is something like 4€; normally it would be almost 70€.

    Here are my Automation Account scripts:

    # Runbook 1: start the Stream Analytics job
    $connectionName = "RuuvitagConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
    Connect-AzAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
    Start-AzStreamAnalyticsJob -ResourceGroupName "YourRG" -Name "YourStreamAnalytics" -OutputStartMode "JobStartTime"

    # Runbook 2: stop the Stream Analytics job
    $connectionName = "RuuvitagConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
    Connect-AzAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
    Stop-AzStreamAnalyticsJob -ResourceGroupName "YourRG" -Name "YourStreamAnalytics"

    Next, head to Stream Analytics Job and click Inputs.

    Click Add stream input. Above is my configuration; you just need to add your own consumer group configured earlier. For Endpoint, choose Messaging, and use service for the Shared access policy name (it is created by default with a new IoT Hub).

    Now, move on to Outputs and click Add. Choose Power BI and click Authorize; if you don’t have Power BI yet, you can sign up.

    Fill in the Dataset name and Table name. For the Authentication mode, we need to use User token if you have the free Power BI version (the v2 upgrade is not yet possible in Free).

    Then create the following query and click Test query; you should see some results:
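A minimal pass-through query looks like this (a sketch, assuming you named the input and output aliases iothubinput and powerbioutput; use your own alias names, and the selected fields must match what your Raspberry Pi sends):

```sql
SELECT
    temperature,
    humidity,
    pressure,
    EventEnqueuedUtcTime
INTO
    [powerbioutput]
FROM
    [iothubinput]
```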

    Now the only thing left to do is to visualize our data with Power BI. We will cover that part in the next post, but the infrastructure side is ready to rock. Grab a cup of coffee and pat yourself on the back 🙂

    Get to know the whole project by reading parts 1 and 3 of the series here and here!

    This blog text was originally published in Senior Cloud Architect Sami Lahti’s personal Tech Life blog. Follow the blog to stay up to date with Sami’s writings!


      Monitoring your home temperature – Part 1: Setting up RuuviTags and Raspberry Pi

      We moved to a new house three years ago, and since then we have had issues with floor temperatures. It’s hard to maintain a steady temperature in all rooms, and I wanted to build a solution to keep track of it and see the temperature trend for a whole week. That way, I know exactly what’s happening.

      I’ll guide you through setting up your own environment using the same method.

      First, you need temperature sensors, and for those I recommend RuuviTags. You can find detailed information on their website, but in short, they are very handy Bluetooth beacons with a battery that lasts multiple years. You can measure temperature, movement, humidity and air pressure. There is also a mobile app for them, but it only shows the current status, so it didn’t fit my purpose.

      So, I needed to push the data somewhere, and the obvious choice was Azure. I will write more about the Azure side of things in the next part of the blog, but in this part we will set up a single RuuviTag and a Raspberry Pi for sending data. You can add more RuuviTags later, like I did once everything was working as expected.

      First, I recommend updating the RuuviTag to the latest firmware. This page has instructions for it:

      Updating firmware to latest with nRF Connect

      I used an iOS app called nRF Connect for it, and it went quite smoothly. You can check that your RuuviTag still works after updating with the Ruuvi Station app:

      iOS or Android

      You will also need a Raspberry Pi for sending data to the cloud. I recommend the Raspberry Pi Zero W, because we only need WiFi and Bluetooth for this. I have mine plugged in in my kitchen at the moment; it just needs to be in range of the RuuviTag’s Bluetooth signal (and of course WiFi).

      Raspberry Pi Zero W with clear plastic case

      Mine has a clear acrylic case for protection, a power supply and a memory card. Data is not saved to the Raspberry Pi’s memory card, so there is no need for a bigger card than usual. I installed Raspbian Buster, because at the time there were issues with the Azure IoT Hub Python modules on the latest Raspbian image.

      Here is a page with instructions on how to install a Raspbian image to an SD card:

      After you have installed the image, boot up your Raspberry Pi and do the basic stuff: update it, change passwords, etc.

      You can find out how to configure your Raspberry Pi here:

      After you have done all that, you need to set up the Python scripts and modules. Two modules are needed: RuuviTag (ruuvitag_sensor) and Azure IoT Hub Client (azure-iot-device). You also need the MAC address of your RuuviTag; you can find it in the Ruuvi Station app under your RuuviTag’s settings.

      Then you need to create a Python script to get the temperature data. Here is my script:

      import asyncio
      import os

      from azure.iot.device.aio import IoTHubDeviceClient
      from ruuvitag_sensor.ruuvitag import RuuviTag

      # Replace 'xx:xx:xx:xx:xx:xx' with your RuuviTag's MAC
      sensor = RuuviTag('xx:xx:xx:xx:xx:xx')
      state = sensor.update()
      state = str(sensor.state)

      async def main():
          # Fetch the connection string from an environment variable
          conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")

          # Create an instance of the device client using the connection string
          device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)

          # Send a single message
          try:
              print("Sending message...")
              await device_client.send_message(state)
              print("Message successfully sent!")
          except Exception:
              print("Message sending failed!")
          finally:
              # Finally, disconnect
              await device_client.disconnect()

      if __name__ == "__main__":
          asyncio.run(main())

      The code gets the current temperature from the RuuviTag and sends it to Azure IoT Hub. You can see that it needs the IOTHUB_DEVICE_CONNECTION_STRING environment variable, which you don’t have yet, so we will set it up later.

      You can put this script into crontab and run it, for example, every 15 minutes or whatever suits your needs.
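For example, a crontab entry like this would send a reading every 15 minutes (the script path is just an example, and the connection string values are placeholders). Since cron does not read your shell profile, the connection string is defined in the crontab itself:

```
# Edit with: crontab -e
IOTHUB_DEVICE_CONNECTION_STRING="HostName=YourIoTHub.azure-devices.net;DeviceId=YourIotDevice;SharedAccessKey=YourKey"
*/15 * * * * /usr/bin/python3 /home/pi/ruuvitag-azure.py
```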

      Next time, we will set up the Azure side…

      Links: (RuuviTag firmware) (RuuviTag module) (Azure IoT module) (Raspberry Pi)

      Get to know the whole project by reading parts 2 and 3 of the series here and here.

      This blog text was originally published in Senior Cloud Architect Sami Lahti’s personal Tech Life blog. Follow the blog to stay up to date with Sami’s writings!


        Controlling lights with Ikea Trådfri, Raspberry Pi and AWS



        A few months back we purchased Ikea Trådfri smart lights for our home. However, after the initial hype they were almost forgotten, as controlling them via the ready-made tools was just too complicated. For example, the Ikea Home app works only on mobile, and it’s only possible to use it when connected to wifi. Luckily, Trådfri offers an API for controlling the lights, so it was possible to build our own customized UI.

        Connecting to Trådfri

        With some quick googling, I found a tutorial by Pimoroni that helped me get started. I already had a Raspberry Pi running as a home server, so all it took was to download and install the required packages listed in the tutorial. However, at the time of writing this article (and implementing my solution), the Pimoroni tutorial was a bit outdated. Because of that, I just couldn’t get the communication working, but after banging my head against the wall for a while I found out that Ikea changed the authentication method in 2017. I’ve contacted Pimoroni and asked them to update the article.

        After getting the communication between the Raspberry Pi and the Trådfri gateway working, I started writing the middleware server on the Raspberry Pi. As I’m a JavaScript developer, I chose to build this with NodeJS. Luckily, there is a node-tradfri-client package available that made the integration simple. Here’s a code example of how I’m connecting to Trådfri and storing the devices in application memory.

        I also added ExpressJS to handle requests. With just a few lines of code, I had API endpoints for:

        • Listing all devices in our house
        • Toggling a lightbulb or socket
        • Turning a lightbulb or socket on
        • Turning a lightbulb or socket off
        • Setting the brightness of a lightbulb

        Writing the client application

        As we wanted to control the lights from anywhere with any device, we chose to build a web app that can be used on a laptop, mobile and tablet without installing anything. After the first POC, we decided on the most common use cases, and Eija designed the UI in Sketch. The actual implementation was done using ReactJS with the help of Ant Design and react-draggable.

        The source code for the client app is available on my GitHub.

        Making the app accessible from anywhere

        In Finland we have fast and unlimited mobile data plans, and because of that we rarely have wifi enabled on our phones (nothing is more irritating than a bad wifi connection in the yard). To solve this, we chose to publish the app to the public web. As the UI is built as a single-page app, it’s basically free to host with AWS S3 and CloudFront. Since CloudFront domains are random strings, we decided that this is enough security for now. This means that, knowing the CloudFront domain, anyone can control our lights. If this becomes a problem, it’s quite simple to integrate some authentication method too.

        The app is also hosted on the Raspberry Pi on our local network, so guests can control the lights if they are connected to our wifi.

        The bridge between physical & digital world is not yet seamless

        Even with this accessible application, we quickly figured out that we still need physical control buttons for the lights. For example, when going upstairs without bringing your phone, you might end up in a dark room without the possibility to turn on the lights. Luckily, Ikea provides physical switches for the Trådfri lights, so we had to make one more Ikea trip to get an extra controller for upstairs.

        Another way to reduce the need for physical switches would be using a smart speaker with voice recognition. Unfortunately, the Apple HomePod is the only speaker that currently understands Finnish, and it’s a tad out of our budget and probably not possible to integrate into our system either. Once Amazon adds Finnish support for Alexa, we’ll definitely try that.

        …and while writing the previous chapter, I figured that since Apple supports Finnish, it’s possible to create a Siri Shortcut to control our lights. With a few more lines of code in the web app, it now supports anchor links from a Shortcut to trigger a preset lighting mode.


        It’s great that companies like Ikea provide open access to their smart lights, since at least for us the ready-made tooling was not enough. Also, with the help of the AWS serverless offering, we can host this solution securely in the cloud for free. If you have any questions about our solution, please feel free to get in touch.

        For more tech content follow Arto and Nordcloud Engineering on Medium.


          Look ma, I created a home IoT setup with AWS, Raspberry Pi, Telegram and RuuviTags



          Hobby projects are a fun way to try and learn new things. This time, I decided to build a simple IoT setup for our home, to collect and visualise information like temperature, humidity and pressure. While learning by doing was definitely one of the reasons I embarked on the project, I for example wanted to control the radiators located in the attic: not necessarily by switching the power on/off, but by getting alarms if I’m heating too much or too little, so that I can tune the power manually. Saving some money, in practice. Also, it is nice to get reminders from the humidor that the cigars are drying out 😉

          I personally learned several things while working on it, and via this blog post, hopefully you can too!


          The idea of the project is relatively simple: place a few RuuviTag sensors around the house, collect the data and push it into the AWS cloud for permanent storage and additional processing. From there, several solutions can be built around the data, visualisation and alarms being only a few of them.

          Overview of the setup

          The solution is built using AWS serverless technologies, which keeps the running expenses low while requiring almost no maintenance. The following code samples are only snippets from the complete solution, but I’ve tried to collect the relevant parts.

          Collect data with RuuviTags and Raspberry Pi

          The tag sensors broadcast their data (humidity, temperature, pressure etc.) via Bluetooth LE periodically. Because Ruuvi is an open-source-friendly product, there are already several ready-made solutions and libraries to utilise. I went with node-ruuvitag, which is a Node.js module (note: I found that the module works best with Linux and Node 8.x, but you may be successful with other combinations, too).

          The Raspberry Pi runs a small Node.js application that both listens for incoming messages from the RuuviTags and forwards them to the AWS IoT service. The app communicates with the AWS cloud using the thingShadow client, found in the AWS IoT Device SDK module. The application authenticates using X.509 certificates generated by you or AWS IoT Core.

          The script runs as a Linux service. While the tags broadcast data every second or so, the app on the Raspberry Pi forwards the data only once every 10 minutes per tag, which is more than sufficient for the purpose. This is also an easy way to keep processing and storage costs very low in AWS.

          When building an IoT or big data solution, one may initially aim for near real-time data transfers and high data resolution, even though the solution built on top of it may not really require them. Sending data in batches once an hour with a 10-minute resolution may be sufficient, and is also cheaper to execute.
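The once-per-10-minutes forwarding can be reduced to a tiny throttle. The actual device app is Node.js, so this Python sketch is only an illustration of the idea (the class and method names are my own, not from the project):

```python
import time


class PerTagThrottle:
    """Allow forwarding at most one reading per tag every `interval` seconds."""

    def __init__(self, interval=600):
        self.interval = interval
        self._last_sent = {}  # tag MAC -> timestamp of last forwarded reading

    def allow(self, tag_mac, now=None):
        """Return True if this tag's reading should be forwarded now."""
        now = time.time() if now is None else now
        last = self._last_sent.get(tag_mac)
        if last is None or now - last >= self.interval:
            self._last_sent[tag_mac] = now
            return True
        return False
```

Readings arriving every second are simply dropped by `allow()` until the 10-minute window for that tag has elapsed, keeping the AWS message volume (and cost) low.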

          When running the broadcast-listening script on the Raspberry Pi, there are a couple of things to consider:

          • All the tags may not appear at the first reading: (re)run ruuvi.findTags() every 30 minutes or so, to ensure all the tags get collected
          • The Raspberry Pi can drop off the WLAN: set up a script to automatically reconnect in case that happens

          With these in place, the setup has been working without issues so far.

          Process data in AWS using IoT Core and friends

          AWS processing overview

          Once the data hits AWS IoT Core, there can be several rules for handling the incoming data. In this case, I set up a Lambda to be triggered for each message. AWS IoT also provides a way to do the DynamoDB inserts directly from the messages, but I found using a Lambda in between a more versatile and development-friendly approach.

          AWS IoT Core act rule

          DynamoDB works well as permanent storage in this case: the data structure is simple, and the service provides on-demand scalability and billing. Just pay attention when designing the table structure and make sure it fits your use cases, as changes done afterwards may be laborious. For more information on the topic, I recommend watching a talk on Advanced Design Patterns for DynamoDB.

          The DynamoDB structure I ended up using

          Visualise data with React and Highcharts

          Once we have the data stored in a semi-structured format in the AWS cloud, it can be visualised or processed further. I set up a periodic Lambda to retrieve the data from DynamoDB and generate CSV files into a public S3 bucket for the React clients to pick up. The CSV format was preferred over, for example, JSON to decrease the file size. At some point, I may also try out the Parquet format and see if it suits the purpose even better.
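Such a periodic Lambda could be sketched roughly like this (the table name, bucket name, object key and attribute names are made-up placeholders; the post does not show the real implementation):

```python
import csv
import io


def rows_to_csv(rows, fields):
    """Serialize a list of plain dicts (DynamoDB items) into a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()


def handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # CSV helper above stays testable without AWS credentials.
    import boto3

    table = boto3.resource("dynamodb").Table("ruuvi-measurements")  # placeholder name
    items = table.scan()["Items"]
    body = rows_to_csv(items, ["deviceId", "timestamp", "temperature", "humidity", "pressure"])
    boto3.client("s3").put_object(
        Bucket="ruuvi-public-data",  # placeholder bucket
        Key="measurements.csv",
        Body=body.encode("utf-8"),
        ContentType="text/csv",
    )
```

A full `scan()` is fine at this data volume; with years of readings you would switch to queries over a time-based key instead.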

          Overview visualisations for each tag

          The React application fetches the CSV file from S3 using a custom hook and passes it to a Highcharts component.

          During my professional career, I’ve learnt that data visualisations often cause various challenges due to limitations and/or bugs in the implementations. After using several chart components, I personally prefer Highcharts over other libraries whenever possible.

          Snapshot from the tag placed outside

          Send notifications with Telegram bots

          Visualisations work well for seeing the status and how the values vary over time. However, in case something drastic happens, like the humidor humidity dropping below the preferred level, I’d like to get an immediate notification about it. This can be done, for example, using Telegram bots:

          1. Define the limits for each tag, for example in a DynamoDB table
          2. Compare the limits with the actual measurements in a custom Lambda whenever data arrives
          3. If a value exceeds its limit, trigger an SNS message (so that we can subscribe several actions to it)
          4. Listen to the SNS topic and send a Telegram message to a message group you’re participating in
          5. Profit!
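Steps 2 and 3 could be sketched as a small Lambda helper like this (the limit format, metric names and SNS usage are illustrative assumptions, not the post's actual code):

```python
def breached_limits(measurement, limits):
    """Return {metric: (value, low, high)} for every metric outside its limits.

    `limits` maps a metric name to a (low, high) tuple; metrics without
    limits or missing from the measurement are ignored.
    """
    breaches = {}
    for metric, (low, high) in limits.items():
        value = measurement.get(metric)
        if value is not None and not (low <= value <= high):
            breaches[metric] = (value, low, high)
    return breaches


def notify(measurement, limits, topic_arn):
    """Publish one SNS message summarising all breached limits, if any."""
    breaches = breached_limits(measurement, limits)
    if not breaches:
        return
    import boto3  # available in the Lambda runtime

    lines = [f"{m}: {v} outside {lo}..{hi}" for m, (v, lo, hi) in breaches.items()]
    boto3.client("sns").publish(
        TopicArn=topic_arn, Subject="RuuviTag alarm", Message="\n".join(lines)
    )
```

The Telegram-sending Lambda then simply subscribes to the same SNS topic, which is what makes it easy to attach several actions to one alarm.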

          Limits in DynamoDB



          By now, you should have some kind of understanding of how one can combine IoT sensors, AWS services and outputs like web apps and Telegram nicely together using serverless technologies. If you’ve built something similar or taken a very different approach, I’d be happy to hear about it!

          Price tag

          Building and running your own IoT solution using RuuviTags, a Raspberry Pi and the AWS cloud does not require big investments. Here are some approximate expenses for the setup:

          • 3-pack of RuuviTags: 90€ (ok, I wish these were a little bit cheaper so I’d buy more of them, since the product is nice)
          • Raspberry Pi with accessories: 50€
          • Energy used by the RPi:
          • Lambda executions: $0.3/month
          • SNS notifications: $0.01/month
          • S3 storage: $0.01/month
          • DynamoDB: $0.01/month

          And after looking at the numbers, there are several places to optimise as well. For example, some Lambdas are executed more often than really needed.

          Next steps

          I’m happy to say this hobby project has reached that certain level of readiness where it runs smoothly day after day and is valuable to me. As a next step, I’m planning to add some kind of time range selection. As the amount of data increases, it will be interesting to see how the values vary in the long term. Also, it would be a good exercise to integrate some additional AWS services to detect drastic changes or communication failures between the device and the cloud when they happen. This or that, at least now I have a good base to continue from, or to build something totally different next time 🙂

          References, credits and derivative work

          This project is by no means a snowflake and has been inspired by existing projects and work:


          For more content follow Juha and Nordcloud Engineering on Medium.

          At Nordcloud we are always looking for talented people. If you enjoyed reading this post and would like to work with public cloud projects on a daily basis, check out our open positions here.


            Building Cloud-Based IoT Solutions and Serverless Web-Apps

            Our Cloud Application Architect Afaque Hussain has been on his cloud journey for some years already. At Nordcloud he builds cloud-native IoT solutions in our Data-Driven team. Here’s his story!


            1. Where are you from and how did you end up at Nordcloud?

            I’ve been living in Finland for the past 7 years, and I’m from India. Prior to Nordcloud, I was working at Helvar, developing cloud-based, IoT-enabled lighting solutions. I’ve been excited about public cloud services ever since I got to know them, and I generally attend cloud conferences and meetups. During one such conference, I met the Nordcloud team, who introduced me to the company and invited me for an interview, and that’s how my Nordcloud journey began.

            2. What is your core competence? On top of that, please also tell us briefly about your role and projects.

            My core competence is building cloud-based web services that act as an IoT platform to which IoT devices connect and exchange data. Generally preferring serverless computing and Infrastructure as Code, I primarily use AWS and JavaScript (Node.js) in our projects.

            My current role is Cloud Application Architect, where I’m involved in our customer projects, designing and implementing end-to-end IoT solutions. In our current project, we’re building a web service with which our customer can connect, monitor and manage their large fleet of sensors and gateways. The CI/CD pipelines for our project have been built using AWS Developer Tools such as CodePipeline, CodeBuild & CodeDeploy. Our CI/CD pipelines are implemented as Infrastructure as Code, which enables us to deploy another instance of them in a short period of time. Cool!

            3. What sets you on fire / what’s your favourite thing technically with public cloud?

            The ever-increasing serverless service offerings from public cloud vendors, which enable us to rapidly build web applications & services.

            4. What do you like most about working at Nordcloud?

            Apart from the opportunity to work on interesting projects, I like my peers. They’re very talented, knowledgeable and ready to offer help when needed.

            5. What is the most useful thing you have learned at Nordcloud?

            Although I’ve learnt a lot at Nordcloud, I believe the knowledge of the toolkit and best practices for cloud-based web application development has been the most useful thing I’ve learnt.

            6. What do you do outside work?

            I like doing sports and generally play cricket, play tennis or hit the gym. During the weekends, I spend time with my family, exploring the beautiful Finnish nature, people and different cuisines.

            7. How would you describe Nordcloud’s culture in 3 words?

            Nurturing, collaborative & rewarding.

            8. Best Nordcloudian memory?

            Breakfast @ Nordcloud every Thursday. I always look forward to this day. I get to meet other Nordcloudians, exchange ideas or just catch up over a delicious breakfast!



              Counting Faces with AWS DeepLens and IoT Analytics



              It’s pretty easy to detect faces with AWS DeepLens. Amazon provides a pre-trained machine learning model for face detection, so you won’t have to deal with any low-level algorithms or training data. You just deploy the ML model and a Lambda function to your DeepLens device, and it starts automatically sending data to the cloud.

              In the cloud you can leverage AWS IoT and IoT Analytics to collect and process the data received from DeepLens. No programming is needed. All you need to do is orchestrate the services to work together and enter one SQL query that calculates daily averages of the faces seen.

              Connecting DeepLens to the cloud

              We’ll assume that you have been able to obtain a DeepLens device. They are currently only being sold in the US, so if you live in another country, you may need to get creative.

              Before you can do anything with your DeepLens, you must connect it to the Amazon cloud. You can do this by opening the DeepLens service in AWS Console and following the instructions to register your device. We won’t go through the details here since AWS already provides pretty good setup instructions.

              Deploying a DeepLens project

              To deploy a machine learning application on DeepLens, you need to create a project. Amazon provides a sample project template for face detection. When you create a DeepLens project based on this template, AWS automatically creates a Lambda function and attaches the pre-trained face detection machine learning model to the project.

              The default face detection model is based on MXNet. You can also import your own machine learning models developed with TensorFlow, Caffe and other deep learning frameworks. You’ll be able to train these models with the AWS SageMaker service or using a custom solution. For now, you can just stick with the pre-trained model to get your first application running.

              Once the project has been created, you can deploy it to your DeepLens device. DeepLens can run only one project at a time, so your device will be dedicated to running just one machine learning model and Lambda function continuously.

              After a successful deployment, you will start receiving AWS IoT MQTT messages from the device. The sample application sends messages continuously, even if no faces are detected.

              You probably want to optimize the Lambda function by adding an “if” clause so that it only sends messages when one or more faces are actually detected. Otherwise you’ll be sending empty data every second. This is fairly easy to change in the Python code, so we’ll leave it as an exercise for the reader.
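As a sketch of that exercise (the names below are illustrative, not the exact variables of Amazon's sample Lambda function):

```python
def should_publish(detections, min_faces=1):
    """Return True only when at least min_faces face detections are present."""
    return len(detections) >= min_faces


# Inside the inference loop you would then guard the MQTT publish, e.g.:
# if should_publish(parsed_inference_results):
#     client.publish(topic=iot_topic, payload=json.dumps(cloud_output))
```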

              At this point, take note of your DeepLens infer topic. You can find the topic by going to the DeepLens Console and finding the Project Output view under your Device. Use the Copy button to copy it to your clipboard.

              Setting up AWS IoT Analytics

              You can now set up AWS IoT Analytics to process your application data. Keep in mind that because DeepLens currently only works in the North Virginia region (us-east-1), you also need to create your AWS IoT Analytics resources in this region.

              First you’ll need to create a Channel. You can choose any Channel ID and keep most of the settings at their defaults.

              When you’re asked for the IoT Core topic filter, paste the topic you copied earlier from the Project Output view. Also, use the Create new IAM role button to automatically create the necessary role for this application.

              Next you’ll create a Pipeline. Select the previously created Channel and choose Actions / Create a pipeline from this channel.

              AWS Console will ask you to select some Attributes for the pipeline, but you can ignore them for now and leave the Pipeline activities empty. These activities can be used to preprocess messages before they enter the Data Store. For now, we just want messages to pass through as they are.

              At the end of the pipeline creation, you’ll be asked to create a Data Store to use as the pipeline’s output. Go ahead and create it with the default settings and choose any name for it.

              Once the Pipeline and the Data Store have been created, you will have a fully functional AWS IoT Analytics application. The Channel will start receiving incoming DeepLens messages from the IoT topic and sending them through the Pipeline to the Data Store.

              The Data Store is basically a database that you can query using SQL. We will get back to that in a moment.
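              The console steps above can also be scripted. Here's a minimal boto3 sketch of the same Channel, Data Store and pass-through Pipeline; the resource names (deeplens_channel, deeplensfaces, deeplens_pipeline) are hypothetical placeholders, and running deploy() requires AWS credentials with IoT Analytics permissions. Everything targets us-east-1 because of the DeepLens region constraint.

```python
def build_pipeline_activities(channel_name, datastore_name):
    """A pass-through pipeline: messages flow channel -> datastore
    with no preprocessing activities in between."""
    return [
        {"channel": {"name": "source",
                     "channelName": channel_name,
                     "next": "store"}},
        {"datastore": {"name": "store",
                       "datastoreName": datastore_name}},
    ]

def deploy(channel_name="deeplens_channel",
           datastore_name="deeplensfaces",
           pipeline_name="deeplens_pipeline"):
    """Create the Channel, Data Store and Pipeline (needs AWS credentials)."""
    import boto3  # imported lazily so the builder above has no dependencies
    client = boto3.client("iotanalytics", region_name="us-east-1")
    client.create_channel(channelName=channel_name)
    client.create_datastore(datastoreName=datastore_name)
    client.create_pipeline(
        pipelineName=pipeline_name,
        pipelineActivities=build_pipeline_activities(channel_name,
                                                     datastore_name),
    )
```

              Note that deploy() only creates the IoT Analytics resources; the IoT Rule that feeds the Channel from the DeepLens topic still needs to be set up, which the console wizard does for you.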

              Reviewing the auto-created AWS IoT Rule

              At this point it’s a good idea to take a look at the AWS IoT Rule that AWS IoT Analytics created automatically for your Channel.

              You will find IoT Rules in the AWS IoT Core Console under the Act tab. The rule will have one automatically created IoT Action, which forwards all messages to the IoT Analytics Channel you created.

              Querying data with AWS IoT Analytics

              You can now proceed to create a Data Set in IoT Analytics. The Data Set will execute a SQL query over the data in the Data Store you created earlier.

              Find your way to the Analyze / Data sets section in the IoT Analytics Console. Select Create and then Create SQL.

              The console will ask you to enter an ID for the Data Set. You’ll also need to select the Data Store you created earlier to use as the data source.

              The console will then ask for a SQL query. Enter the following:

              SELECT DATE_TRUNC('day', __dt) AS Day, COUNT(*) AS Faces
              FROM deeplensfaces
              GROUP BY DATE_TRUNC('day', __dt)
              ORDER BY DATE_TRUNC('day', __dt) DESC

              Note that “deeplensfaces” is the ID of the Data Store you created earlier, so make sure you use the same name consistently. The __dt column is a partition timestamp that IoT Analytics adds to every message, which makes it convenient for date-based grouping. Our screenshots may have different identifiers.

              The Data selection window can be left at None.

              Use the Frequency setting to set up a schedule for your SQL query. Select Daily so that the SQL query runs automatically every day and replaces the previous results in the Data Set.

              Finally, use Actions / Run Now to execute the query. You will see a preview of the current face count results, aggregated as daily total sums. These results will be automatically updated every day according to the schedule you defined.

              Accessing the Data Set from applications

              Congratulations! You now have IoT Analytics all set up and it will automatically refresh the face counts every day.

              To access the face counts from your own applications, you can write a Lambda function and use the AWS SDK to retrieve the current Data Set content. This example uses Node.js:

              const AWS = require('aws-sdk')
              const iotanalytics = new AWS.IoTAnalytics()
              iotanalytics.getDatasetContent({
                datasetName: 'deeplensfaces',
              }).promise().then(function (response) {
                // Download response.entries[0].dataURI
              })
              The response contains entries with a signed dataURI pointing to an S3 object that holds the actual results in CSV format. Once you download the content, you can do whatever you wish with the CSV data.
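              For example, the daily counts produced by the query above can be read with a few lines of Python. The sample row below is illustrative; the exact Day format depends on the query output.

```python
import csv
import io

def parse_face_counts(csv_text):
    """Parse the Data Set CSV into (day, face_count) tuples.
    Column names match the SQL aliases: Day and Faces."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["Day"], int(row["Faces"])) for row in reader]

# Illustrative sample of downloaded CSV content:
sample = "Day,Faces\n2018-11-20,42\n2018-11-19,17\n"
print(parse_face_counts(sample))  # [('2018-11-20', 42), ('2018-11-19', 17)]
```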


              This has been a brief look at how to use DeepLens and IoT Analytics to count the number of faces detected by the DeepLens camera.

              There’s still room for improvement. Amazon’s default face detection model detects faces in every video frame, but it doesn’t keep track of whether the same face has already been seen in previous frames.

              It gets a little more complicated to enhance the system to detect individual persons, or to keep track of faces entering and exiting frames. We’ll leave all that as an exercise for now.

              If you’d like some help in developing machine learning applications, please feel free to contact us.

              Get in Touch.

              Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

                Looking ahead: what’s next for AI in manufacturing?



                AI and manufacturing have been on an exciting journey together. It’s a combination that is fast changing the world of manufacturing: 92 percent of senior manufacturing executives believe that the ‘Smart Factory’ will empower their staff to work smarter and increase productivity.

                How does AI benefit manufacturers?

                Some of the biggest companies are already adopting AI. Why? A big reason is increased uptime and productivity through predictive maintenance. AI enables industrial technology to track its own performance and spot trends and looming problems that humans might miss. This gives the operator a better chance of planning critical downtime and avoiding surprises.

                But what’s the next big thing? Let’s look to the immediate future, to what is on the horizon and a very real possibility for manufacturers.

                Digital twinning

                According to Deloitte, ‘a digital twin is an evolving digital profile of the historical and current behaviour of a physical object or process that helps optimize business performance.’

                Digital twinning will be effective in the manufacturing industry because it could replace computer-aided design (CAD). CAD is highly effective in computer-simulated environments and has shown some success in modelling complex environments, yet its limitations lie in the interactions between the components and the full lifecycle processes.

                The power of a digital twin is in its ability to provide a real-time link between the digital and physical world of any given product or system. A digital twin is capable of providing more realistic measurements of unpredictability. The first steps in this direction have been taken by cloud-based building information modelling (BIM) within the AEC (architecture, engineering and construction) industry. It enables a manufacturer to make huge design and process changes ahead of real-life occurrences.

                Predictive maintenance

                Take a wind farm. You’re manufacturing the turbines that will stand in a wind farm for decades. With the help of a CAD design you might be able to ‘guesstimate’ the long-term wear, tear and stress that those turbines might encounter in different weather conditions. But a digital twin will use predictive machine learning to show the likely effects of varying environmental events, and what impact they will have on the machinery.

                This will then affect future designs and real-time manufacturing changes. The really futuristic aspect will be the incredible increases in accuracy as the AI is ‘trained.’

                Smart factories

                An example of a digital twin in a smart factory setting would be to create a virtual replica of what is happening on the factory floor in (almost) real-time. Using thousands or even millions of sensors to capture real-time performance data, artificial intelligence can assess (over a period of time) the performance of a process, machine or even a person. Cloud-based AI platforms, such as Microsoft Azure, have the flexibility and capacity to process this volume of data.

                This would enable the user to uncover unacceptable trends in performance. Decision-making around changes and training will be based on data, not gut feeling. This will enhance productivity and profitability.

                The uses of AI in future manufacturing technologies are varied. Contact us to discuss the possibilities and see how we can help you take the next steps towards the future.


                  What is Amazon FreeRTOS and why should you care?



                  At Nordcloud, we’ve been working with AWS IoT since Amazon released it

                  We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.


                  In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions

                  First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


                  What are microcontrollers?

                  If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

                  Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud-based and edge-based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out of the box when you build your application on top of Amazon FreeRTOS.

                  Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

                  Use cases and scenarios

                  Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux-based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

                  Edge computing scenarios might require a more powerful, Linux-based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communication protocols into your system.

                  In general, microcontroller-based applications have the benefit of being much simpler than Linux-based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than a full operating system installation.

                  How to try it out

                  If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

                  At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

                  • STM32L4 Discovery Kit
                  • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

                  ST provides the Ac6 System Workbench for STM32 IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

                  Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

                  Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.


                  We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

                  If you need any help in planning and implementing your IoT solutions, feel free to contact us.


                    Nordcloud @ Smart Factory 2018 in Jyväskylä – 20-22.11.2018




                    Make sure to visit Nordcloud’s booth (C430) at the ‘Smart Factory 2018’ event, held at the Congress and Trade Fair Centre Jyväskylän Paviljonki in Jyväskylä on 20–22 November 2018.


                    Smart Factory 2018 is an event focused on how to utilise opportunities offered by digitalisation

                    The event gathers together the themes of Industry 4.0 and the related technology, service and expertise offering. Smart Factory 2018 is targeted at all operators who are involved with changes associated with digital transformation in production activities and related new services and concepts. It strongly emphasises familiar future-building themes such as automation, machine vision, robotics, the industrial internet and cybersecurity.

                    You can register for the event here:

                    Register to Smart Factory 2018

                    See you there!

