Introducing Google Coral Edge TPU – a New Machine Learning ASIC from Google

The Google Coral Edge TPU is a new machine learning ASIC from Google. It performs fast TensorFlow Lite model inferencing with low power usage. We take a quick look at the Coral Dev Board, which includes the TPU chip and is available in online stores now.

Photo by Gravitylink


Google Coral is a general-purpose machine learning platform for edge applications. It can execute TensorFlow Lite models that have been trained in the cloud. It’s based on Mendel Linux, Google’s own flavor of Debian.

Object detection is a typical application for Google Coral. If you have a pre-trained machine learning model that detects objects in video streams, you can deploy your model to the Coral Edge TPU and use a local video camera as the input. The TPU will start detecting objects locally, without having to stream the video to the cloud.

The Coral Edge TPU chip is available in several packages. You probably want to buy the standalone Dev Board which includes the System-on-Module (SoM) and is easy to use for development. Alternatively you can buy a separate TPU accelerator device which connects to a PC through a USB, PCIe or M.2 connector. A System-on-Module is also available separately for integrating into custom hardware.

Comparing with AWS DeepLens

Google Coral is in many ways similar to AWS DeepLens. The main difference from a developer’s perspective is that DeepLens integrates with the AWS cloud. You manage your DeepLens devices and deploy your machine learning models using the AWS Console.

Google Coral, on the other hand, is a standalone edge device that doesn’t need a connection to the Google Cloud. In fact, setting up the development board requires some fairly low-level operations, like connecting to a USB serial port and installing firmware.

DeepLens devices are physically consumer-grade plastic boxes and they include fixed video cameras. DeepLens is intended to be used by developers at an office, not integrated into custom products.

Google Coral’s System-on-Module, in contrast, packs the entire system in a 40×48 mm module. That includes all the processing units, networking features, connectors, 1GB of RAM and an 8GB eMMC where the operating system is installed. If you want to build a custom hardware solution, you can build it around the Coral SoM.

The Coral Development Board

To get started with Google Coral, you should buy a Dev Board for about $150. The board is similar to Raspberry Pi devices. Once you have installed the board, it only requires a power source and a WiFi connection to operate.

Here are a couple of hints for installing the board for the first time.

  • Carefully read Google’s getting started instructions. They take you through all the details of how to use the three different USB ports on the device and how to install the firmware.
  • You can use a Mac or a Linux computer but Windows won’t work. The firmware installation is based on a bash script and it also requires some special serial port drivers. They might work in Windows Subsystem for Linux, but using a Mac or a Linux PC is much easier.
  • If the USB port doesn’t seem to work, check that you aren’t using a charge-only USB cable. With a proper cable the virtual serial port device will appear on your computer.
  • The MDT tool (Mendel Development Tool) didn’t work for us. Instead, we had to use the serial port to log in to the Linux system and set up SSH manually.
  • The default username/password of Mendel Linux is mendel/mendel. You can use those credentials to log in through the serial port, but the password doesn’t work through SSH. You’ll need to add your public key to .ssh/authorized_keys.
  • You can set up a WiFi network so you won’t need an ethernet cable. The getting started guide has instructions for this.

Once you have a working development board, you might want to take a look at Model Play. It’s an Android application that lets you deploy machine learning models from the cloud to the Coral development board.

Model Play has a separate server installation guide. The server must be installed on the Coral development board before you can connect your smartphone to it. You also need to know the local IP address of the development board on your network.

Running Machine Learning Models

Let’s assume you now have a working Coral development board. You can connect to it from your computer with SSH and from your smartphone with the Model Play application.

The getting started guide has instructions for trying out the built-in demonstration application called edgetpu_demo. This application will work without a video camera. It uses a recorded video stream to perform real-time object recognition to detect cars in the video. You can see the output in your web browser.

You can also try out some TensorFlow Lite models through the SSH connection. If you have your own models, check out the documentation on how to make them compatible with the Coral Edge TPU.
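As a rough sketch of what such an inference script can look like, the snippet below loads an Edge TPU-compiled TFLite model with the Edge TPU delegate and returns the raw output scores. This assumes the tflite_runtime package and libedgetpu are installed (they ship with Mendel Linux); the model path argument is a placeholder, and the `top_k` helper for picking the best results is our own addition.

```python
def top_k(scores, k=3):
    """Return the indices of the k highest scores, best first."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def classify(model_path, input_data):
    """Run one inference on the Edge TPU and return the raw output tensor.

    Only works on a device with the Edge TPU runtime installed; the import
    is kept inside the function so the helpers above are usable anywhere.
    """
    from tflite_runtime.interpreter import Interpreter, load_delegate
    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]
    interpreter.set_tensor(input_index, input_data)
    interpreter.invoke()
    return interpreter.get_tensor(output_index)[0]
```

On the device you would call something like `top_k(classify("model_edgetpu.tflite", frame))` and map the winning indices to labels.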

If you just want to play around with existing models, the Model Play application makes it very easy. Pick one of the provided models and tap the Free button to download it to your device. Then tap the Run button to execute it.

Connecting a Video Camera and Sensors

If you buy the Coral development board, make sure to also get the Video Camera and Sensor accessories for about $50 extra. They will let you apply your machine learning models to something more interesting than static video files.

Photo by Gravitylink

Alternatively, you can use a USB UVC-compatible camera; check the documentation for details. You can use an HDMI monitor to view the output.

Future of the Edge

Google has partnered with Gravitylink for Coral product distribution. They also make the Model Play application that offers the Coral demos mentioned in this article. Gravitylink is trying to make machine learning fun and easy with simple user interfaces and a directory of pre-trained models.

Once you start developing more serious edge computing applications, you will need to think about issues like remote management and application deployment. At this point it is still unclear whether Google will integrate Coral and Mendel Linux to the Google Cloud Platform. This would involve device authentication, operating system updates and application deployments.

If you start building on Coral right now, you’ll most likely need a custom management solution. We at Nordcloud develop cloud-based management solutions for technologies like AWS Greengrass, AWS IoT and Docker. Feel free to contact us if you need a hand.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Looking ahead: what’s next for AI in manufacturing?



    AI and manufacturing have been on an exciting journey together. It’s a combination that is fast changing the world of manufacturing: 92 percent of senior manufacturing executives believe that the ‘Smart Factory’ will empower their staff to work smarter and increase productivity.

    How does AI benefit manufacturers?

    Some of the biggest companies are already adopting AI. Why? A big reason is increased uptime and productivity through predictive maintenance. AI enables industrial technology to track its own performance and spot trends and looming problems that humans might miss. This gives the operator a better chance of planning critical downtime and avoiding surprises.
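To make the trend-spotting idea concrete, here is a minimal, purely illustrative sketch: a detector that flags sensor readings which drift too far from their recent rolling average. The window size and tolerance below are invented numbers, not taken from any real system.

```python
# Flag a sensor reading that strays too far from the rolling average of
# recent readings -- a toy stand-in for predictive-maintenance analytics.
from collections import deque

def make_drift_detector(window=5, tolerance=0.2):
    history = deque(maxlen=window)

    def check(reading):
        if history:
            mean = sum(history) / len(history)
            drifting = abs(reading - mean) > tolerance * mean
        else:
            drifting = False  # nothing to compare against yet
        history.append(reading)
        return drifting

    return check
```

Feeding it steady vibration readings returns False; a sudden jump returns True, which would be the signal to schedule an inspection during planned downtime rather than wait for a failure.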

    But what’s the next big thing? Let’s look to the immediate future, to what is on the horizon and a very real possibility for manufacturers.

    Digital twinning

    ‘A digital twin is an evolving digital profile of the historical and current behaviour of a physical object or process that helps optimize business performance,’ according to Deloitte.

    Digital twinning will be effective in the manufacturing industry because it could replace computer-aided design (CAD). CAD is highly effective in computer-simulated environments and has shown some success in modelling complex environments, yet its limitations lie in the interactions between the components and the full lifecycle processes.

    The power of a digital twin is in its ability to provide a real-time link between the digital and physical world of any given product or system. A digital twin is capable of providing more realistic measurements of unpredictability. The first steps in this direction have been taken by cloud-based building information modelling (BIM), within the AEC industry. It enables a manufacturer to make huge design and process changes ahead of real-life occurrences.
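In highly simplified form, a digital twin can be sketched as a virtual object that mirrors telemetry from its physical counterpart and extrapolates from it. The linear wear model below is entirely made up for illustration; real twins use far richer physics and learned models.

```python
# An illustrative digital-twin skeleton: mirror a physical asset's state
# from telemetry updates and extrapolate a simple (invented) wear rule.

class TurbineTwin:
    def __init__(self, wear_per_hour=0.001):
        self.hours = 0.0
        self.wear = 0.0  # 0.0 = new, 1.0 = end of life
        self.wear_per_hour = wear_per_hour

    def update(self, hours_run, load_factor):
        """Mirror a telemetry report: heavier load wears the part faster."""
        self.hours += hours_run
        self.wear += hours_run * self.wear_per_hour * load_factor

    def hours_to_maintenance(self, expected_load=1.0, threshold=0.8):
        """Extrapolate the current wear rate to the maintenance threshold."""
        remaining = max(threshold - self.wear, 0.0)
        return remaining / (self.wear_per_hour * expected_load)
```

The value comes from running such a replica continuously against live data, so design and maintenance decisions can be tested digitally ahead of real-life occurrences.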

    Predictive maintenance

    Take a wind farm. You’re manufacturing the turbines that will stand in a wind farm for decades. With the help of a CAD design you might be able to ‘guesstimate’ the long-term wear, tear and stress that those turbines might encounter in different weather conditions. But a digital twin will use predictive machine learning to show the likely effects of varying environmental events, and what impact they will have on the machinery.

    This will then affect future designs and real-time manufacturing changes. The really futuristic aspect will be the incredible increases in accuracy as the AI is ‘trained.’

    Smart factories

    An example of a digital twin in a smart factory setting would be to create a virtual replica of what is happening on the factory floor in (almost) real-time. Using thousands or even millions of sensors to capture real-time performance and data, artificial intelligence can assess (over a period of time) the performance of a process, machine or even a person. Cloud-based AI services, such as those offered by Microsoft Azure, have the flexibility and capacity to process this volume of data.

    This would enable the user to uncover unacceptable trends in performance. Decision-making around changes and training will be based on data, not gut feeling. This will enhance productivity and profitability.
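The aggregation layer behind such a virtual replica can be sketched as a per-machine statistics store fed by a stream of sensor events. Machine names and figures below are invented; a production system would use a cloud stream-processing service rather than in-memory dictionaries.

```python
# Toy aggregation for a factory-floor replica: ingest sensor events and
# keep per-machine statistics so performance trends can be assessed.
from collections import defaultdict

class FloorReplica:
    def __init__(self):
        self.stats = defaultdict(lambda: {"count": 0, "total": 0.0})

    def ingest(self, machine, throughput):
        s = self.stats[machine]
        s["count"] += 1
        s["total"] += throughput

    def average(self, machine):
        s = self.stats[machine]
        return s["total"] / s["count"] if s["count"] else 0.0
```

Comparing such averages across machines and shifts is what turns gut feeling into data-based decision-making.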

    The uses of AI in future manufacturing technologies are varied. Contact us to discuss the possibilities and see how we can help you take the next steps towards the future.

    Get in Touch.

    Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

      10 examples of AI in manufacturing to inspire your smart factory



      AI in manufacturing promises massive leaps forward in productivity, environmental friendliness and quality of life, but research shows that while 58 percent of manufacturers are actively interested, only 12 percent are implementing it.

      We’ve gathered 10 examples of AI at work in smart factories to bridge the gap between research and implementation, and to give you an idea of some of the ways you might use it in your own manufacturing.

      1. Quality checks

      Factories creating intricate products like microchips and circuit boards are making use of ‘machine vision’, which equips AI with incredibly high-resolution cameras. The technology is able to pick out minute details and defects far more reliably than the human eye. When integrated with a cloud-based data processing framework, defects are instantly flagged and a response is automatically coordinated.
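The flagging logic of such a quality check can be illustrated with a toy pixel comparison against a “golden” reference image. Real machine-vision systems use trained models rather than raw pixel diffs; this only shows how a defect ratio might trip an automated response.

```python
# Toy quality check: compare a captured grayscale image (lists of pixel
# rows) to a reference image and flag the part if too many pixels deviate.

def defect_ratio(reference, captured, pixel_tolerance=10):
    total = mismatched = 0
    for ref_row, cap_row in zip(reference, captured):
        for ref_px, cap_px in zip(ref_row, cap_row):
            total += 1
            if abs(ref_px - cap_px) > pixel_tolerance:
                mismatched += 1
    return mismatched / total

def is_defective(reference, captured, max_ratio=0.01):
    return defect_ratio(reference, captured) > max_ratio
```

In a cloud-integrated line, a True result would be what triggers the instant flag and coordinated response mentioned above.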

      2. Maintenance

      Smart factories like those operated by LG are making use of Azure Machine Learning to detect and predict defects in their machinery before issues arise. This allows for predictive maintenance that can cut down on unexpected delays, which can cost tens of thousands of pounds.

      3. Faster, more reliable design

      AI is being used by companies like Airbus to create thousands of component designs in the time it takes to enter a few numbers into a computer. Using what’s called ‘generative design’, AI giant Autodesk is able to massively reduce the time it takes for manufacturers to test new ideas.

      4. Reduced environmental impact

      Siemens outfits its gas turbines with hundreds of sensors that feed into an AI-operated data processing system, which adjusts fuel valves in order to keep emissions as low as possible.

      5. Harnessing useful data

      Hitachi has been paying close attention to the productivity and output of its factories using AI. Previously unused data is continuously gathered and processed by their AI, unlocking insights that were too time-consuming to analyse in the past.

      6. Supply chain communication

      The aforementioned data can also be used to communicate with the links in the supply chain, keeping delays to a minimum as real-time updates and requests are instantly available. Fero Labs is a frontrunner in predictive communication using machine learning.

      7. Cutting waste

      The steel industry uses Fero Labs’ technology to cut down on ‘mill scaling’, which results in 3 percent of steel being lost. The AI was able to reduce this by 15 percent, saving millions of dollars in the process.

      8. Integration

      Cloud-based machine learning – like Azure’s Cognitive Services – is allowing manufacturers to streamline communication between their many branches. Data collected on one production line can be interpreted and shared with other branches to automate material provision, maintenance and other previously manual undertakings.

      9. Improved customer service

      Nokia is leading the charge in implementing AI in customer service, creating what it calls a ‘holistic, real-time view of the customer experience’. This allows them to prioritise issues and identify key customers and pain points.

      10. Post-production support

      Finnish elevator and escalator manufacturer KONE is using its ‘24/7 Connected Services’ to monitor how its products are used and to provide this information to its clients. This allows them not only to predict defects, but to show clients how their products are being used in practice.

      AI in manufacturing is reaching a wider and wider level of adoption, and for good reason. McKinsey predicts that ‘smart factories’ will drive $37 trillion in new value by 2025, giving rise to research projects like Reboot Finland IoT Factory, which involves organisations as diverse as Nokia and GE Healthcare. The technology is here and the research is ready – how will AI revolutionise your industry?

      Check out our whitepaper: “Industry 4.0: 7 steps to implement smart manufacturing”




        Cloud Computing News #11: Quantum Computing is the New Space Race



        This week we focus on quantum computing.

        Classical computers store information in bits that are either 1 or 0, but quantum computers use qubits, which can be thought of as existing in both states, 1 and 0, at the same time, and which can also influence one another instantaneously via a process known as “entanglement”. These exotic qualities mean that the computing power of upcoming quantum computers will be exponentially greater.
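The qubit description above can be illustrated with a two-amplitude state vector: a Hadamard gate turns a definite 0 into an equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes. This is only a single-qubit sketch; entanglement needs multi-qubit states.

```python
# Minimal single-qubit state-vector illustration: a qubit is a pair of
# amplitudes, and a Hadamard gate creates an equal superposition of 0 and 1.
import math

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    return tuple(abs(amp) ** 2 for amp in state)

zero = (1.0, 0.0)            # the classical |0> state
superposed = hadamard(zero)  # equal mix of |0> and |1>
```

Measuring the superposed state yields 0 or 1 with 50 percent probability each, which is the "both states at once" intuition made precise.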

        Quantum computing is expected to, for example, boost machine learning and have a big impact on artificial intelligence – and cloud services are being looked on as the method for providing access to quantum processing.

        Now, as Nordcloud’s partners Google and Microsoft are investing massively in quantum computing, we are keenly following this development to be ready to bring this power to our customers in the future.

        BlackBerry races ahead of security curve with quantum-resistant solution

        According to TechCrunch, BlackBerry announced a new quantum-resistant code signing service that anticipates a problem that does not yet exist.

        “By adding the quantum-resistant code signing server to our cybersecurity tools, we will be able to address a major security concern for industries that rely on assets that will be in use for a long time. If your product, whether it’s a car or critical piece of infrastructure, needs to be functional 10-15 years from now, you need to be concerned about quantum computing attacks,” Charles Eagan, BlackBerry’s chief technology officer, said in a statement.

        While experts argue how long it could take to build a fully functioning quantum computer, most agree that it will take between 50 and 100 qubit computers to begin realizing that vision.

        Read more in TechCrunch

        Quantum mechanics defies causal order

        Physics World highlights an experiment by Jacqui Romero, Fabio Costa and colleagues at the University of Queensland in Australia, which has confirmed that quantum mechanics allows events to occur with no definite causal order. In classical physics – and everyday life – there is a strict causal relationship between consecutive events. If a second event (B) happens after a first event (A), for example, then B cannot affect the outcome of A. This relationship, however, breaks down in quantum mechanics.

        In their experiment, Romero, Costa and colleagues created a “quantum switch”, in which photons can take two paths. As well as making an experimental connection between relativity and quantum mechanics, the researchers point out that their quantum switch could find use in quantum technologies.

        “This is just a first proof of principle, but on a larger scale indefinite causal order can have real practical applications, like making computers more efficient or improving communication,” says Costa.

        Read more in Physics World

        Two Quantum Computing Bills Are Coming to Congress

        According to Gizmodo, quantum computing has made it to the United States Congress. China has funded a National Laboratory for Quantum Information Sciences, set to open in 2020, and has launched a satellite meant to test long-distance quantum secure information.

        “Quantum computing is the next technological frontier that will change the world, and we cannot afford to fall behind,” said Senator Kamala Harris (D-California). “We must act now to address the challenges we face in the development of this technology—our future depends on it.”

        The bill introduced by Harris in the Senate focuses on defense, calling for the creation of a consortium of researchers selected by the Chief of Naval Research and the Director of the Army Research Laboratory. Another, yet-to-be-introduced bill, seen in draft form by Gizmodo, calls for a 10-year National Quantum Initiative Program to set goals and priorities for quantum computing in the US; invest in the technology; and partner with academia and industry.

        Read more in Gizmodo


          Cloud Computing News #5: AI, IoT and cloud in manufacturing



          This week we focus on how AI, IoT and cloud computing are transforming manufacturing.

          Cloud Computing Will Drive Manufacturing Growth

          One industry publication lists 10 ways cloud computing will drive manufacturing growth during this year:

          1. Quality gains greater value company-wide when a cloud-based application is used to track, analyse and report quality status by center and product.
          2. Manufacturing cycle times are accelerated through the greater insights available with cloud-based manufacturing intelligence systems.
          3. Insights into overall equipment effectiveness (OEE) get stronger using cloud-based platforms to capture, track and analyse the health of the equipment.
          4. Automating compliance and reporting saves valuable time.
          5. Real-time tracking and traceability become easier to achieve with cloud based applications.
          6. APIs help scale manufacturing strategies faster than ever.
          7. Cloud-based systems enable higher supply chain performance. 
          8. Order cycle times and rework are reduced.
          9. Integrating teams’ functions increases new product introduction success. 
          10. Perfect order performance is tracked across multiple production centers for the first time.
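Point 3’s OEE metric is conventionally the product of three ratios, which is easy to sketch; the figures in the example are invented.

```python
# Overall equipment effectiveness (OEE) = availability x performance x quality.

def oee(run_time, planned_time, actual_output, target_output,
        good_units, total_units):
    availability = run_time / planned_time
    performance = actual_output / target_output
    quality = good_units / total_units
    return availability * performance * quality
```

A cloud platform’s value here is less the arithmetic than capturing the six input figures reliably across every production centre.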


          Machine learning in manufacturing

          According to CIO Review, the challenge with machine learning in manufacturing is not always just the machines. Machine learning in IoT has focused on optimizing at the machine level, but to unlock the true potential of machine learning, it is time for manufacturers to start looking at network-wide efficiency.

          By opening up the entire network’s worth of data to these network-based algorithms we can unlock an endless amount of previously unattainable opportunities:

          1. With the move to network-based machine learning algorithms, engineers will have the ability to determine the optimal workflow based on the next stage of the manufacturing process.
          2. Machine-learning algorithms can reduce labor costs and improve the work-life balance of plant employees. 
          3. Manufacturers will be able to more effectively move to a multi-modal facility production model where the capacity of each plant is optimized to increase the efficiency of the entire network.
          4. By sharing data across the network, manufacturing plants can optimize capacity.
          5. In the future, the algorithms will be able to schedule production to optimize cost and delivery and to meet demand.

          Read more in CIO Review

          Introducing IoT into manufacturing

          According to Global Manufacturing, IoT offers manufacturers many potential benefits in product innovation, but it also brings challenges, particularly around the increased dependency on software:

          1. Compliance: Manufacturers developing IoT-based products must demonstrate compliance due to critical safety and security demands. In order to do this, development organisations must be able to trace and even have an audit trail for all the changes involved in a product lifecycle.
          2. Diversity and number of contributors, who may be spread across different locations or time zones and working with different platforms or systems. Similarly, over-the-air updates exacerbate the need for control and for managing complex dependency issues at scale and over long periods of time.
          3. Need to balance speed to market, innovation and flexibility, against the need for reliability, software quality and compliance, all in an environment that is more complex and involving many more components.

          Because of these challenges, an increasing number of manufacturing companies are revising how they approach development projects. More of them are moving away from traditional processes like Waterfall towards Agile, Continuous Delivery and DevOps, or hybrids of more than one. These new ways of working also help empower internal teams, while simultaneously providing the rigour and control that management requires.

          In addition to new methodology, this change requires the right supporting tools. Many existing tools may no longer be fit for purpose, though equally many have evolved to meet the specific requirements of IoT. Putting the right foundation of tools, methodologies and corporate thinking in place is essential to success.

          Read more in Global Manufacturing

          Data driven solutions and devops at Nordcloud

          Our data driven solutions and DevOps will make an impact on your business with better control and valuable business insight with IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level? 


            Nordcloud nominated ‘Preferred AI Training Partner’ by Microsoft



            Microsoft has nominated Nordcloud as a preferred AI Training Partner on the topics “Azure Machine Learning”, “Batch AI” and “Team Data Science Process”.

            The topics are covered, for example, in the two-day “Professional AI developer bootcamp”, where participants learn how to use the Azure Machine Learning Workbench to develop, test and deploy Machine Learning solutions to Azure Container Services using an agile and team-oriented framework.

            Why Microsoft for AI?

            Microsoft’s Azure cloud computing service offers a fast-growing range of Platform Services for AI, machine learning and IoT development.

            Microsoft’s AI platform consists of 3 core areas:

            • AI Services: Developers can rapidly consume high-level “finished” services that accelerate the development of AI solutions. Compose intelligent applications, customised to your organisation’s availability, security, and compliance requirements.
            • AI Infrastructure: Services and tools backed by a best-of-breed infrastructure with enterprise grade security, availability, compliance, and manageability. Harness the power of infinite scale infrastructure and integrated AI services.
            • AI Tools: Leverage a set of comprehensive tools and frameworks to build, deploy, and operationalise AI products and services at scale. Use the extensive set of supported tools and IDEs of your choice and harness the intelligence with massive datasets through deep learning frameworks of your choice.

            Azure AI

            Download our guide: Steps needed to build an AI enabled solution in Azure here 

            We’d love to help you to boost your business with the adoption of AI technologies

            You may find yourself in a position where you need a fully customised option but lack access to some of the specific expertise required. In that case we are available to advise and where appropriate, help directly.

            Nordcloud offers a range of services from managed service provision through to full cloud-software project management and execution. Just as you’re sure to find a suitable development option within Azure, we can offer you whatever support you need for your AI/ML project.

            Contact us for AI training and consultation!

            Check also our data driven solutions that will make an impact on your business here.


              Machine learning? Yes, we can!



              Machine learning is changing the world as we know it

              Algorithms that learn from data are responsible for breakthroughs such as self-driving cars, medical diagnoses of unprecedented accuracy, and, on a lighter note, the ability to identify cats via YouTube videos. They power your Netflix recommendations and generate Spotify’s weekly playlists. They can translate text, solve analogies, route your mail, beat Go champions, analyse sentiments in written text (even, to some extent, detecting sarcasm), and make sure your palm isn’t accidentally recognised as input when you’re using an Apple Pencil. They can even generate pieces of art.

              With the rise of cloud computing and from that the viability of Deep Learning techniques, the relatively old field of machine learning is undergoing a beautiful renaissance, backed by the biggest players in IT. Machines may not yet be close to as intelligent as us human beings, but we’re witnessing huge strides almost daily. It’s a truly exciting time not just for Computer Science, but society as a whole.

              At Nordcloud, we believe that machine learning will only become more and more important, regardless of domain or sector. We envisage a world in which machine learning completely changes the way we use and interact with computers.

              To this end, we are proud to announce that machine learning is now part of our official offering

              Our aim is to take a pragmatic approach and use best-of-breed algorithms, libraries and tools, fine-tuning them to make truly remarkable, smart applications and digital services. And in cases where existing approaches don’t cut it, we’ll implement our own bespoke solutions based on the latest academic research. And, as always, we hope to do all this with the unique blend of passion, pride and fun that Nordcloud is known for.

              In the coming months, we’ll be writing a series of blog posts about machine learning—what it’s all about, what it can be used for, and why it’s worth taking note of. In the meantime, if you want to learn more, fancy seeing some demos, or just having a chat over a cup of coffee, our doors are always open!

              You can find our data-driven solutions for business intelligence here.


                Three main uses of machine learning



                The beauty of applications that employ machine learning is that they can be extremely simple on the surface, hiding away much complexity from users. However, designers can’t afford to ignore the under-the-hood part of machine learning altogether. In this article, I demonstrate the three main functions machine learning algorithms perform underneath, along with six unique benefits you can derive from using them.

                So what have machines learned so far?

                In 2016, the most celebrated milestone of machine learning was AlphaGo’s victory over the world champion of Go, Lee Sedol. Considering that Go is an extremely complicated game to master, this was a remarkable achievement. Beyond exotic games such as Go, Google Image Search is maybe the best-known application of machine learning. Search feels so natural and mundane when it effectively hides away all of the complexity it embeds. With over 30 billion search queries every day, Google Image Search constantly gets more opportunities to learn.


                There are already more individual machine learning applications than is reasonable to list here. But a major simplification is not sufficient either, I feel. One way to appreciate the variety is to look at successful ML applications from Eric Siegel’s book Predictive Analytics from 2013. The listed applications fall under the following domains:

                • marketing, advertising, and the web;
                • financial risk and insurance;
                • health care;
                • crime fighting and fraud detection;
                • fault detection for safety and efficiency;
                • government, politics, nonprofit and education;
                • human-language understanding, thought and psychology;
                • staff and employees, human resources.


                Siegel’s cross-industry collection of examples is a powerful illustration of the omnipresence of predictive applications, even though not all of his 147 examples utilise machine learning as we know it. However, for a designer, knowing whether your problem domain is among those listed will give an idea of whether machine learning has already proven to be useful or whether you are facing a great unknown.


                Detection, prediction, and creation

As I see it, the power of learning algorithms comes down to two major applications: detection and prediction. Detection is about interpreting the present; prediction is about anticipating the future. Interestingly, machines can also perform generative, or “creative,” tasks. However, these are still a marginal application.

                When you combine detection and prediction, you can achieve impressive overall results. For instance, combine the detection of traffic signs, vehicles and pedestrians with the prediction of vehicular and pedestrian movements and of the times to vehicle line crossings, and you have the makings of an autonomous vehicle!

This is my preferred way of thinking about machine learning applications. In practice, detection and prediction can sometimes look much alike, and the distinction does not cut to the technical heart of machine learning, but I believe they offer an appropriate level of abstraction for talking about machine learning applications. Let’s clarify these functions through examples.

                The varieties of detection

                There are at least four major types of applications of detection. Each deals with a different core learning problem. They are:

                • text and speech interpretation,
                • image and sound interpretation,
                • human behaviour and identity detection,
                • abuse and fraud detection.


                Text & speech interpretation

Text and speech are our most natural ways of interacting and communicating, yet for a long time they were out of reach for computers. Previous generations of voice dialling and interactive voice response systems were not very impressive. Only in this decade have we seen a new generation of applications that take spoken commands and even hold a dialogue with us! This can go so smoothly that we can’t always tell computers and humans apart in text-based chats, which some take as a sign that computers have passed the Turing test.

Dealing with speech, new systems such as the personal assistant Siri or Amazon’s Echo device are capable of interpreting a wide range of communications and responding intelligently. The technical term for this capability is natural language processing (NLP). It means that, building on successful text and speech detection (i.e. recognition), computers can also interpret the meaning of words, spoken and written, and take action.

Text interpretation enables equally powerful applications. The detection of emotion or sentiment from text means that large masses of it can be automatically analyzed to reveal what people on social media think about brands, products or presidential candidates. For instance, Google Translate recently saw significant quality improvements by switching to an ML-based approach to translation.

                Amazon Echo Dot is surfacing as one of the best-selling speech-recognition-driven appliances of early 2017. Picture: Amazon

                Image & sound interpretation

Computer vision gives metaphorical eyes to a machine. The most radical example of this is a computer reconstruction of human perception from brain scans! However, that is hardly as useful as an application that automates the tagging of photos or videos to help you explore Google Photos or Facebook. The latter service recognizes faces to an almost scary level of accuracy.

                Image interpretation finds many powerful business applications in industrial quality control, recording vehicle registration plates, analysing roadside photos for correct traffic signs and monitoring empty parking spaces. The recent applications of computer vision to skin cancer diagnosis have actually proven more proficient than human doctors, leading to the discovery of new diagnostic criteria.


                Speech was already mentioned, but other audio signals are also well detected by computers. Shazam and SoundHound have for years provided reliable detection of songs either from a recording fragment or a sung melody. The Fraunhofer Institute developed the Silometer, an app to detect varieties of coughs as a precursor to medical diagnosis. I would be very surprised if we don’t see many new applications for human and non-human sounds in the near future.


                Human behaviour and identity detection

                Given that computers are seeing and hearing what we do, it is not surprising that they have become capable of analysing and detecting human behaviour and identity as well — for instance, with Microsoft Kinect recognising our body motion. Machines can identify movements in a football game to automatically generate game statistics. Apple’s iPad Pro recognizes whether the user is using their finger or the pencil for control, to prevent unwanted gestures. A huge number of services detect what kind of items typically go together in a shopping cart; this enables Amazon to suggest that you might also be interested in similar products.

                In the world of transportation, it would be a major safety improvement if we could detect when a driver is about to fall asleep behind the steering wheel, to prevent traffic accidents. Identity detection is another valuable function enabled by several signals. A Japanese research institute has developed a car seat that recognises who’s sitting in it. Google’s reCAPTCHA is a unique system that tells apart humans from spambots. Perhaps the most notorious example of guessing people’s health was Target’s successful detection of expectant mothers. This was followed by a marketing campaign that awkwardly disclosed the pregnancy of Target customers, resulting in much bad publicity.


Abuse and fraud detection

Machine learning is also used to detect and prevent fraudulent, abusive or dangerous content and schemes. These are not always major attacks; sometimes it’s just about blocking bad checks or keeping petty criminals out of the NFL’s Super Bowl arena. The best successes are found in anti-spam; for instance, Google has for years done an excellent job of filtering spam from your Gmail inbox.

I will conclude with a good-willed detection example from beyond the human sphere. Whales can be reliably recognized from underwater recordings of their sounds — once more, thanks to machine learning. This can help fishing vessels and machinery avoid contact with whales for their protection.

                Species of prediction

                Several generations of TV watchers have been raised to watch weather forecasts for fun, ever since regular broadcasts began after the Second World War. The realm of prediction today is wide and varied. Some applications may involve non-machine learning parts that help in performing predictions.

                Here I will focus on the prediction of human activities, but note that the prediction of different non-human activities is currently gaining huge interest. Predictive maintenance of machines and devices is one such application, and more are actively envisioned as the Internet of Things generates more data to learn from.

                Predicting different forms of human behaviour falls roughly into the following core learning challenges and applications:

                • recommendations,
                • individual behaviour and condition,
                • collective behaviour prediction.

Different types of recommendations are about predicting user preferences. When Netflix recommends a movie or Spotify generates a playlist of your future favourite music, they are trying to predict whether you will like it, watch it or listen through to the end of the piece. Netflix is on the lookout for your rating of the movie afterwards, whereas Spotify or Pandora might measure whether you return to enjoy the same song over and over again without skipping. This way, our behaviours and preferences become connected even without our needing to express them explicitly. This is something machines can learn about and exploit.

                In design, predicting which content or which interaction models appeal to users could give rise to the personalisation of interfaces. This is mostly based on predicting which content a user would be most interested in. For a few years now, Amazon has been heavily personalising the front page, predicting what stuff and links should be present in anticipation of your shopping desires and habits.

Recommendations are a special case of predicting individual behaviour. The scope of predictions does not end with trivial matters, such as whether you like Game of Thrones or Lady Gaga. Financial institutions attempt to predict who will default on their loan or try to refinance it. Big human-resource departments might predict employee performance and attrition. Hospitals might predict the discharge of a patient or the prognosis of cancer. Rather more serious human conditions, such as divorce, premature birth and even death within a certain timeframe, have all been predicted with some success. Of course, predicting fun things can get serious when money is involved, as when big entertainment companies try to guess which songs and movies will top the charts to direct their marketing and production efforts.

                The important part about predictions is that they lead to an individual assessment that is actionable. The more reliable the prediction and the weightier the consequences, the more profitable and useful the predictions become.

Predicting collective behaviour is a generalisation over individuals, but with different implications and scope. In these cases, an intervention is only successful if it affects most of the crowd. The looming result of a presidential election, cellular network use or seasonal shopping expenditure can all be subject to prediction. When predicting financial risk or a company’s key performance indicators, the gains from saving or making money are noticeable. J.P. Morgan Chase was one of the first banks to increase efficiency by predicting mortgage defaulters (those who never pay back) and refinancers (those who pay back too early). On the other hand, the recent US presidential election is a good reminder that none of this is yet perfect.

In close resemblance to antivirus tools that guard against present dangers, future trouble can also be predicted. Predictive policing is about forecasting where street conflicts might happen or where squatters are taking over premises, which helps administrators to distribute resources to the right places. A similar process goes on in energy companies as they try to estimate the capacity needed to last the night.


                What can machine intelligence do for you?

                After successfully creating a machine learning application to fulfil any of the three uses described above, what can you expect to come out of it? How would your product or service benefit from it? Here are six possible benefits:

                1. augment,
                2. automate,
                3. enable,
                4. reduce costs,
                5. improve safety,
                6. create.

In rare cases, machine learning might enable a computer to perform tasks that humans simply can’t, because of speed requirements or the scale of data. But most of the time, ML helps to automate repetitive, time-consuming tasks that defy the limits of human labour cost or attention span. For instance, sorting through recycling waste 24/7 is more reliably and affordably done by a computer.

                In some areas, machine learning may offer a new type of expert system that augments and assists humans. This could be the case in design, where a computer might make a proposal for a new layout or colour palette aligned with the designer’s efforts. Google Slides already offers this type of functionality through the suggested layouts feature. Augmenting human drivers would improve traffic safety if a vehicle could, for example, start braking before the human operator could possibly react, saving the car from a rear-end collision.


                Get in Touch.

                Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

                  How Amazon’s IoT platform controls things without servers



                  Amazon’s IoT platform is a framework for connecting smart devices to the cloud. It aims to make the basic processes of collecting data and controlling devices as simple as possible. AWS IoT is a fully managed service, which means the customer doesn’t have to worry about configuring servers or updating operating systems. The platform simply exposes a set of APIs and automatically scales from a single device to millions of devices.

                  I recently wrote an article (in Finnish) in my personal blog about using AWS IoT for home automation. AWS IoT is not exactly designed for this purpose, but if you are tech savvy enough, it can be used for it. The pricing is currently set at $5 per million messages, which lasts a long time when you’re only dealing with a couple of devices sending occasional messages.

The home automation experiment provides a convenient context for discussing the basic concepts of AWS IoT. In the next few sections, I will refer to the elements of a simple home system that detects human presence in rooms and turns on the lights when presence is detected at certain times of the day. All the devices are connected to the Amazon cloud over the public Internet.

                  Device Registration

The first step in most IoT projects is to register the devices (also called “things”) into a centrally managed database. AWS IoT provides this database for free and lets you add any number of devices to it. Registration matters because each device also gets its own SSL/TLS certificate and private key, which are used for authentication and encryption. Devices can only connect to AWS IoT using their certificates and private keys.
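As a rough sketch of what registration might look like in code (assuming Python and boto3; the thing name and attribute values are made up for the home automation example, and the client is injectable so the flow can be exercised without live AWS credentials):

```python
def register_device(thing_name, attributes=None, iot=None):
    """Create a thing in the AWS IoT registry and attach a fresh certificate."""
    if iot is None:  # default to a real boto3 client when none is injected
        import boto3
        iot = boto3.client("iot")
    # Register the thing, with optional asset-management attributes.
    iot.create_thing(
        thingName=thing_name,
        attributePayload={"attributes": attributes or {}},
    )
    # Each device gets its own SSL/TLS certificate and private key.
    cert = iot.create_keys_and_certificate(setAsActive=True)
    # Bind the certificate to the thing so the device can authenticate as it.
    iot.attach_thing_principal(
        thingName=thing_name,
        principal=cert["certificateArn"],
    )
    return cert  # contains certificatePem and the key pair for the device

# e.g. register_device("presence-detector-1", {"customerId": "42"})
```

The certificate material returned here is what gets installed on the physical device.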

                  The AWS IoT device registry also works as a simple asset management database. It lets you attach attributes to devices and maintain information such as customer IDs. The device registry can later be queried based on these attribute values. For example, you can find all devices belonging to a specific customer ID. The attributes are optional, so they can just be ignored if they’re not needed.

In the home automation experiment, two devices were added to the registry: a wireless human presence detector and a Philips Hue light control bridge.

                  Data Collection

                  Almost any IoT scenario involves collecting device data. Amazon provides the AWS IoT Device SDK for connecting devices to the IoT platform. The SDK is typically used to develop a small application that runs on the device (or on a gateway connected to the device) and transmits data to the cloud.

There are two ways to deliver data to the AWS IoT platform. The first one is to send raw MQTT messages, which are usually small JSON objects. You can then set up AWS IoT rules to forward these messages to other Amazon cloud services for further processing. In the home automation scenario, a rule specifies that all messages received under the topic “presence-detected” should be forwarded to an AWS Lambda function, which then decides what to do with the information.
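To make this concrete, here is a sketch (in Python; the field names and the `mqtt_client` variable are illustrative assumptions, not prescribed by AWS IoT) of the kind of message a presence detector might publish, together with the rule query that would match it:

```python
import json
import time

def make_presence_message(room):
    """Build the small JSON payload a presence detector publishes over MQTT."""
    return json.dumps({"room": room, "detectedAt": int(time.time())})

# A device app would publish this with the AWS IoT Device SDK, e.g.:
#   mqtt_client.publish("presence-detected", make_presence_message("hallway"), 1)
#
# and an AWS IoT rule would forward every matching message onwards:
#   SELECT * FROM 'presence-detected'
```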

The other way is to use Thing Shadows, which are built into the AWS IoT platform. Every registered device has a “shadow” which contains its latest reported state. The state is stored as a JSON document, which can contain up to 8 kilobytes of fields and values. This makes it easy and cost-effective to store the current state of any device in the cloud, without requiring an external database. For instance, a device equipped with a thermometer might regularly report its current state as a JSON object that looks like this: {“temperature”:22}.
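A small helper shows the shape of a shadow update and the size limit (a minimal sketch; the keyword-argument interface is my own convenience, not part of the AWS SDK):

```python
import json

SHADOW_LIMIT_BYTES = 8 * 1024  # a shadow document holds at most 8 KB

def build_reported_state(**readings):
    """Wrap sensor readings in the shadow document format AWS IoT expects."""
    doc = json.dumps({"state": {"reported": readings}})
    if len(doc.encode("utf-8")) > SHADOW_LIMIT_BYTES:
        raise ValueError("shadow document exceeds the 8 KB limit")
    return doc

# The thermometer example from the text:
#   build_reported_state(temperature=22)
# produces the update document {"state": {"reported": {"temperature": 22}}},
# which the device would send to its shadow update topic.
```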

It’s important to understand, though, that Thing Shadows cannot be used as a general-purpose database. You can only look up a single Thing Shadow at a time, and it will only contain the current state. You will need a separate database if you want to analyze historical time series of data. However, keep in mind that Amazon offers a wide range of databases that you can easily connect to AWS IoT by forwarding Thing Shadow updates to services like DynamoDB or Kinesis. This seamless integration between Amazon cloud services is one of the key advantages of AWS IoT.

                  Data Analysis and Decision Making

                  Since Amazon already offers a wide range of data analysis services, the AWS IoT platform itself doesn’t include any new tools for analyzing data. Existing analysis services include products like Redshift, Elastic MapReduce, Amazon Machine Learning and various others. Device data is typically collected into S3 buckets using Kinesis Firehose and then processed by these services.

Device data can also be forwarded to AWS Lambda functions for real-time decision making. A JavaScript function is executed every time a data point is received. This suits the home automation scenario, where a single IoT message is sent whenever presence is detected in a room. The JavaScript function considers various factors, such as the current time of day, and decides whether to turn the lights on.
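The decision logic itself is simple enough to sketch. This hypothetical version is written in Python rather than JavaScript, and the time thresholds are invented for illustration:

```python
from datetime import datetime, timezone

LIGHTS_ON_FROM = 17  # assumed evening threshold (5 pm)
LIGHTS_ON_UNTIL = 8  # assumed morning threshold (8 am)

def should_turn_lights_on(hour):
    """Only react to presence during the dark hours of the day."""
    return hour >= LIGHTS_ON_FROM or hour < LIGHTS_ON_UNTIL

def lambda_handler(event, context):
    """Invoked once for every message on the 'presence-detected' topic."""
    hour = datetime.now(timezone.utc).hour
    if should_turn_lights_on(hour):
        # A real function would set the lamp's desired shadow state here.
        return {"action": "lights-on", "room": event.get("room")}
    return {"action": "none"}
```

Keeping the time-window check in its own function makes the rule easy to test without invoking the handler.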

                  In addition to existing solutions, Amazon has announced an upcoming product called Kinesis Analytics. It will enable real-time analytics of streaming IoT data, similar to Apache Storm. This means that data can be analyzed on-the-fly without storing it in a database. For instance, you could maintain a rolling average of values and react to it instead of individual data points.

                  Device Control

The AWS IoT platform can control devices in the same two ways it collects data. The first is to send raw MQTT messages directly to devices, which react to them as they arrive. The problem with this approach is that devices sometimes have network or power issues, which can cause control messages to be lost.

                  Thing Shadows provide a more reliable way to have devices enter a desired state. A Thing Shadow will remember the new desired state and keep retrying until the device has acknowledged it.

                  In the home automation scenario, when presence is detected, the desired state of a lamp is set to {“light”:true}. When the lamp receives this desired state, it turns on the light and reports its current state back to AWS IoT as {“light”:true}. Once the reported state is the same as the desired state, the Thing Shadow of the lamp is known to be in sync.
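This desired/reported reconciliation can be mimicked with a small helper (a simplified model of the platform’s behaviour, not its actual implementation):

```python
def shadow_delta(desired, reported):
    """Return the part of the desired state the device hasn't reached yet.

    AWS IoT computes a similar 'delta' document and keeps notifying the
    device until its reported state catches up with the desired state.
    """
    return {key: value for key, value in desired.items()
            if reported.get(key) != value}

# The lamp example from the text:
#   shadow_delta({"light": True}, {})               -> {"light": True}
#   shadow_delta({"light": True}, {"light": True})  -> {}  (in sync)
```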

                  User Interfaces and Data Visualization

                  You may use the AWS IoT Console to manually control devices by modifying their desired state. The console will show the current state and update it on the screen as it changes. This is, of course, a very low-level way to control lighting since you need to log in as a cloud administrator and then manually edit the JSON documents.

                  Then again, a better way is to build a web application that integrates to AWS IoT and offers a friendly user interface for controlling things. AWS provides rich infrastructure options for developing integrated mobile and web applications. Amazon API Gateway and Lambda are typically used to build a backend API that lets applications access IoT data. The data itself may be stored in a database like DynamoDB or Postgres. The access can be limited to authenticated users only using Amazon Cognito or a custom IAM solution.

For data visualization purposes, Amazon has recently announced an upcoming product called Amazon QuickSight, which will integrate with other Amazon services and databases. There are also many third-party solutions available through the AWS Marketplace. If none of these options fits the use case well, a custom solution can always be developed as part of a web application.

                  My Findings

                  AWS IoT is a fast and easy way to get started on the Internet of Things. All the scenarios discussed in this article are based on managed cloud services. This means that you never have to maintain your own servers or worry about scaling.

                  For small-scale projects the operating costs are negligible. For larger scale projects, the costs will depend on the amount and frequency of the data being transferred. There are no fixed monthly or hourly fees, which makes personal experimentation at home very convenient.


                    Four UI Design Guidelines for Creating Machine Learning Applications



                    Previously, I’ve introduced three underlying general capacities of machine learning that are exploited in applications. However, they are not enough for designers to actually start building applications. This is why this particular post introduces four general design guidelines that can help on the way.

                    How can we and will we communicate machine intelligence to users, and what kinds of new interfaces will machine learning call for?

Machine learning under the hood entails both opportunities to do things in a new way and requirements for new designs. To me, this means that several design patterns, or rather abstract design features, will rise in importance as services get smarter. They include:

                    1. suggested features,
                    2. personalization,
                    3. shortcuts vs. granular controls,
                    4. graceful failure.

                    Suggested features

                    Text and speech prediction has opened up new opportunities for interaction with smart devices. Conversational interfaces are the most prominent example of this development, but definitely not the only one. As we try to hide the interface and underlying complexity from users, we are balancing between what we hide and what we reveal. Suggested features help users to discover what the invisible UI is capable of.

Graphical user interfaces (GUIs) have made computing accessible to the large majority of people who enjoy normal vision. GUIs provided a huge usability improvement in terms of feature discovery. Icons and menus were the first useful metaphors for the direct manipulation of digital objects using a mouse and keyboard. With multi-touch screens, we have gained the new powers of pinching, dragging and swiping to interact. Visual clues aren’t going anywhere, but they are not going to be enough as interaction modalities expand.

                    How does a user find out what your service can do?

Haptic interaction in the first consumer generation of wearables, and above all in conversational interfaces, presents a new challenge for feature discovery. Non-visual cues must be used to facilitate the interaction, particularly at the very onset of the interactive relationship. Feature suggestions, in which the machine exposes its features and informs the user of what it is capable, are one solution to this riddle.

                    In the case of a chatbot employed for car rentals, this could be, “Please ask me about available vehicles, upgrades, and your past reservations.”

                    Specific application areas come with specific, detailed patterns. For instance, Patrick Hebron’s recent ebook from O’Reilly contains a great discussion of the solutions for conversational interfaces.


Personalization

Once a computer gets to know you and to predict your desires and preferences, it can start to serve you in new, more effective ways. This is personalization, the automated optimization of a service. Responsive website layouts are a crude way of doing this.

The use of machine learning features behind interfaces could lead to highly personalized user experiences. Akin to giving everyone a unique desktop and home screen, services and apps will start to adapt to people’s preferences as well. This new degree of personalization presents opportunities, and it also forces designers to rethink how to create truly adaptive interfaces that are largely controlled by the logic of machine learning. If you succeed in this, you will reward users with a superior experience and impart a feeling of being understood.

Amazon’s front page has been personalised for a long time. The selection offered to me looks somewhat relevant, if not attractive.

Currently, personalisation is foremost applied to curating content. For instance, Amazon carefully considers which products would appeal to potential buyers on its front page. But it will not end there. Personalisation will likely lead to much bigger changes across UIs, for instance, even in the presentation of the types of interactive elements a user likes to use.

                    Shortcuts versus granularity

                    Photoshop is an excellent example of a tool with a steep learning curve and a great deal of granularity in controlling what can be done. Most of the time, you work on small operations, each of which has a very specific influence. The creative combination of many small things allows for interesting patterns to emerge on a larger scale. Holistic, black-box operations such as transformative filters and automatic corrections are not really the reason why professionals use Photoshop.

What will happen when machines learn to predict what we do repeatedly? For instance, I frequently perform certain actions in Photoshop before uploading my photos to a blog. While I could automate this myself, creating yet another user-defined feature among the thousands already in the product, Photoshop might instead learn to predict my intentions and offer a more prominent shortcut, a highway of sorts, to fast-forward me to my intended destination. As Adobe is currently putting effort into bringing AI into Creative Cloud, we’ll likely see something even more clever than this very soon. It is up to you to let the machine figure out the appropriate shortcuts in your application.

                    Mockup of a possible implementation of “predictive history” in Photoshop CC. The application suggests a possible future state for the user based on the user’s history and preceding actions and on the current image.


A funny illustration of a similar train of thought comes from Christopher Hesse’s machine-learning-based image-to-image translation, which provides interesting content-specific filling of doodles. Similar to Photoshop’s content-aware fill, it creates hilarious visualisations of building facades, cats, shoes and bags based on minimal user input.

                    The edges2cats algorithm employs machine learning to finish your cat doodle as a photorealistic cat monster.

                    Graceful failure

                    I call the final pattern graceful failure. It means saying “sorry, I can’t do what you want because…” in an understandable way.

This is by no means unique to machine learning applications. It is innately human, but something that computers have been notoriously bad at ever since syntax errors were repeatedly echoed by Commodore computers in the 1980s. With machine learning, though, it’s slightly different. Because machine learning takes a fuzzy, probabilistic approach to computing, there are new ways in which the computer can produce unexpected results; things can go very wrong, and that has to be designed for. Nobody seriously blames the car itself for the death that occurred in the 2016 Tesla Autopilot accident.

The other part is that building applications that rely on modern machine learning is still in its infancy. Classic software development has been around for so long that we’ve learned to deal with its insufficiencies better. Peter Norvig, the famous AI researcher and Google’s research director, puts it like this:

                    The problem here is the methodology for scaling this up to a whole industry is still in progress.… We don’t have the decades of experience that we have in developing and verifying regular software.

                    The nature of learning is such that computers learn from what is given to them. If the algorithm has to deal with something else, then the results will not be to your liking. For example, if you’ve trained a system to detect animal species from pet photos and then start using it to classify plants, there will be trouble. This is more or less why Microsoft’s Twitterbot Tay had to be silenced after it picked up the wrong examples from malicious users when exposed to real-world conditions.

The uncertainty in detection and prediction should be taken into consideration. How this is done depends on the application. Consider Google Search: no one is offended or truly hurt, merely amused or frustrated, by bad search results. Of course, bad results will eventually be bad for business. However, if your bank started using a chatbot that suddenly could not figure out your checking account’s balance, you would be rightfully worried, and you should be offered a quick way to resolve your trouble.

                    To deal with failure, interfaces would do well to help both parties adjust. Users can tolerate one or two “I didn’t get that, please say that again” prompts (but no more) if that’s what it takes to advance the dialogue. For services that include machine learning, extensive testing is best. Next comes informing users about the probability and consequences of failure, and instructions on what the user might do to avoid it. The good practices are still emerging.


This text is from an article originally published in Smashing Magazine.
