Nordcloud named ‘Preferred AI Training Partner’ by Microsoft



Microsoft has named Nordcloud a Preferred AI Training Partner for the topics “Azure Machine Learning”, “Batch AI” and “Team Data Science Process”.

The topics are covered, for example, in the two-day “Professional AI Developer Bootcamp”, where participants learn how to use Azure Machine Learning Workbench to develop, test and deploy machine learning solutions to Azure Container Services using an agile, team-oriented framework.

Why Microsoft for AI?

Microsoft’s Azure cloud computing service offers a fast-growing range of Platform Services for AI, machine learning and IoT development.

Microsoft’s AI platform consists of three core areas:

  • AI Services: Developers can rapidly consume high-level “finished” services that accelerate the development of AI solutions. Compose intelligent applications, customised to your organisation’s availability, security, and compliance requirements.
  • AI Infrastructure: Services and tools backed by a best-of-breed infrastructure with enterprise grade security, availability, compliance, and manageability. Harness the power of infinite scale infrastructure and integrated AI services.
  • AI Tools: Leverage a set of comprehensive tools and frameworks to build, deploy, and operationalise AI products and services at scale. Use the extensive set of supported tools and IDEs of your choice and harness the intelligence with massive datasets through deep learning frameworks of your choice.

Azure AI

Download our guide on the steps needed to build an AI-enabled solution in Azure here.

We’d love to help you boost your business through the adoption of AI technologies.

You may find yourself in a position where you need a fully customised option but lack access to some of the specific expertise required. In that case we are available to advise and where appropriate, help directly.

Nordcloud offers a range of services from managed service provision through to full cloud-software project management and execution. Just as you’re sure to find a suitable development option within Azure, we can offer you whatever support you need for your AI/ML project.

Contact us for AI training and consultation!

Check out our data-driven solutions that will make an impact on your business here.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    State of AI for digital business in 2018



    One of the influential people in AI whom I follow is Andrew Ng. In the past, he has headed AI functions at both Google and Baidu and co-founded Coursera. Last December he was back on stage at the MIT Technology Review conference EmTech, discussing the present state of AI. I found his presentation very inspiring and picked out the following insights for those who don’t have the half hour to listen to him.

    What is AI now good at?

    Andrew has for some time defined the capacity of current AI as follows:

    Anything that a typical person can do in less than one second, AI can learn. This is an imperfect rule, but it holds pretty well.

    Jobs and manual procedures that can be decomposed into these simple constituent tasks can probably be automated in the near future.

    Nowadays there are good examples of using AI for marketing automation, loan decisions and speech recognition, and even to steer an autonomous vehicle. The technology behind the majority of these applications is “standard” AI, otherwise known as supervised learning.

    99% of value created by AI comes out of supervised learning – mapping from A to B [identification, categorization].
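    To make “mapping from A to B” concrete, here is a toy supervised learner in Python – a one-nearest-neighbour classifier that memorises labelled examples and maps new inputs to the label of the closest one. This is an illustrative sketch only; real systems use far richer models and vastly more data.

```python
# Toy illustration of supervised learning as "mapping from A to B":
# a one-nearest-neighbour classifier learns the mapping from labelled
# (A, B) pairs and applies it to unseen inputs.

def train(examples):
    """'Training' for 1-NN is simply memorising the (A, B) pairs."""
    return list(examples)

def predict(model, a):
    """Map a new input A to the label B of its closest training example."""
    def distance(pair):
        x, _ = pair
        return sum((xi - ai) ** 2 for xi, ai in zip(x, a))
    _, label = min(model, key=distance)
    return label

# A = simple 2-D feature vectors, B = category labels.
model = train([((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
               ((1.0, 1.0), "dog"), ((0.9, 1.1), "dog")])

print(predict(model, (0.05, 0.1)))  # close to the "cat" examples
print(predict(model, (1.05, 0.9)))  # close to the "dog" examples
```

    The “identification, categorisation” Andrew mentions is exactly this: given enough labelled examples of A, predict the matching B for a new input.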

    Deep learning is the fashionable new variant of AI repeatedly discussed in the media. Deep learning provides superior performance in comparison to “old” AI techniques (SVMs etc.) as the amount of available data increases. Older learning methods could not benefit from larger datasets, whereas neural networks can. The bigger the network, the more data can be poured in, with a corresponding increase in performance.

    Andrew lists different techniques based on their current business impact:

    1. Supervised learning
    2. Transfer learning
    3. Unsupervised learning
    4. Reinforcement learning

    “Reinforcement learning PR excitement is largely disproportionate to its impact”

    The most valuable thing for AI-based businesses is an exclusive data asset

    Leading AI companies don’t only have great data scientists, but also unique data assets. Andrew says that data assets make AI-based businesses defensible in a competitive landscape. Although he has worked with leading search engines and knows intimately how they work, he would be unable to create a competitive product without similar sets of user data. To build a defensible business, a company must build a positive feedback loop that allows it to accumulate ever more data from its users.

    Data assets allow leading web search companies to provide more relevant results.

    What is an internet company and what is an AI company?

    Andrew introduces the notion of an AI company: a digital business set apart by the unique power it derives from its use of AI. But what defines this type of company? Let us compare it to the picture of an internet company.

    An internet company is not just about selling stuff over the internet. According to Andrew, the advantage of internet companies is distributed decision making, rather than centralised decision making (or the HiPPO – the highest-paid person’s opinion, cf. Lean). They do A/B testing, have short cycle times, and are able to ship product improvements frequently.

    In comparison, an AI company is not just a company which uses neural networks on top of traditional technology products. AI companies do strategic data acquisition, which allows them to build defensible data-based business. They have unified data warehouses which allow fluid flow of data from application to application, across any superficial silos. They are good at spotting pervasive automation opportunities, including those under the one-second threshold.

    New requirements for product management

    To run an AI company or manage an AI-heavy product, a visual representation of the new product is not enough. To deal with AI capabilities, product managers must meet AI developers on their own terms – for instance, by presenting annotated datasets that describe how the product should behave, in terms of matching A’s to B’s. Traditional specifications such as wireframes do not suffice when trying to crack this equation.

    How to incorporate AI into a corporate structure

    The final theme Andrew touches upon is the integration of AI know-how into large organisations. First, he recognises that AI is not yet a mature capability. As such, AI capabilities are currently best integrated as centralised AI teams that help the whole organisation to adopt AI functions (in a matrix fashion). Later on, when the practices and methods of AI work mature, individual business units may hire their own talent, as has happened with UX designers and mobile developers, for instance.

    “Common teams, common standards, company-wide platforms for AI”

    Find out about our data-driven solutions delivering business intelligence here.


      Nordcloud partners with Microsoft to unleash the power of AI for Azure



      Nordcloud, Europe’s fastest-growing public cloud provider and the Nordic market leader, has teamed up with Microsoft to boost the spread of AI for Azure across the continent.

      Nordcloud helps customers complete AI projects faster and at lower cost using Microsoft Azure.

      AI projects with complex deep learning problems consume vast amounts of computing capacity and traditional hosting solutions are not sufficiently agile to meet this demand. Nordcloud has wide experience of working with the Azure platform and developing innovative services with and for its clients that use the power of AI. The company’s past projects have included helping customers with AI-related challenges such as recommendation engines, classification and natural language processing.

      Microsoft’s Azure cloud computing service offers a fast-growing range of platform services for AI, machine learning and IoT development that can substantially boost businesses’ adoption of these technologies, including the potential for creating innovative, market-disrupting business models, and for the revenue-boosting, cost-cutting benefits they bring.

      The entire Azure AI stack in use

      Nordcloud allows customers to make use of the entire Azure AI stack, creating scalable intelligent services and adding cutting-edge smart features to existing solutions in a fast, agile manner with the total cost of ownership benefits only cloud services can deliver.

      AI is set to be the main engine for the next generation of digital services that will deliver more data-driven decision making, better customer experiences and new problem-solving paradigms for challenges that are beyond the scope of traditional programming.

      Many of the tools available come ‘pre-taught’, in other words, they’ve already been fed vast quantities of data so their machine learning is already advanced. This cuts the time it takes to fine-tune tools for specific applications.

      AI creates competitive advantage for early adopters

      “Nordcloud has already delivered AI projects for its clients,” says Antti Alila, Microsoft’s Cloud and Enterprise Business Lead for Finland. “This is important to stress: AI is here now, helping businesses innovate and gain a competitive edge. Nordcloud has partnered with Microsoft to jointly deploy Azure AI-based solutions for our enterprise customers.”

      A recent report by the global consultancy McKinsey highlighted the competitive advantage created by early adopters of AI and the growing gap between them and companies that had failed to grasp the opportunities.

      “Of the more than 3,000 companies McKinsey surveyed, only one in five is using AI,” says Nordcloud’s CEO Jan Kritz. “Fewer than one in ten is using machine learning, even though that is the AI area seeing the greatest investment. Moreover, it’s the bigger companies that are investing most readily.”

      By leveraging Azure’s world-class AI infrastructure and services, Nordcloud aims to help a much wider range of businesses to embrace the potential of artificial intelligence. Nordcloud and Microsoft will be working together with Nordcloud’s clients to spot opportunities to build the next generation of cost-effective, scalable, intelligent AI-powered digital solutions.


        Machine learning? Yes, we can!



        Machine learning is changing the world as we know it

        Algorithms that learn from data are responsible for breakthroughs such as self-driving cars, medical diagnoses of unprecedented accuracy and, on a lighter note, the ability to identify cats in YouTube videos. They power your Netflix recommendations and generate Spotify’s weekly playlists. They can translate text, solve analogies, route your mail, beat Go champions, analyse sentiment in written text (even, to some extent, detecting sarcasm), and make sure your palm isn’t accidentally recognised as input when you’re using an Apple Pencil. They can even generate pieces of art.

        With the rise of cloud computing and, with it, the viability of deep learning techniques, the relatively old field of machine learning is undergoing a beautiful renaissance, backed by the biggest players in IT. Machines may not yet be anywhere near as intelligent as we humans are, but we’re witnessing huge strides almost daily. It’s a truly exciting time not just for computer science, but for society as a whole.

        At Nordcloud, we believe that machine learning will only become more and more important, regardless of domain or sector. We envisage a world in which machine learning completely changes the way we use and interact with computers.

        To this end, we are proud to announce that machine learning is now part of our official offering.

        Our aim is to take a pragmatic approach and use best-of-breed algorithms, libraries and tools, fine-tuning them to make truly remarkable, smart applications and digital services. And in cases where existing approaches don’t cut it, we’ll implement our own bespoke solutions based on the latest academic research. And, as always, we hope to do all this with the unique blend of passion, pride and fun that Nordcloud is known for.

        In the coming months, we’ll be writing a series of blog posts about machine learning—what it’s all about, what it can be used for, and why it’s worth taking note of. In the meantime, if you want to learn more, fancy seeing some demos, or just having a chat over a cup of coffee, our doors are always open!

        You can find our data-driven solutions for business intelligence here.


          Three main uses of machine learning



          The beauty of applications that employ machine learning is that they can be extremely simple on the surface, hiding away much complexity from users. However, designers can’t afford to ignore the under-the-hood part of machine learning altogether. In this article, I demonstrate the three main functions machine learning algorithms perform underneath, along with six unique benefits you can derive from using them.

          So what have machines learned so far?

          In 2016, the most celebrated milestone of machine learning was AlphaGo’s victory over the world champion of Go, Lee Sedol. Considering that Go is an extremely complicated game to master, this was a remarkable achievement. Beyond exotic games such as Go, Google Image Search is perhaps the best-known application of machine learning. Search feels natural and mundane precisely because it effectively hides away all of the complexity it embeds. With over 30 billion search queries every day, Google Image Search constantly gets more opportunities to learn.


          There are already more individual machine learning applications than it is reasonable to list here. But a gross simplification won’t do them justice either, I feel. One way to appreciate the variety is to look at the successful ML applications listed in Eric Siegel’s 2013 book Predictive Analytics. They fall under the following domains:

          • marketing, advertising, and the web;
          • financial risk and insurance;
          • health care;
          • crime fighting and fraud detection;
          • fault detection for safety and efficiency;
          • government, politics, nonprofit and education;
          • human-language understanding, thought and psychology;
          • staff and employees, human resources.


          Siegel’s cross-industry collection of examples is a powerful illustration of the omnipresence of predictive applications, even though not all of his 147 examples utilise machine learning as we know it. For a designer, however, knowing whether your problem domain is among those listed will give an idea of whether machine learning has already proven useful there, or whether you are facing a great unknown.


          Detection, prediction, and creation

          As I see it, the power of learning algorithms comes down to two major applications: detection and prediction. Detection is about interpreting the present, while prediction is about foreseeing the future. Interestingly, machines can also perform generative or “creative” tasks; however, these are still marginal applications.

          When you combine detection and prediction, you can achieve impressive overall results. For instance, combine the detection of traffic signs, vehicles and pedestrians with the prediction of vehicular and pedestrian movements and of the times to vehicle line crossings, and you have the makings of an autonomous vehicle!

          This is my preferred way of thinking about machine learning applications. In practice, detection and prediction are sometimes much alike because they don’t yet cut into the heart and bones of machine learning, but I believe they offer an appropriate level of abstraction to talk about machine learning applications. Let’s clarify these functions through examples.

          The varieties of detection

          There are at least four major types of applications of detection. Each deals with a different core learning problem. They are:

          • text and speech interpretation,
          • image and sound interpretation,
          • human behaviour and identity detection,
          • abuse and fraud detection.


          Text & speech interpretation

          Text and speech are our most natural methods of interaction and communication. For a long time, however, they were not feasible ways of interacting with computers. Previous generations of voice dialling and interactive voice response systems were not very impressive. Only in this decade have we seen a new generation of applications that take spoken commands and even hold a dialogue with us! Sometimes this goes so smoothly that we can’t tell computers and humans apart in text-based chats – a sign that, at least in narrow settings, computers can pass the Turing test.

          When it comes to speech, new systems such as the personal assistant Siri or Amazon’s Echo device are capable of interpreting a wide range of communications and responding intelligently. The technical term for this capability is natural language processing (NLP). It means that, building on successful text and speech detection (i.e. recognition), computers can also interpret the meaning of words, spoken and written, and take action.

          Text interpretation enables equally powerful applications. Detecting emotion or sentiment in text means that large masses of it can be automatically analysed to reveal what people on social media think about brands, products or presidential candidates. Machine learning also powers translation: Google Translate recently saw significant quality improvements by switching to an ML-based approach.

          Amazon Echo Dot is surfacing as one of the best-selling speech-recognition-driven appliances of early 2017. Picture: Amazon

          Image & sound interpretation

          Computer vision gives metaphorical eyes to a machine. The most radical example of this is the computer reconstruction of human perception from brain scans! However, that is hardly as useful as an application that automates the tagging of photos or videos to help you explore Google Photos or Facebook. The latter recognises faces with an almost scary level of accuracy.

          Image interpretation finds many powerful business applications in industrial quality control, recording vehicle registration plates, analysing roadside photos for correct traffic signs and monitoring empty parking spaces. The recent applications of computer vision to skin cancer diagnosis have actually proven more proficient than human doctors, leading to the discovery of new diagnostic criteria.


          Speech was already mentioned, but other audio signals are also well detected by computers. Shazam and SoundHound have for years provided reliable detection of songs either from a recording fragment or a sung melody. The Fraunhofer Institute developed the Silometer, an app to detect varieties of coughs as a precursor to medical diagnosis. I would be very surprised if we don’t see many new applications for human and non-human sounds in the near future.


          Human behaviour and identity detection

          Given that computers are seeing and hearing what we do, it is not surprising that they have become capable of analysing and detecting human behaviour and identity as well — for instance, with Microsoft Kinect recognising our body motion. Machines can identify movements in a football game to automatically generate game statistics. Apple’s iPad Pro recognizes whether the user is using their finger or the pencil for control, to prevent unwanted gestures. A huge number of services detect what kind of items typically go together in a shopping cart; this enables Amazon to suggest that you might also be interested in similar products.

          In the world of transportation, it would be a major safety improvement if we could detect when a driver is about to fall asleep behind the steering wheel, to prevent traffic accidents. Identity detection is another valuable function enabled by several signals. A Japanese research institute has developed a car seat that recognises who’s sitting in it. Google’s reCAPTCHA is a unique system that tells apart humans from spambots. Perhaps the most notorious example of guessing people’s health was Target’s successful detection of expectant mothers. This was followed by a marketing campaign that awkwardly disclosed the pregnancy of Target customers, resulting in much bad publicity.


          Abuse & fraud detection

          Machine learning is also used to detect and prevent fraudulent, abusive or dangerous content and schemes. It is not always about major attacks; sometimes it’s just about blocking bad cheques or preventing petty criminals from entering the NFL’s Super Bowl arena. The best successes are found in anti-spam: Google, for instance, has been doing an excellent job for years of filtering spam from your Gmail inbox.

          I will conclude with a good-willed detection example from the non-human sphere. Whales can be reliably recognised from underwater recordings of their sounds – once more, thanks to machine learning. This can help fishing machinery avoid contact with whales and so protect them.

          Species of prediction

          Several generations of TV watchers have been raised to watch weather forecasts for fun, ever since regular broadcasts began after the Second World War. The realm of prediction today is wide and varied. Some applications may involve non-machine learning parts that help in performing predictions.

          Here I will focus on the prediction of human activities, but note that the prediction of different non-human activities is currently gaining huge interest. Predictive maintenance of machines and devices is one such application, and more are actively envisioned as the Internet of Things generates more data to learn from.

          Predicting different forms of human behaviour falls roughly into the following core learning challenges and applications:

          • recommendations,
          • individual behaviour and condition,
          • collective behaviour prediction.

          Different types of recommendations are about predicting user preferences. When Netflix recommends a movie or Spotify generates a playlist of your future favourite music, they are trying to predict whether you will like it, watch it or listen through to the end of the piece. Netflix is on the lookout for your rating of the movie afterwards, whereas Spotify or Pandora might measure whether you return to enjoy the same song over and over again without skipping. This way, our behaviours and preferences become connected even without our needing to express them explicitly. This is something machines can learn about and exploit.
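          As a sketch of how preference prediction can work, here is a toy user-based collaborative filter in Python: it predicts a user’s rating for an unseen item as a similarity-weighted average of other users’ ratings. The users and ratings are invented for illustration; real recommenders at Netflix or Spotify are far more sophisticated.

```python
# Toy user-based collaborative filtering: predict a user's rating for an
# unseen item from the ratings of similar users.
from math import sqrt

ratings = {
    "ann":  {"GoT": 5, "Gaga": 1, "Jazz": 4},
    "bob":  {"GoT": 4, "Gaga": 2, "Jazz": 5},
    "carl": {"GoT": 1, "Gaga": 5},
}

def similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    nu = sqrt(sum(ratings[u][i] ** 2 for i in common))
    nv = sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (nu * nv)

def predict_rating(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    votes = [(similarity(user, other), r[item])
             for other, r in ratings.items()
             if other != user and item in r]
    total = sum(s for s, _ in votes)
    return sum(s * r for s, r in votes) / total if total else None

# How much would carl, who has never heard "Jazz", like it?
print(round(predict_rating("carl", "Jazz"), 1))
```

          The prediction is driven entirely by observed behaviour – no one ever asked carl whether he likes jazz, which is exactly the implicit preference learning described above.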

          In design, predicting which content or which interaction models appeal to users could give rise to the personalisation of interfaces. This is mostly based on predicting which content a user would be most interested in. For a few years now, Amazon has been heavily personalising the front page, predicting what stuff and links should be present in anticipation of your shopping desires and habits.

          Recommendations are a special case of predicting individual behaviour. The scope of predictions does not end with trivial matters, such as whether you like Game of Thrones or Lady Gaga. Financial institutions attempt to predict who will default on their loan or try to refinance it. Big human-resource departments might predict employee performance and attrition. Hospitals might predict the discharge of a patient or the prognosis of a cancer. Rather more serious human conditions, such as divorce, premature birth and even death within a certain timeframe, have all been predicted with some success. Of course, predicting fun things can get serious when money is involved, as when big entertainment companies try to guess which songs and movies will top the charts in order to direct their marketing and production efforts.

          The important part about predictions is that they lead to an individual assessment that is actionable. The more reliable the prediction and the weightier the consequences, the more profitable and useful the predictions become.

          Predicting collective behaviour amounts to a generalisation over individuals, but with different implications and scope. In these cases, an intervention is only successful if it affects most of the crowd. The looming result of a presidential election, cellular network use and seasonal shopping expenditure can all be subject to prediction. When predicting financial risk or a company’s key performance indicators, the gains from saving or making money are noticeable. J.P. Morgan Chase was one of the first banks to increase efficiency by predicting mortgage defaulters (those who never pay back) and refinancers (those who pay back too early). On the other hand, the recent US presidential election is a good reminder that none of this is yet perfect.

          Just as antivirus tools deal with present dangers, future trouble can also be predicted. Predictive policing is about forecasting where street conflicts might happen or where squatters are taking over premises, helping administrators distribute resources to the right places. A similar process goes on in energy companies as they try to estimate the capacity needed to last the night.


          What can machine intelligence do for you?

          After successfully creating a machine learning application to fulfil any of the three uses described above, what can you expect to come out of it? How would your product or service benefit from it? Here are six possible benefits:

          1. augment,
          2. automate,
          3. enable,
          4. reduce costs,
          5. improve safety,
          6. create.

          In rare cases, machine learning might enable a computer to perform tasks that humans simply can’t perform because of speed requirements or the scale of data. But most of the time, ML helps to automate repetitive, time-consuming tasks that defy the limits of human labour costs or attention span. For instance, sorting through recycling waste 24/7 is done more reliably and affordably by a computer.

          In some areas, machine learning may offer a new type of expert system that augments and assists humans. This could be the case in design, where a computer might make a proposal for a new layout or colour palette aligned with the designer’s efforts. Google Slides already offers this type of functionality through the suggested layouts feature. Augmenting human drivers would improve traffic safety if a vehicle could, for example, start braking before the human operator could possibly react, saving the car from a rear-end collision.


            How Amazon’s IoT platform controls things without servers



            Amazon’s IoT platform is a framework for connecting smart devices to the cloud. It aims to make the basic processes of collecting data and controlling devices as simple as possible. AWS IoT is a fully managed service, which means the customer doesn’t have to worry about configuring servers or updating operating systems. The platform simply exposes a set of APIs and automatically scales from a single device to millions of devices.

            I recently wrote an article (in Finnish) in my personal blog about using AWS IoT for home automation. AWS IoT is not exactly designed for this purpose, but if you are tech savvy enough, it can be used for it. The pricing is currently set at $5 per million messages, which lasts a long time when you’re only dealing with a couple of devices sending occasional messages.

            The home automation experiment provides a convenient context for discussing the basic concepts of AWS IoT. In the next few sections, I will refer to the elements of a simple home system that detects human presence in rooms and turns on the lights if this happens at a certain time of day. All the devices are connected to the Amazon cloud via the public Internet.

            Device Registration

            The first step in most IoT projects is to register the devices (also called “things”) into a centrally managed database. AWS IoT provides this database for free and lets you add any number of devices in it. The registration is important because each device also gets its own SSL/TLS certificate and private key, which are used for authentication and encryption. The devices can only be connected to AWS IoT by using their certificates and private keys.

            The AWS IoT device registry also works as a simple asset management database. It lets you attach attributes to devices and maintain information such as customer IDs. The device registry can later be queried based on these attribute values. For example, you can find all devices belonging to a specific customer ID. The attributes are optional, so they can just be ignored if they’re not needed.

            In the home automation experiment, two devices were added to the registry: a wireless human presence detector and a Philips Hue light control bridge.

            Data Collection

            Almost any IoT scenario involves collecting device data. Amazon provides the AWS IoT Device SDK for connecting devices to the IoT platform. The SDK is typically used to develop a small application that runs on the device (or on a gateway connected to the device) and transmits data to the cloud.

            There are two ways to deliver data to the AWS IoT platform. The first is to send raw MQTT messages, which are usually small JSON objects. You can then set up AWS IoT rules to forward these messages to other Amazon cloud services for further processing. In the home automation scenario, a rule specifies that all messages received under the topic “presence-detected” should be forwarded to an AWS Lambda function, which then decides what to do with the information.
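            As an illustration, such a rule could be defined with a topic rule payload like the following. The SQL statement and action shape follow the AWS IoT rules format; the Lambda function name and ARN are hypothetical:

```json
{
  "sql": "SELECT * FROM 'presence-detected'",
  "description": "Forward presence events to the decision function",
  "actions": [
    {
      "lambda": {
        "functionArn": "arn:aws:lambda:eu-west-1:123456789012:function:presence-handler"
      }
    }
  ]
}
```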

            The other way is to use Thing Shadows, which are built into the AWS IoT platform. Every registered device has a “shadow” which contains its latest reported state. The state is stored as a JSON document, which can contain 8 kilobytes worth of fields and values. This makes it easy and cost-effective to store the current state of any device in the cloud, without requiring an external database. For instance, a device equipped with a thermometer might regularly report its current state as a JSON object that looks like this: {“temperature”:22}.
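            The shadow document structure can be sketched as follows. This snippet just builds and reads the JSON locally to show its shape; the topic in the comment follows AWS IoT’s standard shadow topic naming:

```python
# A sketch of the Thing Shadow document for the thermometer example.
import json

# What the device would publish to its shadow update topic,
# $aws/things/<thingName>/shadow/update:
update = json.dumps({"state": {"reported": {"temperature": 22}}})

# AWS IoT merges such updates into the shadow document; the latest
# reported state can then be read back from the stored JSON:
shadow = json.loads(update)
print(shadow["state"]["reported"]["temperature"])  # → 22
```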

            It’s important to understand that Thing Shadows cannot be used as a general-purpose database. You can only look up a single Thing Shadow at a time, and it will only contain the current state. You will need a separate database if you want to analyse historical time series of data. However, Amazon offers a wide range of databases that you can easily connect to AWS IoT by forwarding Thing Shadow updates to services like DynamoDB or Kinesis. This seamless integration between Amazon cloud services is one of the key advantages of AWS IoT.

            Data Analysis and Decision Making

            Since Amazon already offers a wide range of data analysis services, the AWS IoT platform itself doesn’t include any new tools for analyzing data. Existing analysis services include products like Redshift, Elastic MapReduce, Amazon Machine Learning and various others. Device data is typically collected into S3 buckets using Kinesis Firehose and then processed by these services.
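The collection step described above can be sketched as a small helper that formats a device reading as a newline-delimited Firehose record; the delivery stream name below is a hypothetical example, and the actual call needs boto3 and AWS credentials:

```python
import json

def to_firehose_record(data_point: dict) -> dict:
    """Format one device reading as a newline-delimited Firehose record."""
    return {"Data": (json.dumps(data_point) + "\n").encode("utf-8")}

def deliver(data_point: dict) -> None:
    """Send a record to a delivery stream that buffers data into S3."""
    import boto3  # requires AWS credentials at runtime
    firehose = boto3.client("firehose")
    firehose.put_record(DeliveryStreamName="iot-device-data",  # hypothetical
                        Record=to_firehose_record(data_point))
```

Firehose batches the records and writes them to an S3 bucket, where Redshift or Elastic MapReduce can pick them up.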

            Device data can also be forwarded to AWS Lambda functions for real-time decision making. A JavaScript function is executed every time a data point is received. This suits the home automation scenario, where a single IoT message is sent whenever presence is detected in a room. The function considers various factors, such as the current time of day, and decides whether to turn the lights on.
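The article describes a JavaScript function; as an equivalent sketch (Lambda also supports Python), the decision logic might look like this. The daylight-hours rule is an illustrative assumption, not the article's actual logic:

```python
from datetime import datetime, time

def should_turn_lights_on(now: datetime) -> bool:
    """Illustrative rule: lights are only needed outside daylight hours."""
    return not time(8, 0) <= now.time() < time(20, 0)

def handler(event, context):
    """Lambda entry point invoked by the IoT rule for each message."""
    if event.get("presence") and should_turn_lights_on(datetime.now()):
        return {"action": "lights-on"}
    return {"action": "none"}
```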

            In addition to existing solutions, Amazon has announced an upcoming product called Kinesis Analytics. It will enable real-time analytics of streaming IoT data, similar to Apache Storm. This means that data can be analyzed on-the-fly without storing it in a database. For instance, you could maintain a rolling average of values and react to it instead of individual data points.

            Device Control

            The AWS IoT platform can control devices in the same two ways it collects data. The first is to send raw MQTT messages directly to devices, which react to them on receipt. The problem with this approach is that devices sometimes have network or power issues, which can cause control messages to be lost.

            Thing Shadows provide a more reliable way to have devices enter a desired state. A Thing Shadow will remember the new desired state and keep retrying until the device has acknowledged it.

            In the home automation scenario, when presence is detected, the desired state of a lamp is set to {"light":true}. When the lamp receives this desired state, it turns on the light and reports its current state back to AWS IoT as {"light":true}. Once the reported state matches the desired state, the Thing Shadow of the lamp is known to be in sync.
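Setting the desired state from the cloud side can be sketched with the boto3 `iot-data` client's `update_thing_shadow` call; the thing name comes from the device registry, and the call requires AWS credentials:

```python
import json

def desired_light_state(on: bool) -> bytes:
    """Shadow update document that sets the lamp's desired state."""
    return json.dumps({"state": {"desired": {"light": on}}}).encode("utf-8")

def set_lamp(thing_name: str, on: bool) -> None:
    """Ask AWS IoT to drive the lamp to the desired state."""
    import boto3  # requires AWS credentials at runtime
    client = boto3.client("iot-data")
    client.update_thing_shadow(thingName=thing_name,
                               payload=desired_light_state(on))
```

AWS IoT then keeps the desired value in the shadow until the lamp acknowledges it by reporting the same state back.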

            User Interfaces and Data Visualization

            You may use the AWS IoT Console to manually control devices by modifying their desired state. The console will show the current state and update it on the screen as it changes. This is, of course, a very low-level way to control lighting since you need to log in as a cloud administrator and then manually edit the JSON documents.

            A better way is to build a web application that integrates with AWS IoT and offers a friendly user interface for controlling things. AWS provides rich infrastructure options for developing integrated mobile and web applications. Amazon API Gateway and Lambda are typically used to build a backend API that lets applications access IoT data. The data itself may be stored in a database like DynamoDB or PostgreSQL, and access can be limited to authenticated users using Amazon Cognito or a custom IAM solution.
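A minimal sketch of such a backend is a Lambda function behind API Gateway that reads a thing's shadow and returns its reported state. The route (`GET /things/{thingName}`) is a hypothetical example, and the shadow lookup needs boto3 and AWS credentials:

```python
import json

def shadow_response(shadow_doc: dict) -> dict:
    """Shape a shadow document into an API Gateway proxy response."""
    return {"statusCode": 200,
            "body": json.dumps(shadow_doc["state"]["reported"])}

def handler(event, context):
    """Lambda behind API Gateway, e.g. GET /things/{thingName}."""
    import boto3  # requires AWS credentials at runtime
    thing = event["pathParameters"]["thingName"]
    client = boto3.client("iot-data")
    resp = client.get_thing_shadow(thingName=thing)
    return shadow_response(json.loads(resp["payload"].read()))
```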

            For data visualization purposes, Amazon has recently announced an upcoming product called Amazon QuickSight, which will integrate with other Amazon services and databases. There are also many third-party solutions available through the AWS Marketplace. If none of these options fits the use case, a custom solution can always be developed as part of a web application.

            My Findings

            AWS IoT is a fast and easy way to get started on the Internet of Things. All the scenarios discussed in this article are based on managed cloud services. This means that you never have to maintain your own servers or worry about scaling.

            For small-scale projects, the operating costs are negligible. For larger-scale projects, the costs depend on the amount and frequency of the data being transferred. There are no fixed monthly or hourly fees, which makes personal experimentation at home very convenient.

            Get in Touch.

            Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

              Four UI Design Guidelines for Creating Machine Learning Applications



              Previously, I introduced three underlying general capacities of machine learning that are exploited in applications. On their own, however, they are not enough for designers to actually start building applications. This is why this post introduces four general design guidelines that can help along the way.

              How can we and will we communicate machine intelligence to users, and what kinds of new interfaces will machine learning call for?

              Machine learning under the hood brings both opportunities to do things in a new way and requirements for new designs. To me, this means several design patterns, or rather abstract design features, will rise in importance as services get smarter. They include:

              1. suggested features,
              2. personalization,
              3. shortcuts vs. granular controls,
              4. graceful failure.

              Suggested features

              Text and speech prediction has opened up new opportunities for interaction with smart devices. Conversational interfaces are the most prominent example of this development, but definitely not the only one. As we try to hide the interface and underlying complexity from users, we are balancing between what we hide and what we reveal. Suggested features help users to discover what the invisible UI is capable of.

              Graphical user interfaces (GUIs) have made computing accessible to the better part of the human race that enjoys normal vision. GUIs provided a huge usability improvement in terms of feature discovery. Icons and menus were the first useful metaphors for direct manipulation of digital objects using a mouse and keyboard. With multi-touch screens, we have gained the power of pinching, dragging and swiping to interact. Visual clues aren’t going anywhere, but they will not be enough as interaction modalities expand.

              How does a user find out what your service can do?

              Haptic interaction in the first consumer generation of wearables, and above all in conversational interfaces, presents a new challenge for feature discovery. Non-visual cues must facilitate the interaction, particularly at the very onset of the interactive relationship. Feature suggestions, the machine exposing its features and informing the user what it is capable of, are one solution to this riddle.

              In the case of a chatbot employed for car rentals, this could be, “Please ask me about available vehicles, upgrades, and your past reservations.”

              Specific application areas come with specific, detailed patterns. For instance, Patrick Hebron’s recent ebook from O’Reilly contains a great discussion of the solutions for conversational interfaces.


              Personalization

              Once a computer gets to know you and can predict your desires and preferences, it can start to serve you in new, more effective ways. This is personalization: the automated optimization of a service. Responsive website layouts are a crude way of doing this.

              The utilization of machine learning features with interfaces could lead to highly personalized user experiences. Akin to giving everyone a unique desktop and home-screen, services and apps will start to adapt to people’s preferences as well. This new degree of personalization presents opportunities as well as forces designers to flex their thoughts on how to create truly adaptive interfaces that are largely controlled by the logic of machine learning. If you succeed in this, you will reward users with a superior experience and will impart a feeling of being understood.

              Amazon’s front page has been personalised for a long time. The selection offered to me looks somewhat relevant, if not attractive.

              Currently, personalisation is applied foremost to curate content. For instance, Amazon carefully considers which products would appeal to potential buyers on its front page. But it will not end there. Personalisation will likely lead to much bigger changes across UIs, for instance even in the presentation of the types of interactive elements a user likes to use.

              Shortcuts versus granularity

              Photoshop is an excellent example of a tool with a steep learning curve and a great deal of granularity in controlling what can be done. Most of the time, you work on small operations, each of which has a very specific influence. The creative combination of many small things allows for interesting patterns to emerge on a larger scale. Holistic, black-box operations such as transformative filters and automatic corrections are not really the reason why professionals use Photoshop.

              What will happen when machines learn to predict what we are doing repeatedly? For instance, I frequently perform certain actions in Photoshop before uploading my photos to a blog. While I could automate this manually, creating yet another user-defined action among the thousands already in the product, Photoshop might learn to predict my intentions and offer a more prominent shortcut, a highway of sorts, to fast-forward me to my intended destination. As Adobe is currently putting effort into bringing AI into Creative Cloud, we’ll likely see something even more clever than this very soon. It is up to you to let the machine figure out the appropriate shortcuts in your application.

              Mockup of a possible implementation of “predictive history” in Photoshop CC. The application suggests a possible future state for the user based on the user’s history and preceding actions and on the current image.


              A funny illustration of a similar train of thought comes from Christopher Hesse’s machine-learning-based image-to-image translation, which provides interesting content-specific filling of doodles. Similar to Photoshop’s content-aware fill, it creates the most hilarious visualisations of building facades, cats, shoes, and bags based on minimal user input.

              The edges2cats algorithm employs machine learning to finish your cat doodle as a photorealistic cat monster.

              Graceful failure

              I call the final pattern graceful failure. It means saying “sorry, I can’t do what you want because…” in an understandable way.

              This is by no means unique to machine learning applications. It is innately human, but something that computers have been notoriously bad at since the time syntax errors were repeatedly echoed by Commodore computers in the 1980s. With machine learning, though, it’s slightly different. Because machine learning takes a fuzzy-logic approach to computing, there are new ways in which the computer can produce unexpected results; things can go very wrong, and that has to be designed for. Nobody seriously blames the car itself for the death that occurred in the Tesla Autopilot accident in 2016.

              The other part is that building applications that rely on modern machine learning is still in its infancy. Classic software development has been around for so long that we’ve learned to deal with its insufficiencies better. Peter Norvig, the famous AI researcher and Google’s research director, puts it like this:

              The problem here is the methodology for scaling this up to a whole industry is still in progress.… We don’t have the decades of experience that we have in developing and verifying regular software.

              The nature of learning is such that computers learn from what is given to them. If the algorithm has to deal with something else, then the results will not be to your liking. For example, if you’ve trained a system to detect animal species from pet photos and then start using it to classify plants, there will be trouble. This is more or less why Microsoft’s Twitterbot Tay had to be silenced after it picked up the wrong examples from malicious users when exposed to real-world conditions.

              The uncertainty in detection and prediction should be taken into consideration. How this is done depends on the application. Consider Google Search: no one is offended or truly hurt, merely amused or frustrated, by bad search results. Of course, bad results will eventually be bad for business. But if your bank started using a chatbot that suddenly could not figure out your checking account’s balance, you would be rightfully worried and should be offered a quick way to resolve your trouble.

              To deal with failure, interfaces would do well to help both parties adjust. Users can tolerate one or two “I didn’t get that, please say that again” prompts (but no more) if that’s what it takes to advance the dialogue. For services that include machine learning, extensive testing is best. Next comes informing users about the probability and consequences of failure, and instructions on what the user might do to avoid it. The good practices are still emerging.
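A graceful-failure policy like the one described above can be sketched as a small function that refuses to act on low-confidence predictions and escalates after repeated misses. The confidence threshold, retry limit, and response wording are illustrative assumptions to be tuned per application:

```python
from typing import Optional

CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off, tune per application

def respond(intent: Optional[str], confidence: float, retries: int) -> str:
    """Degrade gracefully instead of acting on a low-confidence guess."""
    if intent is not None and confidence >= CONFIDENCE_THRESHOLD:
        return f"OK, handling '{intent}'."
    if retries < 2:
        # Users tolerate one or two clarification prompts, but no more.
        return "Sorry, I didn't get that. Could you rephrase?"
    return "I still can't work this out; let me connect you to a person."
```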


              This text is from an article that originally appeared in Smashing Magazine.
