An exciting end to the year, so what’s in ‘it’ for 2018?

CATEGORIES

News

Christmas is just around the corner, which means the start of 2018, and another exciting year at Nordcloud, is almost here.

We’re nearing the end of what’s been a remarkable year, both for Nordcloud and for our growing community of customers! 2018 will see us add Cloud Application Development to our Cloud Infrastructure and Enablement offering, providing our customers with a complete set of Cloud transformation services to help them innovate and grow.

We will also start 2018 with a new captain at the wheel, steering us towards even greater things, our new CEO, Jan Kritz. One of Jan’s first tasks will be to lead us into our brand-new HQ at Antinkatu 1, Kamppi, Helsinki, which will become our home early in the new year.

Alongside our own success, the market will also continue to grow in 2018. Businesses will be looking at the Cloud as a means of complete transformation, not merely a tool. Here are just a few of the predictions for next year:

  • IDC expects worldwide whole cloud revenues to reach $554 billion in 2021, more than double those of 2016. This includes public, private, and hybrid clouds, along with managed cloud services, cloud-related professional services, and hardware and software infrastructure for building clouds.
  • Deloitte says spending on IT-as-a-Service for data centers, software and services will reach $547B by the end of 2018.
  • Forrester predicts that more than 50% of global enterprises will rely on at least one public cloud platform to drive digital transformation.
  • Forrester also suggests that there will be a continued emphasis on digital transformation and on businesses using AI, and that these two areas will eventually become the foundation of most businesses’ IT strategies.

We’d like to say a big thank you to all of our customers and partners who have supported us throughout the last year, and who we hope will join us for a very exciting future. Merry Christmas, and a very happy New Year!


Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








AWS Pop-Up Loft Munich: The Full Experience

CATEGORIES

Events

Last Friday, the AWS Pop-Up Loft Munich 2017 came to an end with a nice little closing party, exactly where we had spent the last month learning, exchanging ideas and building. Nordcloud’s German team sees a lot of value in the Loft for AWS partners and customers alike, so it was a logical step for us to sponsor and support this initiative. This is a privilege only a few companies have, and we wanted to share our experiences by letting our team members speak for themselves.

“AI and ML are the new orange” — Oswald Yinyeh

The interest in AI and ML with Big Data at the AWS Loft was mind-blowing. As a Cloud Architect at the Ask an Architect desk, I realised that the majority of users, from small, medium and large-scale enterprises alike, are trying to push beyond generic AI and ML models to more specialised, production-ready ones that can adapt seamlessly to their specific needs and conditions. For me, having a strong background in this field, it was music to my ears.

 

The people I met around the Loft were very interested in how they can build software that uses ML and AI to learn and adapt to the needs and conditions of their specific businesses, rather than leveraging ready-made or pre-baked ML and AI solutions that do image recognition or text-to-speech. Most of the time, they have problems building a model that is able to really learn from its surroundings. Further work should be done on developing or advancing the currently available generic AI and ML models, enabling them to learn by taking different actions instead of just training on static data (e.g. images, text). In my opinion, more reinforcement learning algorithms for different domains should be built into the AWS AI & ML stack to serve as starting points for customers. All in all: great event!
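The distinction Oswald draws, learning by acting rather than from static data, can be illustrated with a toy example. The following is a minimal tabular Q-learning sketch in pure Python; the environment and all numbers are invented for illustration and have nothing to do with any AWS service:

```python
import random

# Minimal tabular Q-learning on a 5-cell corridor: the agent learns by
# acting (moving left or right), not from a static dataset.
# Reaching the rightmost cell yields reward 1; every step costs 0.01.
N_STATES, ACTIONS = 5, (-1, +1)  # actions: move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.01
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, moving right scores higher than moving left
# from every non-terminal state.
```

The point of the sketch is that the value table is produced entirely by the agent's own trial and error, which is exactly the kind of learning-by-doing that static image or text datasets cannot provide.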

“re:Invent increased the hype a lot” — Zoran Pajeska

As I was also part of the Ask an Architect booth team and probably spent the most days on the ground at the Loft (thanks to the trusting customers I work with), I would especially like to share the post-re:Invent experience. In the weeks before the Vegas madness, we had comparably quiet days in the Loft, and most of the questions we received were quite easy to answer. However, once re:Invent week commenced, the new service announcements made the traffic at our booth explode. We started to get a large variety of unprecedented questions: from a basic “how do I start” with AWS, to questions about the newest services like Fargate, EKS (the managed Kubernetes service) and GuardDuty, to more advanced topics like IoT, Machine Learning and Rekognition.

 

Based on these questions, we at Nordcloud can really see first-hand what “moves” people, why they start to use AWS and what drives the most interest. My experience with the Loft and the people there was really great: a lot of answered questions, a lot of new connections and maybe some new customers.

“How can I sell books?” — Richard Zimmermann

Not everyone at the AWS Loft was interested in Amazon Web Services; some actually wanted to know something about Amazon retail (Amazon.com). From time to time you could see people entering the Loft and then leaving after a few minutes because they saw that they couldn’t buy anything there! Luckily, those were the edge cases, and most of the conversations were purely technology- and AWS-focused. The chats I had ranged from first-time users to very complex topics across all AWS services. It was also great to talk with people who work at AWS and to learn more about current and upcoming service updates.

For me personally, giving a presentation about Serverless was another great aspect of the event, as it gave me a platform to speak to a wider audience about my daily work. We showed people one of our favourite jobs at Nordcloud: developing cloud-native applications in an effective, secure and customer-oriented way.

“Deutschland – Advantage cloud” — Sandip Jadhav

The AWS Pop-up Loft was in many ways a unique and fulfilling event for everyone. My favourite aspect was the Ask an Architect area. As a Premier Consulting Partner with AWS and a main sponsor of the event, we had a dedicated Nordcloud presence at one of the three booths. Our volunteers spent entire days there, interacting with visitors to address their challenges with all kinds of AWS services. What was great was that there was almost always a two-way knowledge exchange: we were helping customers solve their problems while learning from them about current industry trends and the needs of emerging markets.

The AWS Pop-up Loft days and AWS training sessions gave us a valuable opportunity to get in touch with knowledgeable people and to sense the current interests of the tech community and the present market needs. Looking at the session responses and attendees, one thing was very clear: Germany is a rising market, companies are increasingly interested in cloud adoption, and the AWS tech community there is growing rapidly.

Another really good thing was the concept of having many technical experts, each working for their respective companies and customers, in a common work location. It was a great experience: we worked normal office hours and yet explored new things and shared experience and knowledge on the fly.

“Cloudy way to fade away 2017” — Lars Oehlandt

Like in 2016, this year’s AWS Loft was again a great get-together for the Bavarian and surrounding cloud community. Partners, customers, AWSers, Nordcloudians and many more gathered in one building to work, experience and learn. Everyone has their own opportunities and challenges and sees the core topic, the AWS Cloud, from a different angle, but with few limitations people are open to new ideas and connections. The longer duration, a whole week more than last year, enabled stress-free planning of how to get the most value out of the Loft. Especially at the Nordcloud Ask an Architect booth, our tech team members were able to learn from practical cases and share their experiences. It is hard to imagine a more interesting and casual interface for Cloud Architects and cloud-implementing companies to solve problems together.

Nordcloud, as a Munich-based advisory, was more than happy to once again extensively participate in and sponsor the Loft, and we are already looking forward to the 2018 summits and lofts all around Germany!

So long, and thanks for all the clouds

All in all, we would like to thank our friends at AWS Germany and AWS Global who made this event possible. It was a remarkable improvement on the year before, and the re:Invent experience happening in parallel made it especially exciting for us as a partner. Thanks also to the many, many new friends we made there, from other AWS Partners and AWS customers alike. Let’s keep building amazing things!

See you all next year and have a great holiday season y’all!









Machine learning? Yes, we can!

CATEGORIES

Insights

Machine learning is changing the world as we know it

Algorithms that learn from data are responsible for breakthroughs such as self-driving cars, medical diagnoses of unprecedented accuracy and, on a lighter note, the ability to identify cats via YouTube videos. They power your Netflix recommendations and generate Spotify’s weekly playlists. They can translate text, solve analogies, route your mail, beat Go champions, analyse sentiment in written text (even, to some extent, detecting sarcasm), and make sure your palm isn’t accidentally recognised as input when you’re using an Apple Pencil. They can even generate pieces of art.

With the rise of cloud computing, and with it the viability of deep learning techniques, the relatively old field of machine learning is undergoing a beautiful renaissance, backed by the biggest players in IT. Machines may not yet be close to as intelligent as we humans are, but we’re witnessing huge strides almost daily. It’s a truly exciting time, not just for computer science but for society as a whole.

At Nordcloud, we believe that machine learning will only become more and more important, regardless of domain or sector. We envisage a world in which machine learning completely changes the way we use and interact with computers.

To this end, we are proud to announce that machine learning is now part of our official offering

Our aim is to take a pragmatic approach and use best-of-breed algorithms, libraries and tools, fine-tuning them to make truly remarkable, smart applications and digital services. And in cases where existing approaches don’t cut it, we’ll implement our own bespoke solutions based on the latest academic research. And, as always, we hope to do all this with the unique blend of passion, pride and fun that Nordcloud is known for.

In the coming months, we’ll be writing a series of blog posts about machine learning—what it’s all about, what it can be used for, and why it’s worth taking note of. In the meantime, if you want to learn more, fancy seeing some demos, or just having a chat over a cup of coffee, our doors are always open!

You can find our data-driven solutions for business intelligence here.









Three main uses of machine learning

CATEGORIES

Insights

The beauty of applications that employ machine learning is that they can be extremely simple on the surface, hiding away much complexity from users. However, designers can’t afford to ignore the under-the-hood part of machine learning altogether. In this article, I demonstrate the three main functions machine learning algorithms perform underneath, along with six unique benefits you can derive from using them.

So what have machines learned so far?

In 2016, the most celebrated milestone of machine learning was AlphaGo’s victory over the world champion of Go, Lee Sedol. Considering that Go is an extremely complicated game to master, this was a remarkable achievement. Beyond exotic games such as Go, Google Image Search is maybe the best-known application of machine learning. Search feels so natural and mundane because it effectively hides away all of the complexity it embeds. With over 30 billion search queries every day, Google Image Search constantly gets more opportunities to learn.

 

There are already more individual machine learning applications than it is reasonable to list here, but a gross simplification would not do them justice either. One way to appreciate the variety is to look at the successful ML applications in Eric Siegel’s 2013 book Predictive Analytics. The listed applications fall under the following domains:

  • marketing, advertising, and the web;
  • financial risk and insurance;
  • health care;
  • crime fighting and fraud detection;
  • fault detection for safety and efficiency;
  • government, politics, nonprofit and education;
  • human-language understanding, thought and psychology;
  • staff and employees, human resources.

 

Siegel’s cross-industry collection of examples is a powerful illustration of the omnipresence of predictive applications, even though not all of his 147 examples utilise machine learning as we know it. For a designer, knowing whether your problem domain is among those listed will give an idea of whether machine learning has already proven useful there or whether you are facing a great unknown.

 

Detection, prediction, and creation

As I see it, the power of learning algorithms comes down to two major applications: detection and prediction. Detection is about interpreting the present, while prediction is about the future. Interestingly, machines can also perform generative or “creative” tasks, but these are still marginal applications.

When you combine detection and prediction, you can achieve impressive overall results. For instance, combine the detection of traffic signs, vehicles and pedestrians with the prediction of vehicular and pedestrian movements and of the times to vehicle line crossings, and you have the makings of an autonomous vehicle!

This is my preferred way of thinking about machine learning applications. In practice, detection and prediction can look much alike, because the distinction doesn’t cut into the heart and bones of machine learning itself, but I believe they offer an appropriate level of abstraction for talking about machine learning applications. Let’s clarify these functions through examples.

The varieties of detection

There are at least four major types of applications of detection. Each deals with a different core learning problem. They are:

  • text and speech interpretation,
  • image and sound interpretation,
  • human behaviour and identity detection,
  • abuse and fraud detection.

 

Text & speech interpretation

Text and speech are our most natural interaction and communication methods, yet for a long time they were not feasible ways of dealing with computers. Previous generations of voice dialling and interactive voice response systems were not very impressive. Only in this decade have we seen a new generation of applications that take spoken commands and even hold a dialogue with us! This can go so smoothly that we sometimes can’t tell computers and humans apart in text-based chats, suggesting that computers are edging towards passing the Turing test.

Dealing with speech, new systems such as personal assistant Siri or Amazon’s Echo device are capable of interpreting a wide range of communications and responding intelligently. The technical term for this capability is natural language processing (NLP). This indicates that, based on successful text and speech detection (i.e. recognition), computers can also interpret the meaning of words, spoken and written, and take action.

Text interpretation enables equally powerful applications. Detecting emotion or sentiment in text means that large masses of it can be automatically analysed to reveal what people on social media think about brands, products or presidential candidates. For instance, Google Translate recently saw significant quality improvements by switching to an ML approach to translation.
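To make the idea of sentiment detection concrete, here is a deliberately naive sketch. Real systems learn word weights from labelled data; the hand-written word lists below are invented purely for illustration:

```python
# A caricature of sentiment analysis: count positive vs negative words.
# An ML system would learn these weights from labelled text instead of
# using a hand-written list like this one.
POSITIVE = {"love", "great", "excellent", "happy", "good"}
NEGATIVE = {"hate", "awful", "terrible", "sad", "bad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# sentiment("I love this great product") -> "positive"
# sentiment("awful terrible service")    -> "negative"
```

Even this toy version shows the shape of the task: map free text to a small set of labels. The learned versions differ in how the mapping is obtained, not in what it produces.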

Amazon Echo Dot is surfacing as one of the best-selling speech-recognition-driven appliances of early 2017. Picture: Amazon

Image & sound interpretation

Computer vision gives metaphorical eyes to a machine. The most radical example of this is the computer reconstruction of human perception from brain scans! However, that is hardly as useful as applications that automate the tagging of photos or videos to help you explore Google Photos or Facebook. The latter recognises faces to an almost scary level of accuracy.

Image interpretation has many powerful business applications: industrial quality control, reading vehicle registration plates, analysing roadside photos for correct traffic signs and monitoring empty parking spaces. Recent applications of computer vision to skin cancer diagnosis have actually proven more proficient than human doctors, leading to the discovery of new diagnostic criteria.

 

Speech was already mentioned, but other audio signals are also well detected by computers. Shazam and SoundHound have for years provided reliable detection of songs either from a recording fragment or a sung melody. The Fraunhofer Institute developed the Silometer, an app to detect varieties of coughs as a precursor to medical diagnosis. I would be very surprised if we don’t see many new applications for human and non-human sounds in the near future.

 

Human behaviour and identity detection

Given that computers are seeing and hearing what we do, it is not surprising that they have become capable of analysing and detecting human behaviour and identity as well, for instance with Microsoft Kinect recognising our body motion. Machines can identify movements in a football game to automatically generate game statistics. Apple’s iPad Pro recognises whether the user is using a finger or the Pencil for control, to prevent unwanted gestures. A huge number of services detect which items typically go together in a shopping cart; this is what enables Amazon to suggest that you might also be interested in similar products.

In the world of transportation, it would be a major safety improvement if we could detect when a driver is about to fall asleep at the wheel, to prevent traffic accidents. Identity detection is another valuable function enabled by several kinds of signals. A Japanese research institute has developed a car seat that recognises who’s sitting in it. Google’s reCAPTCHA is a unique system that tells humans apart from spambots. Perhaps the most notorious example of inferring people’s health was Target’s successful detection of expectant mothers. This was followed by a marketing campaign that awkwardly disclosed the pregnancies of Target customers, resulting in much bad publicity.

Abuse and fraud detection

Machine learning is also used to detect and prevent fraudulent, abusive or dangerous content and schemes. It is not always about major attacks; sometimes it’s just blocking bad cheques or preventing petty criminals from entering the NFL’s Super Bowl arena. The best successes are found in anti-spam: Google, for instance, has done an excellent job for years of filtering spam from your Gmail inbox.

I will conclude with a well-intentioned detection example from beyond the human sphere. Whales can be reliably recognised from underwater recordings of their sounds, once more thanks to machine learning. This can help fishing machinery avoid contact with whales, for their protection.

Species of prediction

Several generations of TV watchers have been raised to watch weather forecasts for fun, ever since regular broadcasts began after the Second World War. The realm of prediction today is wide and varied. Some applications may involve non-machine learning parts that help in performing predictions.

Here I will focus on the prediction of human activities, but note that the prediction of different non-human activities is currently gaining huge interest. Predictive maintenance of machines and devices is one such application, and more are actively envisioned as the Internet of Things generates more data to learn from.

Predicting different forms of human behaviour falls roughly into the following core learning challenges and applications:

  • recommendations,
  • individual behaviour and condition,
  • collective behaviour prediction.

Different types of recommendations are all about predicting user preferences. When Netflix recommends a movie or Spotify generates a playlist of your future favourite music, they are trying to predict whether you will like it, watch it or listen through to the end. Netflix looks for your rating of the movie afterwards, whereas Spotify or Pandora might measure whether you return to enjoy the same song over and over again without skipping it. This way, our behaviours and preferences become connected even without our expressing them explicitly. This is something machines can learn about and exploit.
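One classic technique behind such predictions is collaborative filtering: infer your rating of an item from the ratings of people similar to you. A minimal sketch with an invented toy dataset (the user names, items and ratings are all made up, and the similarity measure is a simplified cosine):

```python
from math import sqrt

# Toy user-item ratings (1-5). Alice has not yet rated MovieC.
ratings = {
    "alice": {"MovieA": 5, "MovieB": 4},
    "bob":   {"MovieA": 4, "MovieB": 5, "MovieC": 2},
    "carol": {"MovieA": 1, "MovieB": 2, "MovieC": 5},
}

def cosine(u: dict, v: dict) -> float:
    # Simplified cosine similarity: dot product over shared items,
    # normalised by each user's full rating vector.
    items = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in items)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(user: str, item: str) -> float:
    # Weight other users' ratings of `item` by their similarity to `user`.
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        sim = cosine(ratings[user], their)
        num += sim * their[item]
        den += sim
    return num / den if den else 0.0

prediction = predict("alice", "MovieC")
# Alice rates like Bob, so her predicted rating for MovieC leans
# towards Bob's 2 rather than Carol's 5.
```

Production recommenders add many refinements (implicit feedback, matrix factorisation, cold-start handling), but the core idea of similarity-weighted prediction is the same.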

In design, predicting which content or which interaction models appeal to users could give rise to the personalisation of interfaces. This is mostly based on predicting which content a user would be most interested in. For a few years now, Amazon has heavily personalised its front page, predicting which products and links should be present in anticipation of your shopping desires and habits.

Recommendations are a special case of predicting individual behaviour. The scope of predictions does not end with trivial matters, such as whether you like Game of Thrones or Lady Gaga. Financial institutions attempt to predict who will default on their loan or try to refinance it. Big human-resource departments might predict employee performance and attrition. Hospitals might predict the discharge of a patient or the prognosis of a cancer. Rather more serious human conditions, such as divorce, premature birth and even death within a certain timeframe, have all been predicted with some success. Of course, predicting fun things can get serious when money is involved, as when big entertainment companies try to guess which songs and movies will top the charts to direct their marketing and production efforts.

The important part about predictions is that they lead to an individual assessment that is actionable. The more reliable the prediction and the weightier the consequences, the more profitable and useful the predictions become.

Predicting collective behaviour is a generalisation over individuals, but with different implications and scope: an intervention is only successful if it affects most of the crowd. The looming result of a presidential election, cellular network use and seasonal shopping expenditure can all be subject to prediction. When predicting financial risk or a company’s key performance indicators, the gains from saving or making money are considerable. J.P. Morgan Chase was one of the first banks to increase efficiency by predicting mortgage defaulters (those who never pay back) and refinancers (those who pay back too early). On the other hand, the recent US presidential election is a good reminder that none of this is perfect yet.

Just as antivirus tools deal with present dangers, future trouble is also predictable. Predictive policing forecasts where street conflicts might happen or where squatters are taking over premises, helping administrators distribute resources to the right places. A similar process goes on in energy companies as they try to estimate the capacity needed to last the night.

 

What can machine intelligence do for you?

After successfully creating a machine learning application to fulfil any of the three uses described above, what can you expect to come out of it? How would your product or service benefit from it? Here are six possible benefits:

  1. augment,
  2. automate,
  3. enable,
  4. reduce costs,
  5. improve safety,
  6. create.

In rare cases, machine learning enables a computer to perform tasks that humans simply can’t, because of speed requirements or the scale of data. But most of the time, ML helps to automate repetitive, time-consuming tasks that defy the limits of human labour cost or attention span. For instance, sorting through recycling waste 24/7 is done more reliably and affordably by a computer.

In some areas, machine learning may offer a new type of expert system that augments and assists humans. This could be the case in design, where a computer might propose a new layout or colour palette aligned with the designer’s efforts. Google Slides already offers this type of functionality through its suggested layouts feature. Augmenting human drivers would also improve traffic safety: a vehicle could, for example, start braking before the human operator could possibly react, saving the car from a rear-end collision.

You can find our data-driven solutions for business intelligence here.









How Amazon’s IoT platform controls things without servers

CATEGORIES

Tech

Amazon’s IoT platform is a framework for connecting smart devices to the cloud. It aims to make the basic processes of collecting data and controlling devices as simple as possible. AWS IoT is a fully managed service, which means the customer doesn’t have to worry about configuring servers or updating operating systems. The platform simply exposes a set of APIs and automatically scales from a single device to millions of devices.

I recently wrote an article (in Finnish) in my personal blog about using AWS IoT for home automation. AWS IoT is not exactly designed for this purpose, but if you are tech savvy enough, it can be used for it. The pricing is currently set at $5 per million messages, which lasts a long time when you’re only dealing with a couple of devices sending occasional messages.

The home automation experiment provides a convenient context for discussing the basic concepts of AWS IoT. In the next few sections, I will refer to the elements of a simple home system that detects human presence in rooms and turns on the lights if it happens at a certain time of the day. All the devices are connected to the Amazon cloud via public Internet.

Device Registration

The first step in most IoT projects is to register the devices (also called “things”) in a centrally managed database. AWS IoT provides this database for free and lets you add any number of devices to it. Registration is important because each device also gets its own SSL/TLS certificate and private key, which are used for authentication and encryption. Devices can only connect to AWS IoT using their certificates and private keys.

The AWS IoT device registry also works as a simple asset management database. It lets you attach attributes to devices and maintain information such as customer IDs. The device registry can later be queried based on these attribute values. For example, you can find all devices belonging to a specific customer ID. The attributes are optional, so they can just be ignored if they’re not needed.
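As a sketch of what registration with attributes looks like, the snippet below builds the request parameters for the `create_thing` API. The thing name and attribute names (`customerId`, `room`) are invented examples, and the actual boto3 call is left commented out because it requires AWS credentials:

```python
# Build the parameters for registering a thing in the AWS IoT registry.
# The attribute names here (customerId, room) are invented examples.
def thing_registration(name: str, attributes: dict) -> dict:
    return {
        "thingName": name,
        "attributePayload": {"attributes": attributes},
    }

params = thing_registration("presence-detector-1",
                            {"customerId": "42", "room": "living-room"})
# With credentials configured, this would be passed to boto3:
#   import boto3
#   boto3.client("iot").create_thing(**params)
```

The registry can later be queried by these attribute values, for example to list every thing belonging to `customerId` 42.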

In the home automation experiment, two devices were added to the registry: a wireless human presence detector and a Philips Hue light control bridge.

Data Collection

Almost any IoT scenario involves collecting device data. Amazon provides the AWS IoT Device SDK for connecting devices to the IoT platform. The SDK is typically used to develop a small application that runs on the device (or on a gateway connected to the device) and transmits data to the cloud.

There are two ways to deliver data to the AWS IoT platform. The first is to send raw MQTT messages, which are usually small JSON objects. You can then set up AWS IoT rules to forward these messages to other Amazon cloud services for further processing. In the home automation scenario, a rule specifies that all messages received under the topic “presence-detected” should be forwarded to an AWS Lambda microservice, which then decides what to do with the information.
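The shape of such a raw message is simple. A sketch of the JSON payload a presence detector might publish (the topic and field names here are my own choices, not prescribed by AWS IoT):

```python
import json
import time

def presence_message(room: str, detected: bool) -> str:
    # A small JSON payload to publish on an MQTT topic such as
    # "presence-detected". An AWS IoT rule (an SQL-like statement
    # matched against topics) could then forward it to Lambda.
    payload = {
        "room": room,
        "detected": detected,
        "timestamp": int(time.time()),
    }
    return json.dumps(payload)

msg = presence_message("living-room", True)
# e.g. {"room": "living-room", "detected": true, "timestamp": ...}
```

On the device, this string would be published with the AWS IoT Device SDK over MQTT; the payload itself is just JSON text.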

The other way is to use Thing Shadows, which are built into the AWS IoT platform. Every registered device has a “shadow” containing its latest reported state. The state is stored as a JSON document that can hold up to 8 kilobytes of fields and values. This makes it easy and cost-effective to store the current state of any device in the cloud without requiring an external database. For instance, a device equipped with a thermometer might regularly report its current state as a JSON object that looks like this: {“temperature”:22}.
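A shadow update has a conventional structure: the reported (and/or desired) state sits under a `state` key. A simplified sketch of the update document the thermometer would send (a real shadow document also carries metadata and version fields, omitted here):

```python
import json

def shadow_update(reported: dict) -> str:
    # AWS IoT shadow updates are JSON documents under a "state" key,
    # with "reported" (what the device says) and/or "desired"
    # (what the cloud wants). This sketch covers only "reported".
    return json.dumps({"state": {"reported": reported}})

update = shadow_update({"temperature": 22})
# {"state": {"reported": {"temperature": 22}}}
```

The device publishes this document to its shadow's update topic, and AWS IoT merges it into the stored shadow.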

It’s important to understand that Thing Shadows cannot be used as a general-purpose database. You can only look up one Thing Shadow at a time, and it will only contain the current state, so you will need a separate database if you want to analyse historical time series. However, Amazon offers a wide range of databases that you can easily connect to AWS IoT by forwarding Thing Shadow updates to services like DynamoDB or Kinesis. This seamless integration between Amazon cloud services is one of the key advantages of AWS IoT.

Data Analysis and Decision Making

Since Amazon already offers a wide range of data analysis services, the AWS IoT platform itself doesn’t include any new tools for analyzing data. Existing analysis services include products like Redshift, Elastic MapReduce, Amazon Machine Learning and various others. Device data is typically collected into S3 buckets using Kinesis Firehose and then processed by these services.

Device data can also be forwarded to Amazon Lambda microservices for real-time decision making. A JavaScript function will be executed every time a data point is received. This is suitable for the home automation scenario, where a single IoT message is sent whenever presence is detected in a room. The JavaScript function considers various factors, such as the current time of day, and decides whether to turn the lights on.
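As a rough illustration of that decision function (written here in Python for brevity, where the project used JavaScript, and with an invented evening-hours rule standing in for the real logic):

```python
from datetime import datetime, time

def should_turn_lights_on(now, start=time(17, 0), end=time(23, 0)):
    """Only react to presence during evening hours.

    The evening window is an invented example of the "current time of
    day" factor the decision function considers.
    """
    return start <= now.time() <= end

def handler(event, context=None):
    """Sketch of a Lambda entry point handling a presence message."""
    if event.get("presence") and should_turn_lights_on(datetime.now()):
        return {"action": "turn-on"}
    return {"action": "none"}
```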

In addition to existing solutions, Amazon has announced an upcoming product called Kinesis Analytics. It will enable real-time analytics of streaming IoT data, similar to Apache Storm. This means that data can be analyzed on-the-fly without storing it in a database. For instance, you could maintain a rolling average of values and react to it instead of individual data points.
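The idea of reacting to a rolling average rather than to individual data points can be sketched in a few lines of plain Python (the window size is an arbitrary choice):

```python
from collections import deque

class RollingAverage:
    """Maintain a rolling average over the last `size` data points,
    similar in spirit to what a streaming analytics tool would compute
    over an IoT data stream on the fly."""

    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def add(self, value):
        """Add a data point and return the current rolling average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)
```

An alerting rule could then compare each new average against a threshold instead of reacting to every noisy reading.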

Device Control

The AWS IoT platform can control devices in the same two ways it collects data. The first way is to send raw MQTT messages directly to devices. Devices will react to the messages when they receive them. The problem with this approach is that devices might sometimes have network or electricity issues, which may cause the loss of some control messages.

Thing Shadows provide a more reliable way to have devices enter a desired state. A Thing Shadow will remember the new desired state and keep retrying until the device has acknowledged it.

In the home automation scenario, when presence is detected, the desired state of a lamp is set to {"light":true}. When the lamp receives this desired state, it turns on the light and reports its current state back to AWS IoT as {"light":true}. Once the reported state matches the desired state, the Thing Shadow of the lamp is known to be in sync.
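This desired-versus-reported handshake can be illustrated with a small helper that computes the out-of-sync fields, mirroring the delta a Thing Shadow tracks:

```python
def shadow_delta(desired, reported):
    """Return the fields whose desired value differs from the reported
    one. An empty delta means the shadow is in sync."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# Presence detected: we want the lamp on, but it still reports off.
assert shadow_delta({"light": True}, {"light": False}) == {"light": True}

# The lamp acknowledges by reporting the same state back: in sync.
assert shadow_delta({"light": True}, {"light": True}) == {}
```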

User Interfaces and Data Visualization

You may use the AWS IoT Console to manually control devices by modifying their desired state. The console will show the current state and update it on the screen as it changes. This is, of course, a very low-level way to control lighting since you need to log in as a cloud administrator and then manually edit the JSON documents.

A better way is to build a web application that integrates with AWS IoT and offers a friendly user interface for controlling things. AWS provides rich infrastructure options for developing integrated mobile and web applications. Amazon API Gateway and Lambda are typically used to build a backend API that lets applications access IoT data. The data itself may be stored in a database like DynamoDB or PostgreSQL. Access can be limited to authenticated users using Amazon Cognito or a custom IAM solution.

For data visualization purposes, Amazon has recently announced an upcoming product called Amazon QuickSight, which will integrate with other Amazon services and databases. There are also many third-party solutions available through the AWS Marketplace. If none of these options fits the use case well, a custom solution can always be developed as part of a web application.

My Findings

AWS IoT is a fast and easy way to get started on the Internet of Things. All the scenarios discussed in this article are based on managed cloud services. This means that you never have to maintain your own servers or worry about scaling.

For small-scale projects the operating costs are negligible. For larger scale projects, the costs will depend on the amount and frequency of the data being transferred. There are no fixed monthly or hourly fees, which makes personal experimentation at home very convenient.


Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








Agilifying the release cycle of a business-to-business service

CATEGORIES

Insights

Why go fast when you can go hyper-fast?

Customer needs are nearly impossible to predict when developing new products. However, if your product development process is agile enough, you may be able to respond to customers’ emerging demands on short notice.

How much reaction time matters depends on the business and the competitive environment. The tougher the competition and the lower the cost of switching service providers, the more pressed you are to react in a timely fashion.

While every software company nowadays markets itself as lean, agile, nimble and fast-moving, how often are these slogans realized in everyday operations? Do some companies really have special tools to accelerate development?

Nordcloud Solutions’ secret overdrive switch

For several years, we’ve been helping the energy management company Enegia to maintain their competitive edge in the digital channel. In late 2014, Nordcloud Solutions came along to co-create EnerKey, a new cloud-based reporting platform. During our collaboration, we’ve been lucky to witness and grasp several opportunities for structural change in the development process.

In under three years, many things have happened. Nordcloud Solutions experts have participated in renewing the architecture. The move towards a microservice architecture has made it possible to split back-end development into smaller sections that can be developed independently. This has in turn sped up front-end development as well.

Automated testing is another must-have: it enables the confident delivery of new features at short intervals. Since its launch in mid-2015, the service has settled into a smooth development flow.

There were 7 releases in 2015.
32 in 2016.

Senior project manager Kimmo Kinnunen from Enegia notes that in the past 12 months the development team has been able to ship 20 major feature releases plus ten bug-fix releases!

This speed has far exceeded earlier expectations of quarterly releases. And there is no going back. Kimmo describes the situation as follows:

“We’re now able to move things quickly from designer’s desk to development and production. There is no need to revisit old ideas ten months after inception and try to recall what we were thinking. The pipeline goes so fast and we will focus on improving production features rather than redefining designs before a major release.”

There is no secret switch

With Enegia, Nordcloud Solutions has not pulled any single extraordinary magic trick. Our software architect Mikko Kärkkäinen has been working on the project for over two years. He attributes the virtues of the current process to several development decisions.

“Our thinking is focused on doing the minimum viable thing. Starting with small releases and doing consecutive releases that extend and augment the MVP.

Behind small releases, we have a microservice architecture that is the enabler for developing small, individual features. And of course, there’s test automation and continuous integration in the development environment that help to get these smaller tasks done fast,” Mikko says.

Smaller, more frequent releases force everyone to think differently. But it is also more rewarding as the individual development tasks are completed, and even released in a matter of days, not months.

 

The only way is forward

“We’re on a good track here. I can’t imagine slowing down, but then again, we have no need to go much faster,” says Kimmo Kinnunen.

The new platform is slowly but surely catching up with the previous-generation service in terms of feature scope. The old service is expected to retire next year, as the Enegia team, together with Nordcloud Solutions and another contractor, has pushed some 20 new feature releases to the new EnerKey platform.

The transition is not only about replacing outdated technology, but also exploiting the benefits of speed:

“We’re finally able to address customer requests in a more timely fashion. If we decide to prioritize a new feature, with the current process it can be a matter of weeks before it is released on the new platform.

If the release of a new feature after its design and development is delayed by several months solely due to a slow release cycle, this will negatively affect the return on investment,” Kimmo recounts.









Four UI Design Guidelines for Creating Machine Learning Applications

CATEGORIES

Insights

Previously, I’ve introduced three underlying general capacities of machine learning that are exploited in applications. However, they are not enough for designers to actually start building applications. This is why this particular post introduces four general design guidelines that can help on the way.

How can we and will we communicate machine intelligence to users, and what kinds of new interfaces will machine learning call for?

Machine learning under the hood entails both opportunities to do things in a new way and requirements for new designs. To me, this means that several design patterns – or rather, abstract design features – will rise in importance as services get smarter. They include:

  1. suggested features,
  2. personalization,
  3. shortcuts vs. granular controls,
  4. graceful failure.

Suggested features

Text and speech prediction has opened up new opportunities for interaction with smart devices. Conversational interfaces are the most prominent example of this development, but definitely not the only one. As we try to hide the interface and underlying complexity from users, we are balancing between what we hide and what we reveal. Suggested features help users to discover what the invisible UI is capable of.

Graphical user interfaces (GUIs) have made computing accessible for the better part of the human race that enjoys normal vision. GUIs provided a huge usability improvement in terms of feature discovery. Icons and menus were the first useful metaphors for direct manipulation of digital objects using a mouse and keyboard. With multi-touch screens, we have gained the new power of pinching, dragging and swiping to interact. Visual cues aren’t going anywhere, but they are not going to be enough when interaction modalities expand.

How does a user find out what your service can do?

Haptic interaction in the first consumer generation of wearables, and foremost in conversational interfaces, presents a new challenge for feature discovery. Non-visual cues that facilitate the interaction must be used, particularly at the very onset of the interactive relationship. Feature suggestions – the machine exposing its features and informing the user what it is capable of – are one solution to this riddle.

In the case of a chatbot employed for car rentals, this could be, “Please ask me about available vehicles, upgrades, and your past reservations.”

Specific application areas come with specific, detailed patterns. For instance, Patrick Hebron’s recent ebook from O’Reilly contains a great discussion of the solutions for conversational interfaces.

Personalisation

Once a computer gets to know you and can predict your desires and preferences, it can start to serve you in new, more effective ways. This is personalization: the automated optimization of a service. Responsive website layouts are a crude way of doing this.

The utilization of machine learning with interfaces could lead to highly personalized user experiences. Akin to giving everyone a unique desktop and home screen, services and apps will start to adapt to people’s preferences as well. This new degree of personalization presents opportunities, but also forces designers to rethink how to create truly adaptive interfaces that are largely controlled by the logic of machine learning. If you succeed in this, you will reward users with a superior experience and impart a feeling of being understood.

Amazon.com’s front page has been personalised for a long time. The selection offered to me looks somewhat relevant, if not attractive.

Currently, personalisation is mostly applied to curate content. For instance, Amazon carefully considers which products would appeal to potential buyers on its front page. But it will not end there. Personalisation will likely lead to much bigger changes across UIs – for instance, even in the choice of interactive elements a user prefers to use.

Shortcuts versus granularity

Photoshop is an excellent example of a tool with a steep learning curve and a great deal of granularity in controlling what can be done. Most of the time, you work on small operations, each of which has a very specific influence. The creative combination of many small things allows for interesting patterns to emerge on a larger scale. Holistic, black-box operations such as transformative filters and automatic corrections are not really the reason why professionals use Photoshop.

What will happen when machines learn to predict what we are doing repeatedly? For instance, I frequently perform certain actions in Photoshop before uploading my photos to a blog. While I could manually automate this, creating yet another user-defined feature among thousands already in the product, Photoshop might learn to predict my intentions and offer a more prominent shortcut, or a highway, to fast-forward me to my intended destination. As Adobe currently puts effort into bringing AI into Creative Cloud, we’ll likely see something even more clever than this very soon. It is up to you to let the machine figure out the appropriate shortcuts in your application.
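A toy sketch of such a predictive shortcut is a simple frequency model that learns which action most often follows the current one and suggests it next. Real products would use far richer models and context; the action names below are invented:

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Learn which action most often follows a given action, then
    suggest it as a shortcut ('predictive history' in miniature)."""

    def __init__(self):
        # For each action, count what the user did immediately after it.
        self.transitions = defaultdict(Counter)

    def observe(self, actions):
        """Record one observed sequence of user actions."""
        for current, nxt in zip(actions, actions[1:]):
            self.transitions[current][nxt] += 1

    def suggest(self, current):
        """Suggest the most frequent follow-up, or None if unseen."""
        counts = self.transitions.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```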

Mockup of a possible implementation of “predictive history” in Photoshop CC. The application suggests a possible future state for the user based on the user’s history and preceding actions and on the current image.

 

A funny illustration of a similar train of thought comes from Christopher Hesse’s machine-learning-based image-to-image translation, which provides interesting content-specific filling of doodles. Similar to Photoshop’s content-aware fill, it creates the most hilarious visualisations of building facades, cats, shoes, and bags based on minimal user input.

The edges2cats algorithm employs machine learning to finish your cat doodle as a photorealistic cat monster.

Graceful failure

I call the final pattern graceful failure. It means saying “sorry, I can’t do what you want because…” in an understandable way.

This is by no means unique to machine learning applications. It is innately human, but something that computers have been notoriously bad at since the time that syntax errors were repeatedly echoed by Commodore computers in the 1980s. But with machine learning, it’s slightly different. Because machine learning takes a fuzzy-logic approach to computing, there are new ways that the computer could produce unexpected results — that is, things could go very bad, and that has to be designed for. Nobody seriously blames the car in question for the death that occurred in the Tesla autopilot accident in 2016.

The other part is that building applications that rely on modern machine learning is still in its infancy. Classic software development has been around for so long that we’ve learned to deal with its insufficiencies better. Peter Norvig, the famous AI researcher and Google’s research director, puts it like this:

The problem here is the methodology for scaling this up to a whole industry is still in progress.… We don’t have the decades of experience that we have in developing and verifying regular software.

The nature of learning is such that computers learn from what is given to them. If the algorithm has to deal with something else, then the results will not be to your liking. For example, if you’ve trained a system to detect animal species from pet photos and then start using it to classify plants, there will be trouble. This is more or less why Microsoft’s Twitterbot Tay had to be silenced after it picked up the wrong examples from malicious users when exposed to real-world conditions.

The uncertainty in detection and prediction should be taken into consideration. How this is done depends on the application. Consider Google Search: no one is offended or truly hurt, merely amused or frustrated, by bad search results. Of course, bad results will eventually be bad for business. However, if your bank started using a chatbot that suddenly could not figure out your checking account’s balance, you would be rightfully worried and should be offered a quick way to resolve your trouble.

To deal with failure, interfaces would do well to help both parties adjust. Users can tolerate one or two “I didn’t get that, please say that again” prompts (but no more) if that’s what it takes to advance the dialogue. For services that include machine learning, extensive testing is best. Next comes informing users about the probability and consequences of failure, and instructions on what the user might do to avoid it. The good practices are still emerging.
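The "one or two re-prompts, but no more" policy sketched above can be expressed as a small decision function. The confidence threshold and retry limit are illustrative assumptions, not values from any particular product:

```python
def respond(confidence, retries_so_far, threshold=0.7, max_retries=2):
    """Graceful-failure policy for a recogniser with a confidence score:
    answer when confident, re-prompt at most `max_retries` times,
    then hand the conversation over to a human."""
    if confidence >= threshold:
        return "answer"
    if retries_so_far < max_retries:
        return "ask-again"
    return "escalate-to-human"
```

The important design choice is the explicit escalation path: the system says what went wrong and offers a way out instead of looping forever.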

 

This text is from an article originally appearing in Smashing Magazine:
https://www.smashingmagazine.com/2017/04/applications-machine-learning-designers/ 









NEW MACHINE LEARNING SERVICES ANNOUNCED AT THE RE:INVENT KEYNOTE

CATEGORIES

Insights

Last Wednesday, AWS’s CEO Andy Jassy held his traditional keynote at AWS re:Invent, and on the machine learning front, there were several interesting announcements. Here’s a summary of what they were and why you should care…

Amazon SageMaker – What is it?

SageMaker is a fully managed service for the implementation, training, automatic hyperparameter tuning, and deployment of machine learning models.

Why should you care?

SageMaker includes a hosted Jupyter environment that doesn’t limit you to a particular machine learning framework – TensorFlow, Caffe, MXNet, CNTK, Keras, Gluon and other major frameworks are all supported. This is in contrast to other cloud vendors’ fully managed ML offerings, which only offer a single ML framework to work with.

In addition, SageMaker automatically provisions EC2 instances for training and tears them down when the training is complete. This is really handy because up to this point, you had to handle instance provisioning a) manually or b) by implementing your own automation. This annoyance is now a thing of the past.

SageMaker also does automatic hyperparameter tuning (no more manual trial-and-error tuning) and model deployment, giving you auto-scaling inference endpoints with very little hassle.
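To illustrate what automatic tuning saves you from doing by hand, here is a minimal random-search sketch over a toy objective. SageMaker's own tuner is considerably more sophisticated (and Bayesian); this only shows the trial-and-error loop being automated:

```python
import random

def tune(objective, space, trials=50, seed=0):
    """Random search: sample hyperparameters from `space`, evaluate
    `objective` (lower is better), and keep the best combination."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: minimise (lr - 0.1)^2, so the search should land
# near lr = 0.1.
params, score = tune(lambda p: (p["lr"] - 0.1) ** 2, {"lr": (0.0, 1.0)})
```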

AWS DeepLens – What is it?

DeepLens is a deep learning enabled video camera and associated software toolkit.

Why should you care?

DeepLens includes an onboard graphics processor and over 100 GFLOPS of compute power. What this means in practice is that you can deploy a computer vision model on the device itself and run predictions/inference locally, without a round trip to the Cloud. DeepLens is fully programmable using the AWS Lambda serverless programming model. The models themselves even run as part of a Lambda function. All deep learning frameworks are supported, just like in SageMaker.

Amazon Rekognition Video – What is it?

Rekognition Video does object recognition for video files. Rekognition Video complements the original Rekognition service, which works on image data.

Why should you care?

Object recognition from video previously required you to extract frames from video, convert them to images and then feed them to Rekognition. This process was unwieldy, introducing latency that made it impossible to do near real-time inference. With Rekognition Video, you can do real-time recognition for video, which enables a lot of different use cases. Rekognition Video can detect faces, filter inappropriate content, detect activities and even track people, which is something that other cloud vendors’ object recognition services do not provide out-of-the-box.

Amazon Kinesis Video Streams – What is it?

Kinesis Video Streams is a fully managed, secure video ingestion and storage service.

Why should you care?

Streaming video to the Cloud is tricky business, typically requiring you to implement your own solution with sufficient protection, scalability and failover mechanisms. It’s a huge hassle, and it’s only a means to an end. A fully managed service that handles all of this is extremely welcome, and in true AWS fashion, it integrates seamlessly with other AWS services.

Amazon Transcribe – What is it?

Amazon Transcribe is a machine learning-powered automatic speech recognition and transcription service.

Why should you care?

Transcription typically requires you to hire a transcription service, which may be prohibitively expensive depending on the use case. Amazon Transcribe does the transcription without manual work, adding in punctuation and, crucially, providing granular timestamps for each uttered word. As with other ready-made AI services, it’ll get better (more accurate) over time without you having to do anything.

Amazon Translate – What is it?

Amazon Translate is a machine learning-powered language translation service.

Why should you care?

Translation services are provided by other Cloud vendors, but until now, AWS hasn’t had their own. Amazon Translate is useful because, as usual, it’s well integrated into other AWS services. It also increases competition in the translation space, which is a win for end users.

Amazon Comprehend – What is it?

Amazon Comprehend is a natural language processing (NLP) service that identifies key phrases, topics, places, people, brands, and events in text. It also does sentiment analysis.

Why should you care?

Entity recognition is, in general, a hard machine learning problem – rolling out your own model takes massive amounts of data, careful algorithm selection and long training times. A ready-made solution allows you to focus on implementing your use case.

If you’d like to know more about these tools, and how best to use them, please contact us here.









NORDCLOUD: THE FASTEST GROWING CLOUD ENABLER IN EUROPE!

CATEGORIES

News

Nordcloud #76 overall and top-ranked cloud business in Deloitte’s 2017 Technology Fast 500 EMEA

Helsinki, December 8th, 2017. Nordcloud, the Nordics’ market-leading public cloud infrastructure solutions and services enabler, has cemented its position as a force to be reckoned with in cloud infrastructure across Europe.

The international consultancy Deloitte ranked Nordcloud at #76 in its Technology Fast 500 for 2017, which lists the fastest growing technology companies in Europe, the Middle East and Africa, based on revenue growth.

During 2017, Nordcloud added eight of the companies listed on the OMX40, the main Nordic stock index – a full 20% of the list – as customers.

The company is focused on expansion and has adopted uncompromising targets for the period 2018 to 2020; it’s aiming to grow by 400%, boosting its current turnover of around €50M to over €200M. Currently employing around 250 people, Nordcloud also plans to recruit more than 150 new staff over the next 12 months to expand its R&D, fuel international growth and maintain its reputation for excellent service.

“The really significant thing isn’t that we’re Europe’s fastest growing public cloud enabler, but why we are,” says Nordcloud’s CTO Ilja Summala.

“Part of the answer lies in the fact that we saw the true potential in business of the cloud very early on, and moved forward on our cloud-focused path from 2011,” says Summala. “But mostly it’s that we’ve built a reputation for delivering what we promise and with excellence. Together with our partners and customers, we’ve realised our vision of bringing about truly agile and efficient corporate IT through hyperscale cloud and software.”

Nordcloud offers a unique mix of cloud infrastructure and software expertise. Not only can Nordcloud engineers work with clients to move their applications to the cloud, but they can also create cloud-native applications to order for clients, including software that uses the latest innovations. Nordcloud is also expert in the specifically European aspects of both IT infrastructure and software, such as compliance with EU regulations, and has teams in every territory where it has customers.









Azure Global Vnet Peering – A Step Closer To MPLS Replacement

CATEGORIES

Tech

Microsoft announced a preview of global VNet peering during Ignite 2017. Global VNet peering enables customers to connect Azure virtual networks in different regions, easily leveraging Azure’s global networking backbone.

Many customers with a large number of globally distributed regional offices have historically not been too happy with the cost and performance of their global networking. For example, there are not enough internet breakouts to satisfy local performance needs, MPLS bandwidth for a remote office is less than that provided by the local internet cafe, and overall IT in remote offices costs more than it should.

Currently, the peering does not support transitive routing or gateway transit. Therefore, a remote office connected via VPN to one Azure region cannot leverage an Azure ExpressRoute connection in another region over global peering.

When transitive routing becomes available, customers will be able to reduce their MPLS costs and provide fast internet access in local offices. Office servers can be migrated to Azure, leaving a managed VPN/firewall as the only infrastructure to be maintained, greatly simplifying IT. MPLS replacement with Azure also requires many other management solutions to be compatible (such as compute endpoint management), but it promises to be a way of reducing cost, simplifying operations and improving global network services. Nordcloud has years of experience in Azure core infrastructure solutions development, so please get in touch if you want to find out more.
