Three main uses of machine learning

The beauty of applications that employ machine learning is that they can be extremely simple on the surface, hiding away much complexity from users. However, designers can’t afford to ignore the under-the-hood part of machine learning altogether. In this article, I demonstrate the three main functions machine learning algorithms perform underneath, along with six unique benefits you can derive from using them.

So what have machines learned so far?

In 2016, the most celebrated milestone of machine learning was AlphaGo’s victory over the world champion of Go, Lee Sedol. Considering that Go is an extremely complicated game to master, this was a remarkable achievement. Beyond exotic games such as Go, Google Image Search is maybe the best-known application of machine learning. Search feels so natural and mundane when it effectively hides away all of the complexity it embeds. With billions of search queries every day, Google Image Search constantly gets more opportunities to learn. There are already more individual machine learning applications than it is reasonable to list here. But a sweeping generalisation would not do them justice either, I feel. One way to appreciate the variety is to look at successful ML applications from Eric Siegel’s book Predictive Analytics from 2013. The listed applications fall under the following domains:
  • marketing, advertising, and the web;
  • financial risk and insurance;
  • health care;
  • crime fighting and fraud detection;
  • fault detection for safety and efficiency;
  • government, politics, nonprofit and education;
  • human-language understanding, thought and psychology;
  • staff and employees, human resources.
Siegel’s cross-industry collection of examples is a powerful illustration of the omnipresence of predictive applications, even though not all of his 147 examples utilise machine learning as we know it. However, for a designer, knowing whether your problem domain is among those listed will give an idea of whether machine learning has already proven to be useful or whether you are facing a great unknown.

Detection, prediction, and creation

As I see it, the power of learning algorithms comes down to two major applications: detection and prediction. Detection is about interpreting the present; prediction is about anticipating the future. Interestingly, machines can also perform generative or “creative” tasks, though these are still a marginal application. When you combine detection and prediction, you can achieve impressive overall results. For instance, combine the detection of traffic signs, vehicles and pedestrians with the prediction of vehicular and pedestrian movements and of the times to vehicle line crossings, and you have the makings of an autonomous vehicle! This is my preferred way of thinking about machine learning applications. In practice, detection and prediction are sometimes much alike, because the distinction does not cut into the technical heart of machine learning, but I believe they offer an appropriate level of abstraction for talking about machine learning applications. Let’s clarify these functions through examples.

The varieties of detection

There are at least four major types of applications of detection. Each deals with a different core learning problem. They are:
  • text and speech interpretation,
  • image and sound interpretation,
  • human behaviour and identity detection,
  • abuse and fraud detection.

Text & speech interpretation

Text and speech are our most natural methods of interaction and communication, yet for a long time they were not feasible ways of interacting with computers. Previous generations of voice dialling and interactive voice response systems were not very impressive. Only in this decade have we seen a new generation of applications that take spoken commands and even hold a dialogue with us! In narrow, text-based chats this can go so smoothly that we can’t always tell computers and humans apart, arguably passing a limited form of the Turing test.

Dealing with speech, new systems such as the personal assistant Siri or Amazon’s Echo device are capable of interpreting a wide range of communications and responding intelligently. The technical term for this capability is natural language processing (NLP): based on successful text and speech detection (i.e. recognition), computers can also interpret the meaning of words, spoken and written, and take action. Text interpretation enables equally powerful applications. Detecting emotion or sentiment in text means that large masses of text can be automatically analysed to reveal what people on social media think about brands, products or presidential candidates. Google Translate, for instance, recently saw significant quality improvements by switching to an ML approach to translation.

Amazon Echo Dot surfaced as one of the best-selling speech-recognition-driven appliances of early 2017. Picture: Amazon
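To make the sentiment-detection idea concrete, here is a toy lexicon-based scorer. This is only a sketch: the word lists are invented for illustration, and real sentiment systems learn their vocabulary and weights from labelled training data rather than using hand-picked lists.

```python
# Toy lexicon-based sentiment scorer. The word lists below are
# illustrative assumptions, not taken from any production system.
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "bad", "angry"}

def sentiment(text: str) -> str:
    # Lowercase, split on whitespace, and strip trailing punctuation
    # so that "awful," still matches the lexicon entry "awful".
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("awful, I hate it"))           # negative
```

Scaled up to millions of social-media posts, this same input-to-label shape is what lets brands or campaigns be monitored automatically; only the scoring function becomes a learned model instead of a word list.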

Image & sound interpretation

Computer vision gives metaphorical eyes to a machine. The most radical example of this is a computer reconstruction of human perception from brain scans! However, that is hardly as useful as an application that automates the tagging of photos or videos to help you explore Google Photos or Facebook. The latter service recognises faces to an almost scary level of accuracy. Image interpretation finds many powerful business applications in industrial quality control, recording vehicle registration plates, analysing roadside photos for correct traffic signs and monitoring empty parking spaces. Recent applications of computer vision to skin cancer diagnosis have actually proven more proficient than human doctors, leading to the discovery of new diagnostic criteria. Speech was already mentioned, but other audio signals are also well detected by computers. Shazam and SoundHound have for years provided reliable detection of songs, either from a recorded fragment or a sung melody. The Fraunhofer Institute developed the Silometer, an app to detect varieties of coughs as a precursor to medical diagnosis. I would be very surprised if we don’t see many new applications for human and non-human sounds in the near future.

Human behaviour and identity detection

Given that computers can see and hear what we do, it is not surprising that they have become capable of analysing and detecting human behaviour and identity as well; Microsoft Kinect, for instance, recognises our body motion. Machines can identify movements in a football game to automatically generate game statistics. Apple’s iPad Pro recognises whether the user is using a finger or the Pencil for control, to prevent unwanted gestures. A huge number of services detect which items typically go together in a shopping cart; this enables Amazon to suggest products you might also be interested in. In the world of transportation, it would be a major safety improvement if we could detect when a driver is about to fall asleep behind the steering wheel, to prevent traffic accidents. Identity detection is another valuable function enabled by several signals. A Japanese research institute has developed a car seat that recognises who’s sitting in it. Google’s reCAPTCHA is a unique system that tells humans apart from spambots. Perhaps the most notorious example of inferring people’s health was Target’s successful detection of expectant mothers. This was followed by a marketing campaign that awkwardly disclosed the pregnancy of Target customers, resulting in much bad publicity.
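The shopping-cart example boils down to counting which item pairs co-occur across many baskets, the simplest ingredient of a “customers also bought” feature. A minimal sketch, with invented basket data:

```python
from collections import Counter
from itertools import combinations

# Invented toy baskets; real systems would process millions of them.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "jam"},
]

# Count every unordered item pair that appears together in a basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair is a candidate for a recommendation rule.
print(pair_counts.most_common(1))  # [(('bread', 'butter'), 3)]
```

Production systems refine raw co-occurrence with measures such as lift or confidence so that merely popular items don’t dominate every suggestion, but the counting step is the same.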

Abuse & fraud detection

Machine learning is also used to detect and prevent fraudulent, abusive or dangerous content and schemes. It is not always about major attacks; sometimes it’s just about blocking bad checks or preventing petty criminals from entering the NFL’s Super Bowl arena. The best successes are found in anti-spam; for instance, Google has been doing an excellent job for years of filtering spam from your Gmail inbox. I will conclude with a goodwill detection example from beyond the human sphere. Whales can be reliably recognised from underwater recordings of their sounds, once more thanks to machine learning. This can help human-made fishing machinery avoid contact with whales, for their protection.
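Classic anti-spam filters are a nice illustration of detection as learning: a naive Bayes classifier compares how likely each word is in spam versus legitimate mail. Below is a minimal sketch with an invented six-message training set; real filters train on millions of messages and many more features than bare words.

```python
import math
from collections import Counter

# Tiny invented training corpus, for illustration only.
spam = ["win money now", "cheap money offer", "win a prize now"]
ham = ["meeting at noon", "lunch tomorrow", "project meeting notes"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_wc, ham_wc = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_wc.values()), sum(ham_wc.values())
vocab = set(spam_wc) | set(ham_wc)

def log_odds_spam(text):
    # Sum of log P(word|spam) - log P(word|ham), with Laplace smoothing
    # so unseen words don't produce zero probabilities.
    score = 0.0
    for w in text.split():
        p_spam = (spam_wc[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_wc[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

# Positive score: the message looks more like spam than ham.
print(log_odds_spam("win money") > 0)        # True
print(log_odds_spam("project meeting") > 0)  # False
```

Despite its simplicity, this word-frequency approach is what powered early spam filters, and the same scoring idea generalises to fraud and abuse signals beyond email.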

Species of prediction

Several generations of TV watchers have been raised to watch weather forecasts for fun, ever since regular broadcasts began after the Second World War. The realm of prediction today is wide and varied. Some applications may involve non-machine learning parts that help in performing predictions. Here I will focus on the prediction of human activities, but note that the prediction of different non-human activities is currently gaining huge interest. Predictive maintenance of machines and devices is one such application, and more are actively envisioned as the Internet of Things generates more data to learn from. Predicting different forms of human behaviour falls roughly into the following core learning challenges and applications:
  • recommendations,
  • individual behaviour and condition,
  • collective behaviour prediction.
Different types of recommendations are about predicting user preferences. When Netflix recommends a movie or Spotify generates a playlist of your future favourite music, they are trying to predict whether you will like it, watch it or listen through to the end of the piece. Netflix is on the lookout for your rating of the movie afterwards, whereas Spotify or Pandora might measure whether you return to enjoy the same song over and over again without skipping. This way, our behaviours and preferences become connected even without our needing to express them explicitly. This is something machines can learn about and exploit.

In design, predicting which content or which interaction models appeal to users could give rise to the personalisation of interfaces. This is mostly based on predicting which content a user would be most interested in. For a few years now, Amazon has been heavily personalising its front page, predicting which products and links should be present in anticipation of your shopping desires and habits.

Recommendations are a special case of predicting individual behaviour. The scope of predictions does not end with trivial matters, such as whether you like Game of Thrones or Lady Gaga. Financial institutions attempt to predict who will default on their loan or try to refinance it. Big human-resource departments might predict employee performance and attrition. Hospitals might predict the discharge of a patient or the prognosis of cancer. Rather more serious human conditions, such as divorce, premature birth and even death within a certain timeframe, have all been predicted with some success. Of course, predicting fun things can get serious when money is involved, as when big entertainment companies try to guess which songs and movies will top the charts, to direct their marketing and production efforts. The important part about predictions is that they lead to an individual assessment that is actionable.
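One common way such preference predictions are made is item-based collaborative filtering: two items count as similar when the same users rate them alike, so liking one predicts liking the other. A minimal sketch with invented ratings (the movie names and numbers are made up, and real recommenders add normalisation and far more data):

```python
import math

# Invented toy ratings: user -> {item: rating on a 1-5 scale}.
ratings = {
    "ann": {"MovieA": 5, "MovieB": 4, "MovieC": 1},
    "bob": {"MovieA": 4, "MovieB": 5},
    "eve": {"MovieC": 5, "MovieB": 1},
}

def item_vector(item):
    # Represent an item as the ratings users gave it.
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    # Cosine similarity over the users both items share.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

sim_ab = cosine(item_vector("MovieA"), item_vector("MovieB"))
sim_ac = cosine(item_vector("MovieA"), item_vector("MovieC"))
# A and B attract similar ratings, so a fan of A gets B recommended
# ahead of C.
print(sim_ab > sim_ac)  # True
```

The same behaviour-to-behaviour logic underlies the “watched it to the end” and “replayed without skipping” signals mentioned above: implicit actions stand in for explicit ratings.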
The more reliable the prediction and the weightier the consequences, the more profitable and useful the predictions become.

Predicting collective behaviour is a generalisation over individuals, but with different implications and scope. In these cases, an intervention is only successful if it affects most of the crowd. The looming result of a presidential election, cellular network use or seasonal shopping expenditure can all be subject to prediction. When predicting financial risk or a company’s key performance indicators, the gains of saving or making money are noticeable. J.P. Morgan Chase was one of the first banks to increase efficiency by predicting mortgage defaulters (those who never pay back) and refinancers (those who pay back too early). On the other hand, the recent US presidential election is a good reminder that none of this is yet perfect.

Much as antivirus tools address present dangers, future trouble can also be predicted. Predictive policing forecasts where street conflicts might happen or where squatters are taking over premises, helping administrators to distribute resources to the right places. A similar process is going on in energy companies, as they try to estimate the capacity needed to last the night.

What can machine intelligence do for you?

After successfully creating a machine learning application to fulfil any of the three uses described above, what can you expect to come out of it? How would your product or service benefit from it? Here are six possible benefits:
  1. augment,
  2. automate,
  3. enable,
  4. reduce costs,
  5. improve safety,
  6. create.
In rare cases, machine learning might enable a computer to perform tasks that humans simply can’t, because of speed requirements or the scale of data. But most of the time, ML helps to automate repetitive, time-consuming tasks that defy the limits of human labour cost or attention span. For instance, sorting through recycling waste 24/7 is more reliably and affordably done by a computer. In some areas, machine learning may offer a new type of expert system that augments and assists humans. This could be the case in design, where a computer might propose a new layout or colour palette aligned with the designer’s efforts. Google Slides already offers this type of functionality through its suggested layouts feature. Augmenting human drivers would improve traffic safety if a vehicle could, for example, start braking before the human operator could possibly react, saving the car from a rear-end collision.