Training PyTorch Transformers on Google Cloud AI Platform

Google Cloud is widely known for its AI and machine learning capabilities, and there is plenty of material available on how to train and deploy TensorFlow models there. However, Google Cloud is not just for TensorFlow users; it has good support for other frameworks as well.

In this post I will show how to use another highly popular ML framework, PyTorch, on AI Platform Training. Specifically, I will show how to fine-tune a state-of-the-art sequence classification model using PyTorch and the transformers library, with a pre-trained RoBERTa as the transformer model.

RoBERTa belongs to the family of transformer-based large language models which have become very popular in natural language processing since the release of BERT, developed by Google. RoBERTa was developed by researchers at the University of Washington and Facebook AI. It is fundamentally a BERT model pre-trained with an improved pre-training approach. See the details about RoBERTa here.

This post covers the following topics:

  • How to structure your ML project for AI Platform Training
  • Code for the model, the training routine and evaluation of the model
  • How to launch and monitor your training job

You can find all the code on Github.

ML Project Structure

Let’s start with the contents of our ML project.

├── trainer/
│   ├── __init__.py
│   ├── experiment.py
│   ├── inputs.py
│   ├── model.py
│   └── task.py
├── scripts/
│   └── train.sh
├── config.yaml
└── setup.py
The trainer directory contains all the python files required to train the model. The contents of this directory will be packaged and submitted to AI Platform. You can find more details and best practices on how to package your training application here. We will look at the contents of the individual files later in this post.

The scripts directory contains our training scripts that will configure the required environment variables and submit the job to AI Platform Training.

config.yaml contains the configuration of the compute instance used for training the model. Finally, setup.py contains details about our Python package and the required dependencies. AI Platform Training will use the details in this file to install any missing dependencies before starting the training job.

PyTorch Code for Training the Model

Let’s look at the contents of our Python package. The first file, __init__.py, is just an empty file. It needs to be in place in each subdirectory: the init files are used by Python Setuptools to identify directories with code to package. It is OK to leave this file empty.

The rest of the files contain different parts of our PyTorch software. task.py is our main file and will be called by AI Platform Training. It retrieves the command line arguments for our training task and passes them to the run function in experiment.py.

def get_args():
    """Define the task arguments with the default values.

    Returns:
        experiment parameters
    """
    parser = ArgumentParser(description='NLI with Transformers')

    parser.add_argument('--job-dir',
                        help='GCS location to export models')
    parser.add_argument('--model-name',
                        help='The name of your saved model')

    return parser.parse_args()
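To sanity-check the argument handling locally, the same two arguments can be rebuilt and parsed from an explicit argv list. The values below are made up for illustration:

```python
from argparse import ArgumentParser

# Rebuild the two arguments shown above and parse a hypothetical command line
parser = ArgumentParser(description='NLI with Transformers')
parser.add_argument('--job-dir', help='GCS location to export models')
parser.add_argument('--model-name', help='The name of your saved model')

args = parser.parse_args(['--job-dir', 'gs://my-bucket/models',
                          '--model-name', 'finetuned-roberta'])
print(args.job_dir)     # → gs://my-bucket/models
print(args.model_name)  # → finetuned-roberta
```

Note that argparse converts the dashes in `--job-dir` and `--model-name` to underscores in the resulting attribute names.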

def main():
    """Setup / Start the experiment."""
    args = get_args()
    experiment.run(args)


if __name__ == '__main__':
    main()

Before we look at the main training and evaluation routines, let’s look at inputs.py and model.py, which define the datasets for the task and the transformer model, respectively. In inputs.py we use the datasets library to retrieve our data for the experiment: the MultiNLI sequence classification dataset. The file contains code to retrieve, split and pre-process the data. The NLIDataset class provides the PyTorch Dataset objects for the training, development and test data for our task.

class NLIDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.encodings.input_ids)
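The indexing logic is easiest to see with toy data. Ignoring the tensor conversion, __getitem__ simply assembles one example from the parallel per-field lists (a plain-Python sketch with made-up token ids):

```python
# Toy stand-ins for the tokenizer output and the label list
encodings = {'input_ids': [[0, 31414, 2], [0, 1332, 2]],
             'attention_mask': [[1, 1, 1], [1, 1, 1]]}
labels = [0, 2]

def get_item(idx):
    # Same dict comprehension as NLIDataset.__getitem__, minus torch.tensor
    item = {key: val[idx] for key, val in encodings.items()}
    item['labels'] = labels[idx]
    return item

print(get_item(1))
# → {'input_ids': [0, 1332, 2], 'attention_mask': [1, 1, 1], 'labels': 2}
```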

The load_data function retrieves the data using the datasets library, splits the data into training, development and test sets, and then tokenises the input using RobertaTokenizer and creates PyTorch DataLoader objects for the different sets.

def load_data(args):
    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    nli_data = datasets.load_dataset('multi_nli')

    # For testing purposes get a smaller slice of the training data
    all_examples = len(nli_data['train']['label'])
    num_examples = int(round(all_examples * args.fraction_of_train_data))

    print("Training with {}/{} examples.".format(num_examples, all_examples))
    train_dataset = nli_data['train'][:num_examples]

    dev_dataset = nli_data['validation_matched']
    test_dataset = nli_data['validation_matched']

    train_labels = train_dataset['label']

    val_labels = dev_dataset['label']
    test_labels = test_dataset['label']

    train_encodings = tokenizer(train_dataset['premise'], train_dataset['hypothesis'], truncation=True, padding=True)
    val_encodings = tokenizer(dev_dataset['premise'], dev_dataset['hypothesis'], truncation=True, padding=True)
    test_encodings = tokenizer(test_dataset['premise'], test_dataset['hypothesis'], truncation=True, padding=True)

    train_dataset = NLIDataset(train_encodings, train_labels)
    val_dataset = NLIDataset(val_encodings, val_labels)
    test_dataset = NLIDataset(test_encodings, test_labels)

    train_loader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True)
    dev_loader = DataLoader(val_dataset, batch_size=args.batch_size, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=args.batch_size, shuffle=True)

    return train_loader, dev_loader, test_loader

The save_model function uploads the trained model to Google Cloud Storage once training has finished.

def save_model(args):
    """Saves the model to Google Cloud Storage.

    Args:
      args: contains name for saved model.
    """
    scheme = 'gs://'
    bucket_name = args.job_dir[len(scheme):].split('/')[0]

    prefix = '{}{}/'.format(scheme, bucket_name)
    bucket_path = args.job_dir[len(prefix):].rstrip('/')

    datetime_ = datetime.datetime.now().strftime('model_%Y%m%d_%H%M%S')

    if bucket_path:
        model_path = '{}/{}/{}'.format(bucket_path, datetime_, args.model_name)
    else:
        model_path = '{}/{}'.format(datetime_, args.model_name)

    bucket = storage.Client().bucket(bucket_name)
    blob = bucket.blob(model_path)
    blob.upload_from_filename(args.model_name)
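To make the path handling concrete, here is the same string logic as a standalone function (no GCS client involved), applied to a hypothetical job directory:

```python
def split_job_dir(job_dir):
    """Split a gs:// URI into (bucket_name, bucket_path), as save_model does."""
    scheme = 'gs://'
    bucket_name = job_dir[len(scheme):].split('/')[0]
    prefix = '{}{}/'.format(scheme, bucket_name)
    bucket_path = job_dir[len(prefix):].rstrip('/')
    return bucket_name, bucket_path

print(split_job_dir('gs://my-bucket/models/run-1'))
# → ('my-bucket', 'models/run-1')
print(split_job_dir('gs://my-bucket'))
# → ('my-bucket', '')
```

When the job directory is just the bucket root, the path component comes back empty, which is why save_model branches on bucket_path.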

The model.py file contains the code for the transformer model RoBERTa. The __init__ function initialises the module and defines the transformer model to use. The forward function will be called by PyTorch during execution, with the input batch of tokenised sentences together with the associated labels. The create function is a wrapper that is used to initialise the model and the optimiser during execution.

# Specify the Transformer model
class RoBERTaModel(nn.Module):
    def __init__(self):
        """Defines the transformer model to be used."""
        super(RoBERTaModel, self).__init__()

        self.model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=3)

    def forward(self, x, attention_mask, labels):
        return self.model(x, attention_mask=attention_mask, labels=labels)


def create(args, device):
    """Create the model and optimizer.

    Args:
      args: experiment parameters.
      device: device.
    """
    model = RoBERTaModel().to(device)
    optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)

    return model, optimizer

The experiment.py file contains the main training and evaluation routines for our task: the functions train, evaluate and run. The train function takes our training dataloader as an input and trains the model for one epoch, in batches of the size defined in the command line arguments.

def train(args, model, dataloader, optimizer, device):
    """Create the training loop for one epoch.

    Args:
      model: The transformer model that you are training
      dataloader: The training dataset
      optimizer: The selected optimizer to update parameters and gradients
      device: device
    """
    model.train()
    for i, batch in enumerate(dataloader):
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)

        optimizer.zero_grad()
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs[0]
        loss.backward()
        optimizer.step()

        if i == 0 or i % args.log_every == 0 or i+1 == len(dataloader):
            print("Progress: {:3.0f}% - Batch: {:>4.0f}/{:<4.0f} - Loss: {:<.4f}".format(
                100. * (1+i) / len(dataloader), # Progress
                i+1, len(dataloader), # Batch
                loss.item())) # Loss
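The progress line in the loop above renders like this (a standalone sketch with made-up batch numbers and a made-up loss value):

```python
def progress_line(i, num_batches, loss):
    # Same format string as the training loop's print statement
    return "Progress: {:3.0f}% - Batch: {:>4.0f}/{:<4.0f} - Loss: {:<.4f}".format(
        100. * (1 + i) / num_batches, i + 1, num_batches, loss)

print(progress_line(49, 200, 0.6931))
# → Progress:  25% - Batch:   50/200  - Loss: 0.6931
```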

The evaluate function takes the development or test dataloader as an input and evaluates the prediction accuracy of our model. This will be called after each training epoch using the development dataloader and after the training has finished using the test dataloader.

def evaluate(model, dataloader, device):
    """Create the evaluation loop.

    Args:
      model: The transformer model that you are training
      dataloader: The development or testing dataset
      device: device
    """
    print("\nStarting evaluation...")
    model.eval()
    with torch.no_grad():
        eval_preds = []
        eval_labels = []

        for _, batch in enumerate(dataloader):
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)
            preds = model(input_ids, attention_mask=attention_mask, labels=labels)
            preds = preds[1].argmax(dim=-1)

            eval_preds.append(preds.cpu().numpy())
            eval_labels.append(labels.cpu().numpy())

    print("Done evaluation")
    return np.concatenate(eval_labels), np.concatenate(eval_preds)
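The accuracy computed from the two arrays that evaluate returns is just the element-wise match rate, numpy's (labels == preds).mean(). A dependency-free sketch with hypothetical values makes this explicit:

```python
# Hypothetical gold labels and model predictions for four examples
labels = [0, 1, 2, 1]
preds  = [0, 2, 2, 1]

# Equivalent of (labels == preds).mean() on numpy arrays
accuracy = sum(l == p for l, p in zip(labels, preds)) / len(labels)
print(accuracy)  # → 0.75
```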

Finally, the run function calls the train and evaluate functions and saves the fine-tuned model to Google Cloud Storage once training has completed.

def run(args):
    """Load the data, train, evaluate, and export the model for serving.

    Args:
      args: experiment parameters.
    """
    cuda_availability = torch.cuda.is_available()
    if cuda_availability:
        device = torch.device('cuda:{}'.format(torch.cuda.current_device()))
    else:
        device = 'cpu'
    print('`cuda` available: {}'.format(cuda_availability))
    print('Current Device: {}'.format(device))

    # Open our dataset
    train_loader, eval_loader, test_loader = inputs.load_data(args)

    # Create the model and optimizer
    bert_model, optimizer = model.create(args, device)

    # Train / Test the model
    for epoch in range(1, args.epochs + 1):
        train(args, bert_model, train_loader, optimizer, device)
        dev_labels, dev_preds = evaluate(bert_model, eval_loader, device)
        # Print validation accuracy
        dev_accuracy = (dev_labels == dev_preds).mean()
        print("\nDev accuracy after epoch {}: {}".format(epoch, dev_accuracy))

    # Evaluate the model
    print("Evaluate the model using the testing dataset")
    test_labels, test_preds = evaluate(bert_model, test_loader, device)
    # Print test accuracy
    test_accuracy = (test_labels == test_preds).mean()
    print("\nTest accuracy after epoch {}: {}".format(args.epochs, test_accuracy))

    # Export the trained model
    torch.save(bert_model.state_dict(), args.model_name)

    # Save the model to GCS
    if args.job_dir:
        save_model(args)

Launching and monitoring the training job

Once we have the Python code for our training job, we need to prepare it for AI Platform Training. There are three important files required for this. First, setup.py contains information about the dependencies of our Python package as well as metadata like the name and version of the package.

from setuptools import find_packages
from setuptools import setup

setup(
    name='trainer',
    packages=find_packages(),
    install_requires=['transformers', 'datasets'],  # dependencies missing from the base image
    description='Sequence Classification with Transformers on Google Cloud AI Platform'
)

The config.yaml file contains information about the compute instance used for training the model. For this job we use an NVIDIA V100 GPU, as it provides improved training speed and larger GPU memory compared to the cheaper K80 GPUs. See this great blog post by Google on selecting a GPU.

trainingInput:
  scaleTier: CUSTOM
  masterType: n1-standard-8
  masterConfig:
    acceleratorConfig:
      count: 1
      type: NVIDIA_TESLA_V100

Finally, the scripts directory contains the train.sh script, which sets the required environment variables as well as runs the gcloud command to submit the AI Platform Training job.

# BUCKET_NAME: unique bucket name
BUCKET_NAME=your-bucket-name

# The PyTorch image provided by AI Platform Training.
IMAGE_URI=gcr.io/cloud-ml-public/training/pytorch-gpu.1-4

# JOB_NAME: the name of your job running on AI Platform.
JOB_NAME=transformers_job_$(date +%Y%m%d_%H%M%S)

echo "Submitting AI Platform Training job: ${JOB_NAME}"

PACKAGE_PATH=./trainer # this can be a GCS location to a zipped and uploaded package

# REGION: the region in which the job runs.
REGION=us-central1

# JOB_DIR: Where to store prepared package and upload output model.
JOB_DIR=gs://${BUCKET_NAME}/models

gcloud ai-platform jobs submit training ${JOB_NAME} \
    --region ${REGION} \
    --master-image-uri ${IMAGE_URI} \
    --config config.yaml \
    --job-dir ${JOB_DIR} \
    --module-name trainer.task \
    --package-path ${PACKAGE_PATH} \
    -- \
    --epochs 2 \
    --batch_size 16 \
    --learning_rate 2e-5

gcloud ai-platform jobs stream-logs ${JOB_NAME}

The last line of this script streams the logs directly to your command line. Alternatively, you can head to the Google Cloud console, navigate to AI Platform jobs and select View logs.


You can also view the GPU utilisation and memory from the AI Platform job page.

Monitoring GPU utilisation


That concludes this post. You can find all the code on Github.

Hope you enjoyed this demo. Feel free to contact me if you have any questions.

This is a slightly modified version of an article originally posted on Nordcloud Engineering blog.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Global Azure Bootcamp in Poland



    Global Azure Bootcamp is organized all over the world, this time in 5 cities in Poland, including Poznań, where our Polish office is located and where we hire Azure geeks like Sławek Stanek, who co-organizes the event. We are more than happy to support such a bootcamp!

    Come and learn more about the public cloud, Azure Active Directory, DevOps on Azure, Artificial Intelligence & Big Data, Power BI in Azure and SAP on Azure!

    Register here:

    Venue: Centrum Wykładowe Politechniki Poznańskiej – Sala CW8 Piotrowo 2, Poznań

    Date: 27.04.2019

    Time: 09:00 – 16:00


      Looking ahead: what’s next for AI in manufacturing?



      AI and manufacturing have been on an exciting journey together. It’s a combination that is fast changing the world of manufacturing: 92 percent of senior manufacturing executives believe that the ‘Smart Factory’ will empower their staff to work smarter and increase productivity.

      How does AI benefit manufacturers?

      Some of the biggest companies are already adopting AI. Why? A big reason is increased uptime and productivity through predictive maintenance. AI enables industrial technology to track its own performance and spot trends and looming problems that humans might miss. This gives the operator a better chance of planning critical downtime and avoiding surprises.

      But what’s the next big thing? Let’s look to the immediate future, to what is on the horizon and a very real possibility for manufacturers.

      Digital twinning

      ‘A digital twin is an evolving digital profile of the historical and current behaviour of a physical object or process that helps optimize business performance,’ according to Deloitte.

      Digital twinning will be effective in the manufacturing industry because it could replace computer-aided design (CAD). CAD is highly effective in computer-simulated environments and has shown some success in modelling complex environments, yet its limitations lie in the interactions between the components and the full lifecycle processes.

      The power of a digital twin is in its ability to provide a real-time link between the digital and physical world of any given product or system. A digital twin is capable of providing more realistic measurements of unpredictability. The first steps in this direction have been taken by cloud-based building information modelling (BIM), within the AEC industry. It enables a manufacturer to make huge design and process changes ahead of real-life occurrences.

      Predictive maintenance

      Take a wind farm. You’re manufacturing the turbines that will stand in a wind farm for hundreds of years. With the help of a CAD design you might be able to ‘guesstimate’ the long-term wear, tear and stress that those turbines might encounter in different weather conditions. But a digital twin will use predictive machine learning to show the likely effects of varying environmental events, and what impact that will have on the machinery.

      This will then affect future designs and real-time manufacturing changes. The really futuristic aspect will be the incredible increases in accuracy as the AI is ‘trained.’

      Smart factories

      An example of a digital twin in a smart factory setting would be to create a virtual replica of what is happening on the factory floor in (almost) real-time. Using thousands or even millions of sensors to capture real-time performance and data, artificial intelligence can assess (over a period of time) the performance of a process, machine or even a person. Cloud-based AI, such as those technologies offered by Microsoft Azure, have the flexibility and capacity to process this volume of data.

      This would enable the user to uncover unacceptable trends in performance. Decision-making around changes and training will be based on data, not gut feeling. This will enhance productivity and profitability.

      The uses of AI in future manufacturing technologies are varied. Contact us to discuss the possibilities and see how we can help you take the next steps towards the future.


        10 examples of AI in manufacturing to inspire your smart factory



        AI in manufacturing promises massive leaps forward in productivity, environmental friendliness and quality of life, but research shows that while 58 percent of manufacturers are actively interested, only 12 percent are implementing it.

        We’ve gathered 10 examples of AI at work in smart factories to bridge the gap between research and implementation, and to give you an idea of some of the ways you might use it in your own manufacturing.

        1. Quality checks

        Factories creating intricate products like microchips and circuit boards are making use of ‘machine vision’, which equips AI with incredibly high-resolution cameras. The technology is able to pick out minute details and defects far more reliably than the human eye. When integrated with a cloud-based data processing framework, defects are instantly flagged and a response is automatically coordinated.

        2. Maintenance

        Smart factories like those operated by LG are making use of Azure Machine Learning to detect and predict defects in their machinery before issues arise. This allows for predictive maintenance that can cut down on unexpected delays, which can cost tens of thousands of pounds.

        3. Faster, more reliable design

        AI is being used by companies like Airbus to create thousands of component designs in the time it takes to enter a few numbers into a computer. Using what’s called ‘generative design’, AI giant Autodesk is able to massively reduce the time it takes for manufacturers to test new ideas.

        4. Reduced environmental impact

        Siemens outfits its gas turbines with hundreds of sensors that feed into an AI-operated data processing system, which adjusts fuel valves in order to keep emissions as low as possible.

        5. Harnessing useful data

        Hitachi has been paying close attention to the productivity and output of its factories using AI. Previously unused data is continuously gathered and processed by their AI, unlocking insights that were too time-consuming to analyse in the past.

        6. Supply chain communication

        The aforementioned data can also be used to communicate with the links in the supply chain, keeping delays to a minimum as real-time updates and requests are instantly available. Fero Labs is a frontrunner in predictive communication using machine learning.

        7. Cutting waste

        The steel industry uses Fero Labs’ technology to cut down on ‘mill scaling’, which results in 3 percent of steel being lost. The AI was able to reduce this loss by 15 percent, saving millions of dollars in the process.

        8. Integration

        Cloud-based machine learning – like Azure’s Cognitive Services – is allowing manufacturers to streamline communication between their many branches. Data collected on one production line can be interpreted and shared with other branches to automate material provision, maintenance and other previously manual undertakings.

        9. Improved customer service

        Nokia is leading the charge in implementing AI in customer service, creating what it calls a ‘holistic, real-time view of the customer experience’. This allows them to prioritise issues and identify key customers and pain points.

        10. Post-production support

        Finnish elevator and escalator manufacturer KONE is using its ‘24/7 Connected Services’ to monitor how its products are used and to provide this information to its clients. This allows them not only to predict defects, but to show clients how their products are being used in practice.

        AI in manufacturing is reaching a wider and wider level of adoption, and for good reason. McKinsey predicts that ‘smart factories’ will drive $37 trillion in new value by 2025, giving rise to research projects like Reboot Finland IoT Factory, which involves organisations as diverse as Nokia and GE Healthcare. The technology is here and the research is ready – how will AI revolutionise your industry?

        Check out our whitepaper: “Industry 4.0: 7 steps to implement smart manufacturing”




          Nordcloud @ Smart Factory 2018 in Jyväskylä – 20-22.11.2018



          Nordcloud at Smart Factory 2018 Jyväskylä

          Make sure to visit Nordcloud’s booth (C430) at the ‘Smart Factory 2018’ event, which is held in The Congress and Trade Fair Centre Jyväskylän Paviljonki, Jyväskylä between the 20-22.11.2018.


          Smart Factory 2018 is an event focused on how to utilise the opportunities offered by digitalisation.

          The event gathers together the themes of Industry 4.0 and the related technology, service and expertise offering. Smart Factory 2018 is targeted at all operators who are involved with changes associated with digital transformation in production activities and related new services and concepts. It strongly emphasizes the already known future-building themes, such as automation, machine vision, robotics, industrial internet and cybersecurity.

          You can register for the event here:

          Register to Smart Factory 2018

          See you there!

          Nordcloud at Smart Factory 2018


            Cloud Computing News #11: Quantum Computing is the New Space Race



            This week we focus on quantum computing.

            Classical computers store information in bits that are either 1 or 0, but quantum computers use qubits, which can be thought of as existing in both states of 1 and 0 at the same time, and which can also influence one another instantaneously via a process known as “entanglement”. These exotic new qualities of quantum bits mean that upcoming quantum computers’ computing power will be exponentially larger and faster.

            Quantum computing is expected to, for example, boost machine learning and have a big impact on artificial intelligence – and cloud services are being looked on as the method for providing access to quantum processing.

            Now, as Nordcloud’s partners Google and Microsoft are investing massively in quantum computing, we are keenly following this development to be ready to bring this power to our customers in the future.

            BlackBerry races ahead of security curve with quantum-resistant solution

            According to TechCrunch, BlackBerry announced a new quantum-resistant code signing service that anticipates a problem that does not yet exist.

            “By adding the quantum-resistant code signing server to our cybersecurity tools, we will be able to address a major security concern for industries that rely on assets that will be in use for a long time. If your product, whether it’s a car or critical piece of infrastructure, needs to be functional 10-15 years from now, you need to be concerned about quantum computing attacks,” Charles Eagan, BlackBerry’s chief technology officer, said in a statement.

            While experts argue about how long it could take to build a fully functioning quantum computer, most agree that it will take computers of between 50 and 100 qubits to begin realizing that vision.

            Read more in TechCrunch

            Quantum mechanics defies causal order

            Physics World highlights an experiment by Jacqui Romero, Fabio Costa and colleagues at the University of Queensland in Australia, which has confirmed that quantum mechanics allows events to occur with no definite causal order. In classical physics – and everyday life – there is a strict causal relationship between consecutive events. If a second event (B) happens after a first event (A), for example, then B cannot affect the outcome of A. This relationship, however, breaks down in quantum mechanics.

            In their experiment, Romero, Costa and colleagues created a “quantum switch”, in which photons can take two paths. As well as making an experimental connection between relativity and quantum mechanics, the researchers point out that their quantum switch could find use in quantum technologies.

            “This is just a first proof of principle, but on a larger scale indefinite causal order can have real practical applications, like making computers more efficient or improving communication,” says Costa.

            Read more in Physics World

            Two Quantum Computing Bills Are Coming to Congress

            According to Gizmodo, quantum computing has made it to the United States Congress. China has funded a National Laboratory for Quantum Information Sciences, set to open in 2020, and has launched a satellite meant to test long-distance quantum secure information.

            “Quantum computing is the next technological frontier that will change the world, and we cannot afford to fall behind,” said Senator Kamala Harris (D-California). “We must act now to address the challenges we face in the development of this technology—our future depends on it.”

            The bill introduced by Harris in the Senate focuses on defense, calling for the creation of a consortium of researchers selected by the Chief of Naval Research and the Director of the Army Research Laboratory. Another, yet-to-be-introduced bill, seen in draft form by Gizmodo, calls for a 10-year National Quantum Initiative Program to set goals and priorities for quantum computing in the US; invest in the technology; and partner with academia and industry.

            Read more in Gizmodo


              Cloud Computing News #5: AI, IoT and cloud in manufacturing



              This week we focus on how AI, IoT and cloud computing are transforming manufacturing.

              Cloud Computing Will Drive Manufacturing Growth

              One recent article lists 10 ways cloud computing will drive manufacturing growth during this year:

              1. Quality gains greater value company-wide when a cloud-based application is used to track, analyse and report quality status by center and product.
              2. Manufacturing cycle times are accelerated through the greater insights available with cloud-based manufacturing intelligence systems.
              3. Insights into overall equipment effectiveness (OEE) get stronger using cloud-based platforms to capture, track and analyse the health of the equipment.
              4. Automating compliance and reporting saves valuable time.
              5. Real-time tracking and traceability become easier to achieve with cloud based applications.
              6. APIs help scale manufacturing strategies faster than ever.
              7. Cloud-based systems enable higher supply chain performance. 
              8. Order cycle times and rework are reduced.
              9. Integrating teams’ functions increases new product introduction success. 
              10. Perfect order performance is tracked across multiple production centers for the first time.


              Machine learning in manufacturing

              According to CIO Review, the challenge with machine learning in manufacturing is not always just the machines. Machine learning in IoT has focused on optimizing at the machine level, but now it’s time for manufacturers, to unlock the true potential of machine learning, to start looking at network-wide efficiency.

              By opening up the entire network’s worth of data to these network-based algorithms we can unlock an endless amount of previously unattainable opportunities:

              1. With the move to network-based machine learning algorithms, engineers will have the ability to determine the optimal workflow based on the next stage of the manufacturing process.
              2. Machine-learning algorithms can reduce labor costs and improve the work-life balance of plant employees. 
              3. Manufacturers will be able to more effectively move to a multi-modal facility production model where the capacity of each plant is optimized to increase the efficiency of the entire network.
              4. By sharing data across the network, manufacturing plants can optimize capacity.
              5. In the future, the algorithms will be able to provide the ability to schedule for purpose to optimize cost and delivery and to meet the demand.

              Read more in CIO Review

              Introducing IOT into manufacturing

              According to Global Manufacturing, IoT offers manufacturers many potential benefits in product innovation, but it also brings challenges, particularly around the increased dependency on software:

              1. Compliance: Manufacturers developing IoT-based products must demonstrate compliance with critical safety and security demands. To do this, development organisations must be able to trace, and even maintain an audit trail for, all the changes involved in a product lifecycle.
              2. Diversity and number of contributors: Contributors may be spread across different locations or time zones and work with different platforms or systems. Over-the-air updates further exacerbate the need for control and for managing complex dependency issues at scale and over long periods of time.
              3. The need to balance speed to market, innovation and flexibility against the need for reliability, software quality and compliance, all in an environment that is more complex and involves many more components.

              Because of these challenges, an increasing number of manufacturing companies are revising how they approach development projects. More of them are moving away from traditional processes such as Waterfall towards Agile, Continuous Delivery and DevOps, or hybrids of more than one. These new ways of working also help empower internal teams, while simultaneously providing the rigour and control that management requires.

              In addition to new methodologies, this change requires the right supporting tools. Many existing tools may no longer be fit for purpose, though equally many have evolved to meet the specific requirements of IoT. Putting the right foundation of tools, methodologies and corporate thinking in place is essential to success.

              Read more in Global Manufacturing

              Data driven solutions and devops at Nordcloud

              Our data driven solutions and DevOps practices will make an impact on your business, bringing better control and valuable business insight through IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level?

              Get in Touch.

              Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

                Cloud Computing News #4: IoT in the Cloud



                This week we focus on IoT in the cloud.


                AWS IoT platform is great for startups

                IoT for all lists 7 reasons why start up companies like iRobot, GoPro, and Under Armour have chosen AWS IoT platform:

                1. Starting with AWS IoT is easy: The AWS IoT platform connects IoT devices to the cloud and allows them to securely interact with each other and with various IoT applications.
                2. High IoT security: Amazon doesn’t spare resources to protect its customers’ data, devices, and communication.
                3. AWS cherishes and cultivates startup culture: AWS has helped multiple IoT startups get off the ground, and startups are a valuable category of Amazon’s target audience.
                4. Serverless approach and AWS Lambda are right for startups: startups can reduce the cost of building prototypes, add agility to the development process, and build a highly customizable, flexible and automated serverless back end.
                5. AWS IoT Analytics paired with AI and machine learning: AWS IoT Analytics and Amazon Kinesis Analytics answer the high demand for data-analytics capabilities in IoT.
                6. Amazon partners with a broad network of IoT device manufacturers, IoT device startups, and IoT software providers.
                7. The range of AWS products and services: the top provider of cloud services has a range of solutions tailored for major customer categories, including startups.
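                To make the serverless approach in point 4 concrete, here is a minimal sketch of an AWS Lambda handler that an IoT rule could invoke for incoming device telemetry. The payload fields (`device_id`, `temperature`) and the alert threshold are hypothetical assumptions for illustration, not part of any specific AWS IoT setup:

```python
import json

# Assumed alert threshold, for illustration only
TEMP_ALERT_THRESHOLD = 30.0

def lambda_handler(event, context):
    """Hypothetical handler for an AWS IoT rule action.

    The rule is assumed to forward a JSON telemetry message
    containing a 'temperature' reading; this function flags
    readings that exceed the threshold.
    """
    reading = event.get("temperature")
    device = event.get("device_id", "unknown")
    if reading is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "no temperature in payload"})}
    return {"statusCode": 200,
            "body": json.dumps({"device_id": device,
                                "temperature": reading,
                                "alert": reading > TEMP_ALERT_THRESHOLD})}
```

Because the handler is a plain function, it can be unit-tested locally before deployment, which is part of what keeps prototyping cheap for startups.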

                Read more in IoT for all


                IoT – 5 predictions for 2019 and their impact

                Forbes makes five IoT predictions for 2019:

                1. Growth across the board: IoT market and connectivity statistics show numbers mostly in the billions (check the article below)
                2. Manufacturing and healthcare – deeper penetration: Market analysts predict the number of connected devices in the manufacturing industry will double between 2017 and 2020.
                3. Increased security at all end points: Increase in end point security solutions to prevent data loss and give insights into network health and threat protection.
                4. Smart areas or smart neighborhoods in cities: Smart sensors around the neighborhood will record everything from walking routes, shared car use, sewage flow and temperature, 24/7.
                5. Smart cars – increased market penetration for IoT: Diagnostic information, connected apps, voice search, current traffic information, and more to come.

                Read more on these predictions in Forbes


                IoT is growing at an exponential rate

                According to Forbes, IoT is one of the most-researched emerging markets globally. The magazine lists 10 charts on the explosive growth of IoT adoption and the IoT market.

                Here are a few teasers; check out all the charts in Forbes.

                1. According to Statista, by 2020, Discrete Manufacturing, Transportation & Logistics and Utilities industries are projected to spend $40B each on IoT platforms, systems, and services.
                2. McKinsey predicts the IoT market will be worth $581B for ICT-based spend alone by 2020, growing at a Compound Annual Growth Rate (CAGR) between 7 and 15%.
                3. Smart Cities (23%), Connected Industry (17%) and Connected Buildings (12%) are the top three IoT projects in progress (IoT Analytics).
                4. GE found that 64% of power and energy (utilities) companies rely on Industrial Internet of Things (IIoT) applications to succeed with their digital transformation initiatives.
                5. Industrial products lead all industries in IoT adoption at 45%, with an additional 22% planning to adopt within 12 months, according to Forrester.
                6. Harley Davidson reduced its build-to-order cycle by a factor of 36 and grew overall profitability by 3% to 4% by shifting production to a fully IoT-enabled plant, according to Deloitte.


                Philips is tapping into the IoT market with AWS

                According to NetworkWorld, IDC forecasts the IoT market will reach $1.29 trillion by 2020. Philips is turning toothbrushes and MRI machines into IoT devices to tap into this market, keep patients healthier and keep the machines running more smoothly.

                “We’re transforming from mainly a device-focused business to a health technology company focused on the health continuum of care and service”, says Dale Wiggins, VP and General Manager of the Philips HealthSuite Digital Platform. “By connecting our devices and modalities in the hospital or consumer environment, it provides more data that can be used to benefit our customers.”

                Philips relies on a combination of AWS services and tools, including the AWS IoT platform, Amazon CloudWatch and AWS CloudFormation. Philips uses predictive algorithms and data analysis tools to monitor activity, identify trends and report abnormal behavior.
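                As a toy illustration of the kind of abnormal-behavior reporting described above (not Philips’ actual method, which relies on far more sophisticated predictive models), a simple z-score rule can flag readings that deviate strongly from the rest of a device’s telemetry:

```python
from statistics import mean, stdev

def flag_abnormal(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    from the mean of the series (classic z-score outlier rule).

    `threshold=3.0` is a conventional default, assumed here
    purely for illustration.
    """
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [x for x in readings if abs(x - mu) / sigma > threshold]
```

For example, in a series of twenty identical readings plus one spike, only the spike is flagged.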

                Read more in NetworkWorld



                Our data driven solutions will make an impact on your business with better control and valuable business insight with IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level? 





                  Cloud Computing News #2: Digital transformation in the cloud



                  This week we focus on digital transformation and IT transformation in the cloud.


                  Campbell’s Drives IT Transformation on Azure

                  Campbell Soup Co has partnered with Microsoft to modernize Campbell’s IT platform through the Azure cloud by streamlining workflows and driving efficiencies.

                  “Campbell’s migration to Azure will increase our flexibility, agility and resiliency,” said Francisco Fraga, Campbell Soup’s CIO. “Azure will give us the ability to respond quickly to evolving business needs, introduce new solutions, and support our 24/7, always-on architecture. The Microsoft cloud is a proven, reliable and highly secure platform.”

                  The Microsoft solution will provide additional benefits, including increased security, compliance and information protection. The move to Azure will allow Campbell to re-architect its data warehousing capabilities to be able to support the company’s data and analytics needs.

                  Read the full article here

                  Nordcloud is also a Microsoft Gold Cloud Partner and a Microsoft Azure Expert Managed Services Provider. Accelerate operations by moving IT to the public cloud with our solutions, which you can find here.


                  Walmart Picked Microsoft To Accelerate Digital Transformation in the cloud

                  According to Forbes, Walmart has signed a 5-year strategic partnership with Microsoft to accelerate digital transformation. This is an extension of an existing relationship between Walmart and Microsoft.

                  This new agreement will see the companies collaborating on machine learning, AI and data-platform solutions that span customer-facing projects as well as those aimed at optimizing internal operations.

                  The three focus areas of the partnership are:

                  1. Digital transformation: Walmart will have the full range of Microsoft cloud solutions, move hundreds of existing applications to cloud-native architectures, and migrate a significant portion of its online properties to Azure to grow and enhance the online customer experience.
                  2. Innovation: Walmart will build a global IoT platform on Azure.
                  3. Changing way of working at Walmart: Walmart is investing in its people with a phased rollout of Microsoft 365.

                  More on Walmart’s digital transformation in Forbes.

                  Read also our blog post on how to accelerate digital transformation with culture and cloud here.

                  You can find our data driven solutions that will make an impact on your business here.


                  Gartner identifies 6 barriers to becoming a digital business

                  According to a recent survey by Gartner, companies embracing digital transformation are finding that digital business is not as simple as buying the latest technology but requires changes in systems and culture.

                  Gartner lists six barriers that CIOs must overcome to transform their business:

                  1. A Change-Resisting Culture. Digital innovation requires collaborative cross-functional and self-directed teams that are not afraid of uncertain outcomes.
                  2. Limited Sharing and Collaboration. Issues of ownership and control of processes, information and systems make people reluctant to share their knowledge. But it is not necessary to have everyone on board in the early stages.
                  3. The Business Isn’t Ready. When a CIO wants to kick-off a transformation, they find that the business doesn’t have the resources or skills needed.
                  4. The Talent Gap. Markus Blosch, research vice president at Gartner, says: “There are two approaches to breach the talent gap — upskill and bimodal.”
                  5. The Current Practices Don’t Support the Talent. Highly structured and slow traditional processes don’t work for digital.
                  6. Change Isn’t Easy. Gartner advocates adopting a platform-based strategy which supports continuous change.

                  Read more about the survey on Gartner Newsroom.

                  Read also our blog post on how to support cloud and digital transformation here.


                    Nordcloud nominated ‘Preferred AI Training Partner’ by Microsoft



                    Microsoft has nominated Nordcloud as a preferred AI Training Partner on the topics “Azure Machine Learning”, “Batch AI” and “Team Data Science Process”.

                    The topics are covered, for example, in the two-day “Professional AI developer bootcamp”, where participants learn how to use the Azure Machine Learning Workbench to develop, test and deploy machine learning solutions to Azure Container Services using an agile, team-oriented framework.

                    Why Microsoft for AI?

                    Microsoft’s Azure cloud computing service offers a fast-growing range of Platform Services for AI, machine learning and IoT development.

                    Microsoft’s AI platform consists of three core areas:

                    • AI Services: Developers can rapidly consume high-level “finished” services that accelerate the development of AI solutions. Compose intelligent applications, customised to your organisation’s availability, security, and compliance requirements.
                    • AI Infrastructure: Services and tools backed by a best-of-breed infrastructure with enterprise grade security, availability, compliance, and manageability. Harness the power of infinite scale infrastructure and integrated AI services.
                    • AI Tools: Leverage a set of comprehensive tools and frameworks to build, deploy, and operationalise AI products and services at scale. Use the extensive set of supported tools and IDEs of your choice and harness the intelligence with massive datasets through deep learning frameworks of your choice.

                    Azure AI

                    Download our guide “Steps needed to build an AI enabled solution in Azure” here.

                    We’d love to help you boost your business with the adoption of AI technologies.

                    You may find yourself in a position where you need a fully customised option but lack access to some of the specific expertise required. In that case we are available to advise and where appropriate, help directly.

                    Nordcloud offers a range of services from managed service provision through to full cloud-software project management and execution. Just as you’re sure to find a suitable development option within Azure, we can offer you whatever support you need for your AI/ML project.

                    Contact us for AI training and consultation!

                    Check also our data driven solutions that will make an impact on your business here.
