Analysing News Article Content with Google Cloud Natural Language API

In my previous blog post I showed how to use AI Platform Training to fine-tune a custom NLP model using PyTorch and the transformers library. In this post we take advantage of Google’s pre-trained AI models for NLP and use Cloud Natural Language API to analyse text.

Google’s pre-trained machine learning APIs are great for building working AI prototypes and proofs of concept in a matter of hours. Google’s Cloud Natural Language API allows you to do named entity recognition, sentiment analysis, content classification and syntax analysis using a simple REST API. The API has client libraries for Python, Go, Java, Node.js, Ruby, PHP and C#. In this post we’ll be using the Python client.

Photo by AbsolutVision on Unsplash

Before we jump in, let’s define our use case. To highlight the simplicity and power of the API, I’m going to use it to analyse the contents of news articles. In particular, I want to find out if the latest articles published in The Guardian’s world news section contain mentions of famous people and if those mentions have a positive or a negative sentiment. I also want to find out the overall sentiment of the news articles. To do this, we will go through a number of steps.

  1. We will use The Guardian’s RSS feed to extract links to the latest news articles in the world news section.
  2. We will download the HTML content of the articles published in the past 24 hours and extract the article text in plain text.
  3. We will analyse the overall sentiment of the text using Cloud Natural Language.
  4. We will extract named entities from the text using Cloud Natural Language.
  5. We will go through all named entities of type PERSON and see if they have a Wikipedia entry (for the purposes of this post, this will be our measure of the person being “famous”).
  6. Once we’ve identified all the mentions of “famous people”, we analyse the sentiment of the sentences mentioning them.
  7. Finally, we will print the names, Wikipedia links and the sentiments of the mentions of all the “famous people” in each article, together with the article title, url and the overall sentiment of the article.

We will do all this using GCP AI Platform Notebooks.

To launch a new notebook, make sure you are logged in to the Google Cloud Console and have an active project selected. Navigate to AI Platform Notebooks and select New Instance. For this demo you don’t need a very powerful notebook instance, so we will change some of the defaults to save cost. First, select Python 3 (without CUDA) from the list and give your notebook a name. Next, click the edit icon next to Instance properties. From Instance properties, select n1-standard-1 as the Machine type. You will see that the estimated cost of running this instance is only $0.041 per hour.

Select Machine type

Once you have created the instance and it is running, click the Open JupyterLab link of your notebook instance. Once you’re in JupyterLab, create a new Python 3 notebook.

Steps 1–2: Extract the Latest News Articles

We start by installing some required Python libraries. The following command uses pip to install lxml, Beautiful Soup and Feedparser. We use lxml and Beautiful Soup for processing and parsing the HTML content. Feedparser will be used to parse the RSS feed, identify the latest news articles and get the links to the full text of those articles.

!pip install lxml bs4 feedparser

Once we have installed the required libraries we need to import them together with the other libraries we need for extracting the news article content. Next, we will define the url to the RSS feed as well as the time period we want to limit our search to. We will then define two functions we will use to extract the main article text from the HTML document. The text_from_html function will parse the HTML file, extract the text from that file and use the tag_visible function to filter out all but the main article text.

Once we have defined these functions we will parse the RSS feed, identify the articles published in the past 24 hours and extract the required attributes for those articles. We will need the article title, link, publishing time and, using the functions defined above, the plain text version of the article text.
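The recency check can be sketched like this (the feed URL is assumed from the description above, and the helper is illustrative rather than the article’s original code — feedparser exposes each entry’s publishing time as a time.struct_time in published_parsed):

```python
import time

# The Guardian world news RSS feed (URL assumed, not quoted from the article)
FEED_URL = ''

def is_recent(published_parsed, hours=24, now=None):
    # True if a feed entry's struct_time timestamp falls within `hours` of now
    now = time.time() if now is None else now
    return now - time.mktime(published_parsed) <= hours * 3600

# Usage with feedparser (installed above):
#   import feedparser
#   feed = feedparser.parse(FEED_URL)
#   latest = [(e.title, for e in feed.entries
#             if is_recent(e.published_parsed)]
```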


Steps 3–7: Analyse the Content Using Cloud Natural Language API

To use the Natural Language API we will import the required libraries.

from import language_v1
from import enums

Next, we define the main function for the demo, print_sentiments(document). In this function, in just 21 lines of code, we do all the needed text analysis and print the results. The function takes document as its input, analyses its contents and prints the output. We will look at the contents of the document input later.

To use the API we need to initialise the LanguageServiceClient. We then define the encoding type, which we need to pass to the API together with the document.

The first API call analyze_entities(document, encoding_type=encoding_type) takes the input document and the encoding type and returns a response of the following form:

"entities": [
"language": string

We will then call the API to analyse the sentiment of the document as well as to get the sentiments of each sentence in the document. The response has the following form:

"documentSentiment": {
"language": string,
"sentences": [

The overall document sentiment is stored in annotations.document_sentiment.score. We assign the document an overall sentiment POSITIVE if the score is above 0, NEGATIVE if it is less than 0 and NEUTRAL if it is 0.
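That thresholding can be written as a small helper (the cut-offs follow the rule just described):

```python
def sentiment_category(score):
    # Map a document or sentence sentiment score to a coarse label
    if score > 0:
        return 'POSITIVE'
    if score < 0:
        return 'NEGATIVE'
    return 'NEUTRAL'
```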

We then go through all the entities identified by the API and create a list of those that have the type PERSON. Once we have this list, we loop through it and check which entities have a wikipedia_url key in their metadata. As said, we use this as our measure of the person being “famous”. When we identify a “famous person”, we print the person’s name and the link to the Wikipedia entry.
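As a sketch, that filtering step might look like this (plain dicts stand in for the entity objects the API returns, so this is illustrative rather than the article’s original code):

```python
def famous_people(entities):
    # Keep PERSON entities whose metadata carries a Wikipedia link
    people = []
    for entity in entities:
        if entity['type'] == 'PERSON' and 'wikipedia_url' in entity['metadata']:
            people.append((entity['name'], entity['metadata']['wikipedia_url']))
    return people
```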

We then check the sentiment-annotated sentences for occurrences of each identified “famous person” and use the same thresholds as above to determine the sentiment category of those sentences. Finally, we print the sentiments of all the sentences mentioning the person.

Now that we have extracted the text from the news site and defined the function to analyse the contents of each article, all we need to do is go through the articles and call the function. The input for the function is a dictionary containing the plain text contents of the article, the type of the document (which in our case is PLAIN_TEXT) and the language of the document (which for us is English). We also print the title of each article and the link to the article.
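For illustration, that input dictionary can be built like this (the field names follow the Natural Language API’s document schema; the string 'PLAIN_TEXT' is equivalent to the enums.Document.Type.PLAIN_TEXT value):

```python
def make_document(text):
    # Request payload passed to analyze_entities / analyze_sentiment
    return {
        'content': text,
        'type': 'PLAIN_TEXT',   # or enums.Document.Type.PLAIN_TEXT
        'language': 'en',
    }
```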

For demo purposes we limit our analysis to the first 3 articles. The code for the above steps is displayed below together with the output of running that code.


‘We have to win’: Myanmar protesters persevere as forces ramp up violence
Overall sentiment: NEGATIVE

Person: Min Aung Hlaing
- Wikipedia:
- Sentence: 1 mentioning Min Aung Hlaing is: NEUTRAL

Person: Aung San Suu Kyi
- Wikipedia:
- Sentence: 1 mentioning Aung San Suu Kyi is: POSITIVE


White House defends move not to sanction Saudi crown prince
Overall sentiment: NEGATIVE

Person: Joe Biden
- Wikipedia:
- Sentence: 1 mentioning Joe Biden is: NEGATIVE

Person: Mark Warner
- Wikipedia:
- Sentence: 1 mentioning Mark Warner is: NEGATIVE

Person: Khashoggi
- Wikipedia:
- Sentence: 1 mentioning Khashoggi is: NEGATIVE
- Sentence: 2 mentioning Khashoggi is: NEGATIVE
- Sentence: 3 mentioning Khashoggi is: NEGATIVE

Person: Jen Psaki
- Wikipedia:
- Sentence: 1 mentioning Jen Psaki is: NEGATIVE

Person: Democrats
- Wikipedia:
- Sentence: 1 mentioning Democrats is: NEGATIVE

Person: Gregory Meeks
- Wikipedia:
- Sentence: 1 mentioning Gregory Meeks is: POSITIVE

Person: Prince Mohammed
- Wikipedia:
- Sentence: 1 mentioning Prince Mohammed is: NEGATIVE


Coronavirus live news: South Africa lowers alert level; Jordan ministers sacked for breaches
Overall sentiment: NEGATIVE

Person: Germany
- Wikipedia:
- Sentence: 1 mentioning Germany is: NEGATIVE
- Sentence: 2 mentioning Germany is: NEUTRAL

Person: Nick Thomas-Symonds
- Wikipedia:
- Sentence: 1 mentioning Nick Thomas-Symonds is: NEGATIVE

Person: Cyril Ramaphosa
- Wikipedia:
- Sentence: 1 mentioning Cyril Ramaphosa is: NEGATIVE

Person: Raymond Johansen
- Wikipedia:
- Sentence: 1 mentioning Raymond Johansen is: NEGATIVE

Person: Archie Bland
- Wikipedia:
- Sentence: 1 mentioning Archie Bland is: NEUTRAL


As you can see, all three articles we analysed have an overall negative sentiment. We also found quite a few mentions of people with Wikipedia entries, together with the sentiments of the sentences mentioning them.


As we saw, the Cloud Natural Language API is a super simple and powerful tool that allows us to analyse text with just a few lines of code. This is great when you are working on a new use case and need to quickly test the feasibility of an AI-based solution. It is also the go-to resource when you don’t have data to train your own machine learning model for the task. However, if you need to create a more customised model for your use case, I recommend using AutoML Natural Language or training your own model using AI Platform Training.

Hope you enjoyed this demo. Feel free to contact me if you have any questions.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Training PyTorch Transformers on Google Cloud AI Platform

    Google Cloud is widely known for its great AI and machine learning capabilities and products. In fact, there is plenty of material available on how to train and deploy TensorFlow models on Google Cloud. However, Google Cloud is not just for TensorFlow users; it has good support for other frameworks as well.

    In this post I will show how to use another highly popular ML framework, PyTorch, on AI Platform Training. I will show how to fine-tune a state-of-the-art sequence classification model using PyTorch and the transformers library. We will use a pre-trained RoBERTa as the transformer model for this task, which we will fine-tune to perform sequence classification.

    RoBERTa falls under the family of transformer-based massive language models, which have become very popular in natural language processing since the release of BERT, developed by Google. RoBERTa was developed by researchers at the University of Washington and Facebook AI. It is fundamentally a BERT model pre-trained with an improved pre-training approach. See the details about RoBERTa here.

    This post covers the following topics:

    • How to structure your ML project for AI Platform Training
    • Code for the model, the training routine and evaluation of the model
    • How to launch and monitor your training job

    You can find all the code on Github.

    ML Project Structure

    Let’s start with the contents of our ML project.

    ├── trainer/
    │   ├──
    │   ├──
    │   ├──
    │   ├──
    │   └──
    ├── scripts/
    │   └──
    ├── config.yaml
    └──
    The trainer directory contains all the python files required to train the model. The contents of this directory will be packaged and submitted to AI Platform. You can find more details and best practices on how to package your training application here. We will look at the contents of the individual files later in this post.

    The scripts directory contains our training scripts that will configure the required environment variables and submit the job to AI Platform Training.

    config.yaml contains the configuration of the compute instance used for training the model. Finally, contains details about our Python package and the required dependencies. AI Platform Training will use the details in this file to install any missing dependencies before starting the training job.

    PyTorch Code for Training the Model

    Let’s look at the contents of our Python package. The first file,, is just an empty file. It needs to be present in each subdirectory. These init files will be used by Python Setuptools to identify directories with code to package. It is OK to leave this file empty.

    The rest of the files contain different parts of our PyTorch software. is our main file and will be called by AI Platform Training. It retrieves the command line arguments for our training task and passes those to the run function in

    from argparse import ArgumentParser
    from trainer import experiment

    def get_args():
        """Define the task arguments with the default values.

        Returns:
            experiment parameters
        """
        parser = ArgumentParser(description='NLI with Transformers')
        parser.add_argument('--job-dir',
                            help='GCS location to export models')
        parser.add_argument('--model-name',
                            help='The name of your saved model')
        return parser.parse_args()

    def main():
        """Setup / Start the experiment."""
        args = get_args()

    if __name__ == '__main__':
        main()

    Before we look at the main training and evaluation routines, let’s look at and, which define the datasets for the task and the transformer model respectively. First, in we use the datasets library to retrieve our data for the experiment. We use the MultiNLI sequence classification dataset for this experiment. The file contains code to retrieve, split and pre-process the data. The NLIDataset class provides the PyTorch Dataset object for the training, development and test data for our task.

    class NLIDataset(
        def __init__(self, encodings, labels):
            self.encodings = encodings
            self.labels = labels
        def __getitem__(self, idx):
            item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
            item['labels'] = torch.tensor(self.labels[idx])
            return item
        def __len__(self):
            #return len(self.labels)
            return len(self.encodings.input_ids)

    The load_data function retrieves the data using the datasets library, splits the data into training, development and test sets, and then tokenises the input using RobertaTokenizer and creates PyTorch DataLoader objects for the different sets.

    def load_data(args):
        tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
        nli_data = datasets.load_dataset('multi_nli')
        # For testing purposes get a smaller slice of the training data
        all_examples = len(nli_data['train']['label'])
        num_examples = int(round(all_examples * args.fraction_of_train_data))
        print("Training with {}/{} examples.".format(num_examples, all_examples))
        train_dataset = nli_data['train'][:num_examples]
        dev_dataset = nli_data['validation_matched']
        test_dataset = nli_data['validation_matched']
        train_labels = train_dataset['label']
        val_labels = dev_dataset['label']
        test_labels = test_dataset['label']
        train_encodings = tokenizer(train_dataset['premise'], train_dataset['hypothesis'], truncation=True, padding=True)
        val_encodings = tokenizer(dev_dataset['premise'], dev_dataset['hypothesis'], truncation=True, padding=True)
        test_encodings = tokenizer(test_dataset['premise'], test_dataset['hypothesis'], truncation=True, padding=True)
        train_dataset = NLIDataset(train_encodings, train_labels)
        val_dataset = NLIDataset(val_encodings, val_labels)
        test_dataset = NLIDataset(test_encodings, test_labels)
        train_loader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True)
        dev_loader = DataLoader(val_dataset, batch_size=args.batch_size, shuffle=True)
        test_loader = DataLoader(test_dataset, batch_size=args.batch_size, shuffle=True)
        return train_loader, dev_loader, test_loader

    The save_model function saves the trained model and uploads it to Google Cloud Storage.

    def save_model(args):
        """Saves the model to Google Cloud Storage.

        Args:
          args: contains name for saved model.
        """
        scheme = 'gs://'
        bucket_name = args.job_dir[len(scheme):].split('/')[0]
        prefix = '{}{}/'.format(scheme, bucket_name)
        bucket_path = args.job_dir[len(prefix):].rstrip('/')
        datetime_ ='model_%Y%m%d_%H%M%S')
        if bucket_path:
            model_path = '{}/{}/{}'.format(bucket_path, datetime_, args.model_name)
            model_path = '{}/{}'.format(datetime_, args.model_name)
        bucket = storage.Client().bucket(bucket_name)
        blob = bucket.blob(model_path)

    The file contains the code for the transformer model RoBERTa. The __init__ function initialises the module and defines the transformer model to use. The forward function will be called by PyTorch during execution, with the input batch of tokenised sentences together with the associated labels. The create function is a wrapper used to initialise the model and the optimiser during execution.

    # Specify the Transformer model
    class RoBERTaModel(nn.Module):
        def __init__(self):
            """Defines the transformer model to be used."""
            super(RoBERTaModel, self).__init__()
            self.model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=3)

        def forward(self, x, attention_mask, labels):
            return self.model(x, attention_mask=attention_mask, labels=labels)

    def create(args, device):
        """Create the model and optimizer.

        Args:
          args: experiment parameters.
          device: device.
        """
        model = RoBERTaModel().to(device)
        optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
        return model, optimizer

    The file contains the main training and evaluation routines for our task: the functions train, evaluate and run. The train function takes our training dataloader as an input and trains the model for one epoch, in batches of the size defined in the command line arguments.

    def train(args, model, dataloader, optimizer, device):
        """Create the training loop for one epoch.

        Args:
          model: The transformer model that you are training
          dataloader: The training dataset
          optimizer: The selected optimizer to update parameters and gradients
          device: device
        """
        model.train()
        for i, batch in enumerate(dataloader):
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)
            optimizer.zero_grad()
            outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
            loss = outputs[0]
            loss.backward()
            optimizer.step()
            if i == 0 or i % args.log_every == 0 or i+1 == len(dataloader):
                print("Progress: {:3.0f}% - Batch: {:>4.0f}/{:<4.0f} - Loss: {:<.4f}".format(
                    100. * (1+i) / len(dataloader), # Progress
                    i+1, len(dataloader), # Batch
                    loss.item())) # Loss

    The evaluate function takes the development or test dataloader as an input and evaluates the prediction accuracy of our model. This will be called after each training epoch using the development dataloader and after the training has finished using the test dataloader.

    def evaluate(model, dataloader, device):
        """Create the evaluation loop.

        Args:
          model: The transformer model that you are training
          dataloader: The development or testing dataset
          device: device
        """
        print("\nStarting evaluation...")
        model.eval()
        with torch.no_grad():
            eval_preds = []
            eval_labels = []
            for _, batch in enumerate(dataloader):
                input_ids = batch['input_ids'].to(device)
                attention_mask = batch['attention_mask'].to(device)
                labels = batch['labels'].to(device)
                preds = model(input_ids, attention_mask=attention_mask, labels=labels)
                preds = preds[1].argmax(dim=-1)
                eval_preds.append(preds.cpu().numpy())
                eval_labels.append(labels.cpu().numpy())
        print("Done evaluation")
        return np.concatenate(eval_labels), np.concatenate(eval_preds)

    Finally, the run function calls the train and evaluate functions, and saves the fine-tuned model to Google Cloud Storage once training has completed.

    def run(args):
        """Load the data, train, evaluate, and export the model for serving.

        Args:
          args: experiment parameters.
        """
        cuda_availability = torch.cuda.is_available()
        if cuda_availability:
            device = torch.device('cuda:{}'.format(torch.cuda.current_device()))
            device = 'cpu'
        print('`cuda` available: {}'.format(cuda_availability))
        print('Current Device: {}'.format(device))

        # Open our dataset
        train_loader, eval_loader, test_loader = inputs.load_data(args)

        # Create the model, loss function, and optimizer
        bert_model, optimizer = model.create(args, device)

        # Train / Test the model
        for epoch in range(1, args.epochs + 1):
            train(args, bert_model, train_loader, optimizer, device)
            dev_labels, dev_preds = evaluate(bert_model, eval_loader, device)
            # Print validation accuracy
            dev_accuracy = (dev_labels == dev_preds).mean()
            print("\nDev accuracy after epoch {}: {}".format(epoch, dev_accuracy))

        # Evaluate the model
        print("Evaluate the model using the testing dataset")
        test_labels, test_preds = evaluate(bert_model, test_loader, device)
        # Print test accuracy
        test_accuracy = (test_labels == test_preds).mean()
        print("\nTest accuracy after epoch {}: {}".format(args.epochs, test_accuracy))

        # Export the trained model, args.model_name)

        # Save the model to GCS
        if args.job_dir:
            save_model(args)

    Launching and monitoring the training job

    Once we have the Python code for our training job, we need to prepare it for AI Platform Training. There are three important files required for this. First, contains information about the dependencies of our Python package as well as metadata like the name and version of the package.

    from setuptools import find_packages
    from setuptools import setup

    setup(
        packages=find_packages(),
        description='Sequence Classification with Transformers on Google Cloud AI Platform'
    )

    The config.yaml file contains information about the compute instance used for training the model. For this job we need to use an NVIDIA V100 GPU, as it provides improved training speed and larger GPU memory compared to the cheaper K80 GPUs. See this great blog post by Google on selecting a GPU.

    trainingInput:
      scaleTier: CUSTOM
      masterType: n1-standard-8
      masterConfig:
        acceleratorConfig:
          count: 1
          type: NVIDIA_TESLA_V100

    Finally, the scripts directory contains, which sets the required environment variables and runs the gcloud command to submit the AI Platform Training job.

    # BUCKET_NAME: unique bucket name
    BUCKET_NAME=<your-bucket-name>
    REGION=us-central1
    # IMAGE_URI: the PyTorch image provided by AI Platform Training.
    IMAGE_URI=<the-ai-platform-pytorch-image>
    # JOB_NAME: the name of your job running on AI Platform.
    JOB_NAME=transformers_job_$(date +%Y%m%d_%H%M%S)
    echo "Submitting AI Platform Training job: ${JOB_NAME}"
    PACKAGE_PATH=./trainer # this can be a GCS location to a zipped and uploaded package
    # JOB_DIR: where to store the prepared package and upload the output model.
    JOB_DIR=gs://${BUCKET_NAME}/models
    gcloud ai-platform jobs submit training ${JOB_NAME} \
        --region ${REGION} \
        --master-image-uri ${IMAGE_URI} \
        --config config.yaml \
        --job-dir ${JOB_DIR} \
        --module-name trainer.task \
        --package-path ${PACKAGE_PATH} \
        -- \
        --epochs 2 \
        --batch_size 16 \
        --learning_rate 2e-5
    gcloud ai-platform jobs stream-logs ${JOB_NAME}

    The last line of this script streams the logs directly to your command line. Alternatively, you can head to the Google Cloud console, navigate to AI Platform jobs and select View logs.


    You can also view the GPU utilisation and memory from the AI Platform job page.

    Monitoring GPU utilisation


    That concludes this post. You can find all the code on Github.

    Hope you enjoyed this demo. Feel free to contact me if you have any questions.

    This is a slightly modified version of an article originally posted on Nordcloud Engineering blog.


      How can you architect Serverless SaaS Applications on AWS?

      In my previous blog post I listed 6 key themes that separate successful SaaS vendors from the rest. In this post I dive more deeply into one of those themes: serverless and microservices.

      One of the great innovations of public cloud computing in recent years has been the advent of serverless computing. Serverless computing allows you to focus on writing and deploying software components without needing to worry about the underlying infrastructure. The software components, often called functions, are executed based on a defined set of events, and compute resources are consumed based on usage during execution. AWS Lambda [1] was the first publicly available serverless computing offering. AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby runtimes. It also provides AWS Lambda layers and custom runtimes, which allow you to author your functions in additional programming languages.

      In this blog post I discuss some of the key considerations you need to make when designing a serverless SaaS architecture and give an overview of an example serverless microservice architecture for SaaS applications on AWS.

      Benefits of serverless

      When working with SaaS providers or independent software vendors (ISVs) wanting to transform their product into SaaS we typically advise them to design their application architecture using microservices and serverless capabilities. This offers a number of advantages:

      1. You focus on value adding activities like writing code instead of designing and managing infrastructure
      2. You speed up development time and simplify development of new functionality by breaking it down into small pieces of functionality
      3. You optimise infrastructure cost by consuming only the computing resources required to run the code, with 100 ms billing granularity – you don’t pay for idle time
      4. You get the benefits of autoscaling of infrastructure resources

      There are also drawbacks of going serverless. Testing, especially integration testing, becomes more difficult as the units of integration become smaller. It is also often difficult to identify security vulnerabilities with traditional security tooling and debug your application in the serverless approach. Lastly, for many organisations moving to serverless requires a complete paradigm shift. You need to upskill your developers, architects and security teams to think and operate in the new environment.

      Defining the microservices 

      When designing the architecture, one of the first things you need to do is define the granularity of your microservices and functions. If you make your functions too large, packing lots of functionality into the same function, you lose some of the flexibility and speed of development. It also makes your functions harder to debug and your software less fault tolerant. By making your functions small enough, you enhance the fault tolerance of your application: you can design it so that if one function fails, the others mostly remain operational. On the other hand, by making your functions too small you increase complexity and make the overall architecture more difficult to understand and manage. The right middle ground depends a lot on the software and functionality you are building. For example, if you want to provide tiers of functionality for your customers, so that some functionality is available only for certain subscription tiers, you should decouple that functionality into separate microservices. Whatever granularity you choose, it is important to make sure that the microservices are loosely coupled, so they can be developed and deployed independently of the other microservices.

      Adding tenant context

      The second consideration is specific to the SaaS model. When building SaaS applications you need to be able to do tenant isolation, tenant management, tenant metering and monitoring. You need to be able to identify and authenticate tenants and offer different tenants different sets of functionality based on their subscription tier. You should also make sure the performance is fairly distributed among tenants and monitor the usage to identify upsell opportunities and gain valuable insights about usage patterns. On AWS you can implement all this with the help of Amazon Cognito [2]. You can use Cognito to manage user identities and to inject user context into the different layers of your application stack.
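      As an illustration, the tenant identity typically travels as a claim inside the Cognito-issued JWT, which each layer of the stack can read. A minimal sketch follows — the claim name custom:tenant_id is a hypothetical custom attribute, and real code must verify the token signature (e.g. against the Cognito JWKS) before trusting any claim:

```python
import base64
import json

def tenant_from_id_token(id_token):
    # Pull a custom tenant claim out of a Cognito ID token (JWT payload).
    # NOTE: illustration only -- this skips signature verification, which
    # production code must perform before trusting the claims.
    payload_b64 = id_token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get('custom:tenant_id')
```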

      Example architecture 

      Our simplified example is a serverless architecture for a SaaS application. The example uses S3 buckets for static web content, API Gateway for REST API, Cognito for user management and authentication, Lambda for serverless microservices, Amazon Aurora Serverless for SQL database and DynamoDB for NoSQL database.

      Each of the Lambda functions can themselves trigger additional Lambdas. It is therefore easy to design even quite complex applications using simple functions. However, we advise our customers to avoid so-called serverless monoliths and instead design their Lambda functions to be as independent as possible. The best practice is to adopt an event-driven approach, where each Lambda function is independent of the others and triggered by events. Lambda functions can then emit events, e.g. to Amazon SNS, to trigger other functions. You can also use AWS Step Functions to coordinate the Lambda functions [3].
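      That pattern can be sketched as follows: a function does its own work and then publishes an event that downstream functions subscribe to. The topic ARN and event names here are hypothetical, and the boto3 call is shown in comments since it needs a real SNS topic:

```python
import json

def make_event(event_type, detail):
    # Message body that downstream, event-triggered Lambdas will consume
    return json.dumps({'type': event_type, 'detail': detail})

# Inside a Lambda handler (assumes boto3 and an existing SNS topic):
#   import boto3
#   boto3.client('sns').publish(
#       TopicArn='arn:aws:sns:eu-west-1:123456789012:order-events',
#       Message=make_event('OrderCreated', {'orderId': event['orderId']}))
```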

      There are a couple of strategies you can take when designing your Lambda functions:

      • Create one Lambda function per microservice, where each microservice is a unit that is able to work in isolation
      • For each microservice, create one Lambda function that handles all of the HTTP methods (POST, GET, PUT, DELETE, etc.)
      • For each HTTP method, create a separate Lambda function

      By choosing one of the strategies above you limit the complexity of your architecture. However, if you choose to have one Lambda function per microservice you need to make sure your microservices are quite granular. 

      Our example architecture uses Cognito to make sure that each layer is aware of the tenant context. This allows you to offer the right functionality through the API and execute Lambda functions in a tenant specific context.

      Next steps

      One of the most important factors of a successful SaaS architecture is how you enable multitenancy and implement tenant isolation. In our example above we implicitly assumed that we have the same instances of the APIs, functions and databases for all the tenants and tenant isolation is handled using tenant context through Cognito. There are other ways to handle tenant isolation. However, going through the different options deserves its own blog post. So stay tuned.


        Six capabilities modern ISVs need in order to future-proof their SaaS offering



        Successful ISVs are leveraging public cloud capabilities and becoming SaaS providers. The move to a public cloud-based SaaS offering gives ISVs a potential for business growth that cannot be matched with a traditional on-premise, single-tenant solution. In fact, Gartner estimates that the SaaS market will reach $99.7B this year, growing at a rate of 21% [1].

        ISVs can benefit from moving to SaaS in several different ways. It helps them to:

        1. Unlock new customer segments through lower customer acquisition cost and easier geographical expansion
        2. Reduce the total cost of ownership (TCO) through elimination of customer-specific support costs
        3. Reduce time-to-market by leveraging built-in components available in all the public cloud platforms
        4. Leverage data and insights through a unified data platform 

        Moving from a traditional license-based business model to a subscription model also lowers customers’ barrier to buy while improving financial predictability for the ISV. In contrast to the traditional licensing model, subscription models allow customers to use the software without committing to long licensing periods – lowering their barrier to buy. It also smoothens the revenue curve through monthly recurring revenue, resulting in improved financial predictability.

        Successful SaaS providers have built their business around 6 core capabilities

        Having worked with many SaaS providers on their cloud migration journey, we have identified a set of capabilities that separates the successful companies from the rest: multi-tenancy, end-to-end automation, microservices and serverless, data as a platform, a single codebase, and velocity of innovation.

        The key for building these six capabilities effectively is to use the capabilities provided by public cloud platforms like AWS, Azure and GCP. I’ll go through each one of these capabilities in some detail below.

        Multi-tenancy

        Successful SaaS vendors provide standardised service to all customers through multi-tenancy. This means that they provide a single shared application and data layer to all customers, without customer specific instances.

        In contrast, the traditional single-tenancy model results in high costs due to the maintenance overhead of keeping application instances in sync across the installation base. Your different instances will also easily drift apart from each other in terms of code and configuration.

        Some organisations opt for limited multi-tenancy, where all the customers share a common application layer but the data layer is kept in separate customer-specific instances. This can be a useful model for organisations whose customers follow strict data compliance regulations and must, for instance, keep their data in a specific geographical region.

        The full multi-tenancy model provides the most value by allowing teams to focus on developing and maintaining a single version, leading to lower TCO and easier maintainability. In full multi-tenancy, customer-specific variations can be built into the software as components that can be turned on or off as needed.

        End-to-end Automation

        Successful SaaS vendors minimise manual steps and build end-to-end automation across development, testing, deployment and operations. Automation capabilities and a DevOps toolchain can drastically improve delivery quality and speed-to-market.

        For instance, on the infrastructure side, companies should use Infrastructure-as-Code (IaC) tools like AWS CloudFormation or Terraform to templatise and automate infrastructure stack creation, increasing the automation and consistency of environments.

        Companies should utilise the full DevOps toolchain that automates the workflow from coding to deployment. Automating the whole workflow is very important as any gaps in the automation will effectively become a bottleneck and kill the benefits that you were hoping to achieve. To achieve the end-to-end workflow automation, it is recommended to set up a dedicated team responsible for the DevOps toolchain and way of working.

        We recommend our customers use a managed DevOps tool service rather than building their own toolchain. For instance, Azure DevOps is a great SaaS service provided by Microsoft that is also compatible with other public cloud platforms like AWS.

        As your development teams will have more responsibility in the SaaS model, it is important to perform automated security and compliance tests. Start with automated reporting and compliance checks inserted into the CI/CD pipeline, complemented with checks for cloud environment best practices and anti-patterns.
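Such a check can be as simple as scanning a parsed infrastructure definition for a known anti-pattern. The sketch below is hypothetical: the input shape is invented for illustration, and in practice you would parse your CloudFormation or Terraform output, or query the cloud provider's APIs:

```python
# Hypothetical CI/CD compliance check: flag security groups whose SSH
# ingress (port 22) is open to the whole internet. The dict shape is an
# assumption for illustration, not a real provider format.

def find_open_ssh(security_groups):
    violations = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            if rule.get("port") == 22 and rule.get("cidr") == "0.0.0.0/0":
                violations.append(sg["name"])
    return violations

# A pipeline step would fail the build when violations are found, e.g.:
#   if find_open_ssh(parsed_groups): sys.exit(1)
```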

        Microservices and Serverless

        Microservice architecture and serverless let companies focus on functionality rather than integration. We tell our customers that whenever they start developing something new for their SaaS solution, they should first consider whether it can be implemented using serverless services like AWS Lambda, Azure Functions or GCP Cloud Functions. If serverless is not an option, they should build the new functionality as microservices.

        Serverless services allow you to build your functionality as event-driven components that are executed on-demand triggered by specific events, like database change, log activity etc. Serverless functions speed up development and deployment time and can significantly reduce cost as you only pay for the requests, not for the idle time.
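A database-change trigger of the kind mentioned above can be sketched as a Lambda function consuming a DynamoDB stream. The record fields follow the DynamoDB Streams event shape; the "orders" domain logic is invented for illustration:

```python
# Hypothetical event-driven serverless sketch: a Lambda triggered by a
# DynamoDB stream, reacting only to newly inserted items.

def process_insert(new_image):
    # DynamoDB stream images use typed attributes, e.g. {"S": "value"}.
    order_id = new_image.get("orderId", {}).get("S")
    return f"order {order_id} indexed"

def lambda_handler(event, context):
    results = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # ignore MODIFY / REMOVE events in this sketch
        new_image = record.get("dynamodb", {}).get("NewImage", {})
        results.append(process_insert(new_image))
    return {"processed": len(results), "results": results}
```

The function runs only when the table changes, which is exactly why the pay-per-request model is cheap: there is no idle capacity to pay for.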

        Microservices architecture has been around for a while, but it is interesting how many ISVs are still stuck in the world of traditional monoliths. Microservices separate functionality into independent components that expose their functionality through APIs and that can be developed and maintained without having to worry about dependency issues (provided you don't alter the APIs).

        Data as a Platform

        A shared platform allows SaaS vendors to leverage insights from data aggregated across applications. In fact, a shared data layer is fast becoming the number one capability many ISVs and SaaS providers are after, and one that sets the successful providers apart from the rest. There are still many organisations that are unable to leverage data across their customer instances in an effective way.

        Public cloud offers unparalleled capabilities for building a consolidated data asset from your service. Even if you keep your customer databases in separate locations, you can still benefit from having a shared data lake for insights and analytics. However, you may have to anonymise the data where strict data policies apply.

        A shared data layer for applications is important not only for sharing data and getting platform-wide analytics, but also for compliance and auditability. Using cloud platform services (e.g. AWS Lake Formation), it is possible to build a shared data layer with detailed access controls and an audit trail.

        Single Codebase

        Having a single codebase can sound like an obvious thing, but maintaining a strict single-codebase policy requires dedication. SaaS vendors with multiple different versions of the code end up spending more on change implementation, deployment and maintenance. Instead of building customer-specific functionality into different codebases or versions, you should have a single codebase and build customer-specific functionality into the common build through config options. This is in line with what I already wrote about multi-tenancy.

        Velocity of Innovation

        The last common capability of successful SaaS vendors, based on our experience, is enabling velocity of innovation through public cloud. Being able to spin up a development environment in minutes, or to build your prototype as serverless functions utilising cloud-native pre-built components, can have a massive impact on the way you introduce new value-adding services to your customers.

        We recently worked with a SaaS provider who wanted to create a new mobile service from scratch. Using AWS Lambda, we were able to develop the first prototype overnight, which would potentially have taken them weeks to develop in their old on-premise environment.

        Building a roadmap for the six capabilities

        Public cloud is a natural choice for SaaS providers, as it offers an unmatched range of components and functionality for building the six key capabilities SaaS vendors need to compete in a highly contested market.

        Nordcloud has helped many SaaS vendors migrate to public cloud and build the six capabilities, increasing their potential to grow faster than their competitors.

        Based on our experiences we have developed a capability maturity model that helps our customers to map their current state and future aspirations. 

        Let me know if you’d like to hear more about how your organisation can benefit from public cloud and our experience in helping SaaS vendors to succeed.

        [1] Gartner, Forecast: Public Cloud Services, Worldwide, 2016-2020


          Leading Azure team in the UK by Harry Azariah


          Life at Nordcloud

          Harry Azariah joined the UK Azure team almost 1.5 years ago as a Senior Cloud Architect and was quickly promoted to lead the local Azure team. Here’s his Nordcloudian story. Enjoy!


          1. Where are you from and how did you end up at Nordcloud?


          I’ve lived in South London my whole life. I had been following Nordcloud for a couple of years, and then my ex-colleague who had joined Nordcloud was talking about how great it was and invited me for a chat with the team.
          After the meeting, I was really interested in joining and contributing to the success and the vision of the company.


          2. What is your role and core competence?


          I am the Azure team lead in the UK. I originally come from an infrastructure background but have been working with Azure for the past four years, doing everything from solution design down to implementation.
          My role in the team is to ensure quality in all UK-led Azure engagements and to build a strong team culture through social events and group activities.


          3. What do you like most about working at Nordcloud?


          I like our dynamic, young, modern thinking – we build processes in a new way compared to big consultancies and system integrators etc. I also like our general flexibility and that we are not managed from top down; everybody can contribute and influence.


          4. What’s your favourite thing with public cloud?


          Constant change, always things to learn and new stuff to play with.


          5. What do you do outside work?


          Socialising, watching sports, occasionally playing sports. I’m also a massive foodie, so naturally cooking and eating are two of my favourite pastimes.


          6. Best Nordcloudian memory?


          I have a lot of good (and fun) memories from our social team evenings (Christmas party, leaving or welcoming parties etc).


          Harry is one of our UK Technical interviewers, and if you like his story and can imagine working closely in his Azure team, have a look at our openings here!


            Greetings from sales at Nordcloud Netherlands!


            Life at Nordcloud

            This week we got to dig deeper into the role of Enterprise Sales Manager at Nordcloud and interviewed Rogier from Amsterdam.


            Here’s his Nordcloudian story!


            • Where are you from and how did you end up at Nordcloud?

            I’m from a small city close to Amsterdam called Weesp, and I’ve lived here all of my 32 years (minus one year in Spain and Argentina during my studies!).

            After my studies I started at Canon as a Junior Account Manager and worked my way up to a more senior role at Pci/Canon. During that period I was working with Bart Bijman, who is now the Country Manager of Nordcloud Netherlands.

            Bart was working at Nordcloud when I left Canon for a green oil & energy company called Argent Energy, and we kept in touch. In the beginning of 2019, Bart called me about Nordcloud once again and told me about an Enterprise Sales Manager role in the Netherlands. We discussed the unique business and the possibilities, and I got hooked immediately!


            • What is your role and core competence?

            My role is Enterprise Sales Manager and everything I do is new business, so I bring new customers in.

            I cooperate with cloud talent, partners and channels, set up meetings/organise workshops and onboard new customers with the full focus on Enterprises & Independent software vendors.


            • What are your most used USPs since working in Sales at Nordcloud?

            It’s important to first listen & be approachable – it’s always about the customer and their infrastructure/products/services/apps.

            After the customer tells me why they are unique (find the dots), I explain the history of Nordcloud and what we are good at (everything).

            The most important thing after the meeting is that the customer has the feeling that “yes, Nordcloud can bring my company further”!


            • What do you like most about working at Nordcloud?

            On a personal “human” level, I like the fact that we are an international company, working closely together with different nationalities. Also, discussions at Friday drinks are always fun!

            On a company & customer level I like the fact that it’s never the same – solutions and situations are always different & unique. You get to develop yourself and meet and cooperate with so many different people!


            • How would you describe our culture?

            Open & helpful.

            With any questions everyone is willing to help and work as a team.

            #wintogether #growtogether – just like our values!


            • What are your greetings/advice for someone who might be considering a job at Nordcloud?

            If you want to work for an international and unique company with a lot of diverse opportunities and new, unique projects, you should definitely have a discussion with us!



              Partner and capacity management with Peter Bakker


              Life at Nordcloud

              1. Where are you from and how did you end up at Nordcloud?

              I’m Dutch, living in Rotterdam.

              I started the Azure relationship between Microsoft and Mirabeau when I was working at Mirabeau.

              I grew their Azure business, we became an MSP and I was asked to join the Partner Advisory team by Microsoft.

              There I met Nordcloud’s founder Fernando.

              Mirabeau was acquired by Cognizant and integrated as of January 1st of this year.

              In terms of my career, I was in the middle of a journey with different changes, and Fernando suggested I join Nordcloud in the spring of this year. His words were: “We always have room for good people”.

              I had a chat with Nordcloud’s CEO Jan and, after some discussion, we agreed on interesting goals; I switched clouds from Microsoft to AWS and became the AWS Partner Manager at Nordcloud.


              2. What is your role and core competence?

              I was hired as Partner Manager for AWS. My first responsibility was to move from escalation management to opportunity management. Working with different AWS managers, we started fixing things and recently signed a joint partner plan for 2020. We now have a joint ambition for what we want to achieve together, and this is actually one of my best memories since joining Nordcloud!

              My role has also evolved since I started, and I now also wear the hat of Head of Capacity. I’m commercially responsible for reselling AWS, Azure and GCP, managing our margins, making our sales colleagues’ lives a bit easier, and understanding cloud costs, cost optimisation and the real value of capacity management.

              I fly around a lot and get to work with different teams as we’re active in 10 countries. My daughter recently asked me if I was working at KLM.


              3. What do you like most about working at Nordcloud?

              1) Depth and broadness of skill levels: we have so many talented, amazing colleagues.

              2) The great names that we work for, and all the great things we do for customers like BMW, SKF, Volvo and Red Bull.

              3) Freedom and opportunity to learn and grow. 


              5. What sets you on fire/ what’s your favourite thing with public cloud?

              Digital transformation! All the new business opportunities that our customers get by adopting cloud.

              For example, last week at the AWS Partner Summit, Konecranes presented a great case of Nordcloud helping them build, in a very short timeframe, a serverless IoT solution that helps them weigh containers. This solution is now fitted in new equipment and retrofitted into existing equipment.

              The payback time for Konecranes was only 3 months. Sales of their equipment were boosted.

              It’s great seeing how starting small and laying foundations sets us and our clients up for success and even bigger projects. 


              6. What do you do outside work?

              I’m a passionate golf player as well as a youth at our golf club in Rotterdam.


              8. How would you describe our culture?

              Open and flat organisation!

              There is no hierarchy at Nordcloud. We are all colleagues and together we help our customers to get cloud native. 


              9. What are your greetings/advice for someone who might be considering a job in Nordcloud?

              Somebody in a recruitment process recently asked me how I like it here at Nordcloud. I answered, “I should have done this a year ago!”

              As there is a lot of freedom and opportunity to learn and grow, you must remember to take care of yourself too. There is always something interesting to do, so it’s very much about finding the right balance. As things get exciting, I sometimes have to remind myself: there is also always tomorrow!


                Journey To CKA and CKAD


                Life at Nordcloud

                This article is about trusting yourself to accomplish new things and achieving your goals, and specifically about Daniel’s journey to the CKA and CKAD certifications.

                (Picture of Daniel in Yosemite National Park.) 

                Last year I was working at Huawei in a position that, looking from the outside, must have seemed interesting. However, I was not satisfied with it, and I had started to look for something that would offer more for me from a technology point of view.

                This is how I found Nordcloud and their UK-based subsidiary, Nordcloud Ltd.

                Nordcloud went through some serious expansion last year and is still hiring dozens of people in several countries. We in the UK have a few open positions, if anyone is interested.

                I joined Nordcloud in January and I could not have made a better decision.

                They provide me with just the right amount of hands-on tasks to keep me in the game, and not just be a theoretical architect.

                I always thought without real hands-on experience you cannot call yourself a technical architect.

                Everybody can talk about technology (we have seen it with several brain dumpers), but being able to talk about it and also to implement it with proper design, that is where the real knowledge resides.

                When I joined Nordcloud I was already into containerisation.

                My friend, Vinayak Kumar, was an SRE at a company where he designed and managed several K8s clusters and a K8s-based environment spanning different regions of the world. The technology was just fascinating.

                For me, the whole experience was comparable to when I first met VMware virtualisation back in 2007-2008. I instantly knew I must work with this technology and become an expert in it.

                Nordcloud Ltd is not a huge consultancy yet; however, we are growing and contributing to group-level directives and solutions as well.

                We have agreed with my lead, Harry Azariah, that I will pursue becoming a K8s expert: being an Azure Senior Architect working on AKS and focusing on all managed and unmanaged K8s solutions.

                So my journey began…

                I started to build my own clusters based on Kelsey Hightower’s and Ivan Fioravanti’s Kubernetes the Hard Way git repository.

                I was watching tens, maybe hundreds, of hours of Kubernetes videos from Kelsey Hightower and others. Luckily, I already had some experience with Docker – I had built Docker Swarm demo environments in Azure a few years before, but still K8s was a bit of a new territory and a challenge. When I thought I had enough knowledge to ask relevant questions, I called my friend Vinay and he was kind enough to jump over from 120 miles away to have a session with me. Yeah, we could have done it online but it’s always good to see a friend!

                Anyhow after that session I knew a lot more and was sure this is the technology I want to focus on in the upcoming weeks, months, (years?! 😄)

                Fortunately enough, we got a few leads at Nordcloud with K8s and AKS requirements, and I got the chance to put all I had learned into practice. This is when I realised I knew less than I thought. 😄

                So I delved even deeper into the rabbit hole and started to work with Ingress controllers such as NGINX. In one of our projects (which I’m still working on) I had the opportunity to start working with the Istio service mesh. The whole experience was like a roller coaster ride: just when I thought I was confident, something new came up. I think this is what got me excited about the whole K8s experience: a technology I knew little about that can constantly provide challenges.

                Around this time I decided I wanted to be certified.

                With my lead Harry, I agreed that the CKA exam should be the first one to achieve. I jumped on Linux Academy and started the CKA course there. It’s an OK course; you get enough information to understand the requirements that are shared on the CKA exam curriculum. However, do not expect to be able to pass if you only go through this training.

                You must do more. As a bare minimum, I would recommend going through the Kubernetes the Hard Way material at least 5 times if you are not managing real-world clusters on a day-to-day basis.

                By this time I had already been working with AKS for 4 months, but that is a managed K8s solution, so you have almost no tasks managing the master nodes, and you can bet you will get some questions related to those.

                11% Cluster Maintenance, 12% Installation, Configuration & Validation, 10% Troubleshooting: all of these can mean you will have to look at some master components.

                So I went on a long journey searching for useful exam prep tests and K8s trainings, and found this link on the Kubernetes Slack.

                It contains so much information that in general it’s overkill for the exam, but some parts are worth going through.

                With the CKA exam, you can expect to use all of your 3 hours answering the questions, and maybe get 10 minutes at the end to review what you have done. I used up my time completely and had about 10 minutes to review, with 1 question unanswered (worth 8%). I decided to go and check my solutions to the other questions and not to bother with that one.

                The main reason you will use the full 3 hours is that you have to type a lot. Even if you know where to find templates in the documentation (which you are allowed to use), it is still a lot to do. Even if you use something like “kubectl run mypod --image=nginx --dry-run=client -o yaml > pod.yaml” to generate your base config, it’s a lot to achieve on the K8s resource side, not to mention the install/manage questions.

                I definitely recommend enabling shell completion: echo “source <(kubectl completion bash)” >> ~/.bashrc

                Personally, I did not use any aliases I had configured myself, though some people find them useful. I work with aliases in my day-to-day environments, but for the exam I did not find them useful (I configured a few, though).

                • Know the documentation and where to find what.
                • Watch out, as navigating to links that lead off the allowed domain is not permitted. But in general, if you know where to find things, or how to ask the right question when you get stuck, the documentation will be a life saver.
                • Build an AKS, EKS or GKE cluster and use that to prep with the Kubernetes resources (it’s faster to build than a K8s the Hard Way cluster and it does not depend on your setup).
                • Do deployments of objects until you feel like you are bored with it, until you literally wake up at night and hear your thoughts going around “apiVersion: v1 kind: Pod metadata: labels: app: someapp spec: containers:”

                That is the time when you can feel confident about your knowledge… not joking…

                • Build a habit of using the commands which help you generate templates or create resources quickly. There is a really good cheat sheet from Denny Zhang to start with.
                • Do some tests in a practice environment. The exam environment is nothing too complex, but it’s a browser-based exam, not a basic SSH session from your tty client.
                • To get a look and feel, I tried this practice environment from Arush Salil.
                Practical tips for the day of the exam:
                • Find a place with good WiFi Coverage and without any distractions.
                  I sat in a phone booth at the office; however, my WiFi was awful when I shared my camera (I had not tested it properly beforehand), so I had to find another place to do the exam. Save yourself the 20 minutes of worry that I had…
                • Do some video calls with someone from the location you will do your exam from.

                Luckily the proctor was reasonable enough to give me time to find another place.

                • I would not recommend doing it from home. I’ve heard horror stories from others that proctors asked them to cover everything in the room and such.
                • Have a glass of water with you. As I mentioned, you won’t have much time to step away from the exam… No food, headsets, papers, other electronics, etc. are allowed on the desk or around you.
                • Your face and eyes must always be on the screen. I was asked several times to adjust my camera (Dell XPS 15) or my position because I was leaning too close to the screen… That was annoying; an external camera would probably have been better.

                After passing the CKA:

                I must say the CKAD was like a walk in the park. I went through the Linux Academy course just to have training and then I took the exam.

                With the extensive preparation I had done for the CKA (you need a lot, as it covers almost “everything”) and the Linux Academy course, I easily passed the CKAD. The exam is only 2 hours long and I finished it about 15 minutes early.

                I can’t say that anybody who passes the CKA can easily pass the CKAD, but for me it was not a problem. However, it is worth mentioning that by this time I was already 5 months into an AKS project, working with probes, persistent storage and deployments on an almost day-to-day basis.

                So where to from here?

                I’m definitely going to stick with this technology; it gives me a thrill with all the challenges and new aspects of technology it comes with. Nordcloud is a place which lets its employees flourish if you can and are willing to put in the additional effort.
                There are some plans in my head: to get to know other K8s distributions like OpenShift better (already studying), to dwell into EKS and GKE more and see how they really compare, and, in the long run, it would be nice to build a K8s practice at Nordcloud Ltd. As far as I can see, my leads are partners in this.

                What is the conclusion of all this?

                I think for me it is this: never be afraid to change. Admit to yourself what you think you need and want. I did this a bit more than 3 years ago when I came to the UK: after several years as a Solution/Enterprise Architect, I went to a Senior Consultant position, and just about 7 months ago, from a Product Owner/Architect position, I accepted Nordcloud’s offer for a hands-on Senior Architect position. I can clearly say it was totally worth it, both my move 3 years ago and my decision this year; I was never happier than when I made these two decisions in my professional life.

                There is really something in the saying from Confucius: “He who says he can and he who says he can’t are both usually right…” If you want something, do it; you just need to put the required time and effort into it and you can achieve anything.

                Trust in yourself, and do not wait for others to make your life happen! Because when you trust in yourself, that is when magic happens in your life. 😊


                  Responsibilities & freedom


                  Life at Nordcloud

                  Jonah joined Nordcloud around half a year ago. Now he shares his thoughts and wisdom, for example about the freedom we offer, but also the responsibilities that come with it!

                  He explains how we don’t have strict rules or a lot of people giving us directions, which means that everyone needs to be able to work independently with that freedom.


                  It’s all about common trust!

                  1. Where are you from and how did you end up at Nordcloud?

                  I’m from the Netherlands, born in Amsterdam.

                  I was looking for the next, bigger challenge and was contacted on LinkedIn by Anna (Talent Acquisition Specialist at Nordcloud), and we started talking. I felt that, as a professional who is always looking to develop, Nordcloud was the right size for me to do that, and about 3.5 months later I started at Nordcloud.

                  2. What is your role and core competence?

                  I’m a Cloud Architect, with infrastructure as code and automation as my core competences.


                  3. What do you like most about working at Nordcloud?

                  Interesting projects and a lot of freedom to do things that I find useful and meaningful.

                  By simply being interested and showing it, I’m getting the chance to contribute to areas that I personally feel like we should develop.

                  4. What is the most useful thing you have learned at Nordcloud?

                  Organising things in a distributed, flat company and navigating through the whole web of people in different countries to get things done.

                  5. What sets you on fire / what’s your favourite thing about public cloud?

                  There are a lot of tools available with public cloud with very little effort, but sometimes there are gaps in functionality (for example between AWS and GCP).

                  By filling those gaps through engineering, we can achieve very large things with very little effort.

                  6. What do you do outside work?

                  I spend time with my family and cook. I also enjoy making tangible things, like building furniture or fixing things.

                  7. Best Nordcloudian memory?

                  Conferences are always fun, and there are interesting talks and meetings, but one memory that professionally stands out is when we did a Well-Architected review for a client in Stockholm.

                  They had a really interesting and innovative product and the client here was like a kid in a candy shop!

                  He really understood the potential of what we could do for them, and we got to do cool things!


                  8. How would you describe our culture?

                  We get recognised for contribution; that’s very clear and open.

                  There is a high degree of trust in our engineering skills.

                  We are a very diverse group of different nationalities!

                  Nordcloud NL is also a very tight group, and in my opinion we have more casual workplace banter than the other countries.

                  We do work remotely a lot but when we are together we go for lunches and have a good time.

                  There is no power distance and we are a very flat organisation, so I can make fun of my manager and vice versa, and it’s all in good fun!


                  Would you fit in a team with freedom and lots of chances to influence? Well, Jonah is looking for more colleagues, so do get in touch! Click here for open vacancies in the Netherlands.


                    From TAP to Azure UK team


                    Life at Nordcloud

                    Tom Lloyd, Azure Engineer, talks about trust, pace of change and Ralph the Cockerpoo!


                    Where are you from and how did you end up at Nordcloud?

                    I live about 15 minutes outside of Cambridge, in a quiet (but great) town called Godmanchester. I previously worked alongside Ian Sharpe (Cloud Enablement Lead) at another employer, and he was forever raving about Nordcloud and saying I should apply. Knowing I was possibly not ready for a more senior cloud role, I thought my chance was missed, until I heard about the Talent Acceleration Program. After one interview I was hooked! I was lucky enough to be selected for the first Azure TAP track in January 2019, and I’ve been glad I was ever since!


                    What is your role and core competence?

                    I’m an Azure Cloud Engineer, working as part of our professional services team. My core competence upon joining Nordcloud was Azure infrastructure.

                    The TAP program was a fantastic introduction to a great many Azure services, and I’m now on a long-term project helping migrate an eCommerce platform to Azure Kubernetes Service, leveraging Azure DevOps. The pace at which public cloud evolves makes it difficult to pick a singular core competence; it seems to change daily!


                    What do you like most about working at Nordcloud?

                    The trust that Nordcloud places in its employees. We are treated as professionals and given the autonomy to manage our own time, and if we choose to work remotely, that is totally at our discretion. As long as we are keeping our clients happy and delivering good work, how we go about it is up to us!

                    In addition, the diverse nature of our workforce! Working with colleagues from so many different countries and backgrounds is great fun.


                    What is the most useful thing you have learned at Nordcloud?

                    You never stop learning. We are fortunate to have so many hugely talented people in such a wide range of areas, so there is lots of room for continuous professional development!


                    What sets you on fire / what’s your favourite thing about public cloud?

                    The pace of change, the ease of access to new technology, and the agility it brings to both Nordcloud and our clients. Because you can get left behind so quickly, it constantly keeps you on your toes!


                    What do you do outside work?

                    Maintain a busy social life! I love to travel and experience new places. Pretty much any sport (whether playing or watching), football and golf primarily, I’m a long-suffering Arsenal fan. Trying (and often failing) to tame my crazy dog, Ralph the Cockerpoo. And, of course, a few beers when the opportunity arises.


                    Best Nordcloudian memory?

                    Successfully completing the TAP program. I loved it. The six weeks were fantastic: getting to visit new colleagues in our Poznan office, learning from genuine Azure experts and realising the move to Nordcloud was the right one. It was really rewarding. I’ve no doubt there will be more good memories along the way, but that sticks out for now!


                    How would you describe our culture in 3 words?

                    Professional, Flexible, Fun.


                    How does Nordcloud UK differ from our other offices in your opinion?

                    Having an office in central London, the best city in the world, is a major win for us!

                    Seriously though, that’s a tough one, all Nordcloud offices are full of fantastic people and are unique in their own way. The UK team are a very close-knit group, full of true industry experts, that are great at what they do. I could not have felt more welcome upon joining, and I know colleagues that have joined since will feel the same.


                    What’s your greetings/advice for someone who might be considering applying for a job in Nordcloud?

                    Easy: do it! You won’t be disappointed. If you want to work for a company that provides interesting projects, trusts you to work as a professional and truly wants you to succeed, then Nordcloud is the place for you. I’ve never worked for a company that places such a high value on employee work/life balance, which is so often an afterthought at other companies. Sounds good, right?
