SC5 and Nordcloud are joining forces



Finland’s leading cloud-native development and design company SC5 joins Nordcloud in building Europe’s leading cloud enabler. SC5 will be renamed Nordcloud Solutions in January 2018.

Nordcloud has combined its previously independent cloud infrastructure and cloud application development businesses in order to provide full-service cloud transformations to its customers. With deep expertise in transforming enterprise IT operations to a cloud-first model, Nordcloud has seen more than 70% year-on-year growth for five consecutive years – making it one of the fastest-growing companies in Europe.

“After an enterprise has adopted the cloud-first model, all their new applications will be cloud-powered. This is the driving force behind our decision to combine our businesses into Nordcloud,” said CEO Esa Kinnunen. “Now we can help even more of our customers move quickly and efficiently to a cloud-first model for both their IT operations and application development. We serve our customers all the way from design to implementation and maintenance of cloud infrastructure and services.”

With more than 250 cloud experts across Finland, Sweden, Denmark, Norway, Poland, Germany, the Netherlands and the United Kingdom, Nordcloud can offer the full range of cloud-integration services for any international enterprise. Nordcloud is an AWS Premier Consulting and Managed Services Partner, Microsoft Azure Gold Cloud Partner and strategic Google Cloud Platform Partner. In early 2017, Nordcloud was included in Gartner’s Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide.

As the cloud services market is predicted to grow to almost USD 250 billion in 2017, and with 48 of the Fortune Global 50 companies having announced cloud adoption plans, Nordcloud expects to continue growing its IT services business substantially in the coming years.



Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

App Service environment isolated: Making security & internal network connectivity easier and cheaper



Azure App Services

Azure App Services is arguably the most popular Azure PaaS offering, allowing you to host web apps, APIs and functions in a fully managed service.

Once you have created your App Service Plan and a Web App, API App, Logic App or Function, you can upload your code and have a simple website, API, Logic App or Function up and running in under 10 minutes. Of course, there are a number of other configuration options, such as binding a domain name, adding an SSL certificate and the out-of-the-box ability to do blue/green deployments. However, it is simple to get up and running and, like any PaaS service should be, simple to manage.

Azure App Services runs in a multi-tenant environment: in short, your App Service shares the same hardware as other Azure customers. While this provides a cost-effective hosting solution, the multi-tenant aspect introduces a number of restrictions around scalability and security. To address this, in June 2015 Microsoft Azure released App Service Environments (ASE), a premium tier of Azure App Services, which allows you to run App Services isolated within a subnet of your own Virtual Network.

As a premium service, ASE comes with a premium price tag, so if you are architecting a solution that includes an ASE (compared to the cheaper multi-tenant equivalent), you have to make sure your business case can justify it. An ASE is worth considering if you need:

 – Your App Service to connect to infrastructure within your local network, via a site-to-site VPN, ExpressRoute or VNet peering,

 – Your App Service to host internal or line-of-business applications that should not be publicly accessible,

 – Layer 3 network access control on both inbound and outbound traffic,

 – A static outbound IP address that can be whitelisted on on-premises or third-party firewalls, including securing connections to Azure SQL,

 – More compute resources (without an ASE you can have a maximum of 10 compute instances; with an ASE you can have up to 100),

 – To place a Network Virtual Appliance in front of your App Service, or,

 – A fully isolated and dedicated compute resource.

In July Microsoft released the next-generation ASE: App Service Environment Isolated, or ASEv2. But what are the differences between ASE Isolated and the original (ASEv1), and should you migrate?

What has changed?

User Experience

If you are used to deploying ASEv1, the biggest change you will notice is that using an ASE is now simpler, with less to set up and manage. It feels a lot more ‘PaaS-like’ and is generally a good experience. Gone is the configuration overhead of ‘Front-End Workers’ and ‘Back-End Workers’, and you no longer need to worry about provisioning additional workers for fault tolerance and scaling – this is all now managed for you.


Pricing

This simplification also means a significant change to how an ASE is priced. There are two components you need to be familiar with to price an ASEv2:

  1. The App Service Environment base fee, which covers the cost of running your ASE in “a private dedicated environment”: load balancing, high availability, publishing, deployment slots, and general configuration that takes place at the ASE level. This cost (which varies by region; in UK South it currently stands at £782.88 per month) remains consistent provided you don’t alter the default Front-End configuration of one I1 instance for every 15 worker instances. The Front-End instances only handle SSL termination and layer 7 load balancing, so the default settings will work in the majority of cases – and while they do, the base fee will not change regardless of how many worker instances you have. If you scale up the Front-End instances (to the I2 or I3 instance type) or decrease the number of workers per Front-End instance, your base fee will increase with every additional core above the default configuration.
  2. Isolated workers (the compute that executes your code) – you decide how many workers you need to run and scale your app, so you control the costs of the worker layer. Workers are charged per hour, so if they scale to meet demand, this cost may not be consistent each month.
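As a rough illustration, a monthly ASEv2 cost can be estimated from these two components. The base fee below is the UK South figure quoted above; the £0.30/hour worker rate is a placeholder assumption for illustration, not a published price:

```python
# Rough ASEv2 monthly cost model: the fixed base fee plus isolated
# workers billed per instance-hour. BASE_FEE_GBP is the UK South figure
# quoted above; the worker rate used below is a placeholder assumption.
BASE_FEE_GBP = 782.88  # default Front-End configuration, per month


def ase_monthly_cost(rate_gbp_per_hour, instance_hours):
    """Base fee plus worker compute billed per instance-hour."""
    return BASE_FEE_GBP + rate_gbp_per_hour * instance_hours


# e.g. three workers running the whole month (3 * 730 hours)
print(round(ase_monthly_cost(0.30, 3 * 730), 2))
```

Because the base fee dominates at small scale, the model makes it easy to see when consolidating workloads into one ASE pays off.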

Compute Resource

ASEv2 only supports the ‘I’ series of virtual machines – these are all Dv2-based machines, which means faster cores, SSD storage and twice the memory per core compared with ASEv1. In short, the Dv2 series almost doubles the performance of the previous generation.


As part of the process of making ASEv2 simpler, fault tolerance is now managed for you. You no longer need to have, or pay for, standby workers.


In ASEv2, scaling has been simplified to bring it in line with how you auto-scale an App Service Plan outside of an ASE. You no longer need to worry about having enough workers at the ASE level for your scaling actions to happen, which again means you are not paying for compute resources you’re not using.

Cost Savings

If you do a head-to-head cost comparison of ASEv1 and ASEv2, you might question whether ASEv2 is any cheaper. However, when you consider that you get almost twice the compute power from the ‘I’ series (and therefore should need fewer instances), and that you no longer need to pay for fault tolerance or workers standing by to scale, ASEv2 works out cheaper.

Should we migrate?

Microsoft Azure has taken end-user feedback on board, and each of the changes listed above brings real benefits. Nordcloud has already seamlessly migrated a number of its clients from ASEv1 to ASEv2, and they are already benefiting from the cost savings, both directly and indirectly through the reduced effort of operating the ASE.

If you would like to talk to Nordcloud about whether an ASE is suitable for your requirements, or how we can help quickly and seamlessly migrate you from ASEv1 to ASEv2, please get in touch today.




How to create stateful clusters in AWS



With stateful clusters, the idea is to create the storage and network interface before a VM is created. The storage and elastic network interface (ENI) are then associated with the VM on start-up.

Why do we use ENIs?

We use ENIs instead of reserved IP addresses because we cannot know whether the IP address we specify in the template or template parameter will actually be available when the instance or ENI is created. When the ENI is created, it is assigned an IP address from the subnet. As long as the ENI is not deleted, the IP address remains reserved and associated with the ENI.

When a security group is assigned to an instance, it is actually assigned to the first ENI on the instance. This means we can create the security group for the cluster when we create the ENIs. It’s also a good idea to create a client security group that is allowed in the ingress rules of the cluster’s servers.

When the instances are created, you shouldn’t assign them to any security groups or subnets, as all of this comes with the ENI attached at index 0.
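The pattern so far can be sketched as a CloudFormation fragment built with plain Python dicts (the snippets later in this post use troposphere, which generates the same JSON). The resource and parameter names, the subnet ID and the security group reference are all placeholders:

```python
import json

# Sketch of the stateful-node pattern: the ENI owns the subnet placement
# and security group; the instance only attaches the ENI at device index 0.
# All names and IDs below are placeholders.
template = {
    "Resources": {
        "NodeEni": {
            "Type": "AWS::EC2::NetworkInterface",
            "Properties": {
                "SubnetId": "subnet-PLACEHOLDER",
                "GroupSet": [{"Ref": "ClusterSecurityGroup"}],
            },
        },
        "NodeInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Deliberately no SubnetId or SecurityGroupIds here:
                # both come from the ENI attached at index 0.
                "ImageId": {"Ref": "NodeAmi"},
                "NetworkInterfaces": [{
                    "NetworkInterfaceId": {"Ref": "NodeEni"},
                    "DeviceIndex": "0",
                }],
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

Because the ENI is a separate resource, replacing the instance (for example with a new AMI) leaves the IP address and security group membership untouched.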

Creating Storage Volume

To maintain local state on each machine, we also need to create a storage volume for each instance. This is not the root filesystem volume but an additional volume. You also want the option to use either a blank volume or one created from a snapshot.

On boot, it’s important to check whether the volume has a filesystem on it. If it doesn’t, this indicates that the volume is blank and should be formatted. If it does have a filesystem, the volume was created from a snapshot. In that case, the filesystem should be grown so that it uses all the available space on the volume, because it’s possible to create volumes that are larger than the snapshot.
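A minimal sketch of that boot-time decision, assuming ext4 on the data volume (`choose_commands` is a hypothetical helper name, not part of any AWS tooling):

```python
# Decide what to do with the data volume on boot. Assumes ext4;
# choose_commands is a hypothetical helper for illustration.
def choose_commands(device: str, has_filesystem: bool) -> list:
    """Return the commands to run for the data volume on boot.

    A blank volume gets a fresh ext4 filesystem; a volume restored
    from a snapshot already carries one, so it is grown instead to
    use any extra space the new, larger volume provides.
    """
    if has_filesystem:
        return [["/sbin/resize2fs", device]]
    return [["/sbin/mkfs.ext4", device]]


# In a real boot script, has_filesystem would come from probing the
# device, e.g. blkid exits non-zero when no filesystem signature exists:
#   has_filesystem = subprocess.run(["blkid", device]).returncode == 0
print(choose_commands("/dev/xvdh", False))
print(choose_commands("/dev/xvdh", True))
```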

Scaling your Storage

The storage can be scaled up in size and/or IOPS. If you increase the volume size, you also need to resize the filesystem by triggering resize2fs after the volume has been updated. To watch the volume for updates, we configure cfn-auto-reloader as described below.


This pattern is not dependent on any tooling. However, depending on what tool is used, additional features might be available.

Deploying new AMIs

By separating the ENI and disk from the instance, we can easily perform a rolling update by having one CloudFormation parameter for each instance’s AMI. You can then update the stack three times, changing the AMI parameter for one instance per update.
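That rolling update can be sketched as follows, again with plain Python; the parameter names are illustrative:

```python
# One AMI parameter per node lets each stack update replace a single
# instance: change Node1Ami, update, wait for healthy, then repeat for
# Node2Ami and Node3Ami. Parameter names are illustrative.
parameters = {
    f"Node{i}Ami": {"Type": "AWS::EC2::Image::Id"} for i in (1, 2, 3)
}


def rolling_updates(current, new_ami):
    """Yield the parameter values for each step of a rolling update."""
    params = dict(current)
    for key in sorted(params):
        params = {**params, key: new_ami}
        yield dict(params)


steps = list(rolling_updates(
    {"Node1Ami": "ami-old", "Node2Ami": "ami-old", "Node3Ami": "ami-old"},
    "ami-new",
))
print(steps[0])   # only the first node moved to the new AMI
print(steps[-1])  # all three nodes on the new AMI
```

Because each step only touches one instance's parameter, CloudFormation replaces one VM at a time while the ENIs and data volumes stay in place.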



DeletionPolicy="Snapshot" is used on the volumes, so if CloudFormation deletes a volume it will automatically create a final snapshot.


The instance will not reach the CREATE_COMPLETE state until it signals that it is healthy.

Online Scaling of Storage

Configuring cfn-hup to watch the volume associated with the instance enables us to scale up the storage, in either size or IOPS, without any outage.

To watch the volume we need to configure the cfn-auto-reloader as described below.

"/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
    "content": Join("", [
        "action=/opt/aws/bin/cfn-init ",
        " --stack ", Ref("AWS::StackName"),
        " --resource {} ".format(instance),
        " --configsets update ",
        " --region ", Ref("AWS::Region")
    ]),
    "mode": "000400",
    "owner": "root",
    "group": "root"
}

When the volume reaches the UPDATE_COMPLETE stage, it triggers the update configset, which grows the filesystem.

        "resize": {
            "command": "/sbin/resize2fs /dev/xvdh",
            "env": {"HOME": "/root"}
        }




If you want to find out more about stateful clusters on AWS and how to create them, get in touch here. 

