Cloud buying guide: infra optimisation

CATEGORIES

Insights

We have spent the past decade fine-tuning our IT purchasing around outsourcing deals. Buying cloud looks very similar on the surface, but turns out to be very different once you take a deep dive into it.

In outsourcing deals, your aim is to lower the cost per VM and include as many services and responsibilities as possible in that price. In cloud, prices and services are fixed and there is little room for negotiation. But cost per VM is just the starting point for understanding whether you save or lose money in cloud. Optimising TCO in cloud is very different from optimising an on-prem or outsourcing deal.

Here are a few things to think about…

Commitment – You cannot compare 5 years of on-prem TCO with cloud on-demand prices. In cloud, the difference between on-demand prices and a 1-year commitment (Reserved Instances, for example) can be over 50%. This will significantly reduce your costs, but do not expect to have 100% of your consumption under RIs. Three-year RIs are also not usually optimal, because on-demand prices tend to fall every year, so optimal RI usage needs careful analysis and planning.
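To make the RI trade-off concrete, here is a back-of-the-envelope comparison. The prices below are purely hypothetical placeholders; check your vendor's current price list before committing.

```shell
# Hypothetical prices for illustration only -- check your vendor's
# current price list before making any commitment.
ON_DEMAND_HOURLY=0.100          # USD per hour, pay-as-you-go
RI_EFFECTIVE_HOURLY=0.062       # USD per hour, effective 1-year RI rate

HOURS_PER_YEAR=8760

# Annual cost of one always-on instance under each model
od_annual=$(awk -v p="$ON_DEMAND_HOURLY" -v h="$HOURS_PER_YEAR" \
  'BEGIN { printf "%.0f", p * h }')
ri_annual=$(awk -v p="$RI_EFFECTIVE_HOURLY" -v h="$HOURS_PER_YEAR" \
  'BEGIN { printf "%.0f", p * h }')

# Percentage saved by committing for one year
saving_pct=$(awk -v od="$od_annual" -v ri="$ri_annual" \
  'BEGIN { printf "%.0f", (od - ri) / od * 100 }')

echo "On-demand: \$${od_annual}/yr, RI: \$${ri_annual}/yr, saving: ${saving_pct}%"
```

The same arithmetic, rerun against each year's actual price list, is what tells you whether a longer commitment still beats falling on-demand prices.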

Right instance type – At the other end of the spectrum from RIs, you have spot and per-second-billed instances. Spot instances are a very cost-efficient way to run workloads like most batch jobs and big data queries. The downside is that you can lose a spot instance with only minutes' warning, so you need a mechanism to stop and resume work. With a bit of over-allocation, you can run enterprise workloads without risk, and at an even lower cost than with RIs.

License cost savings – There are two ways to reduce your software license costs in cloud. The first is to convert most of your commercial licenses to free versions in the cloud, but that requires a heavier migration and the business case needs to be calculated.

Cloud architecture also enables a second type of license reduction. Consider your typical mission-critical Active-Active + DR setup: three licenses, usually of the most expensive type, the one that allows clustering. In cloud, you can run the second site as both a mirror site and a DR site, reducing your license cost by 30-50%.

Operation savings – cloud is, by default, software-defined infrastructure. This means all changes can be made quickly and are easy to automate. Changes that take hours or days in traditional infra take seconds in the cloud. All this means dramatically reduced operations costs (and increased speed).

Price reductions – all cloud vendors lower their prices on a constant basis. While there is no guarantee that this will continue, or by how much prices will fall, history gives a good indication. You need to calculate a 3-5 year average cost and compare it to the fixed cost of an on-prem solution.

Most organisations are used to looking at a per-VM, per-month cost. This is only the starting point for your cloud TCO, but understanding cloud TCO is an important skill for anybody making infra decisions. Nordcloud can help you understand more about TCO and instance choices in an in-depth meeting with one of our cloud experts, contact us here.

Blog

What it’s like to be a new hire during Covid-19

We all have been there before, the thrill of getting that call when your future manager makes the offer. You...

Blog

A Recruiter’s Perspective on Remote Work

In the times of the Coronavirus you can read a lot about ways of switching to the remote ways of...

Blog

5 Workplace health tips from Nordcloud

As COVID-19 continues to affect our working environment, how can we all strive to improve the health of our teams...

Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








Container security: how it differs from the traditional approach

CATEGORIES

Tech

Containerisation in the industry is rapidly evolving


No, not shipping containers, but cloud containers. Fortune 500 organisations use containers because they provide portability, simple scalability, and isolation. Containers were long a Linux-only affair, but that has changed: Microsoft now supports Windows-based containers with Windows Server 2016, running on Windows Server Core or Nano Server. Yet even with so many organisations using containers, we still see a lot of them reverting to the security practices they used for traditional VMs.


If you know anything about containers, you have probably heard of Kubernetes, Docker, Mesos and CoreOS. Security measures still need to be applied to all of them, which makes this a perennially good topic for discussion.


Hardened container image security

Hardened container image security comes to mind first: how is the image deployed, and are there vulnerabilities in the base image? A best practice is to build a custom container image so that your organization knows exactly what is being deployed.

Developers or software vendors should know every library installed and the vulnerabilities of those libraries. There are a lot of them, so focus on the host OS, container dependencies, and most of all the application code. Application code is one of the biggest sources of vulnerabilities, but practising DevOps can help prevent this. Reviewing your code for security vulnerabilities before promoting it to production costs time, but saves a lot of money if best practices are followed. It is also a good idea to follow security blogs such as Google's Project Zero, and to use fuzz testing to find vulnerabilities.

Infrastructure security

Infrastructure security is a broad subject because it covers identity management, logging, networking, and encryption.

Controlling access to resources should be at the top of everyone's list, and following the principle of least privilege is key. Role-Based Access Control (RBAC) is one of the most common methods used: RBAC restricts system access to authorized users only. The traditional method was to grant access through a few broad security policies, but fine-tuned roles can now be used instead.
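As a sketch of what such a fine-tuned role can look like, here is a minimal Kubernetes Role granting read-only access to pods in a single namespace. The namespace and names are illustrative, not from any real deployment.

```shell
# A minimal read-only Kubernetes Role -- namespace and names are
# illustrative. Bind it to a user or service account with a RoleBinding,
# then apply with: kubectl apply -f pod-reader-role.yaml
cat > pod-reader-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myapp
  name: pod-reader
rules:
- apiGroups: [""]                        # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]        # read-only: no create/delete
EOF
```

The point is the narrow scope: one namespace, one resource type, read-only verbs, instead of one broad policy shared by everyone.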

Logging across the infrastructure layers is a must-have best practice. Audit logging with cloud vendor services such as AWS CloudWatch, AWS CloudTrail, Azure OMS, and Google Stackdriver will allow you to measure trends and find abnormal behaviour.

Networking is commonly overlooked because it is sometimes treated as a magic unicorn. Understanding how traffic flows in and out of the containers is where the need for security truly starts. Networking theory makes this complicated, but underlying tools like firewalls and proxies, and cloud-enabled services like Security Groups, can redirect or restrict traffic to the correct endpoints. With Kubernetes, private clusters can be used to keep cluster traffic off the public internet.

How does the container store secrets? This is a question your organization should ask when encrypting data at rest and in transit.


Runtime security

Runtime security is often overlooked, but making sure that a team can detect and respond to security threats while a container is running is essential. The team should monitor abnormal behaviour such as network calls, API calls, and even login attempts. If a threat is detected, what are the mitigation steps for that pod? Isolating the container on a different network, restarting it, or stopping it until the threat can be identified are all ways to mitigate. Another overlooked area of runtime security is OS logging. Keeping the logs in an encrypted, read-only directory will limit tampering, but of course someone will still have to sift through them looking for abnormal behaviour.
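As a toy illustration of that kind of log sifting, the sketch below scans a sample auth log for repeated failed logins. The log lines and the threshold are invented for the example; real monitoring would of course stream from the container's actual logs.

```shell
# Toy runtime check: flag repeated failed logins in an auth log.
# The sample log below stands in for a container's OS log.
cat > sample-auth.log <<'EOF'
Apr 02 10:01:11 app sshd[311]: Failed password for root from 10.0.0.7
Apr 02 10:01:14 app sshd[311]: Failed password for root from 10.0.0.7
Apr 02 10:01:18 app sshd[311]: Failed password for root from 10.0.0.7
Apr 02 10:02:02 app sshd[340]: Accepted password for deploy from 10.0.1.5
EOF

THRESHOLD=3
failed=$(grep -c 'Failed password' sample-auth.log)

if [ "$failed" -ge "$THRESHOLD" ]; then
  verdict="ALERT: ${failed} failed logins - isolate, restart or stop the pod"
else
  verdict="OK"
fi
echo "$verdict"
```

In practice the same idea sits behind alerting rules in CloudWatch, OMS or Stackdriver: count an abnormal event, compare against a baseline, trigger a mitigation step.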

Whenever cloud security is discussed, the shared responsibility model is commonly depicted. When it comes to security, it is ultimately the organization's responsibility to keep the application, data, identity, and access control secured. Cloud providers do not prevent malicious attackers from attacking the application or the data. If untrusted libraries are used, or access is misconfigured inside or around the containers, then everything falls back on the organization.

Check also my blog post Containers on AWS: a quick guide.

Blog

How can you maximize the value from your data?

Your market is changing in faster and less predictable ways than ever before. Enterprises of all sizes suffer from data...

Blog

Introducing Google Coral Edge TPU – a New Machine Learning ASIC from Google

Introducing Google Coral Edge TPU - a new machine learning ASIC from Google.

Blog

Controlling lights with Ikea Trådfri, Raspberry Pi and AWS

One of our developers build a smart lights solution with Ikea Trådfri, Raspberry Pi and AWS for his home.









Nordcloud launches new design studio: Intergalactico

CATEGORIES

News

Europe’s leading cloud-services provider Nordcloud has launched a new design brand and studio that will work across the company’s main markets. Intergalactico will focus on securing Nordcloud’s leadership in the design of cloud-first services, user interfaces and development strategies – ensuring Nordcloud stays at the forefront of high-end design for future cloud-based applications and experiences.

“Smart, high-quality, user-centric design for the cloud that brings our customers consistently great user-experiences at reasonable investment – that’s what Intergalactico is all about,” says Mikko Rajala, Head of Design at Intergalactico. 

“We see that the demand for excellent design is increasingly moving to upper-management level in organisations,” says Rajala. “Design is now seen as a competitive advantage, especially when companies are looking at new business areas and new solutions for their existing customers.” 

The design experts at the core of Intergalactico have a solid track record in designing digital services and experiences, focusing on sustainable and high-quality designs that delight users. The team has been developing its service portfolio since 2006, working both directly with Nordcloud’s customers and as part of multi-vendor teams.

“At the product level, our design operations have grown from doing single implementations to creating full-blown design systems,” says Rajala. “This means we’ve been designing validated, high-quality modular design patterns to serve multiple different teams within an organisation. Intergalactico is at the cutting-edge of cloud-first service design.”

By creating Intergalactico as a brand, Nordcloud is highlighting the importance of service design as a business advantage for its clients. This enhances Nordcloud’s ability to offer its customers full life-cycle support for their cloud-based applications – from inception to continuous development and maintenance.

Read more about Intergalactico and our design studio offering at www.intergalactico.io

Blog

Nordcloud celebrates top spot worldwide for cloud services in the Magic Quadrant

Gartner has awarded Nordcloud the top cloud-native Managed Service Provider (MSP) for the execution of cloud Professional and Managed Services...


Blog

Six capabilities modern ISVs need in order to future-proof their SaaS offering

Successful SaaS providers have built their business around 6 core capabilities









The myth of the workload movement

CATEGORIES

Insights

Before you invest your time and money into hybrid cloud, you should understand the realities of workload movement between clouds. Who wouldn't want to be able to move apps between clouds at will, not to mention never having to run a single migration project again?

So you’ve moved your apps to containers, invested in a Cloud Management Platform (the brokerage engine for your clouds) and you’re all set.

Unfortunately, no. Even if you had the technical ability to move workloads, there are a few critical issues that render it unusable in real life.

Let’s start with some basic facts:

  • Your existing apps are not built with workload movement in mind.
  • Even the new applications need to be designed and built with workload movement in mind in order for that to work. And that comes with a cost.
  • Every cloud is a silo with its own features, tweaks, cost optimization options etc.

So what are the difficulties with workload movement?

First is your deployment process – a normal step in any change (including migration) is to plan, design, implement and validate. For you to benefit from workload movement, you need to minimize these steps. Let's assume you get a 10% cost saving by moving to a new cloud. How much design, validation and coordination work can you fund with that? If you follow a DevOps process for your cloud-native app, this is probably doable, but for your other apps it is a big culture change, or you'll burn most of your savings performing the change.

Data – most apps you'd consider candidates for workload movement probably contain data, and lots of it. Data synchronization brings two challenges you need to tackle: data consistency and data transfer time. If data consistency is crucial for you, you need to move every single byte of data before switching over, which means that at some point you must stop taking new data in, sync the data, and then start taking data in again. This usually means a production break, which means planning and execution work, which means costs and delays. The second issue, in some cases, is simply the time it takes to sync the data. It can take weeks, and can require separate physical swing boxes.
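A quick back-of-the-envelope calculation shows why transfer time alone can kill the business case. The data size, link speed and utilisation below are purely illustrative.

```shell
# How long does a one-off sync take? Illustrative figures only:
# 50 TB of data over a 1 Gbps link at ~80% sustained utilisation.
DATA_TB=50
LINK_GBPS=1
UTILISATION=0.8

days=$(awk -v tb="$DATA_TB" -v gbps="$LINK_GBPS" -v u="$UTILISATION" 'BEGIN {
  bits    = tb * 8 * 1000^4               # decimal terabytes to bits
  seconds = bits / (gbps * 1000^3 * u)    # effective link rate in bits/s
  printf "%.1f", seconds / 86400
}')
echo "~${days} days of transfer before you can even switch over"
```

Nearly a week of sustained transfer for a single app's data, before any consistency handling, is the kind of number that turns "just move it" into a project.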

Platform dependencies – we're all supposed to use APIs that hide the underlying technical implementation and thus make us less platform-dependent. But the reality is that most of the APIs we use are unique, and those unique APIs are the cloud provider's biggest competitive advantage. You can write totally platform-agnostic code (that was the big promise of Java, about 20 years ago), but for various good reasons (time to market, cost, laziness etc.) we tend to use platform-specific features and functionality. The rise of PaaS will further increase your platform dependency.

We can be honest with ourselves and say that time and cost pressure in most apps dev project override any TCO, reusability and workload movement ambitions.

They are not alike – even if you have the same stack at both ends (regardless of whether it's Microsoft, VMware or OpenStack based), they are not the same. A cloud consists of a large number of components, and having the same version of every component at both ends is not going to happen. VM sizes are also different. Some workloads will move just fine, some not, but you need to plan and test.

Skills – we’re in a global war for talent. Most companies have difficulties recruiting and developing skills for one cloud platform, so it’s tough finding people who not only can handle containers on multiple clouds but also do that on scale.

Cost optimization – private cloud and public cloud have very different cost drivers and optimization possibilities. Your cost optimization options include long-term commitments, licence cost savings, right-sizing etc, all reducing your ability to move workloads to a different place.

Getting your priorities right

True cloud brokerage and workload movement can be achieved, but at a high cost. In most cases, the reality is that workload movement requires a separate project that creates costs and delays, and ultimately takes away any real appetite to move the workload. At the end of the day, it's all about your company's priorities. Do you get more competitive benefit from cloud brokerage, or from using the same money to bring new services to market faster? Workload movement is a nice thing to have, but has little positive effect on your TCO. Time to market and your app dev costs are much bigger drivers.

  • To find out more about hybrid cloud, read my blog 'How hybrid do you need to be' here
  • Nordcloud has been placed in the Gartner Magic Quadrant for the MSP journey for the second year running; read more about it here









SSM parameter store: Keeping secret information structured

CATEGORIES

Tech

AWS Systems Manager Parameter Store (SSM) provides a secure way to store config variables for your applications. You can access SSM via the AWS API directly from within the app, or just use the AWS CLI, and it can store plaintext parameters or KMS-encrypted secure strings. Since parameters are identified by ARNs, you can set fine-grained access control over your configuration bits with IAM. A truly versatile service!

Common use cases of SSM include storing configuration for Docker containers to read during initialisation at runtime, storing secrets for Lambda functions and apps, and even using SSM parameters in CloudFormation.

Parameters

You can set the parameters via AWS Console or CLI:

aws ssm put-parameter --name "DB_NAME" --value "myDb"

If you want to store a secure string parameter, add the KMS key id and set the type to SecureString. Your parameter will now be stored encrypted, and you'll be able to read it only if your IAM policy allows.

aws ssm put-parameter --name "DB_PASSWORD" --value "secret123" --type SecureString --key-id 333be3e-fb33-333e-fb33-3333f7b33f3

– Mind that KMS limits apply here: a SecureString can't be larger than 4096 bytes.

Getting parameters is also easy:

aws ssm get-parameter --name "DB_NAME"

If you want to get an encrypted one, add --with-decryption. SSM will automatically decrypt the parameter on the fly and you will get the plain text value.

Versioning & Tagging

One of the cool features of SSM parameters is that they are versioned; moreover, you can see who or what created each version. This way you can fix buggy apps or human mistakes, or at least blame the colleague who made one ;).

Parameters can also be tagged, which is a neat way to group and target resources based on common tag values.

Paths

Now for the juicy part. Parameters can be named either with a simple string or with a path. When you use paths, you introduce hierarchy into your parameters. This makes it easy to group parameters by stage, app, or whatever structure you can think of, and SSM allows you to fetch parameters by path.

Let’s say we have parameters:

  • /myapp/production/DB_NAME
  • /myapp/production/DB_PASSWORD
  • /myapp/production/DB_USERNAME

In order to get all of them you would do this:

aws ssm get-parameters-by-path --with-decryption --path /myapp/production

This produces a JSON array containing all of the parameters above. The stored parameters might be encrypted or plaintext; --with-decryption has no effect on plaintext parameters, so either way you'll get back a list of plaintext values.

Docker Case Study

Let’s go through a case study. If you have ever configured an app in a docker container, you probably needed to give away some secret information, like DB password, or some external services keys or tokens.

A Rails app is a good example. Here, DB information is stored in a file called database.yml in the app's config directory. In Rails, you can populate the config file with environment variables, which are read when the server starts.

production:
   adapter: 'postgresql'
   database: <%= ENV['DB_NAME'] %>
   username: <%= ENV['DB_USERNAME'] %>
   password: <%= ENV['DB_PASSWORD'] %>
   host:     <%= ENV['DB_HOST'] %>
   port: 5432

We can store these parameters in SSM, as encrypted secure strings, under a common path: /app/production/db/{DB_NAME, DB_USERNAME, DB_PASSWORD, DB_HOST}. Naturally, different environments get their own paths – testing, staging, and so on.

In the Docker entrypoint script, we can populate the variables before the Rails server starts. First we fetch the parameters, then we export them as environment variables. That way the variables are present when the Rails server starts, so database.yml picks them up. Easy peasy.

First, we get all parameters under /app/production/db. Since the output is JSON, we use jq to extract each parameter's name and value, constructing an export PARAM_NAME=PARAM_VALUE line directly in jq. Since the name is a path, and a path can't be used as an env variable name, we then use sed to cut the path off, leaving the bare name. The whole one-liner is evaluated, which sets the variables in this script; the Rails server can read them and the app can connect to the database. Voila. End of story.
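As a sketch of the entrypoint step just described, here it is run against canned JSON standing in for the live `aws ssm get-parameters-by-path` response. It requires jq; the path, names and values are illustrative.

```shell
# Canned JSON standing in for:
#   aws ssm get-parameters-by-path --with-decryption --path /app/production/db
response='{
  "Parameters": [
    {"Name": "/app/production/db/DB_NAME",     "Value": "myDb"},
    {"Name": "/app/production/db/DB_USERNAME", "Value": "dbuser"}
  ]
}'

# Build `export NAME=VALUE` lines in jq, strip the path prefix with sed,
# then eval the result so the variables exist in this shell.
exports=$(echo "$response" \
  | jq -r '.Parameters[] | "export \(.Name)=\(.Value)"' \
  | sed 's|/app/production/db/||')

eval "$exports"
echo "$DB_NAME"   # the Rails server started after this sees DB_NAME etc.
```

In a real entrypoint the canned `response` is replaced by the live CLI call, and the script ends by exec-ing the Rails server.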

Best Practice & Caveats

I use SSM parameters wherever I need to store something like this. Below are some arbitrary best practices that I think make sense with SSM Parameter Store:

  1. Do not use the default KMS keys; create your own for SSM usage. You will get better IAM policies if you keep all of it within one IaC codebase.
  2. Use the least-privilege principle: give your app access only to app-specific parameters. You can limit access using the path in the Resource section of an IAM policy.
  3. You can't use a SecureString as a CloudFormation parameter yet; you would have to code a custom resource for it.
  4. Name your parameters concisely and use paths. This lets you delete old, unneeded parameters and avoid namespace clashes.
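As a sketch of point 2, here is what a path-scoped IAM policy statement might look like. The account id, region and path are placeholders, not real values.

```shell
# Least-privilege sketch: the app may only read parameters under
# /myapp/production/*. Account id, region and path are placeholders.
cat > ssm-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ssm:GetParameter", "ssm:GetParametersByPath"],
    "Resource": "arn:aws:ssm:eu-west-1:111111111111:parameter/myapp/production/*"
  }]
}
EOF
```

Attach a statement like this to the app's role and it can read its own parameters but nothing under, say, /myapp/staging or another app's path.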

If you would like to contact Nordcloud to find out more, contact us here.









On the right track in Gartner’s Magic Quadrant for the MSP Journey

CATEGORIES

News

Some good news this week to warm the heart in this (still chilly) springtime; Gartner, the international research and advisory firm, has placed Nordcloud in its Magic Quadrant for MSP for the second year running.

It’s something that my very talented colleagues can take pride in. Many dedicated and hard-working Nordcloud people have brought us to this point, we’re blessed with a wealth of specialist expertise, and this is a good moment to express my thanks and admiration for all that they do.

No judgement from Gartner is ever an endorsement, but it is a sign that Nordcloud is on the right track. Such recognition isn't the destination, but it is a way-marker on the road; one of those things we make a note of as we continue on our path.

So what is the destination?

There are plenty of companies that trumpet that they want to be the world’s greatest, and of course we all have our dreams. But you can’t chart a course by dreams. I prefer to look to the horizon, however far, and make for a destination that is real and realisable.

Nordcloud is already the go-to company for native cloud services and application development right across the Nordic region. We’re the market leader. We want to extend that position throughout selected European countries. And we have a growing presence right across the continent. Last December, Deloitte ranked us the fastest growing company in our field in Europe, the Middle East and Africa.

That is broadly the where, but equally important is the how. Without wishing to be either complacent or smug we have most of the how in place.

Local knowledge, full stack-provision and proactive partnerships

Firstly, as we expand, it means establishing skilled local teams in every territory in which we operate, supported when applicable by experienced colleagues abroad. Of course, we might expand faster if we took shortcuts and just ran everything from a central support centre, but that's not the Nordcloud way.

Local knowledge is vital. It’s not just ensuring that customers can talk to someone in their own language, it means having local knowledge of the business and regulatory environment. It means being able to go and work alongside our customers whenever that will produce a better result. So our expansion will necessarily be measured and deliberate so that every Nordcloud customer knows that we’re there for them.

Secondly, it’s about full-stack provision. Not only do we help businesses move to the cloud, we help them make a success of it. We can offer a full range of options from making their existing apps cloudy to building native: new, better ones that take full advantage of the cloud.

Lastly, it’s about partnership; proactive partnership. Whether that’s deploying our expertise to guide customers through the range of cloud options on offer, single or multi-, or looking for ways to improve on software, or keeping our eyes peeled for business opportunities for them, we want to be a formidable ally for everyone we work with, including our partners Microsoft, AWS and Google.

Our goal must be to help all our customers and partners to be their best selves and to help them beat the competition, through new and better apps. And in helping them reach their destinations, we’ll reach ours.









How hybrid do you need to be?

CATEGORIES

Insights

Everybody appreciates the agility and elasticity of the public cloud. However, there are also large numbers of legacy apps that simply don't move easily to the public cloud, which is why, up until now, the hybrid cloud choir has been singing its 'happily ever after' song.

Unfortunately, this hybrid cloud choir is largely led by the vendors who have the most to lose, and who use it as a way to stall things. So let's break down what you actually need, and what could be a colossal waste of time and money.

The good

These activities will benefit you in the long term:

  • Common security and governance framework. Regardless of the clouds you use, you should enforce the same security and governance principles.
  • Every cloud is a silo, so if you want an end-to-end understanding of your IT, you need a tool that monitors across all clouds. The same goes for all core ITBM tools.
  • A light portal will help end users by collecting the different clouds under one interface and access control. But keep it light, as most cloud usage happens through automated API calls, not a manual portal.
  • Any activity that gets your apps running on software-defined infra, including containers, network virtualization etc.

The bad

  • Any investment in workload movement between clouds (beyond virtualising and using containers) can waste both money and time. Legacy app workload movement between clouds is not really achievable in the first place, and you need to consider the real benefit (and your appetite to invest) of being able to move between clouds. You're much better off with a multi-cloud procurement approach, where the threat of moving to another cloud keeps prices in check. You can then decide later whether any technical cloud brokerage or workload movement solution is worth implementing.
  • Private cloud – yes, it would be wonderful to have the same experience on-prem that you have in public clouds, but the cost of achieving this is just not worth it, especially considering that 15% fewer apps need it every year. AWS and Azure implement hundreds of new services every year. How would you keep up with that in your own private cloud?
  • Inertia is probably the worst consequence of a hybrid strategy. First, any investment in hybrid technologies is money off your budget that could be used to innovate by modernising and building new apps; in short, money that would actually improve the business. Secondly, any investment needs usage, and the urge to prove it was a good investment leads to suboptimal workload placement. Thirdly, you end up with a large, complex platform that doesn't keep up with demand for new services, needs constant upgrades, and doesn't scale when needed.

The bottom line of hybrid cloud

At the end of the day, this is all about focus. A hybrid cloud strategy means that you invest in three areas: implementing and improving your private cloud, your cloud brokerage platform, and increasing your usage of public clouds. How much more could you achieve if you focused the money on just one of them?

Cloud is moving forward so fast that most hybrid cloud platforms and tools will probably be outdated before you get them up and running. Choose carefully what you need as core features, and plug into the knowledge of a company like Nordcloud who can help you to choose the right tools whilst also keeping them updated.
