Nordcloud Reaches 100 AWS Certified Cloud Architects

CATEGORIES

News

We’re excited to announce that Nordcloud has reached over 100 AWS certified architects!

APN Certification Distinctions give AWS partners the opportunity to showcase how many active AWS Certifications the company has collectively achieved, and highlight the value those certifications bring to customers. In Nordcloud’s case, the badge demonstrates our expertise in designing, deploying, and operating highly available, cost-effective, and secure applications on the AWS platform.

 


 

Nordcloud is proud to be an AWS Premier Consulting Partner. This highly valued status is awarded to leading cloud companies that have extensive experience in deploying customer solutions on AWS, a strong bench of trained and certified technical consultants, at least one APN Competency, expertise in project management, a healthy revenue-generating consulting business on AWS, and significant investment in their AWS practice. Nordcloud is the only AWS Premier Consulting Partner based in the Nordics and one of just a few across the EMEA region.

If you’d like to learn more about our partnership with AWS, please get in contact with us here.









Cloud security: Don’t be a security idiot

CATEGORIES

Tech

The cloud has some great advantages: you can store large amounts of data and pay only for what you use, without buying capacity upfront, and you can draw on hundreds of different services and APIs offered by the cloud provider.

We commonly hear that security is a major concern when moving to the cloud, but in practice we often see the opposite: by the time a lift-and-shift or refactoring project is complete, the organisation has already invested so much that it simply needs the system up and running. Studies show that the move to public cloud computing is not going to slow down any time soon; spending is forecast to grow by another 100 billion USD. With that growth, expect not only more attacks but more successful breaches as well.

 

Cloud Security Breaches & Attacks

In today’s digital world, data is the new currency. Attackers have had a massive impact on businesses with ransomware outbreaks like WannaCry and Petya, and with attacks increasing and security standards still poor, everyone and everything is vulnerable.

It might be easy to think we are all part of some sort of Darwin experiment, because the same things keep happening across the industry. Budget cuts and time-to-market pressure both work against security. As a society we have our security methods back to front and upside down, and we forget that the internet is still relatively young.

We see it time and time again: organisations deploying configurations that ignore security best practices. For example, back in October 2017 Accenture left an S3 bucket open to the world. The exposure was later discovered publicly, and the biggest issue was the bucket’s contents: a list of passwords and AWS KMS (Key Management Service) keys. It is unknown whether the keys were used maliciously, but Accenture was not the first to let this slip, nor will it be the last.
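
Misconfigurations like this are straightforward to guard against programmatically. Below is a minimal sketch, assuming the boto3 SDK, AWS credentials with the relevant S3 permissions, and a hypothetical bucket name, that switches on S3 Block Public Access for a single bucket:

```python
# Minimal sketch: enforce S3 "Block Public Access" on one bucket.
# Assumes boto3 is installed and AWS credentials are configured;
# "my-example-bucket" is a placeholder name.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
print("Public access blocked for my-example-bucket")
```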

Later, in November, a programmer at DXC pushed code to GitHub without realising that it contained hard-coded AWS keys. It took 4 days before the leak was discovered, and in the meantime over 244 virtual machines had been created, costing the company a whopping 64,000 USD.
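
Leaks like this can be caught before code ever reaches GitHub. The sketch below is standard-library Python only; it simply looks for strings shaped like AWS access key IDs (which start with AKIA) and could be wired into a pre-commit hook or CI step:

```python
# Minimal sketch: scan source files for strings that look like AWS access key IDs
# ("AKIA" followed by 16 uppercase letters/digits). Standard library only.
import re
import sys
from pathlib import Path

AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def scan(paths):
    findings = []
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for match in AWS_KEY_PATTERN.finditer(text):
            findings.append((path, match.group()))
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1:])
    for path, key in hits:
        print(f"Possible AWS key in {path}: {key[:8]}...")
    sys.exit(1 if hits else 0)  # non-zero exit fails the commit or CI step
```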

Sometimes you can’t control the security issue yourself, but that doesn’t mean you shouldn’t worry about it. At the beginning of 2018, a set of chip security flaws known as Meltdown and Spectre was disclosed to the public by researchers including Google Project Zero. The flaws affected virtually all Intel processors and enabled attacks at the kernel level.

This meant that someone with the right knowledge could theoretically create a virtual machine on any public cloud and read data from the kernel memory of all the other virtual machines on the same bare-metal server. Most companies patched this back in the autumn of 2017, but not everyone keeps the OS layer up to date with the latest security patches.

UPDATE: Intel has since announced that not every CPU can be patched.
UPDATE: A new variant has since been disclosed.

 

Shared Responsibility

Cloud providers pay close attention to security risks, but they all operate a shared-responsibility model. The provider is responsible for the security of the cloud itself: its data centres and the underlying software that exposes the APIs you use to create resources. The customer is 100 per cent accountable for security in the cloud, meaning the workloads, data, and configuration running on top. Because the provider doesn’t know what workload you are running, it cannot limit every security risk for you.

Most providers will explain to you (multiple times!) that there is a shared-responsibility model, and publish an up-to-date diagram showing exactly where the boundary between their responsibility and yours sits.

 

Data Centre Security

Another big question that is commonly asked is, “What makes a cloud provider’s data centre more secure than my own data centre?”. To answer it, we first need to understand what tier our current data centre is and compare that to a cloud provider’s.

Data centres are commonly classified by “tier”, a ranking of their level of service. The TIA-942 standard came into existence back in 2005 from the Telecommunications Industry Association, while the four-tier classification itself was developed by the Uptime Institute. The two are maintained separately but use similar criteria. There are four tier rankings (I, II, III, and IV), and each tier reflects the physical power and cooling infrastructure, the level of redundancy, and the promised uptime.

 

Tier I
A Tier I data center is the simplest of the four tiers, offering little (if any) redundancy and making no real promise of maximum uptime:

  • Single path for power and cooling to the server equipment, with no redundant components.
  • Typically lacks features seen in larger data centers, such as a backup cooling system or generator.

Expected uptime levels of 99.671% (1,729 minutes of annual downtime)

Tier II
The next level up, a Tier II data center has more measures and infrastructure in place, making it less susceptible to unplanned downtime than a Tier I data center:

  • Will typically have a single path for both power and cooling, but will utilise some redundant components.
  • These data centers will have some backup elements, such as a backup cooling system and/or a generator.

Expected uptime levels of 99.741% (1,361 minutes of annual downtime)

Tier III
In addition to meeting the requirements for both Tier I and Tier II, a Tier III data center is required to have a more sophisticated infrastructure that allows for greater redundancy and higher uptime:

  • Multiple power and cooling distribution paths to the server equipment. The equipment is served by one distribution path, but in the event that path fails, another takes over as a failover.
  • Multiple power sources for all IT equipment.
  • Specific procedures in place that allow for maintenance/updates to be done in the data center, without causing downtime.

Expected uptime levels of 99.982% (95 minutes of annual downtime)

Tier IV
At the top level, a Tier IV ranking represents a data centre that has the infrastructure, capacity, and processes in place to provide a truly maximum level of uptime:

  • Fully meets all requirements for Tiers I, II, and III.
  • Infrastructure that is fully fault tolerant, meaning it can function as normal, even in the event of one or more equipment failures.
  • Redundancy in everything: Multiple cooling units, backup generators, power sources, chillers, etc. If one piece of equipment fails, another can start up and replace its output instantaneously.

Expected uptime levels of 99.995% (26 minutes of annual downtime)
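
The downtime figures quoted for each tier follow directly from the uptime percentages. A quick sketch of the arithmetic in plain Python (taking a year as 365 days):

```python
# Annual downtime implied by each tier's promised uptime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

tiers = {"I": 99.671, "II": 99.741, "III": 99.982, "IV": 99.995}

for tier, uptime_pct in tiers.items():
    downtime = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"Tier {tier}: {uptime_pct}% uptime ~ {downtime:.0f} minutes of downtime per year")
```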

Now that we understand the tier level, where does your data centre fit?

For AWS, Azure, and GCP, the tier classification is not really relevant at such a large scale, because none of them follows the TIA-942 or Uptime Institute standards. Each individual data centre would likely be classified as Tier IV, but since you can build the cloud to your own criteria, or to the needs of each application, it’s difficult to put it into a box. Once you add the vast number of services, availability zones, and multiple regions, you are well outside the scope of the Tier-X standards.
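
One reason the tier model fits public cloud so poorly is that you can replicate a workload across several availability zones, and the combined availability quickly exceeds what any single Tier IV facility promises. A rough sketch, under the simplifying assumption that zones fail independently (the per-zone availability figure is purely illustrative):

```python
# Rough sketch: availability of a workload replicated across independent zones.
# Assumes failures are independent, which is a simplification.
def combined_availability(single_zone: float, zones: int) -> float:
    """Probability that at least one zone is up."""
    return 1 - (1 - single_zone) ** zones

for n in (1, 2, 3):
    print(f"{n} zone(s): {combined_availability(0.9995, n):.6%} available")
```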

Don’t be a Security Idiot!

When it comes to security in the cloud, it all comes down to the end user. An end user is anyone with an internet connection or an internet-enabled device, and a good rule of thumb is to assume that anyone can be hacked and any device can be stolen. Security stems from the organisation and should be approached top-down: management must be on board with training and best practices, and follow them too.

Many organisations have no security policies in place, and those that do often haven’t updated them for years. The IT world changes by the hour, and someone is always willing to commit a crime against you or your organisation.


 

Considerations

YOU ARE the first line of defence! Know whether your data is stored securely using encryption, and whether backups are kept offsite or in an isolated location.

Common Sense

Complacency: Wireless devices are everywhere now, but does your organisation have a policy covering them? At least once a year (ideally more often), every employee should have to review the security policy.

Strong password policies: A typical password should be at least 16 characters long and mix special characters, lowercase and capital letters. Something like I<3Marino&MyDogs would take years to crack with current technology. Suggestion: don’t use this exact password!
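
If you would rather generate passwords than invent them, Python’s standard secrets module is designed for exactly this. A minimal sketch producing a 16-character password that mixes the character classes described above (the symbol set is an arbitrary choice):

```python
# Minimal sketch: generate a 16-character password using a
# cryptographically secure random source (Python standard library).
import secrets
import string

ALPHABET = string.ascii_lowercase + string.ascii_uppercase + string.digits + "!<>&#@%"

def generate_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```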

Multi-factor authentication: MFA combines “something you know” (like a password) with “something you have” (like a mobile phone). The idea has been around for a long time: a debit or credit card requires you to know the PIN and to have the card. You don’t want anyone taking your money, so why not use MFA to protect all your user data?
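
The “something you have” factor is usually a one-time code from an app on your phone. Under the hood that is just the TOTP algorithm (RFC 6238); the sketch below implements it with the Python standard library, using a placeholder base32 secret:

```python
# Minimal sketch of TOTP ("something you have"), per RFC 6238.
# Standard library only; the shared secret below is a placeholder.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Matches what an authenticator app configured with this secret would show.
print(totp("JBSWY3DPEHPK3PXP"))
```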

Security patches: WannaCry is a perfect example of what happens when people don’t apply security patches. Microsoft released a fix in March 2017, yet businesses in over 150 countries were still hit by the attack in May 2017. It could all have been avoided if security patching had been enforced. Always make sure your devices are up to date!

Surroundings: Situational awareness is key to staying safe, and knowing what is going on around you helps you avoid social engineering. Maybe you are waiting for a meeting at a local coffee shop and decide to get some work done first: you connect to an open Wi-Fi network and check your email. The person behind you is watching your screen and running a keylogger, so they know which sites you visited and what you typed. Keep your screen password protected and set it to lock after a short period of inactivity.

Report incidents: Say you receive a zip file from a prospective client, unzip it, see a .exe, open it without a second thought, and find your computer is now infected with malware or ransomware. The first thing to do is disconnect from the network or power the machine off. Then call IT, or message them from your mobile, and explain what has happened.

Education: The best way to prevent a security breach is to know what to look for and how to report incidents. Keep updated on new security trends and upcoming security vulnerabilities.

Reporting: Who do you report to if you notice or come into contact with a security issue? Know who to send reports to, whether it is IT staff or an email dedicated to incidents…

Encryption: Make sure that you are using HTTPS websites and that your data is encrypted both in transit and at rest.
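
In transit, HTTPS/TLS does the heavy lifting for you; at rest you either rely on the provider’s storage encryption or encrypt sensitive data yourself before storing it. A minimal sketch of the latter, assuming the third-party cryptography package is installed:

```python
# Minimal sketch: symmetric encryption of data at rest.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in a secrets manager, never in code
f = Fernet(key)

token = f.encrypt(b"customer record: account 12345")
print(token)                  # safe to write to disk or object storage
print(f.decrypt(token))       # plaintext comes back only with the key
```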

Above all, remember that public cloud security is shared with the platform. The cloud provider is responsible for the infrastructure and for physical security. Ultimately, YOU ARE responsible for securing everything else in the cloud.









The true cost of private cloud

CATEGORIES

Insights

I love cloud, so I personally see any step towards it as truly great. But all choices have consequences, and with this article I want to highlight those of moving to a private cloud.

A private cloud vendor can easily show you a calculation proving how much more cost-efficient their solution is compared to public cloud, and of course there are scenarios where that is true: if you have a set of applications that only require specific infrastructure services, and both the life cycle of those apps and your growth are known and predictable, then an on-premises hyper-converged solution with some private cloud features is probably cheaper, at least for a period of time.

However, the private cloud costs start to stack up over time…

Who does the R&D to maintain your private cloud stack (the software functionality that automates your infrastructure)? Take OpenStack, the VMware stack, or any other way of building the needed cloud functionality: you can easily end up with over 20 software components, each with its own life cycle, and any update needs to be tested against all the others. Vendors address this by packaging the components and doing the testing for you, but then you are tied to their cycles and version choices.

What does it cost to build additional services beyond basic IaaS features? Continuous delivery pipelines that integrate your favourite tools with your private cloud, automated provisioning of entire servers including database and application servers, PaaS features, machine learning features. And these are just the bare minimum.

Lifecycle costs

There are, of course, the costs of planning capacity and maintaining the buffer needed for growth. But a far bigger cost is the tech refresh cycle, with its unavoidable migrations and forced upgrades. Let’s assume you have a 4-year refresh cycle: capacity upgrades in years 3 and 4 are ridiculously expensive. What if the private cloud also had to have room for year-5 growth? How much does the maintenance extension for year 5 cost? The list goes on, but I have still never seen calculations that consider these unpleasant surprises, which are in most cases inevitable (who knows how much capacity we will need in 4 years!).
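
To make that concrete, here is a toy model of a 4-year refresh cycle. Every number in it (base hardware cost, growth rate, late-cycle upgrade premium, year-5 maintenance extension) is an illustrative assumption, not data:

```python
# Toy lifecycle-cost model with purely illustrative numbers.
# All parameters are assumptions for the sake of the argument.
BASE_CAPACITY_COST = 100_000             # year-0 hardware purchase
ANNUAL_GROWTH = 0.25                     # share of base capacity added each year
LATE_CYCLE_PREMIUM = {3: 1.6, 4: 2.0}    # upgrades cost more late in the cycle
YEAR5_MAINTENANCE_EXTENSION = 40_000     # keeping old hardware supported one more year

total = BASE_CAPACITY_COST
for year in range(1, 5):
    upgrade = BASE_CAPACITY_COST * ANNUAL_GROWTH
    upgrade *= LATE_CYCLE_PREMIUM.get(year, 1.0)
    total += upgrade

total += YEAR5_MAINTENANCE_EXTENSION
print(f"Illustrative 5-year private cloud spend: {total:,.0f}")
```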

Multiple cloud costs

There is no single cloud stack (hardware and software) that covers all your needs. The requirements of basic VM IaaS, SAP S/4HANA, and machine learning are very different on every level: as a basic example, ML requires GPUs while SAP requires a huge amount of memory. There is no practical way to deliver all of that from one cloud, so be prepared to build and manage multiple private clouds.

Consider all the costs of maintaining and developing your private cloud. Then consider the situation 4 years from now and the money you could potentially spend on it. Wouldn’t you be better off spending that money on modernising your applications and creating new digital innovation for your business and customers?

The cost of all this in public cloud? You pay only for consumption, not for development, maintenance and so on. The list goes on… Read more about public cloud cost clarity and optimisation in our blog post here.
