Security In The Public Cloud: Finding What Is Right For You

What many businesses still don’t realise is that security in the public cloud is a shared responsibility between the cloud provider and the customer.

Security concerns in the cloud pop up every now and then, especially when there has been a public breach of some sort. What many businesses still don’t realise is that security in the public cloud is a shared responsibility between the cloud provider and the customer. Unfortunately, 99% of these breaches are down to the customer, not the cloud provider. Some of these cases are due simply to the customer not having the competence to build a secure service in the public cloud.

Cloud Comes In Many Shapes And Sizes

  • Public cloud platforms like AWS, Azure and GCP
  • Mid-tier cloud providers
  • Local hosting provider offerings
  • SaaS providers with varying capabilities and services, from Office 365 to Dropbox

If the alternatives are your own data centre, a local hosting provider’s data centre or the public cloud, it’s worth building a pros and cons table and making your selection based on that.

Own data centre
 – Most responsibility
 – Competence varies
 – Variable processes
 – Large costs
 – However: most choice in tech

Local hosting provider
 – A lot of responsibility
 – Competence varies
 – Variable processes
 – Large costs
 – Some choice in tech

Public cloud
 – Least responsibility
 – Proven competence & investment
 – Fully automated with APIs
 – Consumption-based
 – However: least amount of choice in tech

Lack of competence is typical when a business ventures into the public cloud on their own, without a partner with expertise. Luckily:

  • Nordcloud has the most relevant certifications on all of the major cloud platforms
  • Nordcloud is ISO/IEC 27001 certified to ensure our own services’ security is appropriately addressed
  • Typically Nordcloud builds and operates customer environments to meet customer policies, guidelines and requirements

Security responsibility shifts towards the platform provider as you move up the service stack from IaaS through PaaS to SaaS. All major public cloud platform providers have proven security practices and hold many certifications, such as:

  • ISO/IEC 27001:2013, 27013, 27017:2015
  • PCI-DSS
  • SOC 1-3
  • FIPS 140-2
  • HIPAA
  • NIST

Gain The Full Benefits Of The Public Cloud

The more your cloud usage shifts towards the SaaS end of the spectrum, the fewer controls the business needs to build on its own. However, many existing applications were not built for the public cloud, so if an application is migrated as it is, similar controls need to be migrated with it. Here’s another opportunity to build a pros and cons table: migrating applications to the public cloud ‘as is’ vs modernising them.

‘As is’ migration
 – Less benefit from the cloud platform
 – IT-driven
 – But: you start the cloud journey early
 – Larger portfolio migration
 – Time to decommission old infra is fast

Modernise
 – Slower decommissioning
 – Individual modernisations
 – But: you can start your cloud-native journey
 – Use DevOps with improved productivity
 – You gain the most benefit from using cloud platforms

Another suggestion would be to draw out a priority table of your applications so that you gain the full benefits of the public cloud.

In any case, the baseline security, architecture and cloud platform services need to be created to fulfil the requirements set out in the company’s security policies, guidelines and instructions. For example (a small code sketch follows the list):

  • Appropriate access controls to data
  • Appropriate encryption controls based on policy/guideline statements matching the classification
  • Appropriate baseline security services, such as application level firewalls and intrusion detection and prevention services
  • Security Information and Event Management solution (SIEM)
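
As an illustration of codifying such baseline controls, here is a minimal sketch in Python with boto3. It assumes AWS is the target platform and uses a hypothetical bucket name; it enforces default encryption at rest and blocks public access on an S3 bucket.

import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"  # hypothetical bucket name

# Baseline control: encrypt objects at rest by default (SSE-KMS),
# matching a policy statement on data classification and encryption.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Baseline control: block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)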

The areas listed above should be placed into a roadmap or project with strong ownership to ensure that the platform evolves to meet the demands of applications at various stages in their cloud journey. Once the organisation and governance are in place, the application and cloud platform roadmaps can be aligned for smooth sailing into the cloud where appropriate, and the cloud-native security controls and services are available. Nordcloud’s cloud experts would be able to help you and your business out here.

Find out how Nordcloud helped Unidays become more confident in the security and scalability of their platform.









    Building operational resilience for FSI businesses


    A lot has been discussed and written about the recent TSB bank fiasco that saw its customers unable to access their banking services. The interesting thing is that the planned downtime was supposed to last only 4 hours but instead went on for almost 48 hours (12 times longer) for many of its customers, leaving them ranting and raving about how helpless they felt while the bank took its long, sweet time to sort out the mess!

    There are a few fundamental issues here, but the one I want to focus on is the fact that this isn’t the first time it has happened with TSB (it also occurred in April 2018) or within the Financial Services & Insurance (FSI) vertical – there have already been occasions when an IT systems meltdown happened unplanned, or a planned maintenance lasted 12 times longer than expected. At the same time, in a world where we talk about Robotic Process Automation (RPA) & Artificial Intelligence (AI) taking over everything that’s mundane, repeatable and quick to learn, do we really understand the problem well enough to find a solution to building an enterprise business that’s operationally resilient?

    Firstly, let’s understand why Operational Resilience (OpR) is important in FSI. A lack of it:

    • Threatens the viability of firms within FSI and causes instability in financial systems.
    • Introduces significant reputational & business risk within the ecosystem, hampering growth and confidence for all participants involved.
    • Hinders the ability of firms to prevent and respond to operational disruption.

    One way to get a grip on the OpR problem is to look at these incidents and happenings from the perspective of other participants, (apart from Banks and Financial Institutions (FI)) in the Financial Services ecosystem, i.e. Customers & Regulators. There have been numerous questions and concerns from both these participants but to keep it simple, the top three that we hear the most are:

    1. How can you not get it right after you have failed many times?
    2. How long is long enough to be unexpected?
    3. How can we reward failures?

    How can you not get it right after you have failed many times?

     “The true sign of intelligence is not knowledge but imagination” – Albert Einstein

    The point being, we need to apply new-age design thinking to age-old processes and do some right-brain activities to come up with out-of-the-box ideas. You might think it’s easier said than done, but going back to the principle of design thinking – empathising with the user – is a great place to start.

    Since FSI is such a heavily regulated sector, it needs to develop focus areas based on the end-to-end lifecycle of business services (i.e. inception, delivery & maintenance) that directly impact its market participants, profitability and risks. In the diagram below, I look at an example of a business service – “Retail Mortgages”.

    Looking specifically at this service, the FIs need to think how they can break it down (for OpR) into three key pillars – Focused Services, Technical Enhancement & Risk Management, followed by a framework that identifies, maps, assesses, tests & governs the whole mechanism in a periodic way.

    This is exactly where public clouds are such key enablers for this new-world design thinking, thanks to the flexibility, security, standardisation and resiliency they provide. As you introduce new business services into this framework, the communication and governance should be standardised and in line with your internal audit policies; only then will you be able to achieve true OpR by investing in all aspects of that service. This also helps you answer questions like ‘should we buy more capacity and IT staff for testing a CRM system, or should we improve the OpR of business-critical mortgage services?’

    [Diagram: Business architecture for an OpR framework in public clouds – the “Retail Mortgages” example]

    How long is long enough to be unexpected?

    When I ask some of my colleagues in financial services this question, their response is typically the equivalent of the answer you might get if you ask someone ‘how long is a piece of string?’, which frustrates me. Ultimately, you don’t know until you measure it. Once you have applied some design thinking to your business services, the next step is to measure & communicate them. With the public cloud there are numerous ways to develop an automated process, or introduce new tooling, that helps set up impact tolerances specific to your business service. You can run stress tests, simulate numerous operational scenarios and report back on a management dashboard (without significant capital expenditure), present it to your company board and internal audit teams (to ensure alignment), and keep it ready for your external auditors when they ask for it. You can therefore measure your OpR and predict expected downtimes far more accurately, rather than running into long and unexpected outages.
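
    To make this concrete, here is a minimal sketch in Python (with hypothetical service names and figures) of the kind of check such tooling performs: comparing measured downtime per business service against its defined impact tolerance and flagging breaches for the dashboard.

    from dataclasses import dataclass

    @dataclass
    class BusinessService:
        name: str
        impact_tolerance_minutes: float   # maximum tolerable disruption per incident
        measured_downtime_minutes: float  # e.g. from monitoring or a stress-test scenario

    # Hypothetical figures for illustration only.
    services = [
        BusinessService("Retail mortgages", impact_tolerance_minutes=240, measured_downtime_minutes=95),
        BusinessService("Online banking login", impact_tolerance_minutes=60, measured_downtime_minutes=130),
    ]

    for s in services:
        breached = s.measured_downtime_minutes > s.impact_tolerance_minutes
        status = "BREACHED tolerance" if breached else "within tolerance"
        print(f"{s.name}: {s.measured_downtime_minutes:.0f} min vs {s.impact_tolerance_minutes:.0f} min -> {status}")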

    How can we reward failures?

    This is a bit of a grey area, so it is important to get straight to the point: financial institutions put aside a lot of money for paying fines to regulators and compensating customers for loss of service and confidence, while at the same time giving big pay-outs to staff for achieving irrelevant goals that are not in line with business focus areas. To fix this, the overall governance framework needs to set goals specific to selected business services, empower staff with the right tools to run and manage operations, and have board-level oversight that measures goals through open, fair and transparent metrics which look not only at internal participants but also at the interests of market participants like customers and regulators. With this mind-shift, and by moving to the public cloud, financial institutions can lower the compensation paid out for failures and invest where they really need to.

    In a nutshell, to build an operationally resilient public cloud infrastructure:

    1. Focus on business services in order of highest priority based on your organisational goals. You will notice that by doing so, investments go to the right places, in the right detail, in a timely and systematic way, and failures only make you better, not miserable.
    2. Set-up operational metrics and impact tolerances that will be collected and reported to measure your operational resiliency. Use tooling and automation offered within the public cloud to improve governance & actionability within your organisation.
    3. Manage business risk through goal setting and empower your teams with the right tools and transparent processes.

    It’s high time financial institutions stopped ‘putting the cart before the horse’, or doing the same thing over and over again while expecting a different result. It’s time to re-think, to re-imagine, to re-invent and to re-organise the mess by embracing the public cloud and delivering what your customers really expect, working with market leaders like Nordcloud.

    An FSI-ready, high-grade offering from Nordcloud

    Nordcloud offers full-stack cloud services, from enablement and governance to migration and business service operations. Within FSI we have designed specific frameworks that comply with regulatory standards and can be adopted out of the box with bespoke configuration, letting you focus on your core business while we take care of everything else.

    Contact us here to learn more about how to build an operationally resilient business.

    Cloud computing is on the rise in the financial services – are you ready?

    Download our free white paper Compliance in the cloud: How to embrace the cloud with confidence, where we outline some of the many benefits that the cloud can offer, such as:

    • Lowered costs
    • Scalability and agility
    • Better customer insights
    • Tighter security

    Download white paper

















        Cloud security: Don’t be a security idiot


        The cloud has some great advantages: you can store large amounts of data and pay only for what you use, without buying it all upfront, and you can draw on hundreds of different services and APIs offered by a cloud provider.

        We commonly hear that security is a major step when moving to the cloud, but we actually see quite the opposite. By the time a lift-and-shift or a refactor approach gets completed, the organisation has already invested so much that they need the system up and running. Studies show that the movement to public cloud computing is not going to decrease anytime soon, but will increase by 100 billion USD. With this increase, expect growth not only in security breaches but in attacks as well.

         

        Cloud Security Breaches & Attacks

        In today’s digital world, data is the new currency. Attackers had a massive impact on businesses with the ransomware outbreaks like WannaCry and Petya, and with the increase of attacks and poor security standards, everyone and everything is vulnerable.

        It might be easy to think we are all part of some sort of Darwin experiment, because the same things keep happening across the industry. Budget cuts and time-to-market pressures both work against security. As a society, we have our security methods back to front and upside down, and we forget that the internet is relatively young.

        We see it time and time again: organisations deploying workloads without following security best practices. For example, back in October 2017 Accenture left an S3 bucket open to the world. This was later discovered publicly, but the biggest issue was that the content inside the S3 bucket included a list of passwords and AWS KMS (Key Management Service) keys. It is unknown whether the keys were used maliciously, but Accenture are not the first, nor will they be the last, to let this slip.

        Later in November, a programmer at DXC pushed code to GitHub. Without thinking, this individual failed to realise that the code had AWS keys hard-coded into it. It took 4 days before this was discovered, and in the meantime over 244 virtual machines were created, costing the company a whopping 64,000 USD.
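
        The simplest mitigation is to keep credentials out of the code entirely. A minimal sketch in Python with boto3 (the bucket name is a hypothetical placeholder): when the code runs on AWS with an IAM role attached, or picks up credentials from the environment, the SDK resolves them at runtime and nothing secret needs to live in the repository.

        import boto3

        # No access keys in the code: boto3 uses the default credential chain
        # (IAM instance/task role, environment variables or the shared credentials file).
        s3 = boto3.client("s3")

        # Example call; "example-bucket" is a hypothetical placeholder.
        for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
            print(obj["Key"])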

        Sometimes you can’t control the security issues, but that doesn’t mean you shouldn’t worry about them. A chip security flaw, named Meltdown and Spectre, was announced to the public at the beginning of 2018 by a team called Google Project Zero. The flaw affected all Intel processors and attacked at the kernel level.

        This meant that someone with that knowledge could theoretically create a virtual machine on any public cloud and view the data inside the kernel level of all the virtual machines on that bare-metal server. Most companies patched this back in the fall of 2017, but not everyone keeps the OS layer up to date with the latest security patches.

        UPDATE: Intel has since announced that not every CPU can be patched.
        UPDATE: New Variation

         

        Shared Responsibility

        Cloud providers are paying close attention to security risks, but they all have a shared-responsibility model. What this means is that the customer is 100 per cent accountable for securing what they run in the cloud. As the cloud provider doesn’t know the workload being used, they can’t limit all security risks. What the provider guarantees is the security of their data centres and, usually, of the software used to provide the APIs you need to create resources in the cloud.

        Most providers will explain to you (multiple times!) that there is a shared-responsibility model, and they publish the most up-to-date version of it in their documentation.

         

        Data Centre Security

        Another big question that is commonly asked is, “What makes the cloud provider’s data centre more secure than my own data centre?”. To answer this, we first need to understand data centre tiers and then compare our own data centre to a cloud provider’s.

        Data centres are often classified by “Tier”, a measure of their level of service. The TIA-942 standard came into existence back in 2005 from the Telecommunications Industry Association, while the four tiers themselves were developed by the Uptime Institute. Both are maintained separately but use similar criteria. There are four tier rankings (I, II, III and IV), and each tier reflects the physical, cooling and power infrastructure, the level of redundancy, and the promised uptime.

         

        Tier I
        A Tier I data center is the simplest of the 4 tiers, offering little (if any) levels of redundancy, and not really aiming to promise a maximum level of uptime:

        • Single path for power and cooling to the server equipment, with no redundant components.
        • Typically lacks features seen in larger data centers, such as a backup cooling system or generator.

        Expected uptime levels of 99.671% (1,729 minutes of annual downtime)

        Tier II
        The next level up, a Tier II data center has more measures and infrastructure in place that ensure it is not as susceptible to unplanned downtime as a Tier 1 data center:

        • Will typically have a single path for both power and cooling, but will utilise some redundant components.
        • These data centers will have some backup elements, such as a backup cooling system and/or a generator.

        Expected uptime levels of 99.741% (1,361 minutes of annual downtime)

        Tier III
        In addition to meeting the requirements for both Tier I and Tier II, a Tier III data center is required to have a more sophisticated infrastructure that allows for greater redundancy and higher uptime:

        • Multiple power and cooling distribution paths to the server equipment. The equipment is served by one distribution path, but in the event that path fails, another takes over as a failover.
        • Multiple power sources for all IT equipment.
        • Specific procedures in place that allow for maintenance/updates to be done in the data center, without causing downtime.

        Expected uptime levels of 99.982% (95 minutes of annual downtime)

        Tier IV
        At the top level, a Tier IV ranking represents a data centre that has the infrastructure, capacity, and processes in place to provide a truly maximum level of uptime:

        • Fully meets all requirements for Tiers I, II, and III.
        • Infrastructure that is fully fault tolerant, meaning it can function as normal, even in the event of one or more equipment failures.
        • Redundancy in everything: Multiple cooling units, backup generators, power sources, chillers, etc. If one piece of equipment fails, another can start up and replace its output instantaneously.

        Expected uptime levels of 99.995% (26 minutes of annual downtime)
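
        The downtime figures quoted for each tier follow directly from the uptime percentages; a quick back-of-the-envelope check in Python:

        # Annual downtime implied by each tier's promised uptime (365-day year).
        MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

        for tier, uptime in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
            downtime = (1 - uptime / 100) * MINUTES_PER_YEAR
            print(f"Tier {tier}: {uptime}% uptime -> ~{downtime:,.0f} minutes of downtime per year")
        # Prints roughly 1,729 / 1,361 / 95 / 26 minutes, matching the figures above.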

        Now that we understand the tier level, where does your data centre fit?

        For AWS, Azure and GCP, the data centre tier is less relevant at such a large scale, because none of them follows the TIA-942 or Uptime Institute standards. Each data centre would be classified as Tier IV, but since you can build the cloud to your own criteria, or design it per application, it’s difficult to put it into a box. Once you add the vast number of services, availability zones and multi-region options, it falls outside the scope of the Tier-X standards.

        Don’t be a Security Idiot!

        When it comes to security in the cloud, it all comes down to the end user. An end user is anyone with an internet connection or an internet-enabled device, and a good rule of thumb is to assume that anyone can be hacked and any device can be stolen. Everything stems from the organisation and should be looked at with a top-down approach: management must be on board with training and best practices when dealing with security.

        Most organisations do not have security policies in place, and the ones who do haven’t updated them for years. The IT world changes every few hours and someone is always willing to commit a crime against you or your organisation.


         

        Considerations

        YOU ARE the first line of defence! Know whether your data is stored securely using encryption, and whether backups are stored offsite or in an isolated location.

        Common Sense

        Complacency: Wireless devices are common now, but does your organisation have a policy about them? All of your employees should review the security policy at least once a year.

        Strong Password policies: A typical password should be 16 characters long and consist of special characters, lowercase and capital letters. Something like: I<3Marino&MyDogs (This password would take years to crack with current technology). Suggestion: don’t use this exact password!
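
        As a rough illustration of that policy, here is a small sketch in Python (a toy check only; a real policy should also favour length and screen passwords against breached-password lists):

        import string

        def meets_policy(password: str, min_length: int = 16) -> bool:
            """Check the simple policy above: length plus lowercase, uppercase and special characters."""
            return (
                len(password) >= min_length
                and any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c in string.punctuation for c in password)
            )

        print(meets_policy("I<3Marino&MyDogs"))  # True
        print(meets_policy("password123"))       # False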

        Multi-Factor Authentication: Multi-factor authentication means “something you know” (like a password) and “something you have”, which can be an object like a mobile phone. MFA has been around a long time: when you use a debit/credit card, you need to know the PIN code and have the card. You don’t want anyone taking your money, so why not use MFA to protect all your user accounts and data?

        Security Patches: WannaCry is a perfect example of what happens when people don’t apply security patches. Microsoft released a fix in March 2017, but still, 150 countries and thousands of businesses were hit by the attack later that year. This could all have been avoided if security patches had been enforced. Always make sure your device is updated!

        Surroundings: Situational awareness is key to staying safe. Knowing what is going on around you can help avoid social engineering. Maybe you are waiting for a meeting at a local coffee shop and decide to work a little before the meeting. The first thing you do is connect to an Open Wi-Fi and then you check your email. The person behind you is watching what you are doing and also has a keylogger running. They know what website you went to and what you typed in. Keep your screensaver password protected and locked after so many seconds of inactivity.

        Report incidents: Say you are checking your email and receive a zip file from a prospective client. You unzip the file, see a .exe, and think no more of it. You open the .exe and find out that your computer is now infected with malware or ransomware. The first thing you should do is disconnect from the network or turn off your computer. Then call or use your mobile to send a message to IT and explain what has happened.

        Education: The best way to prevent a security breach is to know what to look for and how to report incidents. Keep updated on new security trends and upcoming security vulnerabilities.

        Reporting: Who do you report to if you notice or come into contact with a security issue? Know who to send reports to, whether it is IT staff or an email dedicated to incidents…

        Encryption: Make sure that you are using HTTPS websites and that your data is encrypted both during transit and at rest.

        Most of all, when it comes to public cloud security, you share responsibility with the platform. The cloud platform is responsible for the infrastructure and for physical security. Ultimately, YOU ARE responsible for securing everything else in the cloud.









          Container security: How it differs from the traditional approach


          Containerisation in the industry is rapidly evolving

           

          No, not shipping containers, but cloud containers. Fortune 500 organisations use containers because they provide portability, simple scalability and isolation. Linux distros have long been the only option, but this has since changed: Microsoft now supports Windows-based containers on Windows Server 2016, running on Server Core or Nano Server. Yet even with so many organisations using containers, we still see a lot of them reverting to the security model they used for traditional VMs.

           

          If you already know anything about containers, then you probably know about Kubernetes, Docker, Mesos and CoreOS. Security measures still need to be applied to all of them, so this is always a good topic for discussion.

           

           

          Hardened container image security

          Hardened container image security comes to mind first, because it determines how the image is deployed and whether there are any vulnerabilities in the base image. A best practice is to create a custom container image so that your organisation knows exactly what is being deployed.

          Developers or software vendors should know every library installed and the vulnerabilities of those libraries. There are a lot of them, but try to focus on the host OS, the container dependencies and, most of all, the application code. Application code is one of the biggest vulnerabilities, but practising DevOps can help prevent this. Reviewing your code for security vulnerabilities before committing it to production costs time, but saves a lot of money if best practices are followed. It is also a good idea to follow security blogs like Google Project Zero and to use fuzz testing to find vulnerabilities.

          Infrastructure security

          Infrastructure security is a broad subject because it means identity management, logging, networking, and encryption.

          Controlling access to resources should be at the top of everyone’s list. Following the best practice of granting least privilege is just as key here as in a traditional approach. Role-Based Access Control (RBAC) is one of the most common methods used: RBAC restricts system access to authorised users only. The traditional method was to grant access through a wide range of broad security policies, but now finely-tuned roles can be used.
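
          As an example of what a finely-tuned role can look like, here is a sketch using the Kubernetes Python client (the namespace and role name are hypothetical); it creates a Role limited to read-only access on Pods in a single namespace.

          from kubernetes import client, config

          config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

          # A narrowly scoped Role: read-only access to Pods in one namespace.
          role = client.V1Role(
              metadata=client.V1ObjectMeta(name="pod-reader", namespace="demo"),
              rules=[client.V1PolicyRule(
                  api_groups=[""],          # "" is the core API group
                  resources=["pods"],
                  verbs=["get", "list", "watch"],
              )],
          )

          client.RbacAuthorizationV1Api().create_namespaced_role(namespace="demo", body=role)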

          Logging at the infrastructure layer is a must-have best practice. Audit logging using cloud vendor services such as AWS CloudWatch, AWS CloudTrail, Azure OMS and Google Stackdriver will allow you to measure trends and find abnormal behaviour.
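
          For example, here is a sketch in Python with boto3 that scans a hypothetical CloudWatch Logs group for access-denied events, one simple signal of abnormal behaviour:

          import boto3

          logs = boto3.client("logs")

          # "/eks/demo-cluster/application" is a hypothetical log group name.
          response = logs.filter_log_events(
              logGroupName="/eks/demo-cluster/application",
              filterPattern='"AccessDenied"',   # flag denied calls as potentially abnormal behaviour
              limit=50,
          )
          for event in response["events"]:
              print(event["timestamp"], event["message"][:120])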

          Networking is commonly overlooked because it is sometimes treated as a magic unicorn. Understanding how traffic flows in and out of the containers is where the need for security truly starts. Networking theory makes this complicated, but understanding the underlying tools such as firewalls, proxies and cloud-native services like Security Groups lets you direct or restrict traffic to the correct endpoints. With Kubernetes, private clusters can be used to keep traffic secure.

          How does the container store secrets? This is a question that your organization should ask when encrypting data at rest or throughout the OSI model.

           

          Runtime security

          Runtime security is often overlooked, but making sure that a team can detect and respond to security threats while a container is running shouldn’t be. The team should monitor abnormal behaviours like network calls, API calls and even login attempts. If a threat is detected, what are the mitigation steps for that pod? Isolating the container on a different network, restarting it, or stopping it until the threat can be identified are all ways to mitigate. Another overlooked area of runtime security is OS logging. Keeping the logs inside an encrypted, read-only directory will limit tampering, but of course someone will still have to sift through the logs looking for abnormal behaviour.

          Whenever security is discussed, a shared-responsibility diagram like the one described earlier is commonly shown. When it comes to security, it is ultimately the organisation’s responsibility to keep the application, data, identity and access control secured. Cloud providers do not prevent malicious attackers from attacking the application or the data. If untrusted libraries are used, or access is misconfigured inside or around the containers, then everything falls back on the organisation.

           Check also my blog post Containers on AWS: a quick guide









            Is it safe to use AWS S3 after all the news about data breaches?


            There has recently been a lot of news about data breaches on AWS S3 (Simple Storage Service). Sensitive data, passwords and access credentials have been exposed to the whole world.

            For many, this might have led to the assumption that S3 itself would be insecure and it would be better to avoid using it. The truth is quite the opposite. S3 is totally suitable for storing even sensitive data. As in most cases, the S3 data breaches happened because of human error and misconfiguration, not because of security issues in the service itself.

            What is S3 and how do data leaks happen?

            So, let’s rewind a bit to get to the bottom of this. What is S3? It’s a managed, highly available and highly scalable object storage which is used over an API. Typically, you access this API with secure credentials created for an AWS user. You create “Buckets” and store your objects (files) inside these buckets. You don’t provision any storage beforehand; you just use as much as you like and pay for what you use. S3 was one of the first services introduced by AWS over 10 years ago and has been truly battle-tested on performance, security and availability. It’s also one of the backbone services of AWS and is widely used by other AWS services too.

            So how do data leaks happen? The simple reason is that you can make your buckets or single objects inside a bucket public. This means that anyone with the correct URL can access that object. This is a very useful feature for sharing files to your users and it is widely used to deliver the static content of web applications. But no data inside S3 is ever public by default. You need to separately enable this.

            There are multiple ways to make objects public at bucket and object level, including bucket policies, bucket ACLs and object ACLs. This can be confusing, but luckily AWS has recently introduced a very good indication in the Management Console of what data is public and why. It takes some effort, and a lack of understanding, to get this wrong if you make use of this information. In addition, AWS services like AWS Config and Trusted Advisor can give you reports on your publicly open buckets; a small sketch of such a check follows.
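
            For instance, here is a minimal sketch in Python with boto3 that flags buckets whose ACL grants access to everyone (it does not inspect bucket policies, which would also need checking):

            import boto3

            s3 = boto3.client("s3")
            PUBLIC_GROUPS = {
                "http://acs.amazonaws.com/groups/global/AllUsers",
                "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
            }

            # Flag buckets whose ACL grants access to everyone (or to any authenticated AWS user).
            for bucket in s3.list_buckets()["Buckets"]:
                name = bucket["Name"]
                grants = s3.get_bucket_acl(Bucket=name)["Grants"]
                is_public = any(grant["Grantee"].get("URI") in PUBLIC_GROUPS for grant in grants)
                print(f"{name}: publicly accessible via ACL = {is_public}")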

            Why do data leaks happen then?

            There are a few typical explanations for this:

            1. The main reason is the lack of governance in the organisation. Governance and standards should be in place to ensure that best practices of the platform, as well as company cloud policies, are followed. This includes access management of S3 buckets.
            2. Without proper AWS knowledge, developers or operators don’t understand how S3 works. They might open the buckets for public access just to be able to access the data from an application that could use access credentials instead. They might not understand that “public” means “public”. It requires understanding of AWS Identity and Access Management together with IAM policies and Bucket policies to get this right.
            3. It might be that some “convenient” pre-created S3 Bucket is used for multiple different types of data including sensitive data and the bucket is exposed publicly for the original use case. Again, it comes down to understanding how S3 works.
            4. Some 3rd party tools that upload files to S3 might have default or optional settings that make objects public with an object ACL during upload. In some cases these tools are used and that’s the reason for the public access. Again, understanding how S3 and AWS in general work would mitigate this.

            Understand the AWS platform

            To recap, it all comes down to basic training and understanding of the AWS platform. And this is not limited to S3. In some cases, services are left publicly accessible because AWS networking, firewall and access management concepts are not understood correctly. It might be that you don’t have proper authentication settings in place in the actual AWS accounts. Or it might just be that general security principles, like a proper patching plan, are not followed.

            There’s a lot to learn when starting a cloud journey and a proper cloud foundation must be built for networking and security together with educating people on how to use the services. Luckily, we are here to help you out with all of that!









              Design AWS API Access with Care – Case Onelogin


              The recent Onelogin breach, described in their official blog post, highlighted to us that account access permissions can be devastating if not designed right.

              From KrebsOnSecurity:

              “Our review has shown that a threat actor obtained access to a set of AWS keys and used them to access the AWS API from an intermediate host with another, smaller service provider in the US. Evidence shows the attack started on May 31, 2017 around 2 am PST. Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance. OneLogin staff was alerted of unusual database activity around 9 am PST and within minutes shut down the affected instance as well as the AWS keys that were used to create it.”

              There are several ways to minimize the risk of a breach like this:

              • EC2 role-based access instead of API keys for services
              • SAML Identity and Access permissions for users
              • CLI IAM roles with Multi-factor authentication enforced for developers

              The simplest way of minimising the risks is to avoid creating API keys for any application or user if possible. In AWS, you can create roles which can be applied to most services, such as EC2 instances and Lambda functions. The role-based permission model lets you specify least privilege with conditions. For example, all API calls have to originate from specified AWS accounts only. The access keys used for these API calls are temporary and only valid for a certain time. The nice thing for an AWS user is that this is transparent and you don’t have to manage the key rotation at all.

              If your organisation currently has a user directory service, you can use SAML-based federation for the users within your organisation. They will log in to your organisation’s internal portal and end up at the AWS Management Console without ever having to supply any AWS credentials. There are also third-party SAML solution providers which can be used to enable this.

              Sometimes it might make more sense to manage the users within AWS. In that case, make sure that you delegate access across AWS accounts using IAM roles, with multi-factor authentication enforced in the access policy document. Multi-factor authentication (or two-factor authentication) is a simple best practice that adds an extra layer of protection on top of your username and password. With MFA enabled, when a user signs in or wants to use an API key, they will need to provide their access key and secret key (the first factor: what they know), as well as an authentication code from their MFA device (the second factor: what they have). Taken together, these multiple factors provide increased security for your AWS resources.
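
              At the API level, assuming such an MFA-protected role looks roughly like this sketch in Python with boto3 (the role and MFA device ARNs are the same placeholders used in the configuration below):

              import boto3

              sts = boto3.client("sts")

              # Assume an MFA-protected role and receive temporary credentials.
              credentials = sts.assume_role(
                  RoleArn="arn:aws:iam::xxxxx:role/Developer",
                  RoleSessionName="dev-session",
                  SerialNumber="arn:aws:iam::xxxxx:mfa/foo@bar.com",
                  TokenCode="123456",        # the 6-digit code from the MFA device
                  DurationSeconds=3600,      # credentials expire after one hour
              )["Credentials"]

              # Use the temporary credentials for subsequent calls.
              ec2 = boto3.client(
                  "ec2",
                  aws_access_key_id=credentials["AccessKeyId"],
                  aws_secret_access_key=credentials["SecretAccessKey"],
                  aws_session_token=credentials["SessionToken"],
              )
              print(len(ec2.describe_instances()["Reservations"]), "reservations visible")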

              With MFA enabled you will need to specify an AWS CLI configuration with the information about your MFA device. See this simple example for inspiration:

              [profile loginaccount]
              region = eu-west-1

              [profile dev]
              region = eu-west-1
              source_profile = loginaccount
              role_arn = arn:aws:iam::xxxxx:role/Developer
              mfa_serial = arn:aws:iam::xxxxx:mfa/foo@bar.com

              [profile prod]
              region = eu-west-1
              source_profile = loginaccount
              role_arn = arn:aws:iam::xxxxx:role/Developer
              mfa_serial = arn:aws:iam::xxxxx:mfa/foo@bar.com

              Then in the credentials file you have:

              [loginaccount]
              aws_access_key_id = AKXXXXXXXXXX
              aws_secret_access_key = SUPERSECRETKEY

              The only IAM user within the organisation is the one in the login account. To switch to another account, specify the profile with the AWS CLI like this:

              $ aws ec2 describe-instances --profile dev

              or export the profile as an environment variable if you always want to use a specific account:

              $ export AWS_DEFAULT_PROFILE=dev

              Limes is an application that creates an easy workflow with MFA protected roles, temporary credentials and access to multiple roles/accounts.

              Have a look if you have a lot of accounts or roles to manage.

              It can be really hard to discover a breach. Onelogin were lucky that they saw the unusual database activity as fast as they did. Tests have shown that it takes between 2 and 15 minutes before someone starts using AWS API keys pushed to a public GitHub repository. AWS themselves do look for published API keys on GitHub, but it might take up to 24 hours before they notify you of their findings.

              An audit and security account is one solution to this problem. The audit account should have access to all CloudTrail logs and constantly monitor for any unusual activity, such as the creation of IAM users.
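
              As an illustration of that kind of monitoring, here is a sketch in Python with boto3 that looks up recent IAM user creation events in CloudTrail (in practice this would run continuously in the audit account and feed an alerting pipeline):

              import boto3
              from datetime import datetime, timedelta, timezone

              cloudtrail = boto3.client("cloudtrail")

              # Look for IAM user creation events in the last 24 hours.
              events = cloudtrail.lookup_events(
                  LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateUser"}],
                  StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
              )["Events"]

              for event in events:
                  print(event["EventTime"], event.get("Username"), event["EventName"])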

              Need help securing your cloud? Nordcloud can help you with all of the points above, as well as carrying out security audits and workshops. With your CloudTrail logs monitored 24/7 by Nordcloud Managed Services Monitoring, we can react within 30 minutes if an unauthorised API call is made.
