Nordcloud joins the Center for Internet Security (CIS®)

Nordcloud has joined the Center for Internet Security (CIS®) organisation as a CIS SecureSuite® Product Consultant member. Why?

Making the connected world safer

The Center for Internet Security (CIS®) makes the connected world a safer place for people, businesses, and governments through its core competencies of collaboration and innovation. The community-driven nonprofit organisation is responsible for the CIS Controls® and CIS Benchmarks™, which are globally recognised best practices for securing IT systems and data.

Nordcloud has been using CIS practices and security benchmarks for years, both in our Professional Services business and in our tooling delivery. We have helped many companies adopt CIS hardened images in their immutable cloud infrastructure workloads, while integrating the image factories into DevOps pipelines for faster security delivery.

As the leading cloud native business in Europe, we have helped various enterprises to secure their hyperscaler cloud platforms with CIS benchmarks using our range of Landing Zone products. 

An important step in accelerating digital transformation

Joining CIS was an important move for us, and ultimately we see it as a key step in Nordcloud’s journey to accelerate digital transformation through public cloud for our customers. We recognise CIS as an industry standard for benchmarking security controls, whether for the platform itself, delivered OS images or DevOps-related artefacts like containers.

Fundamentally, we are really looking forward to our technical workforce getting early access to upcoming benchmark updates and CIS CSAT tooling. We believe this will make our deliveries for customers even smoother and is a must-have for our security practice.


    Continuous cost savings practice at Nordcloud

    At Nordcloud, automation is in our DNA, be it in our infrastructure, development or even our internal processes.

    In this blog post we will share some of the best practices we use ourselves for cloud cost management, reducing internal costs across the AWS accounts we operate…and we have close to 500 of them.

    These accounts vary: some run production systems with our software, others run internal systems like VPNs and privacy proxies, and around 350 of them are so-called personal AWS accounts. As part of our approach to AWS management, each employee who wishes to test something in AWS can automatically get an account and experiment there – running EC2 instances, Lambda functions or EKS clusters.

    Our experienced team are all highly skilled cloud ninjas, for sure, but even they make human mistakes at times. This is where unnecessary cost can be generated. So how can you control this at such a massive scale?

    There are two ways: 

    Enforce rules up front, or audit what is running and act on the findings.

    We have adopted the middle ground, implementing both so we can take immediate action to reduce cloud cost in a fast and effective way. Our cost visibility tool Insight allows us to easily spot which accounts are generating significant cost, and if needed we can contact the people owning an account to do some clean-up. Insight also allows you to set budgets and get alerts.

    In addition, we are using homegrown automation to optimise the cloud and clean up unused resources every Thursday night – we call it ‘Black Thursday’. We then stop EC2 instances, delete old snapshots, remove unattached Elastic IPs and unattached volumes, and so on. The automation behind the scenes uses a serverless architecture and our home-built software. We get easy access to all accounts by mass-deploying a role in them with our Provisioner.
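
    To give a flavour of what such a rule can look like, here is a minimal sketch (not our production code) that deletes unattached EBS volumes using the AWS SDK for Go. Credentials and region are assumed to come from the environment, and pagination is omitted for brevity.

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        svc := ec2.New(session.Must(session.NewSession()))

        // Volumes in the "available" state exist but are not attached to any instance.
        out, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
            Filters: []*ec2.Filter{
                {Name: aws.String("status"), Values: []*string{aws.String("available")}},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        for _, v := range out.Volumes {
            id := aws.StringValue(v.VolumeId)
            log.Printf("deleting unattached volume %s", id)
            if _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: v.VolumeId}); err != nil {
                log.Printf("could not delete %s: %v", id, err)
            }
        }
    }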

    So how much have we actually saved in our IT infrastructure cost? 

    When we first executed the automation on all accounts, we managed to save around €12,000 per month. If we want to save more, we can adjust the rules accordingly and be less liberal about what we allow to run in the accounts.

    What about Azure and GCP?

    Our approach isn’t just about Amazon Web Services savings. At Nordcloud, our practice leads control cost with a similar automated approach; our solution suits a multi-cloud setup and can be used everywhere. Nordcloud helps customers adopt cost savings in a continuous manner via our FinOps and capacity services.

    The key learnings for us and our customers from this exercise are the following:

    • Cloud cost management should be a continuous process – not ad hoc
    • Automation is the key to reducing IT infrastructure cost
    • You won’t know how much you save if you have no cost visibility tool

    These key learnings map perfectly to our products and services: FinOps, Provisioner and Insight. 

    How can your business save significant costs?

    Find out more here.


      How to use Azure API from Go SDK

      In this post we will go through a basic example of how to use the Azure SDK in Go. The example program is pretty simple. First, it gets a list of all resource groups in an Azure subscription, then it iterates over all VMs within every resource group. And guess what… it does all of that using Go’s awesome concurrency (go go goroutines). Sounds pretty straightforward, but in fact the operations in the example should give you a good overview of how to use Azure’s SDKs and API in general.

      The program is located in the GitHub repository; you are free to clone it, fork it and work on it. You only need working go and dep installations.

      Authenticating

      Let’s start! Before we proceed, however, we first need to authenticate against the Azure API. For Azure this means creating a service principal account that our program will use to authenticate and assume a role with the permissions needed to execute API actions.

      To create a service principal, let’s use the Azure CLI, as shown below. The command will output an authentication file with the client id, client secret and a bunch of other information needed to connect to Azure. Remember to keep it secure!

      az ad sp create-for-rbac --sdk-auth > my.auth

      Now that we have the service principal created, clone the example repo:

      git clone git@github.com:nordcloud/azure-go-example.git

      The program is super simple and consists of one file, main.go. Before we proceed, however, we need to run dep ensure to vendor the Go SDK dependencies in our program directory.

      dep ensure

      The code 

      Let’s open the main.go file and see the main() method.

      What you can see here is the general flow of the example program. First, it authorises using the service principal identity we created in the previous steps, then it gets a list of all resource groups in the subscription, and finally, for every resource group, it lists all the VMs.
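
      For orientation, here is a rough sketch of what the main() method looks like (imports omitted; the exact code in the repository may differ slightly):

      func main() {
          var wg sync.WaitGroup

          sess, err := newSessionFromFile()
          if err != nil {
              log.Fatal(err)
          }

          // Get all resource groups in the subscription.
          groups, err := getGroups(sess)
          if err != nil {
              log.Fatal(err)
          }

          // For every resource group, list its VMs in a separate goroutine.
          for _, group := range groups {
              wg.Add(1)
              go getVM(sess, group, &wg)
          }
          wg.Wait()
      }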

      OK, let’s see what happens in the code, starting with the newSessionFromFile method. The interesting part is the line where we get the authorizer with the SDK’s NewAuthorizerFromFile method.

      func newSessionFromFile() (*AzureSession, error) {
          authorizer, err := auth.NewAuthorizerFromFile(azure.PublicCloud.ResourceManagerEndpoint)
          if err != nil {
              return nil, errors.Wrap(err, "Can't initialize authorizer")
          }

          authInfo, err := readJSON(os.Getenv("AZURE_AUTH_LOCATION"))
          if err != nil {
              return nil, errors.Wrap(err, "Can't get authinfo")
          }

          sess := AzureSession{
              SubscriptionID: (*authInfo)["subscriptionId"].(string),
              Authorizer:     authorizer,
          }

          return &sess, nil
      }
      
      

      This method assumes that the AZURE_AUTH_LOCATION env variable contains the path to the service principal auth file we created before. It reads the file and returns an authorizer that is later passed to the resource API clients. We pack the authorizer into an AzureSession struct, along with the subscription id, which we read from the same auth file with the readJSON() method.
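
      For reference, the AzureSession struct and the readJSON() helper can be as simple as the following sketch (the repository version may differ):

      // AzureSession groups what every API client needs: the subscription id and an authorizer.
      type AzureSession struct {
          SubscriptionID string
          Authorizer     autorest.Authorizer
      }

      // readJSON reads the auth file and unmarshals it into a generic map,
      // so that fields such as "subscriptionId" can be looked up by name.
      func readJSON(path string) (*map[string]interface{}, error) {
          data, err := ioutil.ReadFile(path)
          if err != nil {
              return nil, errors.Wrap(err, "Can't read the auth file")
          }
          contents := make(map[string]interface{})
          if err := json.Unmarshal(data, &contents); err != nil {
              return nil, errors.Wrap(err, "Can't unmarshal the auth file")
          }
          return &contents, nil
      }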

      Let’s go back to the main() method. Now that we have a working session, we need to get the list of resource groups. For that, let’s look at the getGroups method. It takes a session as an argument and creates a new client for the groups API. The client is given the authorizer we created in the previous step.

      grClient := resources.NewGroupsClient(sess.SubscriptionID)

      This pattern, where you create a client and execute its methods (typically List, Get, CreateOrUpdate), is the same for all resources in the SDK. Once you get the knack of it, you will use the Azure API in Go without looking at the docs.

      To get the resource group list, we iterate over all resource groups returned by the ListComplete method of the groups client and add their names to a list.

      Azure SDKs are auto-generated and closely mirror the REST API. Each resource type has the same methods, such as CreateOrUpdate, List and so on. You can see the API description here; the methods described there map to SDK methods, and the return types map to Go structs.

      for list, err := grClient.ListComplete(context.Background(), "", nil); list.NotDone(); err = list.Next() {
          if err != nil {
              return nil, errors.Wrap(err, "error traversing RG list")
          }
          rgName := *list.Value().Name
          tab = append(tab, rgName)
      }
      
      

      OK, so far we have authorised ourselves and got a list of resource groups. Now, for every group, we will list all the VMs in it. Moreover, we will do this concurrently using goroutines! The for loop in the main() method iterates over the resource groups returned by getGroups and, for every rg returned, runs a concurrent goroutine using the go keyword.

      go getVM(sess, group, &wg)
      
      

      The goroutine is implemented in the getVM method. The method does similar stuff to getGroups: it creates a virtual machines client (NewVirtualMachinesClient) and iterates over all VMs, printing them.

      for vm, err := vmClient.ListComplete(context.Background(), rg); vm.NotDone(); err = vm.Next() {
          if err != nil {
              log.Print("got error while traversing VM list: ", err)
          }
          i := vm.Value()
          fmt.Printf("(%s) VM %s\n", rg, *i.Name)
      }
      
      

      The main thread waits for all concurrent goroutines to terminate using the WaitGroup primitive and its Wait() method, which implements a barrier that waits for all goroutines to finish. You can read more about WaitGroups and synchronisation here.
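
      Putting the pieces together, getVM looks roughly like this sketch – note the deferred wg.Done() call, which tells the WaitGroup that this goroutine has finished:

      func getVM(sess *AzureSession, rg string, wg *sync.WaitGroup) {
          defer wg.Done()

          vmClient := compute.NewVirtualMachinesClient(sess.SubscriptionID)
          vmClient.Authorizer = sess.Authorizer

          // Iterate over all VMs in the resource group, as shown above.
          for vm, err := vmClient.ListComplete(context.Background(), rg); vm.NotDone(); err = vm.Next() {
              if err != nil {
                  log.Print("got error while traversing VM list: ", err)
              }
              i := vm.Value()
              fmt.Printf("(%s) VM %s\n", rg, *i.Name)
          }
      }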

      Running the example

      Before we run the program, we first need to export the AZURE_AUTH_LOCATION variable with the path to the my.auth file with service principal information.

      export AZURE_AUTH_LOCATION=/path/to/my.auth
      go run main.go

      You should see something like this as an output:

      (rg1) VM ubuntu
      (rg1) VM ubuntutest
      (rg2) VM ubuntu2
      (rg2) VM ubuntu3
      (rg2) VM ubuntu5
      (rg2) VM ubuntu44
      (rg2) VM ubuntu333
      (rg2) VM ubuntu3
      (myGroup) VM windows
      (myGroup) VM Windas
      (testGroup233) VM vm-1-west
      (testGroup233) VM vm-2-west
      (testGroup233) VM vm-3-west
      (testGroup233) VM vm-4-west
      ...

      In the next post we will show you how to use GCP SDK from Go 🙂


        SSM parameter store: Keeping secret information structured

        AWS Systems Manager Parameter Store (SSM) provides you with a secure way to store config variables for your applications. You can access SSM via the AWS API directly from within the app or just use the AWS CLI, and it can store plaintext parameters or KMS-encrypted secure strings. Since parameters are identified by ARNs, you can set fine-grained access control to your configuration bits with IAM – a truly versatile service!

        Common use cases for SSM are storing configuration for Docker container initialisation at runtime, storing secrets for Lambda functions and apps, and even using SSM parameters in CloudFormation.

        Parameters

        You can set the parameters via AWS Console or CLI:

        aws ssm put-parameter --name "DB_NAME" --value "myDb"

        If you want to store a secure string parameter, you add the KMS key id and set the type to SecureString. Now your parameter will be stored encrypted, and you’ll be able to read it only if your IAM policy allows.

        aws ssm put-parameter --name "DB_PASSWORD" --value "secret123" --type SecureString --key-id 333be3e-fb33-333e-fb33-3333f7b33f3

        Mind that KMS limits apply here: a SecureString can’t be larger than 4,096 bytes in size.

        Getting parameters is also easy:

        aws ssm get-parameter --name "DB_NAME"

        If you want to get an encrypted one, add --with-decryption. SSM will automatically decrypt the parameter on the fly and you will get the plain text value.

        Versioning & Tagging

        One of the cool features of SSM parameters is that they are versioned; moreover, you can see who or what created each version. This way you can fix buggy apps or human mistakes, or at least blame the colleague who made the mistake ;).

        Parameters can also be tagged, which is a neat way to group and target resources based on common tag values.

        Paths

        Now for the juicy part. Parameters can be named either with a simple string or with a path. When you use paths, you introduce a hierarchy for your parameters. This makes it easy to group parameters by stage, app or whatever structure you can think of. SSM allows you to fetch parameters by path.

        Let’s say we have parameters:

        • /myapp/production/DB_NAME
        • /myapp/production/DB_PASSWORD
        • /myapp/production/DB_USERNAME

        In order to get all of them you would do this:

        aws ssm get-parameters-by-path --with-decryption --path /myapp/production

        This will produce a JSON array containing all of the parameters above. The parameters might be encrypted or plaintext; --with-decryption has no effect on plaintext parameters, so you’ll always get a list of plaintext values back.

        Docker Case Study

        Let’s go through a case study. If you have ever configured an app in a Docker container, you probably needed to give it some secret information, like a DB password or keys and tokens for external services.

        A Rails app is a good example. Here, DB information is stored in a file called database.yml residing in the app’s config directory. In Rails, you can populate the config file with environment variables, which are read when the server starts.

        production:
           adapter: 'postgresql'
           database: <%= ENV['DB_NAME'] %>
           username: <%= ENV['DB_USERNAME'] %>
           password: <%= ENV['DB_PASSWORD'] %>
           host:     <%= ENV['DB_HOST'] %>
           port: 5432
        
        

        We can store these parameters in SSM, as encrypted secure strings, under a common path: /app/production/db/{DB_NAME, DB_USERNAME, DB_PASSWORD, DB_HOST}. Naturally, different environments will get different paths for testing, staging and so on.

        In the Docker entrypoint script, we can populate the variables before the Rails server starts. First we get the parameters, then we export them as environment variables. This way, the variables are there when the Rails server starts, so the database.yml file picks them up. Easy peasy.

        First, we get all parameters under /app/production/db. Since this is JSON output, we use jq to extract each parameter’s name and value, and we construct a line of the form export PARAM_NAME=PARAM_VALUE already in jq. Since the name is a path and can’t be used as an env variable name, in the next step we use sed to cut the path off the name, leaving just the env name. The whole one-liner is then evaluated, which effectively sets the variables in this script. The Rails server can read them and the app can connect to the database. Voilà. End of story.
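
        If you prefer to skip the jq and sed gymnastics, the same transformation can be expressed as a tiny helper. Here is an illustrative sketch in Go (using the aws-sdk-go SSM client) that fetches everything under a path and prints export lines for the entrypoint to eval – the path and setup are examples, not a drop-in replacement:

        package main

        import (
            "fmt"
            "log"
            "path"

            "github.com/aws/aws-sdk-go/aws"
            "github.com/aws/aws-sdk-go/aws/session"
            "github.com/aws/aws-sdk-go/service/ssm"
        )

        func main() {
            svc := ssm.New(session.Must(session.NewSession()))

            input := &ssm.GetParametersByPathInput{
                Path:           aws.String("/app/production/db"), // example path
                WithDecryption: aws.Bool(true),
            }

            // Print lines like "export DB_NAME=myDb"; path.Base strips the path prefix
            // from the parameter name, just like the sed step described above.
            err := svc.GetParametersByPathPages(input, func(out *ssm.GetParametersByPathOutput, lastPage bool) bool {
                for _, p := range out.Parameters {
                    fmt.Printf("export %s=%s\n", path.Base(aws.StringValue(p.Name)), aws.StringValue(p.Value))
                }
                return true
            })
            if err != nil {
                log.Fatal(err)
            }
        }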

        Best Practice & Caveats

        I use SSM parameters wherever I need to store configuration; below are some best practices that I think make sense with SSM Parameter Store.

        1. Do not use the default KMS key; create your own for SSM usage. You will get better IAM policies if you keep all of it within one IaC codebase.
        2. Use the least-privilege principle and give your app access only to app-specific parameters; you can limit access using the path in the Resource section of an IAM policy.
        3. You can’t use a SecureString as a CloudFormation parameter yet; you would have to code a custom resource for it.
        4. Name your parameters in a concise way and use paths; this lets you delete old, unneeded parameters and avoid namespace clashes.

        If you would like to find out more, contact us here.


          Run Lambda functions from Slack for fun and profit with Opsidian.AI

          For over a year now we have seen the rise of chatbots all over the planet and in many different industries. The same trend is visible in the cloud and IT market, especially since we’ve been working with Amazon Web Services very closely. Last year there was the “AWS Serverless Chatbot” competition, where one tool left the jury absolutely breathless: opsidian.ai.

          At Nordcloud, we are working closely with the team at Opsidian.ai to get the latest and best out of using chatbots for both our customers and our own operations teams. Especially in the context of our Managed Cloud Service, this is a highly relevant aspect. We integrate with our customers on a deeper level and partner more closely through the use of chatbots and the related features and functions. In this blog post, we want to describe some of these aspects and encourage all of you to take a look at opsidian.ai in the near future.

          Enabling real ChatOps

          Opsidian helps you monitor and manage your AWS Infrastructure from Slack. Opsidian is a part of an increasingly popular trend, called ChatOps. ChatOps is about bringing your DevOps work to your chats and conversations within a team. You can, for example, deploy code, monitor servers, close issues in a bug tracker just by talking to a chatbot, which understands natural language or special commands. We have recognized that trend and Opsidian.ai was born – an AWS chatbot developed jointly with Nordcloud to manage AWS environments directly from Slack channels.

          With Opsidian.ai you can bring your everyday cloud management and monitoring tasks to where your team is – a Slack channel. This way, team members gain a better visibility into the discussed items and accelerate change by having a common view of a problem.

          Opsidian.ai can do a lot of nifty stuff: it can notify on CloudWatch alerts, display a filtered list of your load balancers, check DynamoDB throughput, start, stop and reboot EC2 instances and even plot metrics. Opsidian can understand free speech thanks to its advanced NLP algorithms, but you can also talk to it with the /ops structured command. Now, here’s what’s really new and interesting! Opsidian can run AWS Lambda functions directly from your Slack channel. The best part is that you can provide your own custom functions and effectively customize and extend Opsidian’s functionality! So how’s it done?

          User Manual for Getting Started

          Briefly, you register a Lambda function that already exists in your account with Opsidian, and choose the command that will trigger it (/ops run COMMAND). The Lambda has only one simple requirement: it needs to return a JSON object with a field called message. Whatever is placed in this field will be returned directly to Slack. Let’s see this example:

          import json

          def lambda_handler(event, context):
              # Opsidian shows whatever is in the "message" field back in Slack.
              message = "Hi from a Lambda function!"
              return json.dumps({"message": message})
          
          

          As for permissions, make sure that the IAM role of Opsidian has “lambda:InvokeFunction” permissions, as described in http://opsidian.ai/lambda/.

          Let’s configure some Lambda functions! If you have your own function, feel free to configure it. I will use the summarize-aws-services function, which is already provided by Opsidian in their repository: http://github.com/OpsidianAI/opsidian-lambda-functions.

          First, a function needs to be registered with Opsidian. To do that, log in to the Opsidian Dashboard (https://dashboard.opsidian.ai/dashboard) and proceed to the View Lambda Function panel.

          Now, click “Add a new Lambda function command”. Choose your region and command name, and fill in your function’s ARN. The command you choose will be used to identify your function within Opsidian, i.e. /ops run COMMAND. Tick “wait for response” if you want the result from your Lambda passed back to Slack; Opsidian will wait 5 seconds for the function to finish and pass back whatever is returned.

          That’s it! You can now test your function. Go to a Slack channel and execute your command, e.g. /ops run summarise. As you can see, the summarize-aws-services function provides a simple summary of the most common resources used in your AWS account, such as the number of ELB load balancers, EC2 instances or EMR clusters.

          The CredentialReport Lambda, also present in the repository, generates a credential report and checks when IAM users last rotated their AWS access keys. By default, it checks the last 90 days, but you can adjust the function to your needs. Parameters may be passed to a function via the event object – Opsidian will place them in the event[‘args’] field.

          Having accomplished all of the above, you are good to go and are ready to explore and enjoy the rich features of opsidian.ai and ChatOps as a concept further on your own.

          What’s next?

          We are very eager to enable our customers and partners to use products like opsidian.ai more broadly in their daily operations on Amazon Web Services. We encourage you to take the next logical steps on this journey:

          1. Clone https://github.com/OpsidianAI/opsidian-lambda-functions and code your own Lambdas!
          2. Join the Opsidian Slack channel: http://opsidian.ai/slack/
          3. Meet Nordcloud at the AWS Summit Berlin and talk to us about how you can leverage ChatOps in your cloud!
