Multi-Cloud: Why stop at one platform?

CATEGORIES

Insights

Everyone knows the importance of picking the right tool for the job. Keen woodworkers have a selection of hammers, chisels and saws, golfers carry bags full of clubs, and even the most ardent sports car enthusiasts can see the limitations of a Ferrari when it’s time to take five kids to the beach.

The cloud is the same. We deal with a number of major providers, each with its own outstanding features and strengths. A business just has to identify its needs and pick the service that best meets those.

Except why stop at one cloud provider?

Let’s say, for the sake of an example, that a substantial portion of your compute needs are predictable, stable and that latency isn’t an overriding issue. You might consider a provider that offers a relatively inflexible service, doesn’t necessarily have data centres located close to your users, but that is highly cost effective. The lack of flexibility isn’t an issue because of the predictability of your requirements while the low cost makes it highly attractive.

However, let us also suppose that you offer applications where latency is an issue and where it’s also important to be able to scale usage up and down to meet spikes in demand. A second cloud provider, one that has data centres close to your main users and that offers a flexible deal on capacity, is an attractive option even though its charges are higher than the first.

So, does it have to be an either/or? Of course not. We live in a world where it’s possible to choose both.

But which cloud provider excels in which areas?

However, as the psychologist Barry Schwartz has argued, choices can complicate matters. You have to understand which cloud provider excels in which areas and the likely impact of their terms and conditions. You also need a breadth of expertise to take advantage of multiple platforms, both to develop applications within the different environments and to create the architecture needed so that data can flow easily between platforms where required.

This is very much one of Nordcloud’s roles: to act as an expert facilitator between customer and cloud providers. It’s our job to know how to match a particular offering to a particular requirement. It’s our job to understand the implications of each provider’s terms of business for our customers, and it’s one of our great strengths that we have the resources to supplement our customers’ in-house technical expertise with our own. So, if your team’s proficiencies allow you to manage one provider’s platform but not another, we can help you to clear that hurdle. Our expertise in building a business’s Security & Governance models and core infrastructure, as well as delivering data centre migrations and optimised cloud environments in a consistent way across the major cloud platforms, has allowed us to become one of the most trusted providers.

Benefits of Microsoft Azure

Though we were already working with a number of excellent cloud providers, we have partnered with Microsoft to offer Azure cloud services to our customers. Azure offers particular advantages that make it an attractive option for businesses looking to locate some or all of their computing needs in the cloud.

For starters, there’s the familiarity of the MS environment, though it should be pointed out that Azure is equally adept at hosting Linux-based applications. Windows is ubiquitous and Microsoft’s range of tools and apps is beyond comprehensive.

Azure has put particular emphasis on simplicity and speed. If you need to spin up a project quickly, you should consider Azure. The human resources are easy to come by – most businesses have no shortage of people skilled in Microsoft-related development – and the tools are easy to use.

Azure has also addressed concerns relating to server stability with a comprehensive outage protection plan that mirrors users’ data to secure virtual environments. If the possibility of outages and lost data is a worry, then Azure is a good answer. Microsoft has an impressive data centre network with global coverage and is moving into Southern Europe, Africa and South America ahead of the competition. We’re confident that, as providers expand their infrastructure, Azure users won’t find themselves left behind. Microsoft also offers great means of analysing and mining your data for business intelligence through its managed SQL and NoSQL data services.

Of course, the other cloud services that Nordcloud offers come with their own strengths, but a growing number of businesses, perhaps a majority, are now looking to mix and match with cloud providers to get the best of each to suit their specific needs. It’s a trend we only expect to keep growing.

AWS Fargate – Bringing Serverless to Microservices

CATEGORIES

Insights, Tech

Microservices architecture

Microservices architecture has been a key focus for a lot of organisations in the past few years. Organisations around the world are moving from the traditional monolithic architecture to a microservices architecture that is automated, independently deployable and offers a faster time to market. The microservices approach has a number of benefits, but the two that come up the most are how the software is deployed and how it is managed throughout its lifecycle.

Pokémon Go & Kubernetes

Let’s look at a real-world scenario: Pokémon Go. We wouldn’t have Pokémon Go if it wasn’t for Niantic Labs and Google’s Kubernetes. Those of you who played this once-addictive game back in the summer of 2016 know all about the technical issues they had. It was the microservice approach of using Kubernetes that allowed Pokémon Go to fix technical issues in a matter of hours, rather than weeks: each microservice could be updated with a new patch independently, and thousands of containers could be created within seconds during peak times.

With a microservice architecture – typically using a container engine like Docker together with container orchestration software like Kubernetes (K8s) – everything in the web application is broken down into its own individual API. This gives microservices more agility, flexible scaling, and the freedom to pick the programming language or version used for each API rather than for all of them at once.

Microservices can be defined in more ways than one, but the approach is commonly used to deploy well-defined APIs and to streamline delivery and deployment.

 

Serverless: the next big thing

Some experts believe that serverless will be the next big thing. Serverless doesn’t mean there are no servers; it means that server management and capacity planning are hidden from the DevOps teams. Maybe you have heard about FaaS (Functions as a Service) or AWS Lambda. FaaS is not for everyone, but what if we could bring some of the serverless architecture along with the microservice architecture?

 

AWS Fargate

This is why, back in November at AWS re:Invent 2017 (see the deep dive here), AWS announced a new service called AWS Fargate. AWS Fargate is a container service that allows you to provision containers without the need to worry about the underlying infrastructure (VM/container/node instances). AWS Fargate works with ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). It is currently only available in us-east-1, in preview mode.

AWS Fargate simplifies the complex management of microservices by allowing developers to focus on the main task of creating APIs. You will still need to specify the memory and CPU required for your APIs or application, but the beauty of AWS Fargate is that you never have to worry about provisioning servers or clusters, because AWS Fargate scales for you. This is where microservices and serverless meet.
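
To make this concrete, here is a minimal sketch (using Python and boto3) of registering a Fargate task definition and running it as a service. The names (my-api, my-cluster) and the subnet/security group IDs are hypothetical; only the container image, CPU and memory are declared, and no EC2 instance appears anywhere.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # Fargate preview region

# Register a task definition: only the container image, CPU and memory are declared.
task_def = ecs.register_task_definition(
    family="my-api",                       # hypothetical name
    requiresCompatibilities=["FARGATE"],   # no EC2 instances to manage
    networkMode="awsvpc",                  # required for Fargate tasks
    cpu="256",                             # 0.25 vCPU
    memory="512",                          # 512 MiB
    containerDefinitions=[{
        "name": "api",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80}],
    }],
)

# Run it as a service; scaling changes desiredCount, never any servers.
ecs.create_service(
    cluster="my-cluster",                  # hypothetical cluster
    serviceName="my-api-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-12345678"],        # hypothetical
        "securityGroups": ["sg-12345678"],     # hypothetical
        "assignPublicIp": "ENABLED",
    }},
)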

Persisting Docker Volumes in ECS using EFS

CATEGORIES

Tech

Last week we faced a new challenge to persist our Docker Volume using EFS. Sounds easy, right? Well, it turned out to be a bit more challenging than expected and we were only able to find a few tips here and there. That is why we wrote this post so others may succeed faster.

Before digging into the solution, let’s take a minute to describe our context to elaborate a bit more on the challenge.
First of all, we believe in Infrastructure as Code and therefore we use CloudFormation to be able to recreate our environments. Luckily Amazon provides a working sample and we got EFS working quite easily. The next part was to get Docker to use a volume from EFS. We got lucky a second time, as Amazon provides another working sample.

We managed to combine these resources and everything looked alright, but a closer look revealed that the changes did not persist. We found one explanation for why it didn’t work: EFS was being mounted after the Docker daemon had started, so the Docker volume pointed at an empty directory rather than at the EFS mount. To fix that we did two things: first we orchestrated the setup so that the Docker daemon and ECS are restarted only after EFS has been mounted, and then we added EFS to fstab so that it auto-mounts on reboot.

The solution looks a bit like the following:

  
EcsCluster:
    Type: AWS::ECS::Cluster
    Properties: {}
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          MountConfig:
          - setup
          - mount
        setup:
          packages:
            yum:
              nfs-utils: []
          files:
            "/home/ec2-user/post_nfsstat":
              content: !Sub |
                #!/bin/bash

                INPUT="$(cat)"
                CW_JSON_OPEN='{ "Namespace": "EFS", "MetricData": [ '
                CW_JSON_CLOSE=' ] }'
                CW_JSON_METRIC=''
                METRIC_COUNTER=0

                for COL in 1 2 3 4 5 6; do

                 COUNTER=0
                 METRIC_FIELD=$COL
                 DATA_FIELD=$(($COL+($COL-1)))

                 while read line; do
                   if [[ COUNTER -gt 0 ]]; then

                     LINE=`echo $line | tr -s ' ' `
                     AWS_COMMAND="aws cloudwatch put-metric-data --region ${AWS::Region}"
                     MOD=$(( $COUNTER % 2))

                     if [ $MOD -eq 1 ]; then
                       METRIC_NAME=`echo $LINE | cut -d ' ' -f $METRIC_FIELD`
                     else
                       METRIC_VALUE=`echo $LINE | cut -d ' ' -f $DATA_FIELD`
                     fi

                     if [[ -n "$METRIC_NAME" && -n "$METRIC_VALUE" ]]; then
                       INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
                       CW_JSON_METRIC="$CW_JSON_METRIC { \"MetricName\": \"$METRIC_NAME\", \"Dimensions\": [{\"Name\": \"InstanceId\", \"Value\": \"$INSTANCE_ID\"} ], \"Value\": $METRIC_VALUE },"
                       unset METRIC_NAME
                       unset METRIC_VALUE

                       METRIC_COUNTER=$((METRIC_COUNTER+1))
                       if [ $METRIC_COUNTER -eq 20 ]; then
                         # 20 is max metric collection size, so we have to submit here
                         aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"

                         # reset
                         METRIC_COUNTER=0
                         CW_JSON_METRIC=''
                       fi
                     fi



                     COUNTER=$((COUNTER+1))
                   fi

                   if [[ "$line" == "Client nfs v4:" ]]; then
                     # the next line is the good stuff
                     COUNTER=$((COUNTER+1))
                   fi
                 done <<< "$INPUT"
                done

                # submit whatever is left
                aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"
              mode: '000755'
              owner: ec2-user
              group: ec2-user
            "/home/ec2-user/crontab":
              content: "* * * * * /usr/sbin/nfsstat | /home/ec2-user/post_nfsstat\n"
              owner: ec2-user
              group: ec2-user
          commands:
            01_createdir:
              command: !Sub "mkdir -p /${MountPoint}"
        mount:
          commands:
            01_mount:
              command:
                Fn::Join:
                  - ""
                  - - "mount -t nfs4 -o nfsvers=4.1 "
                    - Fn::ImportValue:
                        Ref: FileSystem
                    - ".efs."
                    - Ref: AWS::Region
                    - ".amazonaws.com:/ /"
                    - Ref: MountPoint
            02_fstab:
              command:
                Fn::Join:
                  - ""
                  - - "echo \""
                    - Fn::ImportValue:
                        Ref: FileSystem
                    - ".efs."
                    - Ref: AWS::Region
                    - ".amazonaws.com:/ /"
                    - Ref: MountPoint
                    - " nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0\" >> /etc/fstab"
            03_permissions:
              command: !Sub "chown -R ec2-user:ec2-user /${MountPoint}"
            04_restart_docker_and_ecs:
              command: !Sub "service docker restart && start ecs"
    Properties:
      AssociatePublicIpAddress: true
      ImageId:
        Fn::FindInMap:
        - AWSRegionArch2AMI
        - Ref: AWS::Region
        - Fn::FindInMap:
          - AWSInstanceType2Arch
          - Ref: InstanceType
          - Arch
      InstanceType:
        Ref: InstanceType
      KeyName:
        Ref: KeyName
      SecurityGroups:
      - Fn::ImportValue:
          Ref: SecuritygrpEcsAgentPort
      - Ref: InstanceSecurityGroup
      IamInstanceProfile:
        Ref: CloudWatchPutMetricsInstanceProfile
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
          yum update -y
          yum install -y aws-cfn-bootstrap
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource LaunchConfiguration --configsets MountConfig --region ${AWS::Region}
          crontab /home/ec2-user/crontab
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource AutoScalingGroup --region ${AWS::Region}
    DependsOn:
    - EcsCluster


Here is what we did compared to the original AWS provided template:
  1. extracted FileSystem EFS into another CF template and exported the EFS identifier so that we can use ImportValue
  2. added -p to the mkdir command just in case
  3. enhanced mount to use imported filesystem reference
  4. added mount to fstab so that we auto-mount on reboot
  5. recursive changed EFS mount ownership
  6. restarted Docker daemon to include mounted EFS and started ECS as it does not automatically restart when the Docker daemon restarts
  7. added ECS cluster info to ECS configuration
  8. added the ECS agent security group so that port 51678, which the ECS agent uses, is open
  9. added yum update just in case
  10. included the launch configuration in the auto scaling group for the ECS cluster and added a DependsOn on the ECS cluster

We were a bit surprised that EFS does not require an additional volume driver to function. It appears to work out-of-the-box and turned out to be quite straightforward. Thank you for reading and enjoy using EFS as a means to persist your Docker Volumes in your ECS cluster!
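
To actually use the persisted directory from a container, the task definition just needs a host volume that points at the EFS mount point. A minimal sketch in Python/boto3, assuming a hypothetical mount point /efs and family name persistent-app:

import boto3

ecs = boto3.client("ecs")

# Map the EFS-backed host directory into the container as a Docker volume.
ecs.register_task_definition(
    family="persistent-app",                    # hypothetical name
    volumes=[{
        "name": "efs-data",
        "host": {"sourcePath": "/efs"},         # the MountPoint mounted by the template
    }],
    containerDefinitions=[{
        "name": "app",
        "image": "mysql:5.7",                   # any container that needs durable state
        "memory": 512,
        "mountPoints": [{
            "sourceVolume": "efs-data",
            "containerPath": "/var/lib/mysql",  # data written here lands on EFS
        }],
    }],
)

Because the container path is backed by EFS rather than the instance’s local disk, the data survives instance replacement and is shared across the cluster.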

How to set up an Azure AD identity provider in AWS Cognito

CATEGORIES

Tech

Setting up an Azure AD identity provider in AWS Cognito

This post describes step-by-step how to set up an AWS Cognito User Pool with an Azure AD identity provider to allow your application to leverage single sign-on with Azure AD.

In order to get started, you need the following in place:

  • Azure account with Azure AD Premium enabled
  • AWS account
  • URL for the application that you will be integrating to Cognito (e.g. https://myapp.nordcloud.com)

The setup consists of three steps:

  1. Create an AWS Cognito user pool
  2. Create an Azure AD enterprise application
  3. Set up Azure AD identity provider to the Cognito User Pool

The federation is based on SAML, with the following login flow:

  1. The user lands on a page hosted by AWS Cognito (e.g. redirected by your application)
  2. Cognito redirects the user to an Azure AD login page (may have other identity providers available for selection)
  3. Azure AD passes the identity to Cognito, which redirects the user to the application login page with the access_token in the URL.

 

Create an AWS Cognito User Pool

In AWS, create a Cognito User pool with an application client. Otherwise, use the default settings. Memorise the Pool Id (e.g. us-east-1_P5fyukyC1I).

Add a domain for your Cognito application (e.g. mpu201712) and memorise the domain URL (e.g. https://mpu201712.auth.us-east-1.amazoncognito.com).
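
If you prefer to script this step, a rough boto3 sketch follows. The pool, domain and client names are the examples used above (the client name is hypothetical); console defaults are otherwise left in place.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# 1. The user pool (note the Pool Id in the response).
pool = cognito.create_user_pool(PoolName="mpu201712")
pool_id = pool["UserPool"]["Id"]

# 2. An application client for the pool.
app_client = cognito.create_user_pool_client(
    UserPoolId=pool_id,
    ClientName="myapp-client",          # hypothetical client name
)

# 3. The hosted domain, i.e. https://mpu201712.auth.us-east-1.amazoncognito.com
cognito.create_user_pool_domain(Domain="mpu201712", UserPoolId=pool_id)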

Create an Azure AD Enterprise Application

In Azure, create an Azure AD Enterprise Application (requires Azure AD Premium) from your Azure AD blade -> Enterprise Applications -> New Application. Pick “Non-gallery application” as the app type.

Add a user to your application and configure Single sign-on with the following settings:

Finally, download the SAML Metadata XML. You should now be set up on the Azure side.

 

Configure the Azure AD Identity Provider to Your Cognito Pool

In AWS, create a new SAML identity provider for your Cognito pool. Upload the SAML metadata downloaded for your Azure AD Enterprise App.

Add attribute mapping for email address (and other attributes you need).

  • SAML Attribute: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
  • User pool Attribute: Email
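
The identity provider and the attribute mapping can also be created with boto3. A sketch, assuming the downloaded metadata is saved as azure-ad-metadata.xml and the provider is named AzureAD (both hypothetical):

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

with open("azure-ad-metadata.xml") as f:        # the metadata downloaded from Azure
    metadata = f.read()

cognito.create_identity_provider(
    UserPoolId="us-east-1_P5fyukyC1",           # your pool id
    ProviderName="AzureAD",                     # hypothetical provider name
    ProviderType="SAML",
    ProviderDetails={"MetadataFile": metadata},
    AttributeMapping={
        # user pool attribute <- SAML attribute
        "email": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    },
)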

Enable the identity provider created above for your Cognito pool from App client settings. Add your app callback and sign-out URLs (e.g. https://myapp.nordcloud.com/login), and enable the following OAuth 2.0 flows and scopes: code grant, implicit grant, email, openid, aws.cognito.signin.user.admin. Memorise the app client id (7hosfpqdh003qrng4hsu2ionjk).
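
The same app client settings expressed as a boto3 sketch (the client id, callback URL and provider name are the examples used above):

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

cognito.update_user_pool_client(
    UserPoolId="us-east-1_P5fyukyC1",
    ClientId="7hosfpqdh003qrng4hsu2ionjk",
    SupportedIdentityProviders=["AzureAD"],              # the SAML provider created above
    CallbackURLs=["https://myapp.nordcloud.com/login"],
    LogoutURLs=["https://myapp.nordcloud.com/login"],
    AllowedOAuthFlowsUserPoolClient=True,
    AllowedOAuthFlows=["code", "implicit"],              # code grant and implicit grant
    AllowedOAuthScopes=["email", "openid", "aws.cognito.signin.user.admin"],
)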

You should now be set up and ready to test the setup.

Testing your setup

Enter the Cognito login page URL in your browser. It has the following format:

https://<cognito domain>/login?response_type=token&client_id=<app client id>&scope=<oauth scope>&redirect_uri=<your encoded redirect URI>

For example:

https://mpu201712.auth.us-east-1.amazoncognito.com/login?response_type=token&client_id=7hosfpqdh003qrng4hsu2ionjk&scope=email+openid&redirect_uri=https%3A%2F%2Fmyapp.nordcloud.com%2Flogin

Log in with your Azure AD credentials. You should be redirected to the callback URL configured for your Cognito app and provided as the redirect URI (e.g. https://myapp.nordcloud.com/login), with the ID token in the id_token parameter and the access token in the access_token parameter. For example:

https://myapp.nordcloud.com/login#id_token=eyJraWQiOiJoU2lXcjFySFE0T3FsbDJVVkQ3MUZLWFcrSlVzcG9UOGUwRzR2YloybEJVPSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiIwNzQ4ZTRlMC0wZTgzLTQ5ZjYtODFkYS1kYzFmNDY4ZTM3ZmQiLCJhdWQiOiI3aG9zZnBxZGgwMDNxcm5nNGhzdTJpb25qayIsImNvZ25pdG86Z3JvdXBzIjpbInVzLWVhc3QtMV9QNWZ5dWt5QzFfQUQtTXB1MjAxNzEyIl0sImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwiaWRlbnRpdGllcyI6W3sidXNlcklkIjoibWlrYWVsLnB1aXR0aW5lbkBzYzUuaW8iLCJwcm92aWRlck5hbWUiOiJBRC1NcHUyMDE3MTIiLCJwcm92aWRlclR5cGUiOiJTQU1MIiwiaXNzdWVyIjoiaHR0cHM6XC9cL3N0cy53aW5kb3dzLm5ldFwvYTliNjcyNmEtNmMwMC00ZmQ2LWFhMjAtMTdkNzY5NmM5NTRlXC8iLCJwcmltYXJ5IjoidHJ1ZSIsImRhdGVDcmVhdGVkIjoiMTUxMzA4MTM3NDk1NCJ9XSwidG9rZW5fdXNlIjoiaWQiLCJhdXRoX3RpbWUiOjE1MTMwODE3NzQsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC51cy1lYXN0LTEuYW1hem9uYXdzLmNvbVwvdXMtZWFzdC0xX1A1Znl1a3lDMSIsImNvZ25pdG86dXNlcm5hbWUiOiJBRC1NcHUyMDE3MTJfbWlrYWVsLnB1aXR0aW5lbkBzYzUuaW8iLCJleHAiOjE1MTMwODUzNzQsImlhdCI6MTUxMzA4MTc3NCwiZW1haWwiOiJtaWthZWwucHVpdHRpbmVuQHNjNS5pbyJ9.q6iMvWDZ6o-e_xFhIoQ21ssIHnF9Ujznc1tSSeWiaNFbhK8e7HJBGOx8-NVy7cfnLyjPSnxuO5rUqlUQM-dFjQkuouK62VcAbS7wpIH7-6dKWtLzQTmUGHtLO7Us331GT6aEAOSy7Zbw63ZXl-vIrvnyqCv0XOLvMhqOIUiExiEumettW-m-6jZ0jedimQij8-UituR__iAPaM2yOPD24Yz5tWvIf-QHQUZ3FZyasDSKo-S9jUclqUInZYeoqNhPvtc3g80kcGGUwPuNfNdyP3cZCN3PbiSIqHk9MiJIDiaIrhy1gmrVMGH7ZBb5tRHWJi3-nyAf7nESqXtQBGJINg&access_token=eyJraWQiOiJhV05oNVJJc0NQQXNSWlZJXC9KUUdRcUpcL1wvdnRcL1wvaHl2S203WWxpK0FyYkU9IiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiIwNzQ4ZTRlMC0wZTgzLTQ5ZjYtODFkYS1kYzFmNDY4ZTM3ZmQiLCJjb2duaXRvOmdyb3VwcyI6WyJ1cy1lYXN0LTFfUDVmeXVreUMxX0FELU1wdTIwMTcxMiJdLCJ0b2tlbl91c2UiOiJhY2Nlc3MiLCJzY29wZSI6Im9wZW5pZCBlbWFpbCIsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC51cy1lYXN0LTEuYW1hem9uYXdzLmNvbVwvdXMtZWFzdC0xX1A1Znl1a3lDMSIsImV4cCI6MTUxMzA4NTM3NCwiaWF0IjoxNTEzMDgxNzc0LCJ2ZXJzaW9uIjoyLCJqdGkiOiIxMWExMjQ2Ny0yMGY2LTQ1YTQtYWFmNi0xZDQyNmQzNjM5ZmQiLCJjbGllbnRfaWQiOiI3aG9zZnBxZGgwMDNxcm5nNGhzdTJpb25qayIsInVzZXJuYW1lIjoiQUQtTXB1MjAxNzEyX21pa2FlbC5wdWl0dGluZW5Ac2M1LmlvIn0.irNfJPGKhcyez6_aEDm_OfUFMOh2oNC9xKTRAM97pvLejVBCrb_mXSMhB2-zzGp_uhH6ayJfwhWOMY2LRjnMa2sm85ExBCI6kw3D3lrViM0LTBPbGC3T6rhneA9lbAL7TRlLoFetp56wK_ojuTZpo-Esm-GlbpNenegZ9T_tL7LZ8xOpq1d25SYRyUUwp1LwajxmPIuzmBMXMw1qoOHt1i4L0IcXNi6HdMO6Z7lejxhRClCZUbE_FXHC9TcR-Bb9yXbxl0PAksHCrf4AQXY9BO_u2oD05OZA5n5r9FPlwHYwHDjYqfGoJ4mY15LAPC7QaIJgLRYPQDfBBVWq4MDH3Q&expires_in=3600&token_type=Bearer

The tokens are in JWT format. In the case above, the contents of the ID token (decoded using https://jwt.io) are:

{
  "sub": "0748e4e0-0e83-49f6-81da-dc1f468e37fd",
  "aud": "7hosfpqdh003qrng4hsu2ionjk",
  "cognito:groups": [
    "us-east-1_P5fyukyC1_AD-Mpu201712"
  ],
  "email_verified": false,
  "identities": [
    {
      "userId": "mikael.puittinen@sc5.io",
      "providerName": "AD-Mpu201712",
      "providerType": "SAML",
      "issuer": "https://sts.windows.net/a9b6726a-6c00-4fd6-aa20-17d7696c954e/",
      "primary": "true",
      "dateCreated": "1513081374954"
    }
  ],
  "token_use": "id",
  "auth_time": 1513081774,
  "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_P5fyukyC1",
  "cognito:username": "AD-Mpu201712_mikael.puittinen@sc5.io",
  "exp": 1513085374,
  "iat": 1513081774,
  "email": "mikael.puittinen@sc5.io"
}

The token also includes a cryptographic signature that should be used to verify its authenticity.
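
A minimal verification sketch in Python, assuming the PyJWT library (pip install pyjwt[crypto]): Cognito publishes the pool’s public signing keys at a well-known JWKS URL, and the issuer and audience claims should match the pool and app client.

import jwt
from jwt import PyJWKClient

REGION = "us-east-1"
USER_POOL_ID = "us-east-1_P5fyukyC1"
APP_CLIENT_ID = "7hosfpqdh003qrng4hsu2ionjk"

# Cognito publishes the pool's public signing keys at this well-known URL.
JWKS_URL = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}/.well-known/jwks.json"

def verify_id_token(token: str) -> dict:
    """Verify the signature, issuer, audience and expiry of a Cognito ID token."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=APP_CLIENT_ID,   # the 'aud' claim must match the app client id
        issuer=f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}",
    )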

 

Nordcloud to sponsor Microsoft Tech Summits in Europe

CATEGORIES

Events, News

Nordcloud is proud to be a Platinum Sponsor at the Microsoft Tech Summits in Frankfurt, Amsterdam, Stockholm, and Warsaw.

Microsoft Tech Summit Frankfurt

Messe Frankfurt GmbH

February 21-22, 2018

Microsoft Tech Summit Amsterdam

RAI Amsterdam

March 28-29, 2018

Microsoft Tech Summit Stockholm 

Stockholm Waterfront Congress Centre

April 17-18, 2018

Microsoft Tech Summit Warsaw 

Expo XXI Convention Centre

April 25-26, 2018

Join us for these free, two-day learning events happening over the next three months, where you’ll be able to find out about the latest trends, tools, and Microsoft product roadmaps in over 80 sessions covering a range of focus areas and topics. There will also be keynotes, breakout sessions, and hands-on labs.

As a Microsoft Gold Cloud Partner, we have solid experience and a team of experts working with the Azure cloud platform, so please stop by our stands to speak to them. We’re looking forward to hearing more about your Azure cloud journey!


NIC: Future Edition – A Recap

CATEGORIES

News

Last week, the Nordic Infrastructure Conference (NIC) kicked off in Oslo, Norway for the seventh time. Since its debut in 2012, it has become one of the most valued events to attend in the Nordics.

NIC has grown to become one of the biggest events in the Nordics and allows business people and IT experts to gain a deeper understanding of the future of the IT industry. The conference not only attracts more than 1,000 attendees but also some of the biggest IT vendors, including Nordcloud, AWS, Microsoft, IBM, Veeam, Red Hat, Citrix, and many more.

What separates NIC from other conferences is its focus on the growing trends in the Nordic region. This year there were certainly a lot of Microsoft-focused talks at the conference, but the programme was broken down into different categories providing more than 100 hours of sessions over three days: Server & Client, Cloud Platform, Management & Automation, Security, Cloud Productivity and Analytics, CxO, Partners, and Instructor-Led Hands-on Labs.

The biggest speakers this year were legends in their own right: John Craddock (identity and security architect), Paula Januszkiewicz (CEO, cybersecurity expert), Sami Laiho (Windows OS) and many more Microsoft MVPs, not forgetting our own Nordcloud CTO Ilja Summala (cloud expert), who along with the others gave a fantastic talk about the past and future of IT.

We hope that everyone enjoyed the event, and we want to thank all the wonderful people who came by the Nordcloud stand, not only to take part in our Nintendo Switch giveaway but also to learn more about what Nordcloud can do for you. See you all next year!
