Building secure cloud environments for customers in Sweden

We’re pleased to introduce Vladimir, our DevSecOps guru at the Stockholm office. Every day, he helps our customers create secure cloud environments. We asked him about his experience harnessing modern cloud technologies for our Swedish customers.

1. Where are you from and how did you end up at Nordcloud?

I’m originally from Russia but I have lived in Sweden since 2011. Before joining Nordcloud I used to work for Ericsson as a solution architect in the systems integration domain. At some point, I realised that I needed a major change, so I left Ericsson and joined Nordcloud to work on public and hybrid cloud projects.

2. What is your role and your core competence?

When it comes to core competencies, I have 25 years of experience spanning many roles, including software developer, UX designer, product manager and solution architect. Currently I’m addicted to building modern CI/CD pipelines with a security focus – so-called DevSecOps.

3. What sets you on fire / what’s your favourite thing technically with public cloud?

I really like guiding customers on the best ways to develop and support modern container- and serverless-based applications and workloads.

4. What do you like most about working at Nordcloud?

I have the full freedom to do what I believe is best for the customer; I’m not limited by specific products, services, or processes.

5. What is the most useful thing you have learned at Nordcloud?

Ultimately, ‘learned’ is not quite the right word, being in the past tense – I’ve realised that in our fast-changing world of IT we need to learn constantly. Nordcloud is a community of great colleagues who are willing to share deep technical and “how-to” knowledge and experience.

6. What do you do outside work?

I try to help my daughters do things right. Personally, I do a lot of sports – alpine skiing, mountain biking, calisthenics, and table tennis.

7. How would you describe Nordcloud’s culture?

While this is not our official culture statement, for me personally it’s built around a fast-paced environment that gives each individual the freedom to use their skills to tackle customers’ challenges, always going the extra mile to find solutions.

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    DevOpsDays is coming to Poznań, Poland



    Join Nordcloud on Monday, 20th May during DevOpsDays Poznan 2019!

    For the first time, this event will take place in Poznań, so we couldn’t miss it. Nordcloud supports this conference that brings development and operations together. The agenda looks promising, and speakers will cover the hottest cloud-related topics – serverless, microservices, containers and… simply DevOps stuff!

    Date: 20.05.2019
    Time: 08:00 – 19:00
    Venue: Green Conference Room at Poznan Congress Center, located at MTP, East Gate, ul. Głogowska 14, Poznań

    Details & Program:

    Get your tickets here:




      Web application development with .NET Core and Azure DevOps



      Initial setup of the environment

      In my previous post, “Azure DevOps Services – cloud based platform for collaborating on code development from Microsoft”, I presented basic information about Azure DevOps capabilities. Now I will focus on the details: together we will build a full CI/CD pipeline for automated builds and deployments of a sample .NET Core MVC web application to Azure App Service.

      I assume that, based on my previous post, a new project has been created in Azure DevOps and you have access to it :). Before the fun begins, let’s set up a service connection between our CI/CD server and Azure. Service connections can be scoped to the subscription or resource group level; in our sample case we will scope it to a resource group. First, create a new resource group in the Azure portal:

      Azure DevOps


      OK, the resource group is ready. Now navigate to the settings panel in Azure DevOps, select the “Service connections” sub-category, and add a new “Azure Resource Manager” service connection:

      Azure DevOps

      After confirmation, the provided values are validated. If everything was entered correctly, the newly created service connection is saved and ready to use in the CI/CD pipeline configuration.

      The first important piece is ready; now let’s focus on access to the Azure Repos Git repository. In your Azure DevOps user profile you can easily add an existing SSH public key. To do this, navigate to the “SSH public keys” panel in your profile and add your key:

      Azure DevOps


      Next, clone our new, empty repository to the local computer. In the “Repos” section of the Azure DevOps portal you will find a “Clone” button that shows a direct link for cloning. Copy the link to the clipboard, open your local Git Bash console and run the “git clone” command:


      Azure DevOps


      A warning appears, but that’s fine – our repository is new and empty 🙂 Our environment is now ready, so next we will create a new project and commit its code to our repo.


      Creation of a development project and repository initialization

      I will use Visual Studio 2017, but we are not limited to a specific version here. When the IDE is ready, go to the project creation dialog, then select “Cloud” and “ASP.NET Core Web Application”:

      Azure DevOps


      In the next step, we can specify the framework version and target template. I will base the project on ASP.NET Core 2.1 and the MVC template:

      Azure DevOps

      After a few seconds, a new sample ASP.NET Core MVC project is generated. For our purposes this is enough to start the fun with CI/CD pipeline configuration on the Azure DevOps side; this application will serve as a working example, since dealing with the code and implementing magic stuff inside the application is not the purpose of this post. When the project is ready, you can run it locally and, after testing, commit the code to our repository:

      Azure DevOps

      Build pipeline (CI) with the visual designer

      Our source code is in the repository, so let’s create a build pipeline for our solution. Navigate to the “Pipelines” section, then “Builds”, and click “Create new”. In this example we will create our build pipeline from scratch with the visual designer. If the pipeline had been committed to the repo as a YAML file (pipeline as code), we could also load the ready pipeline from that file.


      Azure DevOps


      The full process contains three steps. First, we must specify where our code is located; the default option is the internal Git repository of our Azure DevOps project, which is correct in our case. In the second step, we can use one of the pre-defined build templates. Our application is based on .NET Core, so I will select the ASP.NET Core template. After that, a pre-defined agent job is created for us. The job contains steps such as restoring NuGet packages, building the solution, testing (our app has no tests yet) and finally publishing the solution as a package to be used in the release pipeline. Here we can also export our pipeline to a YAML file:

      Azure DevOps
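
      An exported pipeline looks roughly like the sketch below. This is an illustrative azure-pipelines.yml, not the exact file generated for this project; DotNetCoreCLI@2 and PublishBuildArtifacts@1 are standard Azure Pipelines tasks, but the agent image and exact steps generated for your project may differ:

      ```yaml
      # Illustrative build pipeline (CI) as code for an ASP.NET Core app
      trigger:
        - master

      pool:
        vmImage: 'windows-latest'      # agent image is an assumption

      variables:
        buildConfiguration: 'Release'

      steps:
        - task: DotNetCoreCLI@2
          displayName: 'Restore NuGet packages'
          inputs:
            command: 'restore'
            projects: '**/*.csproj'

        - task: DotNetCoreCLI@2
          displayName: 'Build solution'
          inputs:
            command: 'build'
            arguments: '--configuration $(buildConfiguration)'

        - task: DotNetCoreCLI@2
          displayName: 'Publish web app as a package'
          inputs:
            command: 'publish'
            publishWebProjects: true
            arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'

        - task: PublishBuildArtifacts@1
          displayName: 'Publish artifacts for the release pipeline'
          inputs:
            PathtoPublish: '$(Build.ArtifactStagingDirectory)'
            ArtifactName: 'drop'
      ```

      Storing this file in the repository gives the same result as the visual designer, with the added benefit that the pipeline is versioned alongside the code.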


      On the “Triggers” tab, the “Continuous integration” feature is not enabled by default. Let’s enable it manually:


      Azure DevOps


      After saving all changes to the pipeline, our CI build pipeline is ready. Below you can see a sample run triggered by committing new changes to the repository: the new CI build starts immediately after new application code has been committed, and email notifications are sent to all subscribers. In the reason field we can see “Continuous integration”. The icons in the email and in the build panel in the portal make it easy to see the status of the build:

      Azure DevOps


      Release pipeline (CD)

      Now we will take care of automatically releasing our application to App Service on Azure. First, we need to create an App Service instance in the resource group that we selected for the service connection:


      Azure DevOps

      OK, once the deployment has finished, we can start creating the release pipeline in Azure DevOps. First we have to decide which template from the pre-defined list to use; for our solution, “Azure App Service deployment” is a good choice:


      Azure DevOps

      Next we must specify the source of the artifacts. In our case, we will use the output of the build pipeline created in the previous steps:

      Azure DevOps


      Stages in the pipeline can also be renamed. This is good practice in large pipelines, as it makes it easier to see what happens in each stage:

      Azure DevOps


      In the general settings for our new stage agent, we must fill in three very important parameters: first, the service connection the release pipeline will use; second, the type of application deployed to App Service; and third, the name of the existing App Service instance, which we created at the beginning of this part of the post.


      Azure DevOps


      The next configuration step is to select the location of the published application package. In our case the location looks like this:

      Azure DevOps


      Similarly to the CI pipeline, we need to enable the trigger in our newly created CD pipeline. To do that, click the lightning-bolt icon on the artifact source and enable the continuous deployment trigger for the build:

      Azure DevOps
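
      As a side note, the same release stage could be sketched as pipeline-as-code in YAML. The service connection and App Service names below are illustrative placeholders, not the actual values from this walkthrough:

      ```yaml
      # Illustrative YAML sketch of the release (CD) stage configured above
      stages:
        - stage: Deploy
          jobs:
            - job: DeployWebApp
              pool:
                vmImage: 'windows-latest'    # agent image is an assumption
              steps:
                - download: current          # fetch the 'drop' artifact from the CI build
                  artifact: drop
                - task: AzureWebApp@1
                  displayName: 'Deploy package to Azure App Service'
                  inputs:
                    azureSubscription: 'my-service-connection'   # service connection name (placeholder)
                    appType: 'webApp'
                    appName: 'my-sample-appservice'              # existing App Service name (placeholder)
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
      ```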


      That was the last configuration step. Looks like we are ready to commit some changes and check the final functionality 🙂


      Final check

      Let’s test our new full CI/CD pipeline then:

      1. Add new code to the application:

      In the controller:

      Azure DevOps

      In the view:

      Azure DevOps

      2. Commit the changes to the repository:

      Azure DevOps


      3. Our CI build pipeline started automatically and completed successfully:

      Azure DevOps


      4. I received an email confirming a successful build:

      Azure DevOps


      5. After the successful CI build, the release (CD) pipeline started automatically and the deployment is ready:

      Azure DevOps


      6. Changes have been deployed to Azure App Service and are visible:

      Azure DevOps


      As we can see, configuring a full CI/CD pipeline for an ASP.NET Core MVC web application is pretty easy. However, you do need some knowledge of how to configure these pieces on the Azure DevOps side.

      We hope you enjoyed this post and that this step-by-step guide will be useful in your future work with Azure DevOps!


      This post is the 2nd part in our Azure DevOps series. Check out the other posts:

      #1: Azure DevOps Services – cloud based platform for collaborating on code development from Microsoft

      #3: Azure Cosmos DB – Multi model, Globally Distributed Database Service


        Azure DevOps Services



        This post presents the main features of Azure DevOps services.

        A few words about Microsoft Azure DevOps

        Azure DevOps was created as the successor to Visual Studio Team Services, widely known as VSTS. It is not a single application; rather, we should look at this service as a set of tools that helps people who use the DevOps methodology continuously deliver the highest value to their end users. In summary, we can use Azure DevOps to store our source code in Git repositories, to set up automatic build and release solutions for every new piece of code committed to the repository, and to plan and track all Agile project-management activities on a dedicated backlog and a set of useful boards.


        Project setup

        Before we start building pipelines and committing code, we must create a project in Azure DevOps. To do that, we need an account in the service; account creation is free. When the account is ready, the new user can fill in the organisation details and then create a new, empty project.

        In the Azure DevOps project settings pane, the project owner can decide which features will be used. The picture below shows the full list of available services. Let’s focus on some of them in detail:


        Azure DevOps


        When a new project starts, all features should be divided into tasks and described in the project backlog. This is the place where the Product Owner can put the activities to be implemented by the Development Team into the proper order. Azure DevOps board features provide a way to create Epics, Features and User Stories filled with Tasks, and so on. Here testers can also create test-case scenarios and report bugs or issues.

        Azure Repos

        Azure DevOps provides access to unlimited private Git repositories – Azure Repos. Git is one of the most popular version control systems, and its full scope of features – branching, tagging, pull requests – is covered by the built-in Repos service inside Azure DevOps. External providers such as GitHub, Bitbucket or GitLab can also be used as the source repository for application code when a build pipeline is created in Azure DevOps.


        Build pipelines

        Information about a build pipeline’s setup can be stored inside the Git repository in a YAML file. When the file exists in the repository, we can start configuring a new build and Azure DevOps will automatically use the pre-defined pipeline stored as code inside the project repository. This feature is really great: if someone needs to re-create the pipeline in the future, all the information is stored in one file, which can also serve as a kind of process documentation.



        Once the new project is configured, the Azure DevOps project owner can use one of the pre-defined build pipeline templates – and not only Microsoft solutions are supported:


        When the build pipeline is ready, we can easily queue a build and, when it completes, track the build history on a dedicated pane:


        If something goes wrong during the build process, all the information is available in the logs section of the broken build. The history pane also shows additional information, such as the branch whose code was used for the build, an icon reflecting the build status, and the unique build number.


        Release pipelines

        A release pipeline is a function that allows us to deploy our application to one or more destinations. Before we begin, we need to set up the correct service connections in the project settings pane. For example, if we want to deploy a web application to Azure App Service, we must configure a service connection between Azure DevOps and Azure Resource Manager. Some of the available service connections are shown below:


        In the first step we decide whether to use one of the pre-defined stage templates, or to start with an empty job and set up all the steps ourselves. Available stage templates include “Azure App Service deployment”, “Deploy to Kubernetes cluster”, etc. The picture below shows part of the pre-defined template list:

        A pipeline can be edited interactively. Every stage can be modified individually and can consist of a set of tasks such as “Azure PowerShell” script execution, a whole “Azure Resource Group Deployment”, or even interaction with a Linux or macOS system via a “Bash” script.

        A sample release pipeline is shown below. The build created in the previous step has been selected as the artifact source for the pipeline. We can of course create a release pipeline without an associated build: if our repository contains only scripts, which do not have to be built, we can set the Git repository itself as the artifact source.



        This post is the 1st part in our Azure DevOps series. Check out the other posts:

        #2: Web application development with .NET Core and Azure DevOps

        #3: Azure Cosmos DB – Multi model, Globally Distributed Database Service


          Predicting IT incidents in Financial Services



          As we’ve mentioned in previous blogs, one of the UK’s biggest banks, TSB, learnt the hard way earlier this year when it came to protecting their highly valuable systems from IT failures. The BBC coined the term ‘technology meltdown’ after 2 million customers of the bank lost access to their online banking services. Since then, a second ‘meltdown’ has occurred, and TSB’s CEO has stepped down.


          Banks have been slow to move legacy systems to cloud

          Banks and FSIs around the world have been slow to modernise infrastructure and move legacy systems to the cloud. The complexities surrounding the movement of large amounts of secure data, constantly changing market dynamics, and the need to shift company culture (such as moving to a more agile way of working) amount to redesigning an entire industry. The problem is that this failure to move forward and stay relevant has proved costly, and regulators have made FSIs pay large, easily preventable fines.

          Anuj Saxena, Head of FSI at Nordcloud, wrote in his blog that financial institutions often plan for highly available service operations without considering potential failures. One of the ways these businesses can improve their operational resilience is by implementing automated tools and processes to recover from such incidents. Engaging with a Managed Cloud Services provider is the start of the solution.


          Planning for failure by implementing a well-oiled machine

          At the risk of sounding negative, planning for failure is the key to keeping systems up and running. Employing a DevOps function like the team at Nordcloud, with experience in automating end-to-end deployment, operations and recovery of cloud infrastructure, allows for flexibility and innovation, and creating runbooks and playbooks allows teams to compare against and meet defined standards.

          FSIs need to become operationally resilient so they are not held back when an incident happens. Having a ‘well-oiled machine’ that can respond to incidents quickly and nimbly will improve this resilience.


          But what’s the point of having this ‘holy-grail’ of automation unless you have someone who knows how to manage it?


          A dedicated Managed Services Provider

          Cloud experts at Nordcloud know what to monitor and which thresholds to configure out of the box, ensuring that problems are identified earlier and solved more quickly.

          Our team uses an advanced, adaptive (outlier-detection), automated full-stack monitoring and instrumentation platform to enable a 360-degree view of a business’s infrastructure, ensuring that potential issues are identified and resolved before they affect customers. This automated response means reactions are faster and human error is eliminated. Similarly, developing a comprehensive runbook promotes standardised operating procedures that can be used repeatedly, allowing you to move to market faster.

          Businesses should also organise regular ‘Game Days’ where failure is simulated, and runbooks and playbooks are tested to ensure that in the event of failure, response and resolution is well rehearsed and therefore fast. Nordcloud’s team of experts can manage this and other day to day operations, helping our customers meet the regulatory compliance they require.

          IT time is valuable and generally scarce, and your department should be focussed on projects that improve your company’s bottom line. FSIs that engage with Managed Cloud Service providers will save sizeable amounts of money on potentially avoidable fines, while making sure their customers’ online experience is not affected.

          Realise all the benefits the public cloud has to offer FSI


          Cloud computing is on the rise in the financial services – are you ready?

          Download our free white paper Compliance in the cloud: How to embrace the cloud with confidence, where we outline some of the many benefits that the cloud can offer, such as:

          • Lowered costs
          • Scalability and agility
          • Better customer insights
          • Tighter security

          Download white paper


            Cloud computing news #9: Bimodal IT strategy



            This week we focus on bimodal as IT strategy.

            Gartner defines bimodal as the practice of managing two separate, coherent modes of IT delivery: Mode 1 focused on stability and Mode 2 on agility. Mode 1 is sequential, emphasising predictability and accuracy when problems are well understood. Mode 2, by contrast, is agile and exploratory, aimed at solving new problems, driving the business and delivering results.

            With digital transformation, IT delivery is not just about software and applications but also about business needs, customisation, scalability and efficiency. Bimodal IT helps in the adoption of new technology while keeping the space for traditional development open. According to a 2016 Gartner survey, 40% of CIOs are on the bimodal journey, with the majority of the remainder planning to follow in the next three years.


            Insurance companies can embrace digitalization through Bimodal IT

            Insurance companies are in great need of agility and speedier time to market because of evolving customer demands, the rise of FinTechs and regulatory requirements. According to Insurance Hub, the fundamental question is: how do you reduce the complexity of a legacy IT landscape while promoting the development of new and innovative products and processes?

            In this context, Bimodal IT is the most promising approach. Some benefits of the bimodal approach are:

            1. Speed and flexibility – while ensuring efficient and safe operation of existing systems: new IT requirements can be implemented quickly and flexibly without replacing existing legacy systems. An integration layer synchronizes the ‘new’ and the ‘old’ world and makes functionality from existing systems available to the new applications through defined interfaces.
            2. Cultural transformation: existing systems can be operated as usual, but the development of new, innovative products needs new, agile approaches.
            3. Innovation at the interface to customers and partners: through the iterative character of agile methods, products can be developed within short cycles, validated, and adjusted based on customer feedback. As the insurance sector is often characterized by outdated system landscapes, Bimodal IT can be a crucial enabler in meeting the industry’s challenges.

            Read more in Insurance Hub


            Building For A Digital Future With Bimodal IT

            According to Digitalist Magazine, IT’s highest priority for decades has been to enhance control over the business’s systems of record – but this is no longer enough. Organizations must grow incrementally and exponentially, sometimes taking quantum leaps to get ahead of the market.

            For that reason, business leaders must shift from a traditional IT strategy to a bimodal IT approach that differentiates the business. Mode 1 systems help guide and steer the business, while Mode 2 catalyzes innovation by leveraging new sources of data, leading-edge technologies such as AI and machine learning, and massive compute power and storage functionality at scale.

            With a bimodal IT strategy, CIOs can capture data from IoT sensors, drones, devices, and new sources to come. Using powerful in-memory compute technology and a modern, data-based infrastructure, we can rapidly process that data and share it with mode 1 systems to create a more complete view of the business, its opportunities, and potential disruptors.

            There is no need to replace the technology that is working, however. Most companies will be able to retain their existing mode 1 systems. Teams can stand up new mode 2 systems and develop integrations between mode 1 and 2, allowing them to connect these systems to new sources of data.

            Read more in Digitalist Magazine



            We are ranked #2 globally in Gartner’s capability assessment for Mode 2 use cases such as supporting agile applications and cloud-native transformation. We can help you move away from legacy applications and update your workflow with modern, cloud-based applications tailored to solve your most challenging problems, benefiting from scalability and easier, more flexible management.

            How can we help you take your business to the next level? 


              Cloud Computing News #5: AI, IoT and cloud in manufacturing



              This week we focus on how AI, IoT and cloud computing are transforming manufacturing.

              Cloud Computing Will Drive Manufacturing Growth

              Here are 10 ways cloud computing is expected to drive manufacturing growth this year:

              1. Quality gains greater value company-wide when a cloud-based application is used to track, analyse and report quality status by center and product.
              2. Manufacturing cycle times are accelerated through the greater insights available with cloud-based manufacturing intelligence systems.
              3. Insights into overall equipment effectiveness (OEE) get stronger using cloud-based platforms to capture, track and analyse the health of the equipment.
              4. Automating compliance and reporting saves valuable time.
              5. Real-time tracking and traceability become easier to achieve with cloud based applications.
              6. APIs help scale manufacturing strategies faster than ever.
              7. Cloud-based systems enable higher supply chain performance. 
              8. Order cycle times and rework are reduced.
              9. Integrating teams’ functions increases new product introduction success. 
              10. Perfect order performance is tracked across multiple production centers for the first time.


              Machine learning in manufacturing

              According to CIO Review, the challenge with machine learning in manufacturing is not always just the machines. Machine learning in IoT has focused on optimizing at the machine level, but to unlock the true potential of machine learning, it is now time for manufacturers to start looking at network-wide efficiency.

              By opening up the entire network’s worth of data to these network-based algorithms, we can unlock countless previously unattainable opportunities:

              1. With the move to network-based machine learning algorithms, engineers will have the ability to determine the optimal workflow based on the next stage of the manufacturing process.
              2. Machine-learning algorithms can reduce labor costs and improve the work-life balance of plant employees. 
              3. Manufacturers will be able to more effectively move to a multi-modal facility production model where the capacity of each plant is optimized to increase the efficiency of the entire network.
              4. By sharing data across the network, manufacturing plants can optimize capacity.
              5. In the future, the algorithms will be able to provide the ability to schedule for purpose to optimize cost and delivery and to meet the demand.

              Read more in CIO Review

              Introducing IOT into manufacturing

              According to Global Manufacturing, IoT offers manufacturers many potential benefits in product innovation, but it also brings challenges, particularly around the increased dependency on software:

              1. Compliance: manufacturers developing IoT-based products must demonstrate compliance due to critical safety and security demands. To do this, development organisations must be able to trace, and even keep an audit trail of, all the changes involved in a product lifecycle.
              2. The diversity and number of contributors, who may be spread across different locations or time zones and work with different platforms or systems. Similarly, over-the-air updates exacerbate the need for control and for managing complex dependency issues at scale and over long periods of time.
              3. The need to balance speed to market, innovation and flexibility against the need for reliability, software quality and compliance, all in an environment that is more complex and involves many more components.

              Because of these challenges, an increasing number of manufacturing companies are revising how they approach development projects. More of them are moving away from traditional processes such as Waterfall towards Agile, Continuous Delivery and DevOps, or hybrids of more than one. These new ways of working also help empower internal teams, while simultaneously providing the rigour and control that management requires.

              In addition to new methodologies, this change requires the right supporting tools. Many existing tools may no longer be fit for purpose, though equally many have evolved to meet the specific requirements of IoT. Putting the right foundation of tools, methodologies and corporate thinking in place is essential to success.

              Read more in Global Manufacturing

              Data driven solutions and devops at Nordcloud

              Our data-driven solutions and DevOps will make an impact on your business, giving you better control and valuable business insight through IoT, modern data platforms and advanced analytics based on machine learning. How can we help you take your business to the next level?
