SaaS Business Model and Public Cloud are a Winning Combination for ISVs

CATEGORIES

Blog

➡️Ready to join the conversation? Check out the full workshop agenda & sign up! ⬅️

With the seemingly unstoppable growth of cloud computing, and the rising trend of subscription-based services, many large organisations are committed to purchasing software as a service (SaaS) rather than buying and hosting software internally.

The already crowded software market is evolving, and industry researchers and commentators are taking note.

Gartner has for a long time asserted that “by 2020, all new entrants and 80% of historical vendors will offer subscription-based business models”.

For independent software vendors (ISVs) that built their business around the traditional model of selling licenses and maintenance agreements, moving to SaaS involves changes in everything from their business model, to their development strategies and their own IT requirements.

Cloud based models allow ISVs to focus on their core goals of developing and delivering applications and improving their customer experience.

Challenges

SaaS turns the traditional model of software delivery on its head. Rather than purchasing licenses, paying an annual maintenance fee for upgrades and support and running applications in-house, SaaS allows organisations to buy only the number of “seats” they require at any time.

This is not only less expensive than the traditional license model, but it also allows them to reduce or increase their software purchases as their needs fluctuate.
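To make the difference concrete, here is a minimal sketch of the two pricing models; all prices and seat counts below are hypothetical illustrations, not real licence figures:

```python
def perpetual_cost(seats_bought, license_price, maintenance_rate, years):
    """Traditional model: up-front licences plus an annual maintenance fee."""
    licences = seats_bought * license_price
    return licences + licences * maintenance_rate * years

def saas_cost(seats_per_year, price_per_seat_year):
    """SaaS model: pay only for the seats actually used in each year."""
    return sum(seats * price_per_seat_year for seats in seats_per_year)

# A team that needs 100 seats now but shrinks to 60 over three years:
traditional = perpetual_cost(100, license_price=1000, maintenance_rate=0.2, years=3)
subscription = saas_cost([100, 80, 60], price_per_seat_year=400)
print(traditional, subscription)  # 160000.0 96000
```

Because the subscription tracks actual usage, the shrinking team stops paying for seats it no longer needs, which the perpetual model cannot do.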

However, SaaS requires ISVs to transform from software developers into service providers. From an operational perspective, this requires new capabilities, such as meeting service level agreements, establishing real-time usage monitoring and billing capabilities, and meeting strict security requirements.

The robust infrastructure required to provide SaaS services 24×7 requires a substantial investment.

The business challenges are even greater, ranging from the dramatically lower margins provided by SaaS, to changes in cash flow and pricing models, to requirements for customer support.

Faced with all these challenges, and with no standard pricing models to follow, it may at first seem too daunting to embark on this journey. However, your competition may not feel the same way.

Opportunities

The SaaS model is creating new opportunities for both ISVs and their customers.

Consumption-based charging models offer a low cost of entry and a low cost of software, so clients can experiment with applications that optimise business processes and drive higher efficiency, productivity and growth.

By offloading tasks like capacity management, infrastructure budget management and platform availability to a cloud partner, ISVs can focus on their core goals of developing and delivering applications and improving the customer experience. Importantly, these infrastructure costs can be married to usage and revenue for the ISV.

Potentially other tasks can be offloaded too – ISVs working with a Managed Service Provider can also offload tasks such as patching, replication, redundancy and security. With the right partner the ISV can deliver agility to the DevOps cycle and then rely on the MSP to implement change control, security or compliance enhancements, business continuity and a robust availability and performance SLA for the production applications.


Is it right for your software business?

The combination of opportunities presented by cloud and SaaS business models has expanded the options available to ISVs for software development and delivery and, in turn, provided a greater number of options and better-value solutions for end users. The cloud is reducing barriers to entry for new software businesses and allowing existing ISVs to be more agile, customer-responsive and innovative.

Both the customers of these solutions and the ISVs themselves stand to gain considerable benefits from transitioning to the cloud and taking advantage of cloud infrastructure and managed services, as long as due diligence is undertaken in this transition.

Nordcloud have helped many ISVs to leverage cloud technologies to effectively transition their business from that of a traditional software vendor to a SaaS provider, and are hosting a series of workshops to share our experiences and help you to decide when (or indeed, whether) to embark on this modernisation journey.

In the workshops, we will explore the business and technology challenges for ISVs moving to a SaaS model and highlight how effective use of cloud technologies and expertise can overcome many of them by providing entitlement, analytical, billing/payment and security services.

All ISVs are invited to attend, whether you are considering taking those first steps or are already well on your way and looking for guidance and advice on best practice. Come along and join the conversation.

Dates & locations 

Amsterdam, the Netherlands

12.11.2019, 09:00 – 12:00
21.11.2019, 09:00 – 12:00
17.12.2019, 09:00 – 12:00

Microsoft Nederland

Read more and sign up (in Dutch)

Utrecht, the Netherlands

3.12.2019, 09:00 – 12:00

REGUS Oorsprongpark

Read more and sign up (in Dutch)

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.









    Benefits and risks of lift & shift migration to public cloud


    Lift & shift is a common option for moving on-premises apps to the cloud while avoiding application re-design. The aim of lift & shift is to provision, import, and deploy applications and infrastructure resources to match the existing on-premises architecture without modification. Our customers choose to lift and shift in order to reduce on-premises infrastructure costs and then re-architect applications once they are in the cloud.

    A typical example of lift & shift is copying virtual machines (containing applications and data) and storage files (just data) across the internet into a pre-deployed target public cloud account. Although lift and shift can be done manually, the process can and should be automated with tools such as AWS Server Migration Service.

     

    Benefits of Lift and shift cloud migration

    While lift and shift is not the only way of migrating to the cloud, it can be the fastest and cheapest migration method. Compared to replatforming and refactoring, the cost, effort and complexity are limited. Some of our customers also find that it is easier to re-architect applications once they are running in the public cloud, mostly because in the process of migrating the application, data and traffic, they develop better cloud skills.

    Summary of benefits:

    • Migrate fast to the public cloud
    • Reduced risk compared to replatforming and refactoring
    • Lower initial cost compared to replatforming and refactoring
    • Thanks to the many cloud-native and partner tools available, the process can be highly automated, with limited or no downtime

     

    Risks of Lift and shift

    Lift and shift appeals because it is the easiest way of migrating to the public cloud, but it isn’t without its risks and opportunity costs.

    The most typical challenge we see with lift and shift is that existing applications are not properly resized for the public cloud, as outside the cloud applications are often over-provisioned for peak load. In the worst case, the application architecture is not cloud-ready or cloud-friendly, resulting in degraded performance or operational issues.
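    To illustrate the resizing point, here is a minimal rightsizing sketch: instead of carrying over capacity sized for the absolute peak, size for a high percentile of observed load plus headroom. The percentile, headroom and usage figures below are illustrative assumptions, not a sizing recommendation:

```python
def rightsize(cpu_samples, headroom=0.2):
    """Capacity (in vCPUs) covering the 95th-percentile observed load plus headroom."""
    ordered = sorted(cpu_samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return round(p95 * (1 + headroom), 2)

# Hourly vCPU usage of a workload that was provisioned on-premises with 16 vCPUs:
samples = [2, 3, 2, 4, 3, 5, 4, 6, 5, 4, 3, 8]
print(rightsize(samples))  # 7.2 -- far below the 16 vCPUs a naive lift & shift carries over
```

    The same idea underlies the cloud providers' own rightsizing recommendations: pay for what the workload actually uses, not for the on-premises peak allocation.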

    Secondly, just copying applications and data without understanding what’s what means everything is pulled into the public cloud, including insecure configurations and malware. A lift and shift project should therefore not be conducted without effective security governance, risk management and compliance with the company’s security policy.

    Actual costs may also exceed estimates, whether because of inaccurate resource estimates, provider price changes, or poor performance resulting in the need for more resources.

    Thirdly, applications that are lifted and shifted to the public cloud may not be able to take full advantage of the cost efficiencies of native cloud features such as autoscaling and ephemeral computing.

    Summary of risks:

    • Inefficient and expensive cloud consumption.
    • Lack of cloud knowledge, leading to inefficient work or data leakage through incorrect operation.
    • Poor cost and workload estimation due to a lack of cloud skills or understanding of the application data.

     

    Nordcloud has proven services that can effectively mitigate lift and shift risks. Our post-migration capacity optimisation service reduces cloud spend to an optimal level, and our training and advisory services can train your IT organisation and create an operating model that leverages cloud benefits. We believe that lift and shift is a valid option for IT organisations that want to progress fast.

    Read about Nordcloud Migration Factory here










      Migrate to Azure & get 3 years support for SQL and Windows Server 2008. Check our special offer!

      CATEGORIES

      Microsoft Azure

      Microsoft Azure extends support for both SQL Server 2008 and Windows Server 2008, which are quickly approaching their end of support:

      • Extended support for SQL Server 2008 and 2008 R2 will end on July 9, 2019.
      • Extended support for Windows Server 2008 and 2008 R2 will end on January 14, 2020.

      Extended security updates will be available for free in Azure for the 2008 and 2008 R2 versions of SQL Server and Windows Server, helping to secure your workloads for three more years after the end-of-support deadline. This means that Azure is an ideal place for older SQL and Windows servers.

       

      Here is how to make a business case out of migrating to Azure:

      • Azure costs: using reserved instances, hybrid benefits, rightsizing
      • Extended support cost if left as-is (20% of server license costs per year)
      • Free extended security updates
      • Azure also provides other excellent services for older servers, such as network micro-segmentation, automatic OS patches etc.
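      As a rough sketch of that business-case arithmetic (all figures below are hypothetical; actual Azure prices vary by region, instance size and reservation term):

```python
def extended_support_cost(license_cost, years=3, rate=0.2):
    """Extended security updates if left as-is: ~20% of server licence cost per year."""
    return license_cost * rate * years

def azure_compute_cost(pay_as_you_go_per_year, reserved_discount=0.4, years=3):
    """Reserved instances and hybrid benefits can cut pay-as-you-go cost substantially."""
    return pay_as_you_go_per_year * (1 - reserved_discount) * years

print(extended_support_cost(license_cost=50000))         # 30000.0 if you stay on-premises
print(azure_compute_cost(pay_as_you_go_per_year=12000))  # 21600.0, with security updates free in Azure
```

      With numbers like these, the free extended security updates alone can tip the balance towards migrating rather than paying for extended support on-premises.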

       

      Migrate to Azure with Nordcloud, a Microsoft Azure Expert Managed Services Provider

      Companies need to evaluate the workloads they have running and consider their options, to avoid infrastructure and applications going unprotected and dramatically increasing the risk to their IT operations. If you migrate to Azure, Microsoft will provide extended security updates for Windows Server 2008 for an additional three years. That means protection for your workloads, plus the ability to access the benefits of Azure such as flexibility, cost reductions, reduced time to market and access to new services (such as PaaS, AI and ML).

      Nordcloud, a Microsoft Azure Expert MSP, can help you to evaluate your workloads and ensure that you are selecting the best and most cost-effective migration process for your at-risk workloads.

       

      We Discover – Migrate – Optimise – Manage

      • Discover: In-depth discovery of 25 VMware hosts, enabling Nordcloud to provide a cost analysis for running in Azure and a migration plan.
      • Migrate: Secure deployment of the required Azure infrastructure to ‘land’ your virtual machines. Once you have confirmed that everything is as it should be, Nordcloud conducts the final migration, allowing you to benefit from the security updates through to 2023.
      • Optimise: Once your applications are running smoothly in Azure, Nordcloud will start an optimisation process to ensure you are minimising your costs when running in Azure.
      • Manage: Nordcloud’s Managed Cloud Services team will manage the Azure infrastructure on your behalf, quickly resolving or escalating any issues that are detected.

      Get our special offer with project details here

      DOWNLOAD OUR OFFER – Migrate to Azure

      You can also read about our migration services here or  contact us directly here.










        Security in the Public Cloud: Finding what is right for you


        Security concerns in the cloud pop up every now and then, especially when there has been a public breach of some sort. What many businesses still don’t realise is that public cloud security is a responsibility shared between the cloud provider and the customer. Unfortunately, 99% of these breaches are down to the customer, not the cloud provider. Some of these cases are due simply to the customer not having the competence to build a secure service in the public cloud.

        Cloud comes in many shapes and sizes

        • Public cloud platforms like AWS, Azure and GCP
        • Medium cloud players
        • Local hosting provider offerings
        • SaaS providers of variable capabilities and services: From Office 365 to Dropbox

        If the alternatives are your own data centre, a local provider’s data centre, or the public cloud, it’s worth building a pros-and-cons table and making a selection based on that.

        Own data centre
        • Most responsibility
        • Competence varies
        • Variable processes
        • Large costs
        • Most choice in tech

        Local hosting provider
        • A lot of responsibility
        • Competence varies
        • Variable processes
        • Large costs
        • Some choice in tech

        Public cloud
        • Least responsibility
        • Proven competence & investment
        • Fully automated with APIs
        • Consumption-based costs
        • Least choice in tech

        Lack of competence is typical when a business ventures into the public cloud on their own, without a partner with expertise. Luckily:

        • Nordcloud has the most relevant certifications on all of the major cloud platforms
        • Nordcloud is ISO/IEC 27001 certified, ensuring the security of our own services is appropriately addressed
        • Typically Nordcloud builds and operates customer environments to meet customer policies, guidelines and requirements

        Security responsibilities shift towards the platform provider as workloads move up the stack from IaaS through PaaS to SaaS. All major public cloud platform providers have proven security practices, with many certifications such as:

        • ISO/IEC 27001:2013, 27013, 27017:2015
        • PCI-DSS
        • SOC 1-3
        • FIPS 140-2
        • HIPAA
        • NIST

        Gain the full benefits of the public cloud

        The further cloud capacity shifts towards the SaaS end of the offering, the less the business needs to build the controls on its own. However, existing applications were not built for the public cloud, so if an application is migrated to the public cloud as it is, similar controls need to be migrated too. Here’s another opportunity to build a pros-and-cons table: applications considered for public cloud migration ‘as is’ versus app modernisation.

        ‘As is’ migration
        • Less benefit from the cloud platform
        • IT-driven

        BUT

        • You start the cloud journey early
        • Larger portfolio migration
        • Old infrastructure is decommissioned fast

        Modernise
        • Slower decommissioning
        • Individual modernisations

        BUT

        • You can start your cloud-native journey
        • Use DevOps with improved productivity
        • You get the most benefit from the cloud platform

        Another suggestion is to draw up a priority table of your applications so that you gain the full benefits of the public cloud.
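        Such a priority table can be as simple as a weighted score per application. The criteria, weights and example applications below are purely illustrative assumptions; adapt them to your own portfolio:

```python
def migration_priority(business_value, cloud_readiness, modernisation_effort):
    """Higher score = better early candidate; each input on a 1-5 scale."""
    return business_value * 2 + cloud_readiness - modernisation_effort

apps = {
    "customer-portal": migration_priority(5, 4, 2),  # high value, fairly cloud-ready
    "internal-wiki":   migration_priority(2, 5, 1),  # an easy win
    "legacy-erp":      migration_priority(4, 1, 5),  # valuable but heavy to modernise
}
ranked = sorted(apps, key=apps.get, reverse=True)
print(ranked)  # ['customer-portal', 'internal-wiki', 'legacy-erp']
```

        The exact weights matter less than making the trade-off explicit: easy, valuable workloads move first, while heavy modernisation candidates wait for a dedicated effort.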

        In any case, the baseline security, architecture and cloud platform services need to be created to fulfil the requirements of the company’s security policies, guidelines and instructions. For example:

        • Appropriate access controls to data
        • Appropriate encryption controls based on policy/guideline statements matching the classification
        • Appropriate baseline security services, such as application level firewalls and intrusion detection and prevention services
        • Security Information and Event Management solution (SIEM)
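        For instance, the encryption requirement can be expressed as a simple policy lookup keyed on the data classification. The classes and controls below are hypothetical examples, not a standard:

```python
# Minimum encryption controls per data classification (illustrative only).
BASELINE = {
    "public":       {"at_rest": False, "in_transit": True},
    "internal":     {"at_rest": True,  "in_transit": True},
    "confidential": {"at_rest": True,  "in_transit": True, "customer_managed_keys": True},
}

def required_controls(classification):
    """Look up the controls a workload must implement for its data class."""
    return BASELINE[classification.lower()]

print(required_controls("Confidential"))
```

        Encoding the policy this way makes it easy to check every workload against the same baseline, whichever cloud platform it lands on.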

        The areas listed above should be placed into a roadmap or project with strong ownership to ensure that the platform evolves to meet the demands of applications at various stages in their cloud journey. Once the organisation and governance are in place, the application and cloud platform roadmaps can be aligned for smooth sailing into the cloud where appropriate, and the cloud-native security controls and services are available. Nordcloud’s cloud experts would be able to help you and your business out here.

        Find out how Nordcloud helped Unidays become more confident in the security and scalability of their platform.










          Persisting Docker Volumes in ECS using EFS


          Last week we faced a new challenge to persist our Docker Volume using EFS. Sounds easy, right? Well, it turned out to be a bit more challenging than expected and we were only able to find a few tips here and there. That is why we wrote this post so others may succeed faster.

          Before digging into the solution, let’s take a minute to describe our context to elaborate a bit more on the challenge.
          First of all, we believe in Infrastructure as Code, and so we use CloudFormation to be able to recreate our environments. Luckily, Amazon provides a working sample, and we got EFS working quite easily. The next part was to get Docker to use a volume from EFS. We got lucky a second time, as Amazon provides another working sample.

          We managed to combine these resources and everything looked alright, but a closer look revealed that the changes did not persist. We found one explanation for why it didn’t work: EFS is mounted after the Docker daemon starts, so the volume mounts an empty, non-existent directory. In order to fix that we did two things: first we orchestrated the setup, and then we added EFS to fstab so that it auto-mounts on reboot.

          The solution looks a bit like the following:

            
          EcsCluster:
              Type: AWS::ECS::Cluster
              Properties: {}
            LaunchConfiguration:
              Type: AWS::AutoScaling::LaunchConfiguration
              Metadata:
                AWS::CloudFormation::Init:
                  configSets:
                    MountConfig:
                    - setup
                    - mount
                  setup:
                    packages:
                      yum:
                        nfs-utils: []
                    files:
                      "/home/ec2-user/post_nfsstat":
                        content: !Sub |
                          #!/bin/bash
          
                          INPUT="$(cat)"
                          CW_JSON_OPEN='{ "Namespace": "EFS", "MetricData": [ '
                          CW_JSON_CLOSE=' ] }'
                          CW_JSON_METRIC=''
                          METRIC_COUNTER=0
          
                          for COL in 1 2 3 4 5 6; do
          
                           COUNTER=0
                           METRIC_FIELD=$COL
                           DATA_FIELD=$(($COL+($COL-1)))
          
                           while read line; do
                             if [[ COUNTER -gt 0 ]]; then
          
                               LINE=`echo $line | tr -s ' ' `
                               AWS_COMMAND="aws cloudwatch put-metric-data --region ${AWS::Region}"
                               MOD=$(( $COUNTER % 2))
          
                               if [ $MOD -eq 1 ]; then
                                 METRIC_NAME=`echo $LINE | cut -d ' ' -f $METRIC_FIELD`
                               else
                                 METRIC_VALUE=`echo $LINE | cut -d ' ' -f $DATA_FIELD`
                               fi
          
                               if [[ -n "$METRIC_NAME" && -n "$METRIC_VALUE" ]]; then
                                 INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
                                 CW_JSON_METRIC="$CW_JSON_METRIC { \"MetricName\": \"$METRIC_NAME\", \"Dimensions\": [{\"Name\": \"InstanceId\", \"Value\": \"$INSTANCE_ID\"} ], \"Value\": $METRIC_VALUE },"
                                 unset METRIC_NAME
                                 unset METRIC_VALUE
          
                                 METRIC_COUNTER=$((METRIC_COUNTER+1))
                                 if [ $METRIC_COUNTER -eq 20 ]; then
                                   # 20 is max metric collection size, so we have to submit here
                                   aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"
          
                                   # reset
                                   METRIC_COUNTER=0
                                   CW_JSON_METRIC=''
                                 fi
                               fi
          
          
          
                               COUNTER=$((COUNTER+1))
                             fi
          
                             if [[ "$line" == "Client nfs v4:" ]]; then
                               # the next line is the good stuff
                               COUNTER=$((COUNTER+1))
                             fi
                           done <<< "$INPUT"
                          done
          
                          # submit whatever is left
                          aws cloudwatch put-metric-data --region ${AWS::Region} --cli-input-json "`echo $CW_JSON_OPEN ${!CW_JSON_METRIC%?} $CW_JSON_CLOSE`"
                        mode: '000755'
                        owner: ec2-user
                        group: ec2-user
                      "/home/ec2-user/crontab":
                        content: "* * * * * /usr/sbin/nfsstat | /home/ec2-user/post_nfsstat\n"
                        owner: ec2-user
                        group: ec2-user
                    commands:
                      01_createdir:
                        command: !Sub "mkdir -p /${MountPoint}"
                  mount:
                    commands:
                      01_mount:
                        command:
                          Fn::Join:
                            - ""
                            - - "mount -t nfs4 -o nfsvers=4.1 "
                              - Fn::ImportValue:
                                  Ref: FileSystem
                              - ".efs."
                              - Ref: AWS::Region
                              - ".amazonaws.com:/ /"
                              - Ref: MountPoint
                      02_fstab:
                        command:
                          Fn::Join:
                            - ""
                            - - "echo \""
                              - Fn::ImportValue:
                                  Ref: FileSystem
                              - ".efs."
                              - Ref: AWS::Region
                              - ".amazonaws.com:/ /"
                              - Ref: MountPoint
                              - " nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0\" >> /etc/fstab"
                      03_permissions:
                        command: !Sub "chown -R ec2-user:ec2-user /${MountPoint}"
                      04_restart_docker_and_ecs:
                        command: !Sub "service docker restart && start ecs"
              Properties:
                AssociatePublicIpAddress: true
                ImageId:
                  Fn::FindInMap:
                  - AWSRegionArch2AMI
                  - Ref: AWS::Region
                  - Fn::FindInMap:
                    - AWSInstanceType2Arch
                    - Ref: InstanceType
                    - Arch
                InstanceType:
                  Ref: InstanceType
                KeyName:
                  Ref: KeyName
                SecurityGroups:
                - Fn::ImportValue:
                    Ref: SecuritygrpEcsAgentPort
                - Ref: InstanceSecurityGroup
                IamInstanceProfile:
                  Ref: CloudWatchPutMetricsInstanceProfile
                UserData:
                  Fn::Base64: !Sub |
                    #!/bin/bash -xe
                    echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
                    yum update -y
                    yum install -y aws-cfn-bootstrap
                    /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource LaunchConfiguration --configsets MountConfig --region ${AWS::Region}
                    crontab /home/ec2-user/crontab
                    /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource AutoScalingGroup --region ${AWS::Region}
              DependsOn:
              - EcsCluster
          
          
          Here is what we did compared to the original AWS provided template:
          1. extracted FileSystem EFS into another CF template and exported the EFS identifier so that we can use ImportValue
          2. added -p to the mkdir command just in case
          3. enhanced mount to use imported filesystem reference
          4. added mount to fstab so that we auto-mount on reboot
          5. recursively changed the EFS mount ownership
          6. restarted Docker daemon to include mounted EFS and started ECS as it does not automatically restart when the Docker daemon restarts
          7. added ECS cluster info to ECS configuration
          8. added ECS agent security group so that port 51678 which the ECS agent uses is open
          9. added yum update just in case
          10. included launch configuration into auto scaling group for the ECS cluster and added depends on ECS cluster

          We were a bit surprised that EFS does not require an additional volume driver to function. It appears to work out-of-the-box and turned out to be quite straightforward. Thank you for reading and enjoy using EFS as a means to persist your Docker Volumes in your ECS cluster!
