All at sea in cloud migration? These 7 considerations might just save you

1) We moved all our Servers into the cloud, nothing appears to have changed, where’s the benefit?

The cloud won’t innovate on your behalf. If you have no plan for how to modernise your IT estate as part of a transition into cloud, you’re going to miss out on the key benefits. This is a common mistake companies make when using the lift-and-shift model. There is no doubt lift-and-shift is a quick and valid way to move workloads into public cloud, especially when working to strict and rigid deadlines – an agreed datacentre exit is a good example.

Your end goal should not be to recreate traditional datacentres in the cloud – why go through all that effort and expense to build something you already have? The end of a successful lift-and-shift migration should be the catalyst for becoming truly cloud-native, and that’s where the real benefit can be unlocked.

2) This is expensive, I don’t understand, I thought cloud would save us money.

A very common misconception is that cloud is simply cheaper in all respects. This is false. Cloud can be significantly cheaper, but the execution is critical. If you treat your public cloud platform with the same approach you’ve always taken to IT infrastructure, cloud becomes costly very quickly. Here are some useful tips on the types of changes that can prevent your cloud costs spiralling out of control:

  • Identify which workloads can be refactored to be truly cloud native. Traditional IaaS workloads can often be transitioned into PaaS and SaaS offerings that unlock large cost savings. Can websites previously running on virtual machines be reworked to run in App Services? Can Microsoft SQL Databases be migrated into Azure SQL or SQL Managed Instances?
  • Use the built-in tools that cloud providers offer to constantly monitor your workloads. Tools such as Azure Advisor provide free recommendations on things such as cost, security and performance to help you better understand your cloud workloads. Virtual Machines running in Azure are often over allocated resources that can be “right-sized” to a different SKU to save costs, for example.
  • Use tagging to attribute resources to specific cost centres – you’ll be much more popular with the finance department if you do (see the sketch below). All too often companies have deployed resources into the cloud, costs are starting to grow, and they have no way of telling which costs belong to which work streams.
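
As a minimal sketch of the last two points (the resource group name rg-webshop-prod and the tag values are only examples), Advisor recommendations and cost-centre tags can both be handled from the Azure CLI:

# List Azure Advisor cost recommendations for the current subscription
az advisor recommendation list --category Cost --output table

# Tag a resource group so its costs can be attributed to a cost centre and work stream
az group update --name rg-webshop-prod --set tags.costCentre=Finance tags.workStream=Webshop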

3) We don’t need our internal IT staff now, right?

Incorrect. Any migration to public cloud can create concern for IT departments on both sides of the argument. IT leadership can sometimes incorrectly believe a mass culling of IT staff can begin, whilst other members of the team will often believe the cloud is going to render them redundant.

The fact is, any successful cloud adoption strategy should place the re-skilling of IT staff at the forefront, and it’s vital this process begins in advance of any migration tasks taking place. Whole IT departments must buy into the process of upskilling and modernising staff to be ready for cloud. Public cloud providers offer free credit to start using their platforms, free (and paid-for) training events, and certification programmes to drive this process forward. Mass skill gaps within IT teams are good for nobody.

There is no doubt that more traditional IT roles must evolve as part of the process; age-old tasks such as dealing with server hardware issues will of course disappear, but that doesn’t mean IT roles as a whole should.

4) After re-platforming, it turns out our legacy application doesn’t work in the cloud and can’t be refactored to be Cloud native.

This is becoming far more common: companies embark on a journey to migrate legacy applications into the cloud without fully understanding the implications, or whether it’s technically possible. Not every application can be moved into the cloud; sometimes legacy applications have to remain in situ, it’s simply unavoidable. Selecting the right workloads to migrate into cloud is vital. A failed migration is costly, needlessly drains the time of technical resources and makes it much more difficult to secure buy-in from leadership teams to move additional applications later down the line.

It’s important to make use of the application assessment toolkits available to you. Public cloud providers supply tools such as Azure Migrate to help assess the feasibility of your migration and locate any bumps in the road you may encounter along the way. Often an estimated cost is provided to help shape your decision on whether migrating is worthwhile and cost-effective. One failed migration doesn’t mean that public cloud isn’t for you; it more likely means sufficient pre-planning didn’t take place on your first attempt.

5) Our Cloud migration seems to be taking ages, I thought this was simple.

A common misconception is that cloud is easy, whether migrating workloads or setting up from scratch. The process can’t be that difficult, can it? In reality, a cloud migration is no small undertaking, and many companies underestimate the volume of work needed to make a success of their cloud journey.

Whether or not your transition into Azure can be considered a success is not decided by how long the process takes, but key stakeholders and interested parties will always want an indication of how long it’s likely to take. Below is a list of key points for managing the expectations surrounding your migration and keeping the process moving in a positive direction:

  • Split the migration into smaller, more manageable work streams: application by application, or in small groups of applications. It’s very easy to lose track of project status when moving too many workloads at the same time, and a large initial upturn in Azure spend on an unfinished project can also raise awkward questions from your finance teams.
  • Set realistic goals for your cloud migration, create milestones for the project based on these goals. Using tools such as Boards within Azure DevOps to monitor outstanding tasks and backlogs is a great way to monitor your own progress but also demo key wins and successes to interested parties.
  • Use the tools available to you to map out a realistic plan. Tools such as Azure Migrate should be used to sanity-check your migration plan, ensuring that any workload you intend to move is suitable before you start.

6) We constantly hear about DevOps and Automation, but I don’t see where it fits for us.

Whenever cloud is mentioned, DevOps and automation follow not far behind. Companies have often heard how automation can streamline processes, save money and remove tedious manual tasks from day-to-day duties, but they don’t fully understand how it fits into their business.

Automation is a major step forward in modernising your IT estate and becoming truly cloud native.

  • Modernise your approach to IT infrastructure, going further than just migrating to cloud-native services. Do you need to run your workloads 24 hours a day? Why pay for resources when nobody is working to consume them? Can servers be switched on in the morning and turned off at night? Use Azure Automation accounts to handle this with no human intervention (see the sketch after this list).
  • Take automation one step further by utilising features within Azure DevOps. Do your non-production environments need to be available constantly? Make use of CI/CD pipelines, with infrastructure-as-code, to remove whole environments when they are not required and spin them up when they are.
  • Embrace DevOps, modernise working practices. Create cross-functional, multi-skilled teams and remove the frustrations caused by the historic Developer vs IT Operations culture.
  • Gain instant access to new regions and markets as they become a requirement. Using repeatable infrastructure-as-code templates and CI/CD pipelines, you can have new regions online in hours, not weeks or months.
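
As a minimal sketch of the start/stop idea above (it assumes virtual machines carry a hypothetical autoshutdown=true tag; the schedule itself would come from an Azure Automation runbook or similar), an evening job could deallocate the tagged machines, and a mirrored job using az vm start could bring them back in the morning:

# Deallocate every VM tagged for automatic shutdown so compute is no longer billed overnight
for id in $(az vm list --query "[?tags.autoshutdown=='true'].id" -o tsv); do
  az vm deallocate --ids "$id" --no-wait
done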

7) Everything works, but is it secure? How do I tell?

One major concern for companies thinking about cloud is how they can tell whether it is truly secure. Often they already have production workloads online, but no real grasp of whether they are secured correctly.

Security is not just about malicious threats from external sources. These are valid concerns, but it is also about the guardrails that can be put in place to protect against human error and lack of understanding. If your cloud platform is not governed correctly, it can unwittingly allow instant, privileged access to business-critical resources for staff members who shouldn’t have it.

  • Create a well-governed cloud. Working with the principle of least privilege, use role-based access control to grant users access to exactly what they require, and no more (see the sketch after this list). Custom roles can be created and tailored to suit business needs; these can be granular to the extent that access is granted at the level of specific resources, resource groups or subscriptions.
  • Familiarise your Security Operations team with the new threats and issues that cloud brings. Understand that cloud comes with a shared responsibility model, there are parts that are fully within your administrative control, but certain services and platform offerings require a greater level of trust in cloud providers to keep you secure.
  • Utilise the tools that can highlight security misconfigurations within your cloud platform. Azure Security Centre and Azure Advisor constantly alert on security issues and often suggest how they can be resolved. Regularly monitoring and reviewing these alerts is a very useful way to stay up to date with the status of your cloud security.
  • Finally, don’t forget that traditional security best practices still apply. The cloud brings new challenges, but traditional methods for securing your IT estate are still very valid. 
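
As a minimal sketch of least-privilege, role-based access (the group name and scope below are hypothetical), access can be granted at exactly the resource-group level rather than across the whole subscription:

# Grant a team read-only access, scoped to a single resource group only
az role assignment create \
  --assignee "web-team@example.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-webshop-prod"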

So what’s next in your migration?

Book a free advisory session so we can help you maximise the value of your cloud migration

Stop murdering “Agile”, and be agile instead

CATEGORIES

AgileTech

“Agile Macabre” on techcamp.hamburg, Apr-2020

We keep hearing “Agile is Dead” – sceptics joking, shrugging it off, having had enough of the contentless propaganda of the agilists. When the #tchh guys reached out to us to give a talk, we decided to tickle this topic provocatively. You agree “Agile is Dead”? It’s because you’re murdering it.

I’ve been leading projects since 2002, becoming a full-blown agilist around 2011. Not long after that, one of the creators of the Agile Manifesto, Dave Thomas, proclaimed “Agile is Dead” in his famous 2014 blog post and talk. Ever since, I thought, everybody’s been doing a Danse Macabre around agility, so let’s use this ghastly theme and its legacy to explain things.

The Dance of Death concerns a few very spiritual topics, such as prayer to free the dead from sin, resurrection of the dead and even creatio ex nihilo (in the sense of meaning of existence). So, to translate this death theme to our ways of working, perhaps we should start with the last: to explain why we are here.

Any normal business wants to disrupt their way of working in order to achieve one or more of the classic business goals: 

  • reduce time to market and/or operational costs
  • increase performance and/or transparency. 

If you cannot identify your goal among these four, you should immediately stop your transformation, because you’re doing it just for the sake of change, without a tangible business purpose. And then we cannot talk about agility, as you’re not a customer in need.

However, if any of the above four aspects applies to your business, you should then know why you want “Agile”. (By the way, the capitalization of this word will pretty soon go away when you start to practice. It’s an adjective, and as such spelled lowercase.) So, why? Ideally,

  1. not because you don’t know what you want, but because you like to change directions, or
  2. not because you don’t want to plan, but because you want to have metrics to measure your progress.

But if you’re so clear in what you want and why you want it, how come you’re still in agony? As I experienced, there are four basic reasons why teams struggle with agility.

  1. OTIF – Wanting It All. Failing to prioritize and accept/measure throughput (the concept of MVP, duh?)
  2. Super – Wanting It Best. Not allowing small, fast failures that help understand the whole product better
  3. Random – No Respect to Plans. Ignoring agreements, changing requirements often
  4. Blind – No Reflection. Showing no remorse, marching on without asking what went wrong and how to improve

So to be fair, poor agility is agonizing because you’re slaughtering it. Stop the torture, and you’ll be friends.

How to be friends? A lot of posts are talking about rules and laws, and we seem to like these when they come in threes. So, my “3 Laws of Agility”?

  1. Product Vision. Try to understand what you build, and instead of project planning, focus on Sprint outcomes. (Iteration, the very core of agility.) Don’t try to be budget (as in estimations, capacity or velocity) driven, rather emphasise value maximisation. And don’t be so service or tech oriented: develop a product mindset, understand what you will provide your customers with.
  2. Tech Enablement. Without this your “agile” project is as good as dead. Really. When you think of application modernisation, you want to be well architected, set up development processes, focus on testing, automate the hell out of it and embrace nativity in the cloud. I promise you, if you’re offline and manual, collaboration, monitoring and measuring will suck. You want working software in production, ASAP. Well, here you go: no stress about environments and deployments, just fun and results – if the tech is right.
  3. Team Discourse. You want results, habitually. First, you should focus on doability: small bits that give the sense of achievement and winning. This way you build trust and camaraderie, and your team will love working for you. Second, embrace iterativity. Continuous discourse and cycled structure will give you the same sense as above, but also you have a much easier time reporting and reflecting.

Finally, about a tech camp being forced online: it’s not a bad thing after all, and it actually fits the tech enablement point above. Once we’re forced to be locked away, we start talking to each other more, using smarter collaborative techniques: fewer words, more meaning. It’s a resurrection in a way.

So, if you want to resurrect it, accept that agile is an adjective. Just as dead is. Both can be made nouns, but that’s very scary and contagious. Don’t be a zombie, choose life.

Getting started with ARO – Application deployment

This is the fourth blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

In the first blog post I was comparing OpenShift with Kubernetes and showing the benefits of enterprise grade solutions in container orchestration.

The second blog post introduced some of the OpenShift basic concepts and architecture components.

The third blog post was about how to deploy the ARO solution in Azure.

This last blog post in the series is about how to use the ARO/OpenShift solution to host applications.

In particular, I will walk through a 3-tier application stack deployment. The app stack has a database, an API and a web tier, running as different OpenShift apps.

After the successful deployment of an Azure Red Hat OpenShift cluster, developers, engineers, DevOps and SRE teams can start to use it easily through the portal or through the oc CLI.

With a configured AAD connection, admins can use their domain credentials to authenticate through the OpenShift API. If you have not yet acquired the API URL, you can do so with the following command and store it in a variable.

apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query apiserverProfile.url -o tsv)

After acquiring the API address the oc command can be used to log in to the cluster as follows.

oc login $apiServer

Authentication required for https://api.zf34w66z.westeurope.aroapp.io:6443 (openshift)
Username: username
Password: pass

After the successful login, the oc command can be used to manage the cluster, similarly to kubectl. First of all, let’s turn on command completion to speed up operational tasks with the cluster.

source <(oc completion bash)

Then for example show the list of nodes with the following familiar command.

oc get nodes
NAME                                          STATUS   ROLES    AGE    VERSION
aroncweutest-487zs-master-0                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-1                   Ready    master   108m   v1.16.2
aroncweutest-487zs-master-2                   Ready    master   108m   v1.16.2
aroncweutest-487zs-worker-westeurope1-dgbrd   Ready    worker   99m    v1.16.2
aroncweutest-487zs-worker-westeurope2-4876d   Ready    worker   98m    v1.16.2
aroncweutest-487zs-worker-westeurope3-vsndp   Ready    worker   99m    v1.16.2

So now that we have our cluster up and running and we are connected to it, let’s create a project and deploy a few workloads.

Use the following command to create a new project.

oc new-project testproject

The command will provide the following output

Now using project "testproject" on server "https://api.zf34w66z.westeurope.aroapp.io:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

We will not follow the recommended commands to do some tests as I assume everybody can copy-paste a few lines. If you are interested in what happens go and try it for yourself.

We are going to use a Microsoft-provided application stack to show how easy it is to deploy workloads to Azure. The application has a MongoDB database, a Node.js API and a web frontend.

For the database we are going to use a template-based deployment approach. Red Hat OpenShift provides a list of templates for different applications. For MongoDB there are two templates: mongodb-ephemeral and mongodb-persistent. The ephemeral version comes with ephemeral storage, meaning that when the container restarts, the data is lost. The persistent version comes with a persistent volume, allowing the container to be restarted or moved between nodes, which makes it a better fit for production workloads.

List the available templates with the following command.

oc get templates -n openshift

The command will list around 125 templates: databases, web servers, APIs and so on.

…
mariadb-ephemeral                               MariaDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mariadb-persistent                              MariaDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mongodb-ephemeral                               MongoDB database service, without persistent storage. For more information ab...   8 (3 generated)   3
mongodb-persistent                              MongoDB database service, with persistent storage. For more information about...   9 (3 generated)   4
mysql-ephemeral                                 MySQL database service, without persistent storage. For more information abou...   8 (3 generated)   3
mysql-persistent                                MySQL database service, with persistent storage. For more information about u...   9 (3 generated)   4
nginx-example                                   An example Nginx HTTP server and a reverse proxy (nginx) application that ser...   10 (3 blank)      5
nodejs-mongo-persistent                         An example Node.js application with a MongoDB database. For more information...    19 (4 blank)      9
nodejs-mongodb-example                          An example Node.js application with a MongoDB database. For more information...    18 (4 blank)      8
…

As mentioned previously, we will use the mongodb-persistent template to deploy the database onto the cluster. The oc process command takes a template as input, populates it with the provided parameters and by default produces JSON output. This output can be piped into the oc create command to create the resources in the project on the fly. If -o yaml is used, the output is YAML instead of JSON.

oc process openshift//mongodb-persistent \
    -p MONGODB_USER=ratingsuser \
    -p MONGODB_PASSWORD=ratingspassword \
    -p MONGODB_DATABASE=ratingsdb \
    -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -

If everything works well a similar output should show up.

secret/mongodb created
service/mongodb created
persistentvolumeclaim/mongodb created
deploymentconfig.apps.openshift.io/mongodb created

After a few minutes execute the following command to list the deployed resources

oc status 
…
svc/mongodb - 172.30.236.243:27017
  dc/mongodb deploys openshift/mongodb:3.6
    deployment #1 deployed 9 minutes ago - 1 pod
…

The output shows that OpenShift used a deploymentconfig and a deployment pod to deploy the MongoDB database; it configured a replication controller with 1 pod and exposed the database as a service, but no cluster-external route has been defined.

In the next step we are going to deploy the API server. For the sake of this demo we are going to use Source-2-Image (S2I) as the build strategy for the API server. The source code is located in a git repository, which needs to be forked first.

In the next step the oc new-app command will be used to build a new image from the source code in the git repository.

oc new-app https://github.com/sl31pn1r/rating-api --strategy=source

During the build, the S2I process identifies the source code as Node.js 10 code.

If we execute oc status again, it will show the newly deployed pod with its build related objects.

oc status
…
svc/rating-api - 172.30.90.52:8080
  dc/rating-api deploys istag/rating-api:latest <-
    bc/rating-api source builds https://github.com/sl31pn1r/rating-api on openshift/nodejs:10-SCL
    deployment #1 deployed 20 seconds ago - 1 pod
…

If you are interested in all the Kubernetes objects deployed so far, execute the oc get all command. It shows, for example, that for this Node.js deployment a build container and a deployment container were used to prepare and deploy the source code into an application container. It also created an imagestream and a buildconfig.

The API needs to know where it can access the MongoDB. This can be configured through an environment variable in the deployment configuration. The environment variable name must be MONGODB_URI and the value needs to be the service FQDN, which is formed as [service name].[project name].svc.cluster.local.
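
One way to set this from the CLI is sketched below; it reuses the credentials and project name from the earlier template step, so adjust the values to match your own deployment. It mirrors the oc set env command used later for the web tier.

# Point the API at the MongoDB service using its in-cluster FQDN
oc set env dc/rating-api \
  MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb.testproject.svc.cluster.local:27017/ratingsdb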

After configuring the environment variable OpenShift will automatically redeploy the API pod with the new configuration.

To verify that the API can access the MongoDB, use the web UI or the oc logs PODNAME command and check the log output for a successful database connection.

If you want to trigger a deployment whenever you change something in your code, a GitHub webhook must be configured. In the first step, get the GitHub secret.

oc get bc/rating-api -o=jsonpath='{.spec.triggers..github.secret}'

Then retrieve the webhook trigger URL.

oc describe bc/rating-api

In your GitHub repository, go to Settings → Webhooks and select Add webhook.

Paste the webhook URL from the previous output, with the secret placeholder replaced by the actual secret value, into the Payload URL field and change the Content type to application/json. Leave the secret field empty on the GitHub page and click Add webhook.

For the web frontend the same Source-2-Image approach can be followed. First, fork the repository into your own GitHub account.

Then use the same oc new-app command to deploy the application from the source code.

oc new-app https://github.com/sl31pn1r/rating-web --strategy=source

After the successful deployment, the web service needs to know where the API server can be found. In the same way we used an environment variable to tell the API server where the database was, we can point the web server to the API service by creating an API environment variable.

oc set env dc rating-web API=http://rating-api:8080

The service is now deployed and configured; the only remaining issue is that it cannot yet be accessed from the outside world. Kubernetes/OpenShift pods and services are by default only accessible from within the cluster, so the web frontend needs to be exposed.

This can be done with a short command.

oc expose svc/rating-web

After the service has been successfully exposed the external route can be queried with the following command.

oc get route rating-web

The command returns the route details, including a host name that can be opened in a web browser.

After the successful setup of the web service, configure the GitHub webhook in the same way you did for the API service.

oc get bc/rating-web -o=jsonpath='{.spec.triggers..github.secret}'
oc describe bc/rating-web

Configure the webhook under your GitHub repo’s settings with the secret and the URL collected from the previous two commands’ output.

As a final step, secure the API service in the cluster by creating a network policy. Network policies allow cluster admins and developers to secure their workloads within the cluster by defining where traffic can flow from and to which services. Use the following definition to create the network policy. This policy will only allow ingress traffic to the API service from the web service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: testproject
spec:
  podSelector:
    matchLabels:
      app: rating-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rating-web
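
Assuming the policy above is saved to a file (the name api-allow-from-web.yaml is just an example), it can be applied like any other manifest:

# Create the network policy in the current project
oc apply -f api-allow-from-web.yaml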

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.

Migration Strategies Compared: Traditional System Integrator vs Cloud-native Partner

In the current environment, IT projects need to tick 2 boxes: cost savings and value delivery. Cloud migration ticks both – when you have the right strategy and partner supporting your end-to-end cloud migration approach. Here, we analyse 3 elements driving cloud success:

  • Automation – which is key to achieving cost savings and agility with cloud
  • Operating models and governance – which are key to speeding up time to market and deriving long-term value with cloud
  • Cloud managed services – because your approach to managed services is key to maximising cost savings and innovation delivery post-migration

In this article, we review the cost and value implications of working with a traditional SI vs a cloud-native provider, and how it affects these 3 elements of your cloud migration strategy.

Automation – key to cost savings and agility

Data centre migration to the cloud is often simplified into a binary choice. Either you lift-and-shift everything, retaining existing inefficiencies. Or you go through an expensive transformation initiative to untangle your complex digital and data estate. This is a false choice. It’s possible to achieve a fast data centre exit while future-proofing your architecture and reducing technical debt. But automation is crucial to this, so you must start by qualifying partners based on their ability to automate. 

Traditional SI & automation

A traditional SI typically has less than 10% public cloud workload penetration. This means their processes are optimised for manual, legacy data centre-type delivery (because they’re geared towards delivering for the 90% on-premises). There’s little to no established automation capability, and it can take months to get it in place. As a result, it’s easy to end up with a lengthy transformation consultancy initiative. Migration itself requires time-consuming customisation, and managed services are ticket-based. 

Cloud-native provider & automation

A cloud-native provider has 100% workloads in public cloud. They have established, automation-driven tools and processes to speed up migration, host operations and managed services. This includes everything from mature automated landing zones to patch and back-up automation. 

There’s no need for an expensive planning process because you can do a tool-driven review to develop the business case and roadmap. As a benchmark, we deliver a TCO assessment within 2 weeks and a recommended migration approach within 4 weeks, so there’s no analysis paralysis.

Thanks to automation, you can have a fixed-price migration instead of a budget based on hourly cost (giving you more control and transparency). An experienced infrastructure automation person is 10x more productive than a legacy counterpart. And all this means you see savings and ROI faster.

Operating models and governance – key to fast time-to-market and long-term value

27.5% of companies say their number one modernisation challenge is accelerating development cycle times. And only 15% can push code into production weekly or more frequently.  Migrating to cloud can speed up time to market for new features and capabilities, thanks to flexible architecture and faster development cycles. But it has to be more than just a hosting change. 

Cloud is a foundation for new ways of working (agile/DevOps). Without cloud-enabled operating and governance models in place, you won’t maximise the cost and value benefits from migration.

The traditional SI approach

Too often, the SI migration approach is focused on a lift-and-shift cloud migration strategy, retaining legacy inefficiencies. You end up using cloud primarily for capacity, with teams continuing with old ways of working. This means you’re missing opportunities for cost savings and value creation. As a result:

  • You’re not fully optimising consumption, which drives up TCO
  • Your application and data estate haven’t been optimised for cloud, which can lead to higher ongoing management costs
  • You don’t have the culture and processes in place to speed up development cycles, which means you have the opportunity cost associated with slower times to market

The cloud-native approach

A cloud-native provider helps you migrate to modernise, with the right operating and governance models embedded from the beginning. You exit your data centre in the way that’s right for your business, with an efficient roadmap for refactoring and replatforming to maximise cloud benefits for your infrastructure, application and data estate. 

Knowledge transfer and upskilling are built into the process, so your organisation can quickly prepare for cloud and benefit from a Cloud Centre of Excellence. Teams are engaged with and nurtured through the transformation, creating a bridge for your traditional infra people and giving you a foundation for driving sustainable value. 

Because you have this robust foundation of operating model and governance, you can scale out quickly and cost-effectively, as well as respond flexibly to changing customer requirements.

Cloud managed services – key to cost savings and innovation velocity

Once you’re up and running on cloud, you face 3 different cost types:

  • The cost of use – the cost to run the systems, incorporating capacity, development and support
  • The cost of unavailability – from service instability to security breaches, these are costs of not running systems as desired
  • The cost of inflexibility – the real and opportunity costs incurred when systems can’t keep up with changing needs

Traditional SI managed services approach

An on-premises managed services approach doesn’t translate to the cloud when it comes to managing these costs. The traditional SI managed service model penalises downtime, which effectively disincentivises changes and stifles innovation. It’s ticket-based and generally involves outsourcing, which means teams don’t have enough control to deliver true innovation ongoing. 

A cloud managed services approach

Cloud-native managed services are designed to minimise costs of use, unavailability and inflexibility – and to free teams to focus on delivering customer value. 

  • There’s no vendor lock-in, long contracts, enforced software or minimum requirements beyond the service
  • You benefit from scalability on demand, so your service model reflects the way you use cloud – optimising capacity and performance
  • By maximising use of PaaS and automation, you reduce support costs, increase resilience and boost development
  • There’s partner integration with your internal teams – using DevOps and agile to accelerate innovation velocity.
Fig 1: How do the 2 approaches match up

Be recognised for driving a successful cost saving and value creation initiative

When you have the right approach to automation, operating models, governance and managed services, your cloud migration will maximise savings, future-proof your architecture, reduce ongoing technical debt and drive business value.

The key: having a cloud-native partner that’s the right fit for your objectives, digital maturity and culture. That way, you have a compass guiding you through the migration journey, helping you avoid pitfalls that cause unnecessary cost, delay and risk.

The result: a cloud migration that delivers the cost savings, agility, speed and innovation velocity the business needs – over the short and long term.

What next?

Download our Data centre migration guide: 7 false assumptions that cause unnecessary cost, delay and risk.

Make Work Better by Documentation

Sharing knowledge in a team is an ongoing challenge. Finding the balance between writing documentation and keeping it up to date, handling daily operations, and discussing matters face to face or via chat is not an easy task. I’ve found written documentation to be well worth the time it takes. Here are two examples of how documentation has helped to make our work better.

Make uncertainties visible

Documenting software requirements is a profession of its own, but you don’t need to be an expert in requirements engineering to use documentation to enhance customer communication and ease development.

Problem

I had a customer at work who wanted to integrate a 3rd party service. The service was under development at the same time that we started discussing integrating it. The service would replace old processes in the customer’s existing service. There were lots of questions about how the new service would function, and what exactly it would mean to replace existing functionality with it.

Solution

I sat down with one of my colleagues who knew the customer software. With pen and paper, we drew several sequence diagrams to represent current processes. Then we created sequence diagrams to propose how the new service would be integrated. We drew big question marks where we were unsure of the current process or how the new service would function. We even included questions for manual processes in the customer end to better understand what information the new service should provide.

Then I transformed the pictures from paper to digital form and included the diagrams on a wiki page. I gathered all the questions into a table on the same wiki page. The next time we talked with the customer, we went through the wiki page, answered all the questions and wrote a bunch more. The wiki page served well during the requirements-gathering phase and was liked by both the developers and the customer.

Take-aways

  • Sequence diagrams are a lovely way to present processes.
  • A table of questions, with an empty column for answers, invites discussion.
  • A wiki page with diagrams and question-answer tables serves well as documentation.

Bring clarity with meaningful ways

Managing development tasks, bug tickets and service requests can become a full-time job. When a team has several customers, and each customer has a lot going on, kanban boards can end up cluttered. More time is spent figuring out what to do than doing it.

Problem

We used a kanban board for service requests, bug tickets and sharing information with a customer. Their system was in beta test phase, and lots of questions and enhancement ideas were coming up. It was difficult for the team to keep up with the latest priorities. There was a weekly meeting with the customer, but information was poorly shared with those team members who did not attend the meeting.

Solution

To have a common starting point, I listed all ongoing tickets and described their status in the very first agenda of the weekly meetings. In the meeting we agreed on a priority order for the tickets, and I wrote that down in the agenda, which became the meeting minutes on the fly.

For the next weekly meeting, I again gathered information on all ongoing tasks. It wasn’t a fast thing to do, because there were so many sources of information for the work we were doing for the customer, but it was worth it. Weekly meetings became easier to lead when we had a ready-made skeleton to follow instead of tumbling around the kanban board. Having written documentation also ensured that the whole team was informed about ongoing work and the priorities for the next week.

Take-aways

  • Prepare for meetings with an agenda. Share it beforehand with everyone involved and store it in a place available for both the team and the customer.
  • Write meeting minutes. An easy way is to transform the agenda to the minutes by updating the document during the meeting.
  • Make sure the team knows there is documentation in the form of agenda/minutes.

Document when necessary

As we’ve seen, written documentation can help a lot when planning a new service or handling quickly changing requirements. Documentation must be allocated time of its own; it might be difficult to find the time for writing documents if it is not a common practice within your team.

Personal skills play a role as well. Finding the important information in the clutter of communication and getting it into written form might not be easy for everyone. Leverage the varied skills of your team and find the best ways of documenting your important matters.

Getting started with Azure Red Hat OpenShift (ARO)

This is the third blog post in a four-part series aimed at helping IT experts understand how they can leverage the benefits of OpenShift container platform.

The first blog post was comparing OpenShift with Kubernetes and showing the benefits of enterprise grade solutions in container orchestration.

The second blog post introduced some of the OpenShift basic concepts and architecture components.

This third blog post is about how to deploy the ARO solution in Azure.

The last blog post will cover how to use the ARO/OpenShift solution to host applications.

ARO cluster deployment

Until recently ARO was a preview feature which had to be enabled – this is not necessary anymore.

Pre-requisites

I recommend upgrading to the latest Azure CLI. I am using an Ubuntu 18.04 host for interacting with the Azure API. Use the following command to upgrade only azure-cli before you start.

sudo apt-get install --only-upgrade azure-cli -y

After upgrading the Azure CLI, log in and select your subscription.

az login
az account set -s $subscriptionId

Resource groups

It is important to mention, before we start with the cluster deployment, that the user selected for the deployment needs permission to create resource groups in the subscription. An ARO deployment, similarly to AKS, creates a separate resource group for the cluster resources, so it uses 2 resource groups: one for the managed application and one for the cluster resources. You cannot simply deploy ARO into an existing resource group.

I am using a new resource group for all my ARO-related base resources, such as the VNet and the Managed Application.

az group create --name dk-os-weu-sbx --location westeurope

VNet and SubNets

I am creating a new VNet for this deployment; however, it is possible to use an existing VNet with two new SubNets for the master and worker nodes.

az network vnet create \
  -g "dk-os-weu-sbx" \
  -n vnet-aro-weu-sbx \
  --address-prefixes 10.0.0.0/16 \
  >/dev/null 

After the VNet, I’m creating my two SubNets: one for the workers and one for the masters. For the workers it is important to allocate a large enough network range to allow for autoscaling. I am also enabling Azure-internal access to the Microsoft Container Registry service via a service endpoint.

az network vnet subnet create \
    -g "dk-os-weu-sbx" \
    --vnet-name vnet-aro-weu-sbx \
    -n "snet-aro-weu-master-sbx" \
    --address-prefixes 10.0.1.0/24 \
    --service-endpoints Microsoft.ContainerRegistry \
    >/dev/null 
az network vnet subnet create \
    -g "dk-os-weu-sbx" \
    --vnet-name vnet-aro-weu-sbx \
    -n "snet-aro-weu-worker-sbx" \
    --address-prefixes 10.0.2.0/23 \
    --service-endpoints Microsoft.ContainerRegistry \
    >/dev/null

After creating the subnets, we need to disable the private link network policies on the subnets.

az network vnet subnet update \
  -g "dk-os-weu-sbx" \
  --vnet-name vnet-aro-weu-sbx \
  -n "snet-aro-weu-master-sbx" \
  --disable-private-link-service-network-policies true \
    >/dev/null
az network vnet subnet update \
  -g "dk-os-weu-sbx" \
  --vnet-name vnet-aro-weu-sbx \
  -n "snet-aro-weu-worker-sbx" \
  --disable-private-link-service-network-policies true \
    >/dev/null

Azure API providers

To be able to deploy an ARO cluster, several Azure providers need to be enabled first.

az provider register -n "Microsoft.RedHatOpenShift"  
az provider register -n "Microsoft.Authorization" 
az provider register -n "Microsoft.Network" 
az provider register -n "Microsoft.Compute"
az provider register -n "Microsoft.ContainerRegistry"
az provider register -n "Microsoft.ContainerService"
az provider register -n "Microsoft.KeyVault"
az provider register -n "Microsoft.Solutions"
az provider register -n "Microsoft.Storage"

Red Hat pull secret

In order to get the Red Hat provided templates, an image pull secret needs to be acquired from Red Hat (it can be downloaded from the Red Hat OpenShift Cluster Manager) and saved locally, for example as pull-secret.txt, which is referenced in the create command below.

Cluster deployment

Create the cluster with the following command, replacing the relevant parameters with your values.

az aro create \
  -g "dk-os-weu-sbx" \
  -n "aroncweutest" \
  --vnet vnet-aro-weu-sbx \
  --master-subnet "snet-aro-weu-master-sbx" \
  --worker-subnet "snet-aro-weu-worker-sbx" \
  --cluster-resource-group "aro-aronctest" \
  --location "West Europe" \
  --pull-secret @pull-secret.txt

If you are planning to use a company domain, use the --domain parameter to define it. The full list of parameters can be found in the Azure CLI documentation.

Deployment takes about 30-40 minutes, and the output includes the relevant connection information:

apiserverProfile contains the API URL necessary to log in to the cluster

consoleProfile contains the URL for the WEB UI.

The command will deploy a cluster with 3 masters and 3 workers. The masters are D8s_v3 machines and the workers D4s_v3 machines, deployed into three different Availability Zones within a region. If these default sizes are not fit for purpose, they can be overridden in the create command with the --master-vm-size and --worker-vm-size parameters, as sketched below.
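
As a sketch (the SKUs below are only examples of larger sizes, not a recommendation), the size flags are simply added to the create command shown earlier:

az aro create \
  -g "dk-os-weu-sbx" \
  -n "aroncweutest" \
  --vnet vnet-aro-weu-sbx \
  --master-subnet "snet-aro-weu-master-sbx" \
  --worker-subnet "snet-aro-weu-worker-sbx" \
  --cluster-resource-group "aro-aronctest" \
  --location "West Europe" \
  --master-vm-size Standard_D16s_v3 \
  --worker-vm-size Standard_D8s_v3 \
  --pull-secret @pull-secret.txt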

If a deployment fails or the cluster needs to be removed for any other reason, use the following command to delete it. The complete removal of a successfully deployed cluster can take about 30-40 minutes.

az aro delete -g "dk-os-weu-sbx"   -n "aroncweutest"

Connecting to a cluster

After the successful deployment of a cluster the cluster admin credentials can be acquired with the following command.

az aro list-credentials --name "aroncweutest" --resource-group "dk-os-weu-sbx"

The command will return the following JSON.

{ 
  "kubeadminPassword": "vJiK7-I9MZ7-RKrPP-9V5Gi", 
  "kubeadminUsername": "kubeadmin" 
}

To log in to the cluster WEB UI, open the URL provided by the consoleProfile.url property and enter the credentials.

After the successful login, the Dashboard will show the initial cluster health.

To log in to the API through the CLI, download the oc binary and log in using the API URL from the apiserverProfile.url property, for example:

oc login "$(az aro show -g dk-os-weu-sbx -n aroncweutest --query apiserverProfile.url -o tsv)"

Then enter the credentials and you can start to use the “oc” command to manage the cluster.

Azure AD Integration

With Azure Active Directory (AAD) integration through OAuth, companies can leverage their existing team structures and groups from Active Directory to separate responsibilities and access within the OpenShift cluster.

To start, create a few environment variables to support the implementation.

domain=$(az aro show -g dk-os-weu-sbx -n aroncweutest --query clusterProfile.domain -o tsv)  

location=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query location -o tsv)  

apiServer=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query apiserverProfile.url -o tsv)  

webConsole=$(az aro show -g dk-os-weu-sbx -n aroncweutest  --query consoleProfile.url -o tsv)  

oauthCallbackURL=https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD  

 
appName=app-dk-os-eus-sbx
appSecret="SOMESTRONGPASSWORD" 

tenantId=YOUR-TENANT-ID

An Azure AD Application needs to be created to integrate OpenShift authentication with Azure AD.

az ad app create \
  --query appId -o tsv \
  --display-name $appName \
  --reply-urls $oauthCallbackURL \
  --password $appSecret

Create a new environment variable with the application id.

appId=$(az ad app list --display-name $appName | jq -r '.[] | "\(.appId)"')

The application will need the Azure Active Directory Graph User.Read permission (delegated scope).

az ad app permission add \
 --api 00000002-0000-0000-c000-000000000000 \
 --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
 --id $appId

Create optional claims to use e-mail with a UPN fallback for authentication.

cat > manifest.json << EOF
[{
  "name": "upn",
  "source": null,
  "essential": false,
  "additionalProperties": []
},
{
  "name": "email",
  "source": null,
  "essential": false,
  "additionalProperties": []
}]
EOF

Configure optional claims for the Application

az ad app update \
  --set optionalClaims.idToken=@manifest.json \
  --id $appId

Configure an OpenShift OpenID authentication secret.

oc create secret generic openid-client-secret-azuread \
  --namespace openshift-config \
  --from-literal=clientSecret=$appSecret

Create an OpenShift OAuth resource object which connects the cluster with AAD. The definition below references the $appId and $tenantId variables set earlier, so write it to a file (oidc.yaml), for example with a heredoc so that the variables are expanded.

cat > oidc.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: $appId
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/$tenantId
EOF

Apply the YAML file to create the resource. (You need to be logged in with the kubeadmin user)

The reply URL in the AAD application needs to point to the oauthCallbackURL; this can be changed through the portal.

oc apply -f oidc.yaml

After a few minutes you should be able to log in on the OpenShift web UI with any AAD user.

After choosing the AAD option on the login page, a user can log in with their AAD credentials and start to work.

In my next blog post I am going to continue with some basic application implementations on OpenShift ARO.

How could this work for your business?

Come speak to us and we will walk you through exactly how it works.

Meet Eija from Nordcloud’s Design Studio Intergalactico!

CATEGORIES

Life at Nordcloud

Eija works as a Senior UX Designer at Nordcloud Jyväskylä and she is one of Design Studio Intergalactico‘s experts with superpowers. Everyone at Nordcloud knows that ‘where there is Eija, there is laughter and joy’! Eija is a real pro in design from multiple perspectives and she enjoys designing both the visual aspect and the user-friendliness of services. In her free time she works as a freelance photographer. You can see her photos on Instagram. But read her story first!

I live in Central Finland, in Jyväskylä to be exact. I’ve lived here my whole life, but love the fact that I get to travel around Finland and sometimes around the world for work. When I started at Nordcloud (or SC5 back then) five years ago, I got to dive straight into a really interesting project: making online games, or rather electronic raffle tickets, for Veikkaus. It was exciting and I learned a great deal from the project, the client and of course my colleagues.

My title is Senior UX Designer, and UX design is what I mostly do. My daily work consists of a wide range of design work, and I love the fact that I get to do so much of what I feel enthusiastic about. Solving real problems and coming up with something game-changing is what sets me on fire! It is fulfilling when you’re presented with a problematic use case and you turn it into a seriously dazzling user experience! WOW! This of course applies to digital products, but why not also in social or communication settings.

On top of the inspiring work and professional growth that I’ve experienced at Nordcloud I have to say that the single best thing here are the people! I get to work with tech and design professionals, and it blows my mind how much I can learn from them. And even though we’re all nerdy we are also super cool and hip and fun!

On top of being a designer at Nordcloud, I’m also doing a lot of Culture Ambassadory things. For me this means being the ears and voice of my colleagues, developing our culture, tackling possible issues in our work community or environment together with our people operations team, and initiating fun things to do together. Alongside these, I also do quite a lot of photography both at work and in my free time. To balance it all out, I work out in different forms at the gym, the dance studio and outdoors. And hey, these are all things that are supported and encouraged by Nordcloud!

Time to rescue your data operations

The value created by data can fundamentally influence key areas of your business by enabling, optimizing and steering key functions.

Data enablement has been trending for the past few years as businesses have been building “data driven” capabilities and pursuing the “digitalization” of everything. The data initiative has become a strong interest area for business executives, and ownership has started to shift away from the core IT organisation.

What is the state of the data initiatives today?

I’ve heard some interesting viewpoints on this…

Business unit head: “We get constantly beaten by the competition who have 10x more value to offer online. We have been waiting to modernize our online service for 2 years. The differentiated value of the new service would be based on the combined data coming from our different units. The quality of the data is inadequate because everyone is building the new things for their own purposes. We’ll have to wait for someone to fix this at an enterprise level before it makes sense to do anything. Right now, it would be too expensive.”

HR business partner: “I needed to make some informed HR decisions for Q3 within one specific team. I went to the named BI guy to ask for data. His response was: I am busy right now… come back in 3 weeks and we’ll have a look.”

Head of development: “I don’t know who owns the data initiative from the enterprise perspective. IT is in love with the tech assets they built years ago, and to me it feels like the business units constantly have some new data projects ongoing.”

Gartner: “Data and analytics leaders often struggle to balance the need for both centralized and decentralized data and analytics approaches. Too much centralization stifles the flexibility and agility that business domains seek, while too much decentralization can create chaos: wasted resources, duplication of effort, siloed data, stacks of questionable data, and the inability to create trusted insights across different domains.” (https://www.gartner.com/en/documents/3970860/how-to-create-data-and-analytics-everywhere-for-everyone)

The same key issues keep appearing in our discussions:

  • Narrow, use-case-driven success stories
  • Limited availability of data and severe data quality issues
  • Constantly increasing architectural complexity
  • Very little re-use of assets across use cases
  • Exploding costs at enterprise level (business & IT), with few metrics on value
  • Data initiatives failing to deliver the expected business outcomes
  • Security and compliance risks
  • Dozens of technologies in use, with more coming in
  • A massive workload just to deliver the basic capabilities
  • A shortage of talent as an outcome of the complexity

In our data initiatives, we have identified that most organisations face challenges because of their current approach, which is focused on use-case-specific implementations. We have realized that for data initiatives to provide more value, this needs to be addressed now, and addressed holistically.

We’ve found the situation to be similar to the early days of cloud adoption: business units adopting cloud for their own specific use cases, different environments accumulating within companies and eventually leading to chaos. We solved that chaos by helping customers build an operating model around cloud services and by deploying a common set of services that ensure compliant, secure and cost-efficient operations without sacrificing agility. We have taken a similar approach to tackling the emerging data chaos with Data Estate Modernisation.

How do we tackle Data Estate Modernisation?

We help our customers take control of their overall data initiative, ensuring value creation is stronger than it has ever been and the operation is fully optimized. We implement an operating model backed by the latest tooling capabilities. This enables new data initiatives to be onboarded quickly while safeguarding the entire operation from a security, compliance and cost perspective.

To achieve that, we have defined three journeys based on the angle of entry:

  • The data value track, which focuses on identifying business opportunities around data and building the required capabilities using Azure analytics services
  • The data foundation (data enablement) track, which is aimed at building the holistic capabilities (operating model + technology) on Azure data services to enable data-driven operations
  • The data modernisation track, which is aimed at modernising existing database technologies to Azure to drive cost savings

Data enablement

The data enablement track ensures that there is a holistic model and platform for managing data from different sources, used for the business’s different use cases.

We recommend starting by aligning stakeholders on the key principles of data operations and defining a shared data operating model. This ensures that the business objectives, as well as the responsibilities around data onboarding, consumption, security and compliance, are clear to all relevant stakeholders. Nordcloud Data Enablement workshops are designed to get you there.

Once the responsibilities and key principles are clear, the next logical step is to use the data services available on Azure. Ideally you have a specific business use case in mind (a Minimum Viable Product), but you should implement it on the basis that you are ready to onboard the next use cases with minimal impact on the setup. In other words, think in terms of “generic” services rather than implementations optimized for one specific use case, as sketched below. The evolution of this “data foundation” then continues beyond the MVP stage in an iterative way, adding use cases and data sources.
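To make the idea concrete, here is a minimal sketch in Python of what “generic rather than use-case-specific” can look like in practice. The names (SourceContract, onboard) and fields are our own illustration, not part of any Azure SDK or a prescribed Nordcloud tool: each new source is described as a small, machine-readable contract, and one shared routine onboards it, so the next use case is added by writing a contract rather than building a new pipeline.

# Illustrative sketch only: a metadata-driven onboarding pattern.
# SourceContract and onboard are hypothetical names, not a real SDK.
from dataclasses import dataclass

@dataclass
class SourceContract:
    name: str            # e.g. "crm_customers"
    owner: str           # accountable business owner of the source
    classification: str  # e.g. "confidential", "internal"
    landing_path: str    # where raw data lands on the platform
    schedule: str        # e.g. "daily"

def onboard(contract: SourceContract) -> None:
    # One shared routine reused for every source, instead of a bespoke
    # pipeline per use case. A real implementation would create the storage
    # location, access policies and a scheduled ingestion job here.
    print(f"Registering {contract.name} (owner: {contract.owner}, "
          f"classification: {contract.classification})")
    print(f"Landing raw data at {contract.landing_path}, schedule: {contract.schedule}")

onboard(SourceContract(
    name="crm_customers",
    owner="sales-ops",
    classification="confidential",
    landing_path="raw/crm/customers",
    schedule="daily",
))

The design point is that the contract, not the pipeline code, carries the use-case-specific details, which keeps the foundation reusable as new sources and use cases are added.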

Data value 

It is key to ensure that the data initiative is tightly connected to business outcomes and driven by clear business objectives, rather than being run as an IT “capabilities” project disconnected from real business cases. For example, in the AI domain, proof-of-concepts have failed to provide the value businesses expected, which is why only a small share of PoCs ever see the light of day as production solutions. The issue is that these AI initiatives have been driven from a capability perspective (adoption of AI) rather than by business outcomes.

We have also seen examples of IT organizations building state-of-the-art data capabilities that were never adopted by the business for broader use. It is always easier to change when you have been part of planning the change.

Our data value journey has three stages to tackle the challenges above:

  1. Genius workshop – identify the most relevant and impactful data opportunities for your business
  2. Proof-of-value – put the objectives to the test and prove that they can be achieved with data
  3. Production project – turn the proof-of-value into a production-grade solution that is embedded in your operations and ensures continuous value production

Data modernisation 

In many cases there are already existing solutions in the data domain. These solutions may be outdated, i.e. unable to support the data velocity, variety and volume that today’s businesses expect, or expensive compared to the cloud-native alternatives available on Microsoft Azure. The data modernisation track helps customers modernise their current data tooling to modern Azure-based alternatives, driving better performance at a lower cost. We’ve seen many customers move from old on-premises Microsoft SQL Server solutions to managed cloud services, but also customers migrate from third-party database engines such as Oracle to Azure data services such as Azure SQL, to ensure low operating costs and maximum availability and reliability.

It is rare for data to exist without being connected to applications that produce or consume it. Modernisation initiatives therefore often require you to take the related applications into account: carefully assess the application landscape, identify the changes the data modernisation requires, and plan for them. Typically, a data estate modernisation initiative consists not only of database and data technology modernisation but also includes an element of application modernisation, as the sketch below illustrates.
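As a rough illustration of the application side, consider an application that reads its database connection from configuration. The server and database names below are placeholders, and the snippet assumes the pyodbc package and the Microsoft ODBC Driver 17 for SQL Server are installed; under those assumptions, repointing such an application from an on-premises SQL Server to Azure SQL can be a configuration change rather than a code change.

# Illustrative only: connection details come from environment variables
# (SQL_SERVER, SQL_DATABASE, SQL_USER, SQL_PASSWORD are placeholder names),
# so moving the database to Azure SQL means updating configuration, not code.
import os
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    f"Server=tcp:{os.environ['SQL_SERVER']},1433;"  # e.g. myserver.database.windows.net
    f"Database={os.environ['SQL_DATABASE']};"
    f"Uid={os.environ['SQL_USER']};"
    f"Pwd={os.environ['SQL_PASSWORD']};"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION;")  # works on both SQL Server and Azure SQL
print(cursor.fetchone()[0])
conn.close()

Even when the code change is this small, driver versions, authentication methods and network or firewall rules still need to be validated per application, which is exactly the assessment work described above.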

Want to learn more about maximising value from your data?

Check out our free one-day data workshops.


Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.