HUS chooses Nordcloud and Google to Accelerate its Digital Competence

CATEGORIES

News

Nordcloud will bring its years of public cloud expertise to bear in helping HUS migrate to Google Cloud and its services.

Google’s europe-north1 region, located in Hamina, Finland, is a determining factor for many Finnish organisations in choosing between hyperscale providers.

“The arrival of Google’s public cloud in Hamina was a key step in the digitalisation of the Finnish public sector. Google Cloud’s Finnish region enables the use of cloud tools and added value while at the same time storing the data physically in Finland”, says Lars Oehlandt, VP Google Partnership at Nordcloud. “As the Nordic public cloud pioneer with hundreds of projects under our belt, Nordcloud has refined a method we call the Cloud Journey for taking business to the cloud in a controlled and smart manner. Our Cloud Journey concept is an excellent fit for both public administration and healthcare organisations looking to secure the full value of public cloud.”

Google Cloud’s Finnish region enables the use of cloud tools and added value while at the same time storing the data physically in Finland.

– Lars Oehlandt, VP Google Partnership, Nordcloud

In addition to the advantageous region, HUS will obtain access to Google Cloud’s numerous features: advanced analytics, machine learning and container services, high-level information security, and world-class infrastructure.

“Our cooperation with Nordcloud has deepened fast with the interest and demand generated by the Hamina region. Together we will serve Finnish companies and public administration organisations that are developing and modernising their operations with Google Cloud”, says Carita Mäkinen, Field Sales, Google Cloud Platform.

Google announced in May that it will invest 600 million euros to expand its data centre in Hamina. The new construction will add to Google’s existing data-centre complex in Hamina on the south coast of Finland, taking the company’s total investment there to 1.4 billion euros. Google’s Hamina complex will be powered by renewable energy acquired from three new wind farms in the Nordic nation.

Earlier this month, Nordcloud was the first Nordic company to achieve the status of Authorized Google Cloud Platform Training Partner.


From Azure padawan to Azure knight


Going into the Azure Talent Acceleration Program, I wasn’t sure what to expect. I knew it would be an intensive learning experience, but coming from a background of on-prem infrastructure management, I also knew it would be extra difficult to get out of that on-prem mindset and into a cloud one.

WEEK 1

Week 1 was very interesting. The whole group turned up in Poznań, Poland, with people coming from Finland, Germany, the Netherlands and, of course, Poland.

It was a great time to meet the people we would be working with, learn about their backgrounds and hear what they want to achieve at Nordcloud. A team dinner at Brovaria was a good way to break the ice.

I was nervous to say the least, with no idea what to expect, but we all jumped straight into the AZ-900 learning materials, with Jarkko (MCT) as our trainer for the first few weeks. Between the learning materials and the online labs, there was a ton of information to digest. To say it was overwhelming is an understatement, but over time, as we started to pick up the fundamentals, the materials became clearer.

While week 1 was primarily an introduction to those fundamentals, Darek also spoke to us about our career paths at Nordcloud, giving us an overview of what we could expect working here.

We also took part in a workshop with Teemu (Azure Guru). This workshop was jumping ahead to Azure Architecture with small case studies. There’s an existing blog with more details covering that workshop here.

WEEK 2

We went into Week 2 expecting a similar experience, but this time things went a bit deeper with PowerShell. Remotely deploying resources through a few lines of PowerShell code? Awesome. Being familiar with PowerShell already, it was my tool of choice as we went through the labs. We also had the option of using the Azure CLI, which looked intuitive and simple to learn, but I’m a PowerShell fanboy through and through, so I stuck with it.

The labs were quite easy to follow, as they provided a step-by-step guide on how to achieve the goal. So I, along with a few others in the group, decided to try to perform these tasks using PowerShell or the CLI instead of the portal.

This made it a bit more of a challenge, but it helped us learn much quicker.

Later in the week came our self-study days to help us prepare for the subsequent weeks, mostly using Pluralsight courses covering Git, PowerShell, Docker and DevOps.

WEEK 3

With the first 2 weeks behind us, I was getting more comfortable with what was happening. Week 3 focused more on classroom learning, specifically on AZ-300. Although we had already gone through much of what was covered in previous weeks, it was still good to keep our minds fresh with the information.

By this time, instead of doing the labs, Jarkko set us challenges to complete with the information we had just learned. Again, instead of using the portal, we’d be using PowerShell or the CLI, maybe mixing in some ARM templates, or perhaps utilising Key Vault… with bonus objectives in areas we hadn’t covered yet to help us learn them quickly. We enjoyed it, and it was fun coming up with different methods of achieving what Jarkko had asked us to do.

Friday of that week, though, was the kick-off day for our case study, which would be the biggest focus of the following weeks.

We were split into teams representing the cities we came from, the same teams we had used in Teemu’s session.

The case study was split into 4 phases:

  1. Design a solution for migrating a 3-tier web app to Azure.
  2. Implement the solution using the portal.
  3. Implement the solution using ARM templates.
  4. Implement the solution using Azure DevOps.

Our first task, designing the solution.

We had most of one day to design the solution. This took us back to our original task with Teemu of designing a solution in the cloud. All of us had learned a lot since then, and we felt that this time our design was clear and informative.

The key thing to remember was that what we were designing was what we would be implementing, so overcomplicating it would impede us later. Keeping it simple was the way to go.

WEEK 4

This week we primarily focused on the next 2 phases of our case study. We had a day to implement our solution via the portal, and it wasn’t too bad to get through. Using the portal was a great way to learn more features that we hadn’t used previously, but with that said, the knowledge we had already gained helped us get through the implementation without an issue.

On Tuesday, though, the difficult part began: implementing our solution via ARM templates.

It wasn’t just a matter of writing out a single template and throwing it up into Azure; we needed to make sure we were using features such as nested and linked templates. The extra challenges made it extra difficult.

It was tough, but at the same time the difficulty of the task helped us learn more about ARM templates.

We were still split into teams, but that didn’t stop the groups from helping each other. We would often reach out to other teams to get their take on what we were struggling with and vice versa. After all, although we were in mini-teams, we were all trying to achieve the same goal. There is no shame in asking for help and advice, and it was great to see teams helping each other throughout the tasks.

WEEK 5

Our colleagues who were with us in Week 1 came back to Poznań in Week 5. It was great to see each other again after a few weeks. We took the time to go out for drinks to catch up, as well as to talk about what we had learned and how we had tackled the case study the previous week.

Starting the week by going out was a good idea, considering what we were going to be learning that week.

It was a very detailed and intensive few days of DevOps with Krzysztof, learning Azure DevOps, Git and CI/CD Pipelines.

Considering our next task was to implement our solution using Azure DevOps, the training we had was super helpful and well taught. The lessons were tailor-made for our group and really opened our eyes to the possibilities behind CI/CD processes and automation.

Once we had gone through these trainings and labs, we had half a day focusing on our soft skills. Hosted by a third party, it was primarily focused on public speaking and getting us used to talking openly to a large group of people. It was a nice break from the intensive technical training, as many of us were still processing the raw information from the days before.

And then came Friday: Kubernetes day. Piotr hosted a workshop, not only for us but for anyone who wanted to join, going through the many subjects surrounding Kubernetes and performing labs using either our existing clusters from earlier labs or Minikube.

The workshop was once again super helpful.

That evening, though, it was nice to relax at an office party. The timing was great, as we still had our colleagues from the TAP in Poznań who could join us. A fun party where we got to meet people we hadn’t yet spoken to; drinks, pizza, it was great!

Not to mention a couple of surprise birthdays that day for two of our TAP colleagues. An excellent evening that was most definitely needed after a super intensive week.

FINAL WEEK

In our final week, we had the whole week to complete our DevOps task. This was truly a step up, as we had heard that past TAP teams had struggled with this case study. The intricacies of the task were difficult to deal with, but they didn’t stop us.

From building our repo through to building the branches and pipelines, we found ourselves restructuring a few times as we tried to automate the entire deployment.

Once again, we were reaching out to other teams to get their take on certain methods and vice versa. We also had a lot of help from Krzysztof guiding us in the right direction.

After multiple failures…

We finally managed to get it working. We were super happy with what we had accomplished. The feeling of accomplishment was one of a kind.

That afternoon we talked about the TAP, giving overall feedback and hearing what happens next. We received our diplomas and are now… fully qualified Azure Knights! Next step? Becoming an Azure Master.

 

One more thing! We haven’t planned a new TAP yet, but stay tuned, since after the holiday season it might happen again! In the meantime, we haven’t stopped recruiting Azure Cloud Architects in all the countries where we have offices. Follow the link here and check our current openings.


Next generation networking, food for thought?

CATEGORIES

Blog, Insights

A few of the big announcements included Anthos and Cloud Run. It is easy to get overwhelmed by the sheer number of presentations and announcements.

This year there were two presentations that I felt may have flown under the radar, but would be a shame to miss out on.

 

Istio service mesh for VMs

Service meshes and overlay networking have been around for a while. Tools like Istio enable engineers to create overlay networks between containers. These networks allow for software-based networking between services and higher-level features like circuit breaking, latency-aware load balancing, and service discovery.

One of the drawbacks of these tools was that most of them relied on sidecar containers. As a result, setting this up for non-container workloads like VMs was pretty difficult. In this talk, Chris Crall and Jianfei Hu show an easy way of integrating Istio with VMs. This means that we can now integrate almost anything into our service mesh, including databases, legacy workloads or anything else that runs on a VM.

Even though it might seem like a minor feature, this is pretty much a game-changer. Imagine migrating a large application landscape full of critical legacy workloads into containers: Istio can do weight-based routing, which means we can set up many endpoints for the same service, each receiving only part of the traffic. By doing this for an application we’re trying to migrate, we can compare the performance of the old version against the new containerised one.

 

Zero-trust networking and PII

Another video that would be easy to miss, but is definitely worth a watch, is the one by Roy Bryant from Scotiabank. They’ve recently started shifting from being a financial institution to ‘a tech company that also does banking’, as shown by them starting to push open-source code to GitHub.

Being a bank, they deal with a lot of PII (personally identifiable information), so security is one of their main concerns. In the video they mention that besides using ML to tokenise things like credit card numbers, they leverage intent-based zero-trust networking. This might sound complex, but in reality it is quite elegant.

Traditionally, access between services or computers is enforced through firewalls and network configurations. With the emergence of software-defined networks and layer-7 routing, we can start thinking about other ways.

In the video, they mention that instead of configuring firewalls, they started expressing intent: “I want service A to be able to read 10 records per second from service B for the next 5 minutes”.

By versioning these intents and abstracting the logic behind them away into libraries, we are no longer maintaining complex sets of firewall rules. Access is now governed in a transparent, maintainable manner, allowing for an intuitive way of approaching security.
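To make the idea concrete, here is a small, purely illustrative Python sketch of what such a versioned intent could look like; the class, field names and values are hypothetical and not from Scotiabank’s actual implementation.

    from dataclasses import dataclass
    from datetime import timedelta


    @dataclass(frozen=True)
    class AccessIntent:
        """A declarative, versionable statement of who may call whom, and how much."""
        version: int
        source: str           # calling service
        target: str           # called service
        operation: str        # e.g. "read"
        rate_per_second: int  # allowed request rate
        duration: timedelta   # how long the grant is valid


    # "I want service A to be able to read 10 records per second from service B for the next 5 minutes."
    intent = AccessIntent(
        version=1,
        source="service-a",
        target="service-b",
        operation="read",
        rate_per_second=10,
        duration=timedelta(minutes=5),
    )

    # A supporting library would translate intents like this into concrete routing and
    # authorization rules, instead of teams maintaining raw firewall configurations by hand.
    print(intent)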

 

Conclusion

A blog post like this can only cover so much ground, and these are complex subjects. I recommend watching the videos mentioned here and checking out the links in the references below. I’d like to end this post with some food for thought:

Currently in modern clouds, a large part of the security model relies on network security through firewalls and NACLs in addition to IAM.

With the increasing usage of layer-7 overlay networking, I expect to see these two amalgamate into new multi-disciplinary security mechanisms.

References


Azure Cosmos DB – Multi model, Globally Distributed Database Service

CATEGORIES

Blog, Tech

Introduction to Azure Cosmos DB

Azure Cosmos DB is a multi-model database service by design that can easily be distributed globally. The database engine supports storing data as documents, key-value pairs and even graphs. Scaling Cosmos DB across any number of available locations is extremely easy: just press the appropriate button in the Azure portal. End users of modern web-based applications expect low latency, and with Cosmos DB you can store data closer to them. The database can be distributed and made available in 50+ regions, which creates enormous opportunities, and region management can be adjusted at any point in the application lifecycle.

Based on the above, global distribution of data with Cosmos DB provides a set of benefits such as:

  • support for a NoSQL approach to data management,
  • easy management of massive amounts of data (read and write operations close to end users),
  • simple integration with mobile, web and even IoT solutions,
  • low latency,
  • high throughput and availability.

For development purposes, Microsoft provides the Azure Cosmos DB emulator, whose functionality is close to the native cloud version of Cosmos DB. Developers can create and query JSON documents, work with collections and test stored procedures or triggers at the database level. Keep in mind that some features, such as multi-region replication and scalability, are not fully supported locally.

Later in this post I will explain the supported data models in more detail. All of them build on the core features provided by Azure Cosmos DB.

Supported data models

1. SQL API

This Cosmos DB API makes it easy for users familiar with SQL query standards to work with their data. Data is stored as JSON, but it can be queried easily with SQL-like queries. Communication is handled over HTTP/HTTPS endpoints, which process the requests. Microsoft provides dedicated SDKs for this API in most popular programming languages, such as .NET, Java, Python and JavaScript. Developers can load the library in their application and very quickly start reading from and writing to Cosmos DB. A sample flow is shown below.
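The diagram from the original post is missing in this version. As an illustrative stand-in, here is a minimal, hedged sketch of the same flow using the azure-cosmos Python SDK; the endpoint, key, database and container names are placeholders, not values from the original article.

    from azure.cosmos import CosmosClient, PartitionKey

    # Illustrative endpoint and key; use your own account (or the local emulator).
    client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

    database = client.create_database_if_not_exists("demo-db")
    container = database.create_container_if_not_exists(
        id="people", partition_key=PartitionKey(path="/lastName")
    )

    # Write a JSON document...
    container.upsert_item({"id": "1", "firstName": "Anna", "lastName": "Kowalska"})

    # ...and query it back with a SQL-like query.
    items = container.query_items(
        query="SELECT * FROM c WHERE c.lastName = @name",
        parameters=[{"name": "@name", "value": "Kowalska"}],
        enable_cross_partition_query=True,
    )
    for item in items:
        print(item["firstName"], item["lastName"])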

 

2. MongoDB API

Existing MongoDB instances can be migrated to Azure Cosmos DB without huge effort, as the two are compatible. When a new environment is created, switching between a native MongoDB instance and a Cosmos DB instance (via the MongoDB API) comes down to changing a connection string in the application. Existing drivers written for MongoDB applications are fully supported, and by design all properties within documents are automatically indexed.

Let’s check how simple queries against the same document collection as used in the previous point would look:
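The query screenshots from the original post are missing here. A hedged equivalent using the pymongo driver might look like the sketch below; the connection string, database and collection names are illustrative.

    from pymongo import MongoClient

    # Cosmos DB's MongoDB API is reached through an ordinary MongoDB connection string.
    client = MongoClient(
        "mongodb://<your-account>:<your-key>@<your-account>.mongo.cosmos.azure.com:10255/?ssl=true"
    )
    collection = client["demo-db"]["people"]

    # Insert a document and query it back by one of its properties.
    collection.insert_one({"firstName": "Anna", "lastName": "Kowalska"})
    result = collection.find_one({"lastName": "Kowalska"})
    print(result)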

 

 

As a result, the matching document (as JSON) will be returned. If the query doesn’t match anything, an empty result will be sent as the response.

3. Table API

This API can be used by applications written natively against Azure Storage tables. Of course, Cosmos DB provides some premium capabilities compared to Storage tables, e.g. high availability and global distribution, and migrating an application to the new data source doesn’t require code changes. Data can be queried in a few ways, and a lot of SDKs are provided. The original sample below showed how to query data with the .NET SDK and LINQ; during execution, the LINQ query is translated into an OData query expression.
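That .NET/LINQ sample is missing from this version of the post. As a rough, hedged stand-in, here is what an equivalent filter query could look like with the azure-data-tables Python package; the connection string, table name and entity properties are illustrative.

    from azure.data.tables import TableServiceClient

    # The Table API is reached through a Tables connection string from the Cosmos DB account.
    service = TableServiceClient.from_connection_string("<your-tables-connection-string>")
    table = service.get_table_client("people")

    # Rough equivalent of a LINQ 'where' clause: an OData filter expression.
    entities = table.query_entities("PartitionKey eq 'Kowalska' and City eq 'Poznan'")
    for entity in entities:
        print(entity["RowKey"], entity["City"])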

 

4. Cassandra API

The Azure Cosmos DB Cassandra API is a dedicated data store for applications written for Apache Cassandra. Users interact with the data via CQL (the Cassandra Query Language), and in many cases switching the data source from Apache Cassandra to Azure Cosmos DB’s Cassandra API is just a matter of changing a connection string. From a code perspective, integration with Cassandra is realised via a dedicated SDK (NuGet -> Install-Package CassandraCSharpDriver). The original sample below showed how to connect to a Cassandra cluster from a .NET application.
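That .NET sample is also missing here. A hedged Python equivalent using the cassandra-driver package might look like the sketch below; the host, credentials and port are illustrative, and the assumption is that Cosmos DB’s Cassandra endpoint requires TLS on port 10350.

    import ssl

    from cassandra.auth import PlainTextAuthProvider
    from cassandra.cluster import Cluster

    # Cosmos DB's Cassandra API requires TLS plus the account name and key as credentials.
    ssl_context = ssl.create_default_context()
    auth = PlainTextAuthProvider(username="<your-account>", password="<your-key>")

    cluster = Cluster(
        ["<your-account>.cassandra.cosmos.azure.com"],
        port=10350,
        auth_provider=auth,
        ssl_context=ssl_context,
    )
    session = cluster.connect()

    # Plain CQL works just as it would against Apache Cassandra.
    row = session.execute("SELECT release_version FROM system.local").one()
    print(row)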

 

5. Gremlin API

The last API provided by Azure Cosmos DB (as of the day of writing this article 😉) is the Gremlin API. This interface can be used for storing and operating on graph data, and it natively supports graph modelling and traversal. We can query graphs with millisecond latency and evolve the graph structure and schema easily. For queries we can use the Gremlin language from Apache TinkerPop. The original post showed the step-by-step process, from NuGet package installation to running a first query, below.
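As with the other samples, the original screenshots are missing. Here is a hedged sketch of a first query using the gremlinpython driver; the endpoint, database and graph names and the key are illustrative placeholders.

    from gremlin_python.driver import client, serializer

    # Cosmos DB's Gremlin endpoint; the username encodes the database and graph names.
    gremlin = client.Client(
        "wss://<your-account>.gremlin.cosmos.azure.com:443/",
        "g",
        username="/dbs/demo-db/colls/people-graph",
        password="<your-key>",
        message_serializer=serializer.GraphSONSerializersV2d0(),
    )

    # Add a vertex and read it back.
    gremlin.submit("g.addV('person').property('id', 'anna').property('lastName', 'Kowalska')").all().result()
    results = gremlin.submit("g.V().hasLabel('person').values('lastName')").all().result()
    print(results)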

Summary

From a developer’s perspective, Azure Cosmos DB is a very interesting service. The wide range of available APIs allows the database to be used in a variety of scenarios. Below you can find information from the official Azure Cosmos DB site about API availability per programming language.

Source: Azure Cosmos DB Documentation

***

This post is the last part in our Azure DevOps series. Check out the previous posts:

#1: Azure DevOps Services – cloud based platform for collaborating on code development from Microsoft

#2: Web application development with .NET Core and Azure DevOps

 


Unleash the Potential of Your Legacy Software with Application Modernisation

CATEGORIES

Blog, Insights, Tech

Introduction

In the modern all-digital business environment, applications not only play a critical role in driving internal business processes but are also increasingly key to generating new digital revenue sources or driving traditional revenues through digital channels. This has led to 100 million applications being created during the last 40 years (according to IDC), with the same number estimated to be created during the next 5 years alone.

Clearly, not all of these applications have been built with current requirements in mind, nor with the cloud technologies that drive much of today’s digital business. Traditional companies becoming increasingly digital are now facing new software-related challenges, impacting their core businesses, that they had not faced in the past.

Such challenges include, for example:

  • The increasing demand for digital services combined with legacy “monolithic” applications, leading to an inability to maintain 24/7 availability and scale to peak loads
  • Tight coupling with enterprise applications operated in “Mode 1”, combined with overly complex monolithic architectures, leading to an inability to respond to external pressure to develop new features (in some cases driven by regulatory changes such as GDPR) at the pace expected by internal or external customers
  • Difficulties or increasing costs in recruiting experts to develop and maintain applications based on legacy technologies on legacy platforms
  • Re-hosting of legacy applications as-is to public cloud platforms leading to suboptimal operating costs, e.g. due to legacy software licensing models being incompatible with cloud platforms.

Luckily, the public cloud provides a rich set of PaaS services that go beyond the virtual infrastructure provided by traditional data centres. These services, such as managed queues, databases and serverless compute, drive new development paradigms such as serverless and cloud-native application development, which have a track record not only of making application operations more flexible and cost-effective (up to 90% savings) but also of increasing developer productivity (up to double). These advantages are driving high adoption: according to IDC, two thirds of new enterprise applications will be developed cloud-natively by 2021. Cloud technologies not only enable new application development but also make modernisation initiatives for legacy applications more viable and attractive.

Approaches to application modernisation

A multitude of strategy alternatives can be used to tackle the business challenges of applications. A typical modernisation initiative consists of different strategies that are applied to the different components of an application.

The simplest approach for modernisation consists of replacing some of the generic services of the application with managed cloud services to gain cost savings and/or improve scalability. For example,

  • replacing your relational database with a managed cloud database such as Amazon Relational Database Service. 
  • replacing a message queue with a managed service such as Amazon Simple Queue Service
  • replacing custom built user management with PaaS / SaaS based solutions such as Amazon Cognito

This approach typically requires minimal changes to the application source code and lets you get an improved, cloud-optimised version of your application into production in minimal time. In 6R terms, this is what is typically referred to as “re-platforming”.
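To make the “minimal changes” concrete, here is a hedged sketch of what swapping a self-hosted queue for Amazon Simple Queue Service might look like in application code using boto3; the queue name, region and message shape are illustrative.

    import json

    import boto3

    # Instead of talking to a self-managed broker, the application talks to a managed queue.
    sqs = boto3.client("sqs", region_name="eu-west-1")
    queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

    # Producer side: publish an event.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"orderId": 42, "status": "created"}))

    # Consumer side: poll for messages and delete them once processed.
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for message in response.get("Messages", []):
        print(json.loads(message["Body"]))
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])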

Another approach is the introduction of an API façade based on cloud-native technologies such as AWS Lambda and Amazon API Gateway, decoupling the legacy application from the user interface layer. Some business logic may be introduced into this API layer, but it is good practice to keep it fairly simple (a minimal sketch of such a façade function follows the list below). This approach enables the following:

  • Development of user experience independently of the legacy application development to drive better customer engagement
  • Development (e.g. modernisation) of backend independently and transparently from the user interface
  • Leveraging the data and processes of the legacy application for other application needs via the API
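As mentioned above, here is a minimal, hedged sketch of what a thin façade function behind API Gateway might look like, assuming a Python Lambda that proxies reads to a hypothetical legacy backend; the internal URL, path parameter and response shape are illustrative.

    import json
    import urllib.request

    # Hypothetical internal endpoint of the legacy application.
    LEGACY_BASE_URL = "http://legacy.internal.example.com"


    def handler(event, context):
        """Lambda entry point invoked by API Gateway (proxy integration)."""
        customer_id = event["pathParameters"]["customerId"]

        # Keep the façade thin: fetch from the legacy system and reshape the response.
        with urllib.request.urlopen(f"{LEGACY_BASE_URL}/customers/{customer_id}") as resp:
            legacy_data = json.loads(resp.read())

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"id": customer_id, "name": legacy_data.get("full_name")}),
        }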

The most complex approach is a full rewrite of an application or component (in 6R terms, “refactoring”). This approach consists of re-architecting the application or component to fully leverage the capabilities of the public cloud and rewriting most of the application code, possibly in a different programming language than originally used. Typical architectural patterns applied during a full rewrite are:

  • Stateless execution to enable horizontal scaling and improve fault tolerance
  • Microservice based approach to minimise unit of deployment and enable agile development in small teams
  • Serverless (e.g. based on AWS Lambda) wherever suitable to focus development on business logic and enable out-of-the-box scalability

In addition to the technology and architectural changes, it is also key to have modern software development tools and practices in place to get the most benefit out of the modernisation initiative.

In real-life modernisation initiatives, the strategy is typically a combination of the above, and modernisation is run as a phased approach, component by component. This enables faster time to production with the initial architecture and the ability to leverage the modernised architecture sooner than an “all-at-once” approach would allow. The downside of a phased approach is the added cost of running both the legacy and modern applications in parallel, but that is often justified by the positive impact on strategic business metrics other than operating cost. An example scenario for a phased modernisation is the following:

  1. Building an API façade to make the data and processes of the original application easily accessible.
  2. Modernising the user interface/consumer layer to leverage the API layer, providing decoupling of backend and consumer logic and enabling parallel and independent development.
  3. Step-by-step modernisation of the original backend and development of new capabilities behind the API layer.

Getting there

With the multitude of approaches available, embarking on the modernisation journey for an application or a portfolio of applications requires clear objectives and an understanding of current capabilities. Our recommendation is to start with an application assessment, which consists of the following:

  • Setting the key business objectives and priorities for the initiative (improve scalability, improve development capabilities, save costs, …)
  • Understanding the current application architecture and identifying which components conflict with the objectives or represent the highest opportunities for improvement
  • Understanding the current state of software development practices and tools
  • Setting a shared ambition state for the application architecture and software development tools/practices

The above is key to ensuring that the objectives and business rationale for the modernisation are clear, and that the actual modernisation work stays aligned with them.

Nordcloud helps customers with the full modernisation journey from assessment to managing the modernised applications in public cloud environments.

Read more about our application management services

P.S. Nordcloud will be attending the AWS Summit in Stockholm on May 22nd, 2019. Come visit us at booth P2 and discuss more about our experiences in improving digital business capabilities with application modernisation. Sign up at https://aws.amazon.com/events/summits/stockholm/


Right Partners are the Key to Digital Transformation Success

CATEGORIES

Blog, Insights

According to a recent IDC Infobrief, 65% of European CEOs are under considerable pressure to deliver a successful digital transformation strategy. This comes as no surprise – we are seeing similar kinds of challenges throughout our customer base. The pressure is very real, and organizations are likely to fail unless they get three things right: digital skills, organizational structures and governance.

Successful Organizations Learn and Adapt with Transformational Partners

In many organizations, the main driver for public cloud is the digital transformation of the business. This can mean pressure from new digital competitors or the need to find new business models or reinvent existing ones. These business needs place new demands on the agility and delivery capabilities of both the infrastructure and the IT organization itself.

All the successful transformations of IT and business towards a cloud-native operating model have been made by starting with a new, dedicated team.

The marriage of agile business development and a traditional IT organization has proven to be very problematic, and therefore CEOs and their teams need to completely rethink the approach they are taking. Nordcloud has not yet seen documented cases where adapting an old legacy operating model and organization to a cloud-native one has been successful.

So far, all the successful transformations of IT and business towards a cloud-native operating model have been made by starting with a new, dedicated team or organization that concentrates on cloud-native work and scales up its activities, without carrying legacy operating models over.

This means a decision must be made to allow a bi-modal way of working.

As digital becomes embedded in the business, there is a need to redefine organizational culture and traditional structures.

The ideal team structure combines technological architecture, DevOps processes for agile development and multi-cloud governance frameworks.

Successful digital transformation requires managing a complex set of new requirements, which cannot be met by relying on old ways of working or by cooperating with the same set of partners as in the past. Successful organizations learn and adapt with transformational partners, driving for success with faster time to market and a highly adaptive way of responding to market challenges.

 

Cloud Center of Excellence Boosts Agile Way of Working

Based on Nordcloud’s experience, setting up a cloud center of excellence for cloud governance has been key to solving most of the challenges organizations face when operating in an agile way. A cloud center of excellence helps make DevOps the most important factor of the enterprise transformation journey. This model also positions developers as the most important customers of the cloud center of excellence.

The defined vision for a cloud center of excellence is to enable business objectives and innovation across the organization. It will drive agility in developing new services and enable the business to utilize the broad service offering from cloud providers. It will provide value-added services for the business and drive the optimization of processes and tasks through automation.

In the second part of this series about the role of transformational partners in organizational change, I will introduce Nordcloud’s vision for Cloud Center of Excellence and multi-cloud governance.

To pursue the keys to supercharging your digital success, download our IDC Infobrief Hyperscale Cloud Platforms As An Accelerator For Digital!


DevOpsDays is coming to Poznań, Poland

CATEGORIES

Events

Join Nordcloud on Monday, 20th May during DevOpsDays Poznan 2019!

For the first time, this event will take place in Poznań, so we couldn’t miss it. Nordcloud supports this conference that brings development and operations together. The agenda looks promising, and the speakers will cover the hottest topics in cloud-related tech like serverless, microservices, containers and… simply DevOps stuff!

Date: 20.05.2019
Time: 08:00 – 19:00
Venue: Green Conference Room at Poznan Congress Center located at MTP, East Gate, ul Głogowska 14, Poznań

Details & Program: https://devopsdays.org/events/2019-poznan/welcome/

Get your tickets here: https://devopsdayspoznan.evenea.pl/

 

 


Administrating Cloud Databases

CATEGORIES

Life at Nordcloud

Time for a new Nordcloudian story, this time from our Polish cloud database administration team!

 

1. Where are you from and how did you end up at Nordcloud?

I’m working at Nordcloud as a Cloud Database Administrator. From early childhood, I’ve been fascinated by analysing information and solving puzzles (of course at an appropriate age level 😊). This is how I came across computer science in primary school, by enrolling in a Computer Club at the Youth Cultural Center. There, I learned to program in languages such as Logo and Pascal on a ZX Spectrum and an Amstrad Schneider.

Later on, I also worked on a Commodore C64 and an IBM PC 386 (I developed my university projects on the latter, which was a big challenge and good training in writing efficient applications). After years of working as a DBA with various systems and database engines, I wanted to gain even more knowledge and started to look for new possibilities. I heard about Nordcloud from a colleague who convinced me that I would be able to pursue my tech aspirations in a cloud-native company like Nordcloud. It was hard to believe, but now I know it’s true. Apart from the daily tasks, I enjoy the working atmosphere. The general enthusiasm for the cloud here very often reminds me of the fascination with technology that I had in the Computer Club (the form of the breaks is similar :D).

What’s more, I’m just a Polish woman who is fascinated by the diversity of other cultures and who thinks that every culture is a beautiful part of the entire history of the world, which is worth knowing and understanding.

2. What is your core competence?

Relational databases and data analysis.
First I worked as a DB2 for z/OS Database Engineer and also used my knowledge in environments such as DB2/AS400 and DB2 LUW (Linux, Unix, Windows). The next step in my experience was working with Oracle on Linux, Oracle on Windows and SQL Server databases. The icing on the cake is using and expanding my experience in a multi-cloud environment. It’s really awesome!

3. What do you like most about working at Nordcloud?

Here I can develop my skills in many areas of database engineering. Being able to work with databases in public clouds such as AWS, Azure and GCP is a unique opportunity to gather experience across such broad areas. Here I can work with people who have the same passion for what they do.

The possibility of remote work makes it easier to fulfil professional and family duties.

Nordcloud is a company that brings together exceptional specialists in various fields of computer science.

4. What is the most useful thing you have learned at Nordcloud?

Cloud is the future in the rapidly changing world of IT, and it is worth paying attention to. When working in the cloud, the boundaries between roles such as system administrator, database administrator, network administrator or developer become blurred, but skills in all these areas are still very desirable.

5. What sets you on fire / what’s your favourite thing technically about public cloud?

Everything. The solution is very complex and amazing.

6. What do you do outside work?

After work I’m involved in advanced optimisation of structural household processes, implemented as part of individual projects at my home.
I support ventures in board games and percussion lessons.
In a nutshell, I have two fantastic sons 😊

BTW, Nordcloud keeps growing, so stay in touch and keep an eye on our openings.


Nordcloud is the first Nordic Authorized GCP Training Partner

CATEGORIES

Blog, News

Nordcloud is the first Nordic company to achieve the status of Authorized Google Cloud Platform Training Partner. This makes Nordcloud an authorized training partner with all three hyperscale providers (AWS, Azure and GCP).

Google Cloud is a vital addition to Nordcloud’s training offering. “Authorized trainings are a fundamental part of the cloud journey in any company. The availability of Google Cloud trainings will significantly lower the barriers to start working with Google Cloud. As the pioneer in training new cloud skills around Europe, we are excited to be able to answer the rapidly growing demand in the market”, says Jan Kritz, CEO of Nordcloud. “Tailored cloud trainings and agile change management upskill customers’ existing staff and also further accelerate the digital transformation significantly, as customers may source key positions by themselves”.

Authorized trainings are a fundamental part of the cloud journey in any company.

Jan Kritz, CEO, Nordcloud

IDC* predicts that the lack of IT skills alone, cloud-related skills in particular, will cost European companies $91 billion in lost revenue annually.

“Companies really need to understand what public cloud is all about to gain its full benefits. Research shows that trained organizations are multiple times faster to adopt cloud and four times more likely to meet ROI requirements”, says Johan Wangel, Trainings Sales Manager at Nordcloud. “As a cloud-native company, we know what it takes to succeed in public cloud. Our trainers have delivered over 300 courses since 2013 with excellent participant satisfaction and are looking forward to continuing on this path”, Wangel adds.

*IDC Infobrief sponsored by Nordcloud (2019): Hyperscale Cloud Platforms As An Accelerator For Digital: The Role Of Transformational Partners.
