How Do Medium-Sized Companies Adopt Amazon Web Services Successfully?

CATEGORIES

Blog, Insights

How do companies successfully adopt Amazon Web Services? How have individual AWS customers done it? How can Nordcloud help medium-sized companies in particular to be successful in the cloud? On October 26th, Nordcloud and AWS answered these and other questions in a specialist presentation on “SMEs in the clouds – if cloud, then do it right”. The workshop, held on the premises of the IT system house LEITWERK in Appenweier, brought together IT managers from regional companies and employees of the host.

Together with Christopher Ziegler (Industry 4.0 Lead at AWS Germany), our Thomas Baus gave the workshop participants a general introduction to Amazon Web Services and showed them several paths to success in the cloud. The focus was on the following topics, among others:

A holistic view of cloud projects is the key to success, because cloud is not a purely technological topic. It’s not just about replacing virtual servers with cloud-based instances. A cloud – in particular the AWS Cloud – offers much more than just infrastructure and requires much more than a purely technical migration. The successful and sustainable use of cloud services is an end-to-end transformation and should therefore not be driven as a pure IT project.

Bi-modal IT organization as a concept for dividing corporate IT into modern and traditional areas, enabling fast and efficient adoption of new paradigms such as cloud services. Our customer Husqvarna was mentioned as an example: there, together with AWS and Nordcloud, a digital IT service unit has been established as the backend for cloud-based innovation topics such as IoT and analytics.

Successful case studies from the local market were also considered. In the course of this, some of the typical concerns of medium-sized companies (security, costs, employees, ...) were taken up and eliminated – in the truest sense of the word – through intelligent approaches. In this context, the excellent IDC case study on Deutsche Bahn’s use of AWS was also referenced. You can find it here.

Licensing – a key issue in cloud migrations

In addition to AWS and us, a LEITWERK speaker presented on “Licensing in the Cloud”, covering the potential advantages and challenges of software licensing when moving workloads to the cloud. Afterwards, the topics were explored further over finger food and cold drinks in a relaxed atmosphere, and concrete use cases were discussed.

At AWS and Nordcloud, we deliberately want to use our services around Amazon Web Services to make small and medium-sized companies in the German-speaking world sustainably successful. The innovative strength and agility of the numerous hidden champions, for example among manufacturing companies, is considerable.

Blog

Nordcloud positioned in Gartner’s Magic Quadrant for Public Cloud Infrastructure Professional and Managed Services, Worldwide

We are joining the group of few Google Cloud Premier Partners and MSPs.

Blog

How Do Medium-Sized Companies Adopt Amazon Web Services Successfully?

How do companies successfully adopt Amazon Web Services? How have individual AWS customers done it? How can Nordcloud help medium-sized companies in particular...

Blog

Top 5 Cloud Strategy Tips For the Year 2020

In this chapter of ‘The Role of Transformational Partners in Organization Change’, we introduce Nordcloud’s vision for Cloud Strategy and...

Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








Problems with DynamoDB Single Table Design

CATEGORIES

Tech

Summary

DynamoDB is Amazon’s managed NoSQL database service. DynamoDB provides a simple, schemaless database structure and very high scalability based on partitioning. It also offers an online management console, which lets you query and edit data and makes the overall developer experience very convenient.

There are two main approaches to designing DynamoDB databases. Multi Table Design stores each database entity in a separate table. Single Table Design stores all entities in one big common table.

This article focuses mostly on the developer experience of creating DynamoDB applications. If you’re working on a large-scale project, performance and scalability may be more important aspects for you. However, you can’t completely ignore the developer experience. If you apply Single Table Design, the developer experience will be more cumbersome and less intuitive than with Multi Table Design.

Multi Table Design Overview

DynamoDB is based on individual tables that have no relationships between each other. Despite the limitation, we tend to use them in the same way as SQL database tables. We name a DynamoDB table according to a database entity, and then store instances of that database entity in that table. Each entity gets its own table.

We can call this approach Multi Table Design, because an application usually requires multiple entities. It’s the default way most of us create DynamoDB applications.

Let’s say we have the entities User, Drive, Folder and File. We would typically then have four DynamoDB tables as shown in the database layout below.

The boldface headers are field names, and the numbers are field values organized into table rows. For simplicity, we’re only dealing with numeric identifiers.


USERS
UserId(PK)
1
DRIVES
UserId(PK)  DriveId(SK)
1           1
1           2
FOLDERS
UserId(PK)  FolderId(SK)  ParentDriveId
1           1             1  
1           2             2
FILES
UserId(PK)  FileId(SK)    ParentFolderId
1           1             1
1           2             2
1           3             2

Note: PK means Partition Key and SK means Sort Key. Together they are the table’s unique primary key.

It’s pretty easy to understand the structure of this database. Everything is partitioned by UserId. Underneath each User there are Drives which may contain Folders. Folders may contain Files.

The main limitation of Multi Table Design is that you can only retrieve data from one table in one query. If you want to retrieve a User and all their Drives, Folders and Files, you need to make four separate queries. This is particularly inefficient in use cases where you cannot make all the queries in parallel. You need to first look up some data in one table, so that you can find the related data in another table.
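As a minimal sketch, the fan-out looks like one query per table. The parameter dicts below would each be passed to boto3's low-level `client.query(**params)`; the table and field names follow the example layout above.

```python
# Minimal sketch: under Multi Table Design, retrieving a user's full
# hierarchy means one query per entity table.

def hierarchy_queries(user_id):
    """Build one client.query parameter dict per table from the example layout."""
    return [
        {
            "TableName": table,
            "KeyConditionExpression": "UserId = :uid",
            "ExpressionAttributeValues": {":uid": {"N": str(user_id)}},
        }
        for table in ("Users", "Drives", "Folders", "Files")
    ]

params = hierarchy_queries(1)
print(len(params))  # 4
```

The four requests can run in parallel, but only when none of them depends on another query's results.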

Single Table Design Overview

Single Table Design is the opposite of Multi Table Design. Amazon has advocated this design pattern in various technical presentations. For an example, see DAT401 Advanced Design Patterns for DynamoDB by Rick Houlihan.

The basic idea is to store all database entities in a single table. You can do this because of DynamoDB’s schemaless design. You can then make queries that retrieve several kinds of entities at the same time, because they are all in the same table.

The primary key usually contains the entity type as part of it. The table might thus contain an entity called “User-1” and an entity called “Folder-1”. The first one is a User with identifier “1”. The second one is a Folder with identifier “1”. They are separate because of the entity prefix, and can be stored in the same table.
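As a minimal sketch, composing such prefixed keys might look like this. The `Type-id` format simply mirrors this article's examples; many real applications use a `TYPE#id` convention instead.

```python
# Minimal sketch: building entity-prefixed keys for a single shared table.
# The "Type-id" format follows the examples in this article.

def make_item_key(user_id, entity_type, entity_id):
    return {
        "PK": f"User-{user_id}",             # partition: all of one user's data
        "SK": f"{entity_type}-{entity_id}",  # sort key carries the entity type
    }

print(make_item_key(1, "User", 1))    # {'PK': 'User-1', 'SK': 'User-1'}
print(make_item_key(1, "Folder", 1))  # {'PK': 'User-1', 'SK': 'Folder-1'}
```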

Let’s say we have the entities User, Drive, Folder and File that make up a hierarchy. A table containing a bunch of these entities might look like this:


PK        SK         HierarchyId
User-1    User-1     User-1/
User-1    Drive-1    User-1/Drive-1/
User-1    Folder-1   User-1/Drive-1/Folder-1/
User-1    File-1     User-1/Drive-1/Folder-1/File-1/
User-1    Folder-2   User-1/Drive-1/Folder-2/
User-1    File-2     User-1/Drive-1/Folder-2/File-2/
User-1    File-3     User-1/Drive-1/Folder-2/File-3/

Note: PK means Partition Key and SK means Sort Key. Together they are the table’s unique primary key. We’ll explain HierarchyId in just a moment.

As you can see, all items are in the same table. The partition key is always User-1, so that all of User-1’s data resides in the same partition.

Advantages of Single Table Design

The main advantage that you get from Single Table Design is the ability to retrieve a hierarchy of entities with a single query. You can achieve this by using Secondary Indexes. A Secondary index provides a way to query the items in a table in a specific order.

Let’s say we create a Secondary Index where the partition key is PK and the sort key is HierarchyId. It’s now possible to query all the items whose PK is “User-1” and that have a HierarchyId beginning with “User-1/Drive-1/”. We get all the folders and files that the user has stored on Drive-1, and also the Drive-1 entity itself, as the result.
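As a sketch, the query parameters might look like this. The table name `AppTable` and index name `HierarchyIndex` are made-up placeholders; the key fields follow the table above.

```python
# Minimal sketch: one query retrieves a whole subtree via the secondary
# index on (PK, HierarchyId). Table and index names here are hypothetical.

def subtree_query(user_key, hierarchy_prefix):
    """Parameters for boto3's client.query(**params)."""
    return {
        "TableName": "AppTable",
        "IndexName": "HierarchyIndex",
        "KeyConditionExpression": "PK = :pk AND begins_with(HierarchyId, :prefix)",
        "ExpressionAttributeValues": {
            ":pk": {"S": user_key},
            ":prefix": {"S": hierarchy_prefix},
        },
    }

# Drive-1 itself plus every Folder and File beneath it, in one round trip:
params = subtree_query("User-1", "User-1/Drive-1/")
```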

The same would have been possible with Multi Table Design, just not as efficiently. We would have defined similar Secondary Indexes to implement the relationships. Then we would have separately queried the user’s drives from the Drives table, folders from the Folders table, and files from the Files table, and combined all the results.

Single Table Design can also handle other kinds of access patterns more efficiently than Multi Table Design. Check the YouTube video mentioned in the beginning of this article to learn more about them.

Complexity of Single Table Design

Why would we not always use Single Table Design when creating DynamoDB based applications? Do we lose something significant by applying it to every use case?

The answer is yes. We lose simplicity in database design. When using Single Table Design, the application becomes more complicated and unintuitive to develop. As we add new features and access patterns over time, the complexity keeps growing.

Just managing one huge DynamoDB table is complicated in itself. We have to remember to include the “User-” entity prefix in all queries when working in the AWS Console. Simple table scans aren’t possible without specifying a prefix.

We also need to manually maintain the HierarchyId composite key whenever we create or update entities. It’s easy to cause weird bugs by forgetting to update HierarchyId in some edge case or when editing the database manually.
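A minimal sketch of that maintenance burden, assuming the path format from the tables above:

```python
# Minimal sketch: HierarchyId has to be rebuilt by application code on
# every create or move. Pass "" as the parent for top-level User items.

def build_hierarchy_id(parent_hierarchy_id, entity_type, entity_id):
    return f"{parent_hierarchy_id}{entity_type}-{entity_id}/"

user = build_hierarchy_id("", "User", 1)         # "User-1/"
drive = build_hierarchy_id(user, "Drive", 1)     # "User-1/Drive-1/"
folder = build_hierarchy_id(drive, "Folder", 2)  # "User-1/Drive-1/Folder-2/"
print(build_hierarchy_id(folder, "File", 3))     # User-1/Drive-1/Folder-2/File-3/

# Skip this helper in one code path, or edit an item in the console
# without it, and that subtree silently drops out of hierarchy queries.
```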

As we start adding sorting and filtering capabilities to our database queries, things get even more complicated.

Things Get More Complicated

Now, let’s allow sorting files by their creation date. Extending our example, we might have a table design like this:


PK      SK        HierarchyId                      CreatedAt
User-1  User-1    User-1/                          2019-07-01
User-1  Drive-1   User-1/Drive-1/                  2019-07-02
User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03
User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04
User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05
User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06
User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07

How do we retrieve the contents of Folder-2 ordered by the CreatedAt field? We add a Global Secondary Index for this access pattern, which will consist of GSI1PK and GSI1SK:


PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK
User-1  User-1    User-1/                          2019-07-01  User-1/           ~
User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02
User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~
User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04
User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~
User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06
User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07

We’ll get to the semantics of GSI1PK and GSI1SK in just a moment.

But why did we call these fields GSI1PK and GSI1SK instead of something meaningful? Because they will contain different kinds of values depending on the entity stored in each database item. GSI1PK and GSI1SK will be calculated differently depending on whether the item is a User, Drive, Folder or File.

Overloading Names Adds Cognitive Load

Since it’s not possible to give GSI keys sensible names, we just call them GSI1PK and GSI1SK. These kinds of generic field names add cognitive load, because the fields are no longer self-explanatory. Developers need to check development documentation to find out what exactly GSI1PK and GSI1SK mean for some particular entity.

So, why is the GSI1PK field not the same as HierarchyId? Because in DynamoDB you cannot query for a range of partition key values. You have to query for one specific partition key. In this use case, we can query for GSI1PK = “User-1/” to get items under a user, and query for GSI1PK = “User-1/Folder-1/” to get items under a user’s folder.

What about the tilde (~) characters in some GSI1SK values? They implement reverse date sorting in a way that also allows pagination. Tilde is the last printable character in the ASCII character set and will sort after all other characters. It’s a nice hack, but it also adds even more cognitive load to understanding what’s happening.

When we query for GSI1PK = “User-1/Folder-1/”  and sort the results by GSI1SK in descending key order, the first result is Folder-1 (because ~ comes after all other keys) and the following results are File-2 and File-3 in descending date order. Assuming there are lots of files, we could continue this query using the LastEvaluatedKey feature of DynamoDB and retrieve more pages. The parent object (Folder-1) always appears in the first page of items.
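The paginated reverse-date query can be sketched as parameters for boto3's low-level `client.query(**params)`. The `AppTable` and `GSI1` names are assumptions carried over from the example.

```python
# Minimal sketch: descending GSI1SK order puts the parent's "~" row on
# the first page; LastEvaluatedKey drives pagination.

def folder_page(folder_prefix, page_size, start_key=None):
    params = {
        "TableName": "AppTable",
        "IndexName": "GSI1",
        "KeyConditionExpression": "GSI1PK = :pk",
        "ExpressionAttributeValues": {":pk": {"S": folder_prefix}},
        "ScanIndexForward": False,  # "~" first, then newest CreatedAt values
        "Limit": page_size,
    }
    if start_key is not None:
        # LastEvaluatedKey from the previous page's response
        params["ExclusiveStartKey"] = start_key
    return params

params = folder_page("User-1/Folder-1/", page_size=25)
```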

Overloaded GSI Keys Can’t Overlap

You may have noticed that we can now also query a user’s drives in creation date order. The GSI1PK and GSI1SK fields apply to this relationship as well. This works because the relationship between the User and Drive entities does not overlap with the relationship between the Folder and File entities.

But what happens if we need to query all the Folders under a Drive? Let’s say the results must, again, be in creation date order.

We can’t use the GSI1 index for this query because the GSI1PK and GSI1SK fields already have different semantics. We already use those keys to retrieve items under Users or Folders.

So, we’ll create a new Global Secondary Index called GSI2, where GSI2PK and GSI2SK define a new relationship. The fields are shown in the table below:


PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK      GSI2PK           GSI2SK
User-1  User-1    User-1/                          2019-07-01  User-1/           ~
User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02  User-1/Drive-1/  ~
User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~           User-1/Drive-1/  2019-07-03
User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04  User-1/Drive-1/  2019-07-04
User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~           User-1/Drive-1/  2019-07-05
User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06
User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07

Note: Please scroll the table horizontally if necessary.

Using this new index we can query for GSI2PK = “User-1/Drive-1/” and sort the results by GSI2SK to get the folders in creation date order. Drive-1 has a tilde (~) as the sort key to ensure it comes as the first result on the first page of the query.

Now It Gets Really Complicated

At this point it’s becoming increasingly complicated to keep track of all those GSI fields. Can you still remember what exactly GSI1PK and GSI2SK mean? The cognitive load increases because you’re dealing with abstract identifiers instead of meaningful field names.

The bad news is that it only gets worse. As we add more entities and access patterns, we have to add more Global Secondary Indexes. Each of them will have a different meaning in different situations. Your documentation becomes very important. Developers need to check it all the time to find out what each GSI means.

Let’s add a new Status field to Files and Folders. We will now allow querying for Files and Folders based on their Status, which may be VISIBLE, HIDDEN or DELETED. The results must be sorted by creation time.

We end up with a design that requires three new Global Secondary Indexes. GSI3 will contain files that have a VISIBLE status. GSI4 will contain files that have a HIDDEN status. GSI5 will contain files that have a DELETED status. Here’s what the table will look like:


PK      SK        HierarchyId                      CreatedAt   GSI1PK            GSI1SK      GSI2PK           GSI2SK      Status    GSI3PK                    GSI3SK      GSI4PK                   GSI4SK      GSI5PK                     GSI5SK
User-1  User-1    User-1/                          2019-07-01  User-1/           ~
User-1  Drive-1   User-1/Drive-1/                  2019-07-02  User-1/           2019-07-02  User-1/Drive-1/  ~
User-1  Folder-1  User-1/Drive-1/Folder-1/         2019-07-03  User-1/Folder-1/  ~           User-1/Drive-1/  2019-07-03  VISIBLE   User-1/Folder-1/VISIBLE/  ~           User-1/Folder-1/HIDDEN/  ~           User-1/Folder-1/DELETED/   ~
User-1  File-1    User-1/Drive-1/Folder-1/File-1/  2019-07-04  User-1/Folder-1/  2019-07-04  User-1/Drive-1/  2019-07-04  VISIBLE   User-1/Folder-1/VISIBLE/  2019-07-04  User-1/Folder-1/HIDDEN/  2019-07-04  User-1/Folder-1/DELETED/
User-1  Folder-2  User-1/Drive-1/Folder-2/         2019-07-05  User-1/Folder-2/  ~           User-1/Drive-1/  2019-07-05  VISIBLE   User-1/Folder-2/VISIBLE/  ~           User-1/Folder-2/HIDDEN/  ~           User-1/Folder-2/DELETED/   ~
User-1  File-2    User-1/Drive-1/Folder-2/File-2/  2019-07-06  User-1/Folder-2/  2019-07-06                               HIDDEN    User-1/Folder-2/VISIBLE/              User-1/Folder-2/HIDDEN/  2019-07-06  User-1/Folder-2/DELETED/
User-1  File-3    User-1/Drive-1/Folder-2/File-3/  2019-07-07  User-1/Folder-2/  2019-07-07                               DELETED   User-1/Folder-2/VISIBLE/              User-1/Folder-2/HIDDEN/              User-1/Folder-2/DELETED/   2019-07-07

Note: Please scroll the table horizontally if necessary.

You may think this is getting a bit too complicated. It’s complicated because we still want to be able to retrieve both a parent item and its children in just one query.

For example, let’s say we want to retrieve all VISIBLE files in Folder-1. We query for GSI3PK = “User-1/Folder-1/VISIBLE/” and again sort the results in descending order as earlier. We get back Folder-1 as the first result and File-1 as the second result. Pagination will also work if there are more results. If there are no VISIBLE files under the folder, we only get a single result, the folder.

That’s nice. But can you now figure out how to retrieve all DELETED files in Folder-2? Which GSI will you use and what do you query for? You probably need to stop your development work for a while and spend some time reading the documentation.

The Complexity Multiplies

Let’s say we need to add a new Status value called ARCHIVED. This will involve creating yet another GSI and adding application code in all the places where Files or Folders are created or updated. The new code needs to make sure that GSI6PK and GSI6SK are generated correctly.

That’s a lot of development and testing work. It will happen every time we add a new Status value or some other way to perform conditional queries.
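A sketch of that write-path burden, mirroring how the example rows above populate GSI3–GSI5 (field names follow the table; everything else is illustrative):

```python
# Minimal sketch: every File create/update must recompute a key pair per
# status index. Adding an ARCHIVED status means extending this list and
# touching every code path that writes Files or Folders.

STATUSES = ("VISIBLE", "HIDDEN", "DELETED")  # GSI3, GSI4, GSI5 in order

def file_index_keys(folder_prefix, status, created_at):
    keys = {}
    for n, gsi_status in enumerate(STATUSES, start=3):
        keys[f"GSI{n}PK"] = f"{folder_prefix}{gsi_status}/"
        if status == gsi_status:
            # only the index matching the item's status gets a sort value
            keys[f"GSI{n}SK"] = created_at
    return keys

keys = file_index_keys("User-1/Folder-2/", "HIDDEN", "2019-07-06")
print(keys["GSI4SK"])    # 2019-07-06
print("GSI3SK" in keys)  # False
```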

Later we might also want to add new sort fields called ModifiedAt and ArchivedAt. Each new sort field will require its own set of Global Secondary Indexes. We have to create a new GSI for every possible Status value and sort key combination, so we end up with quite a lot of them. In fact, our application will now have GSI1-GSI18, and developers will need to understand what GSI1PK-GSI18PK and GSI1SK-GSI18SK mean.

In fairness, this complexity is not unique to Single Table Design. We would have similar challenges when applying Multi Table Design and implementing many different ways to query data.

What’s different in Multi Table Design is that each entity will live in its own table where the field names don’t have to be overloaded. If you add a feature that involves Folders, you only need to deal with the Folders table. Indexes and keys will have semantically meaningful names like “UserId-Status-CreatedAt-index”. Developers can understand them intuitively without referring to documentation all the time.

Looking for a Compromise

We can make compromises between Single Table Design and Multi Table Design to reduce complexity. Here are some suggestions.

First of all, you should think of Single Table Design as an optimization that you might be applying prematurely. If you design all new applications from scratch using Single Table Design, you’re basically optimizing before knowing the real problems and bottlenecks.

You should also consider whether the database entities will truly benefit from Single Table Design or not. If the use case involves retrieving a deep hierarchy of entities, it makes sense to combine those entities into a single table. Other entities can still live in their own tables.

In many real-life use cases the only benefit of Single Table Design is the ability to retrieve a parent entity and its children using a single DynamoDB query. In such cases the benefit is pretty small. You could just as well make two parallel requests: retrieve the parent using GetItem and the children using a Query. In an API-based web application the user interface can perform these requests in parallel and combine the results in the frontend.
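The two parallel Multi Table requests can be sketched as follows. Table names follow the earlier layout; the Files index name is hypothetical and assumes a GSI keyed on (UserId, ParentFolderId).

```python
# Minimal sketch: a GetItem for the parent plus a Query for the children,
# which a frontend can fire in parallel and merge client-side.

def parent_request(user_id, folder_id):
    """Parameters for client.get_item(**params): the Folder itself."""
    return {
        "TableName": "Folders",
        "Key": {
            "UserId": {"N": str(user_id)},
            "FolderId": {"N": str(folder_id)},
        },
    }

def children_request(user_id, folder_id):
    """Parameters for client.query(**params): the Folder's Files."""
    return {
        "TableName": "Files",
        "IndexName": "UserId-ParentFolderId-index",  # hypothetical GSI
        "KeyConditionExpression": "UserId = :uid AND ParentFolderId = :fid",
        "ExpressionAttributeValues": {
            ":uid": {"N": str(user_id)},
            ":fid": {"N": str(folder_id)},
        },
    }
```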

Many of the design patterns related to Single Table Design also apply to Multi Table Design. For instance, overloaded composite keys and secondary indexes are sometimes quite helpful in modeling hierarchies and relationships. You can use them in Multi Table Design without paying the full price of complexity that Single Table Design would add.

In summary, you should use your judgment case by case. Don’t make a blanket policy to design every application using either Single Table Design or Multi Table Design. Learn the design patterns and apply them where they make sense.


Findings from AWS re:Invent 2019, Part 2

CATEGORIES

Blog, Insights

I was expecting the usual set of service and feature announcements in Werner Vogels’ Thursday keynote, but instead he focused on what is happening behind the scenes of AWS, especially the EC2 Nitro architecture and S3. So instead of analyzing Werner’s keynote, I picked two announcements from Wednesday that didn’t make it into the keynotes but are worthy of attention because of how they will simplify building APIs and distributed applications.

Amazon API Gateway HTTP APIs

Amazon API Gateway HTTP APIs will lower the barrier of entry when starting to build that next great service or application. It is now trivial to get started with an HTTP proxy for Lambda function(s):

% aws apigatewayv2 create-api \
    --name MyAPIname \
    --protocol-type HTTP \
    --target arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION

It is also nice that HTTP APIs have Serverless Application Model (SAM) support from day 1. And when your API starts getting attention, pricing is up to 70% cheaper than the original API Gateway. Compatible API Gateway definitions (= HTTP and Lambda backends with OIDC/JWT-based authorization) can be exported and re-imported as HTTP APIs.

Amplify DataStore

Amplify DataStore is a queryable, on-device data store for web, IoT and mobile developers using React Native, iOS and Android. The idea is that you don’t need to write separate code for offline and online scenarios. Working with distributed, cross-user data is as simple as using local data. DataStore is available with the latest Amplify JavaScript client; the iOS and Android clients are in preview.

The DataStore blog post and demo app are a good way to get your feet wet with DataStore and see how simple it can be to create applications using shared state between multiple online and offline clients.

Interested in reading more about Petri’s views and insights? Follow his blog CarriageReturn.Nl


Findings from AWS re:Invent 2019, Part 1

CATEGORIES

Blog, Insights

ML/AI was definitely the topic of Andy Jassy’s re:Invent Tuesday keynote. Another area of major investment was service proximity to customers and end users. With that, it was only natural that there were also some new networking features to help build multi-region connectivity.

Machine Learning for the Masses

ML/AI received a lot of love in Tuesday announcements. If there is one thing to pick from the group, it would be SageMaker Autopilot:

“With this feature, Amazon SageMaker can use your tabular data and the target column you specify to automatically train and tune your model, while providing full visibility into the process. As the name suggests, you can use it on autopilot, deploying the model with the highest accuracy with one click in Amazon SageMaker Studio, or use it as a guide to decision making, enabling you to make tradeoffs, such as accuracy with latency or model size.”

Together with the SageMaker Studio web-based IDE, this aims to democratize the artisan work of data analytics. There were also three interesting real-world applications of ML announced (all in preview):

  • Amazon CodeGuru for automated code reviews and application performance recommendations.
  • Amazon Fraud Detector is a managed service to identify fraudulent activities such as online payment fraud and the creation of fake accounts.
  • Amazon Detective is a service to analyze, investigate and find the root cause of potential security issues or suspicious activities, based on analysis of logs from AWS resources.

As services, these are all very easy to consume and can bring a lot of value by preventing costly mistakes from happening. They also follow the same pattern as SageMaker Autopilot, automating artisan work traditionally performed by skilled (but overloaded) individuals.

Getting Closer to the Customer

Another theme in Tuesday’s announcements was cloud services getting physically closer to customers. This is important when you must keep your data in a certain country or need very low latencies.

An AWS Local Zone is an extension of an AWS region. It brings compute, storage and a selected subset of AWS services closer to customers. The very first Local Zone was announced in Los Angeles, but I would expect them to pop up in many cities around the world that don’t yet have their own AWS region nearby.

If a Local Zone is not close enough, there is AWS Wavelength. This is yet another variation of an (availability) zone. Wavelength has a similar (but not identical?) subset of AWS services as a Local Zone. Wavelength Zones are co-located at 5G operators’ network edges, which helps in building ultra-low-latency services for mobile networks.

AWS Outposts is now in GA, and support for EMR and container services like ECS, EKS and App Mesh was added to the Outposts service mix. Pricing starts from $225k with a 3-year upfront payment, or $7,000/month for a 3-year subscription. I think many customers will want to wait and see how Local Zones expand before investing in on-premises hardware.

Networking

AWS has had a tradition of changing networking best practices every year at re:Invent. This year it wasn’t quite as dramatic, but there were very welcome feature announcements that go nicely with the idea of different flavours of local regions.

Transit Gateway inter-region peering allows you to build a global WAN within AWS networks. This is a great feature when building multi-region services, or when your services are spread across multiple regions because of differences in the local service mix. That said, please note that inter-region peering is only available in certain regions at launch.

Transit Gateway Network Manager enables you to centrally manage and monitor your global network, not only on AWS but also on-premises. As networking is getting much more complex, this global view and management will be a most welcome help. It will also help shift the balance of network management from on-premises towards the public cloud.

Finally, the lack of support for multicast traffic was one of the last remaining blockers for moving some applications to a VPC. With the announcement of Transit Gateway multicast support, even that is now possible. The fine print says multicast is not supported over Direct Connect, site-to-site VPN or peering connections.

Interested in reading more about Petri’s views and insights? Follow his blog CarriageReturn.Nl


Partner and capacity management with Peter Bakker

CATEGORIES

Life at Nordcloud

1. Where are you from and how did you end up at Nordcloud?

I’m Dutch, living in Rotterdam. I started the Azure relationship between Microsoft and Mirabeau while I was working at Mirabeau. I grew their Azure business, we became an MSP, and I was asked by Microsoft to join their Partner Advisory team. There I met Nordcloud’s founder Fernando. Mirabeau was acquired by Cognizant and integrated as of January 1st of this year.

In terms of my career, I was in the middle of a journey with different changes, and Fernando suggested that I join Nordcloud in the spring of this year. His words were: “We always have room for good people”.

I had a chat with Nordcloud’s CEO Jan and, after some discussion, we agreed on interesting goals. I switched clouds from Microsoft to AWS and became the AWS Partner Manager at Nordcloud.

 

2. What is your role and core competence?

I was hired as Partner Manager for AWS. My responsibility was first to move from escalation management to opportunity management. Working with different AWS managers, we started fixing things and recently signed a joint partner plan for 2020. We now have a joint ambition for what we want to achieve together, and this is actually one of my best memories since joining Nordcloud!

My role has also evolved since I started, and I now also wear the hat of Head of Capacity. I’m commercially responsible for reselling AWS, Azure, and GCP: managing our margins, making our sales colleagues’ lives a bit easier, and understanding cloud costs, cost optimisation and the real value of capacity management.

I fly around a lot and get to work with different teams as we’re active in 10 countries. My daughter recently asked me if I was working at KLM.

 

3. What do you like most about working at Nordcloud?

1) Depth and broadness of skill levels: we have so many talented, amazing colleagues.

2) The great names we work for and all the great things we do, for example for BMW, SKF, Volvo or Red Bull.

3) Freedom and opportunity to learn and grow. 

 

4. What sets you on fire / what’s your favourite thing about public cloud?

Digital transformation! All the new business opportunities that our customers get by adopting cloud.

For example, last week at the AWS Partner Summit, Konecranes presented a great case in which Nordcloud helped them build, in a very short timeframe, a serverless IoT solution for weighing containers. This solution is now fitted in new equipment and retrofitted into existing equipment.

The payback time for Konecranes was only 3 months, and sales of their equipment got a boost.

It’s great seeing how starting small and laying foundations sets us and our clients up for success and even bigger projects. 

 

5. What do you do outside work?

I’m a passionate golf player and am involved with the youth at our golf club in Rotterdam.

 

6. How would you describe our culture?

Open and flat organisation!

There is no hierarchy at Nordcloud. We are all colleagues, and together we help our customers go cloud native.

 

7. What are your greetings/advice for someone who might be considering a job at Nordcloud?

Somebody in a recruitment process recently asked me how I like it here at Nordcloud. I answered, “I should have done this a year ago!”

As there is a lot of freedom and opportunity to learn and grow, you must remember to take care of yourself too. There is always something interesting to do, so it’s very much about finding the right balance. As things get exciting, I sometimes have to remind myself: there is also always tomorrow!


Nordcloud Achieves AWS Financial Services Competency Status

CATEGORIES

News, Press releases

Nordcloud has achieved Amazon Web Services (AWS) Financial Services Competency status. This designation recognizes Nordcloud for providing deep expertise to help organizations manage critical issues pertaining to the industry, such as risk management, core systems implementations, data management, navigating compliance requirements, and establishing governance models.


Achieving the AWS Financial Services Competency differentiates Nordcloud as an AWS Partner Network (APN) member that has demonstrated relevant technical proficiency and proven customer success, delivering solutions seamlessly on AWS. To receive the designation, APN Partners must possess deep AWS expertise and undergo an assessment of the security, performance, and reliability of their solutions. 

“We are excited to be recognised for our FSI achievements,  as it is our major focus area in terms of industry and solutions. A big thanks to our team and, of course, our beloved customers for trusting in Nordcloud’s ability,” said Jan Kritz, CEO of Nordcloud. “This competency will help us offer our public cloud services to an even larger group of FSI customers in all of our 10 countries.”

AWS enables scalable, flexible, and cost-effective solutions for organizations ranging from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise.

“The main value of Nordcloud is to power up our customer’s digital transformation enabled by public cloud,” Kritz concluded.


HUS Chooses Nordcloud As Partner for Amazon Web Services Development

CATEGORIES

News

Helsinki and Uusimaa Hospital District (HUS) has chosen Nordcloud as a partner to develop and manage its Amazon Web Services environments. The contract contains AWS capacity management and managed services, as well as consulting services, and it enables HUS, a Finnish pioneer in digital transformation of healthcare, to leverage Amazon Web Services as a platform for new services and data analytics development.

Nordcloud is proud to be an AWS Premier Consulting Partner since 2014 and an AWS Managed Service Provider since 2015. Making use of our years of experience and utilising best practices picked up along the way, Nordcloud is able to design and build cloud environments that match customer budget and demands whilst being completely elastic and scalable.

For HUS, Nordcloud will initiate the development of the cloud foundation, as well as setting up new data management solutions. The foundation is a vital step on the enterprise cloud journey, as it acts as an enabler for automated operations and scalable services.

Amazon Web Services is one of the leading cloud computing platforms, providing a reliable, scalable, and low-cost set of remote computing services. The AWS cloud was launched by the people behind Amazon.com in 2006, when Amazon started to offer businesses IT infrastructure services in the form of web services, now commonly known as cloud computing. Today, Amazon Web Services powers hundreds of thousands of businesses in 190 countries around the world. With data centre locations in North America, Europe, Brazil, Singapore, Japan, and Australia, customers across all industries are taking advantage of the AWS cloud.


Look ma, I created a home IoT setup with AWS, Raspberry Pi, Telegram and RuuviTags

CATEGORIES

Tech

Hobby projects are a fun way to try and learn new things. This time, I decided to build a simple IoT setup for home, to collect and visualise information like temperature, humidity and pressure. While learning by doing was definitely one of the reasons I decided to embark on the project, I also wanted, for example, to control the radiators located in the attic: not necessarily by switching power on and off, but by getting alarms if I’m heating too much or too little, so that I can tune the power manually. Saving some money, in practice. Also, it is nice to get reminders from the humidor that the cigars are drying out 😉

I personally learned several things while working on it, and via this blog post, hopefully you can too!

Overview

The idea of the project is relatively simple: place a few RuuviTag sensors around the house, collect the data and push it into the AWS cloud for permanent storage and additional processing. From there, several solutions can be built around the data, visualisation and alarms being only a few of them.

Overview of the setup

The solution is built on AWS serverless technologies, which keeps running expenses low while requiring almost no maintenance. The following code samples are only snippets from the complete solution, but I’ve tried to include the relevant parts.

Collect data with RuuviTags and Raspberry Pi

The tag sensors broadcast their data (humidity, temperature, pressure etc.) periodically via Bluetooth LE. Because Ruuvi is an open-source-friendly product, there are already several ready-made solutions and libraries to utilise. I went with node-ruuvitag, which is a Node.js module. (Note: I found that the module works best with Linux and Node 8.x, but you may be successful with other combinations, too.)

The Raspberry Pi runs a small Node.js application that both listens for incoming messages from the RuuviTags and forwards them to the AWS IoT service. The app communicates with the AWS cloud using the thingShadow client found in the AWS IoT Device SDK module. The application authenticates using X.509 certificates generated by you or by AWS IoT Core.

The script runs as a Linux service. While the tags broadcast data every second or so, the app on the Raspberry Pi forwards the data only once every 10 minutes for each tag, which is more than sufficient for the purpose. This is also an easy way to keep processing and storage costs very low in AWS.
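As a rough sketch, this per-tag throttling can be kept as a small pure function; the 10-minute interval matches the setup described here, while the `node-ruuvitag` and `aws-iot-device-sdk` wiring is shown only in comments, since it needs real hardware and certificates:

```javascript
// Decide, per tag, whether a freshly received reading should be
// forwarded to AWS IoT or dropped. Tags broadcast every second or so,
// but we only forward once per interval to keep AWS costs low.
const FORWARD_INTERVAL_MS = 10 * 60 * 1000; // 10 minutes

const lastForwarded = new Map(); // tagId -> timestamp of last forward

function shouldForward(tagId, nowMs) {
  const last = lastForwarded.get(tagId);
  if (last === undefined || nowMs - last >= FORWARD_INTERVAL_MS) {
    lastForwarded.set(tagId, nowMs);
    return true;
  }
  return false;
}

// Wiring sketch (requires node-ruuvitag, aws-iot-device-sdk and
// X.509 certificates, so it is not runnable as-is):
//
// const ruuvi = require('node-ruuvitag');
// ruuvi.on('found', tag => {
//   tag.on('updated', data => {
//     if (shouldForward(tag.id, Date.now())) {
//       thingShadow.publish('ruuvi/' + tag.id, JSON.stringify(data));
//     }
//   });
// });
```

Keeping the decision logic separate from the Bluetooth and AWS plumbing also makes it trivial to test without any hardware attached.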

When building an IoT or big data solution, one may initially aim for near real-time data transfers and high data resolution, even though the solution built on top may not really require them. Consider instead whether sending data in batches once an hour, at a 10-minute resolution, would be sufficient; it is also cheaper to execute.

When running the broadcast-listening script on the Raspberry Pi, there are a couple of things to consider:

  • All the tags may not appear at the first reading: (re)run ruuvi.findTags() every 30 minutes or so to ensure all the tags get collected
  • The Raspberry Pi can drop from the WLAN: set up a script to automatically reconnect in case that happens

With these in place, the setup has been working without issues so far.

Process data in AWS using IoT Core and friends

AWS processing overview

Once the data hits AWS IoT Core, there can be several rules for handling the incoming messages. In this case, I set up a Lambda to be triggered for each message. AWS IoT also provides a way to do the DynamoDB inserts directly from the messages, but I found it a more versatile and development-friendly approach to use a Lambda in between instead.

AWS IoT Core act rule

DynamoDB works well as permanent storage in this case: the data structure is simple, and the service provides on-demand scalability and billing. Just pay attention when designing the table structure and make sure it fits your use cases, as changes made afterwards may be laborious. For more information about the topic, I recommend watching a talk on Advanced Design Patterns for DynamoDB.

The DynamoDB structure I ended up using
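A minimal sketch of the Lambda sitting between AWS IoT and DynamoDB could look like this; note that the table name `ruuviMeasurements`, the tagId/timestamp key schema and the attribute names are illustrative assumptions, not necessarily the exact structure used here:

```javascript
// Lambda handler sketch: triggered by an AWS IoT rule for every
// incoming RuuviTag message, it shapes the payload into a DynamoDB
// item keyed by tag id (partition key) and timestamp (sort key).
const TABLE_NAME = 'ruuviMeasurements'; // assumed name

function buildPutParams(message, nowIso) {
  return {
    TableName: TABLE_NAME,
    Item: {
      tagId: message.tagId,         // partition key
      timestamp: nowIso,            // sort key, ISO-8601 string
      temperature: message.temperature,
      humidity: message.humidity,
      pressure: message.pressure,
    },
  };
}

// Actual handler wiring (needs the aws-sdk and AWS credentials,
// so shown only as a comment):
//
// exports.handler = async (event) => {
//   const params = buildPutParams(event, new Date().toISOString());
//   await new AWS.DynamoDB.DocumentClient().put(params).promise();
// };
```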

Visualise data with React and Highcharts

Once we have the data stored in a semi-structured format in the AWS cloud, it can be visualised or processed further. I set up a periodic Lambda to retrieve the data from DynamoDB and generate CSV files into a public S3 bucket for the React clients to pick up. The CSV format was preferred over, for example, JSON to decrease the file size. At some point, I may also try out the Parquet format and see if it suits the purpose even better.
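The CSV generation step can be sketched as a plain function; the column set here is an assumption for illustration:

```javascript
// Turn query results (row objects from DynamoDB) into a compact CSV
// string to be written to S3. CSV keeps the files the React client
// downloads noticeably smaller than the equivalent JSON.
const COLUMNS = ['timestamp', 'temperature', 'humidity', 'pressure'];

function toCsv(records) {
  const header = COLUMNS.join(',');
  const rows = records.map(r => COLUMNS.map(c => r[c]).join(','));
  return [header, ...rows].join('\n');
}
```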

Overview visualisations for each tag

The React application fetches the CSV file from S3 using a custom hook and passes it to a Highcharts component.

During my professional career, I’ve learnt that data visualisations often cause various challenges due to limitations and/or bugs in the implementation. After using several chart components, I personally prefer Highcharts over other libraries whenever possible.
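The hook itself needs React to run, so here is only the CSV parsing a client like this would rely on, with the hook sketched in comments; the name `useCsv` is hypothetical:

```javascript
// Parse a CSV file downloaded from S3 into row objects for the chart
// component. Assumes a header row; numeric cells become numbers,
// everything else stays a string.
function parseCsv(text) {
  const [headerLine, ...lines] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return lines.map(line => {
    const cells = line.split(',');
    return headers.reduce((row, h, i) => {
      const n = Number(cells[i]);
      row[h] = Number.isNaN(n) ? cells[i] : n;
      return row;
    }, {});
  });
}

// Hook sketch (not runnable outside a React app):
//
// function useCsv(url) {
//   const [rows, setRows] = React.useState([]);
//   React.useEffect(() => {
//     fetch(url).then(r => r.text()).then(t => setRows(parseCsv(t)));
//   }, [url]);
//   return rows;
// }
```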

Snapshot from the tag placed outside

Send notifications with Telegram bots

Visualisations work well for seeing the current status and how the values vary over time. However, if something drastic happens, like the humidor humidity dropping below the preferred level, I’d like to get an immediate notification. This can be done, for example, using Telegram bots:

  1. Define the limits for each tag, for example in a DynamoDB table
  2. Compare the limits with the actual measurements in a custom Lambda whenever data arrives
  3. If a value exceeds a limit, trigger an SNS message (so that several actions can be subscribed to it)
  4. Listen to the SNS topic and send a Telegram message to the message group you’re participating in
  5. Profit!
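Steps 2 and 3 above can be sketched as a pure limit check; the shape of the limits object and the field names are my assumptions, and while the Telegram Bot API’s `sendMessage` endpoint in the comment is real, the token and chat id are placeholders:

```javascript
// Compare a measurement against per-tag limits and collect alert
// texts. An empty result means no SNS message needs to be triggered.
function checkLimits(measurement, limits) {
  const alerts = [];
  for (const [field, range] of Object.entries(limits)) {
    const value = measurement[field];
    if (range.min !== undefined && value < range.min) {
      alerts.push(`${field} ${value} below min ${range.min}`);
    }
    if (range.max !== undefined && value > range.max) {
      alerts.push(`${field} ${value} above max ${range.max}`);
    }
  }
  return alerts;
}

// The SNS-subscribed notifier would then POST the alert text to
// https://api.telegram.org/bot<TOKEN>/sendMessage
// with a JSON body like { chat_id: <GROUP_ID>, text: alerts.join('\n') }.
```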

Limits in DynamoDB

 

Summary

By now, you should have some kind of understanding of how one can combine IoT sensors, AWS services and outputs like web apps and Telegram nicely together using serverless technologies. If you’ve built something similar or taken a very different approach, I’d be happy to hear about it!

Price tag

Building and running your own IoT solution using RuuviTags, a Raspberry Pi and the AWS cloud does not require big investments. Here are some approximate expenses for the setup:

  • 3-pack of RuuviTags: €90 (ok, I wish these were a little cheaper so I’d buy more of them, since the product is nice)
  • Raspberry Pi with accessories: €50
  • Energy used by the RPi: http://www.pidramble.com/wiki/benchmarks/power-consumption
  • Lambda executions: $0.30/month
  • SNS notifications: $0.01/month
  • S3 storage: $0.01/month
  • DynamoDB: $0.01/month

After looking into the numbers, there are several places to optimise as well. For example, some Lambdas are executed more often than really needed.

Next steps

I’m happy to say this hobby project has reached that certain level of readiness where it runs smoothly for days on end and is genuinely valuable to me. As a next step, I’m planning to add some kind of time-range selection: as the amount of data increases, it will be interesting to see how the values vary in the long term. It would also be a good exercise to integrate some additional AWS services to detect drastic changes, or communication failures between the device and the cloud, when they happen. One way or another, at least now I have a good base to continue from, or to build something totally different next time 🙂

References, credits and derivative work

This project is by no means a snowflake and has been inspired by existing projects and work:

 


For more content follow Juha and Nordcloud Engineering on Medium.

At Nordcloud we are always looking for talented people. If you enjoyed reading this post and would like to work with public cloud projects on a daily basis, check out our open positions here.


Responsibilities & freedom

CATEGORIES

Life at Nordcloud

Jonah joined Nordcloud around half a year ago – now he shares his thoughts and wisdom, for example about the freedom we offer, but also the responsibilities that come with it!

He explains how we don’t have strict rules or a lot of people giving directions, which means everyone needs to be able to work independently with that freedom.

 

It’s all about common trust!

1. Where are you from and how did you end up at Nordcloud?

I’m from the Netherlands, born in Amsterdam.

I was looking for the next, bigger challenge when I was contacted on LinkedIn by Anna (Talent Acquisition Specialist at Nordcloud), and we started talking. I felt that, as a professional who is always looking to develop, Nordcloud was the right size for me to do that, and about 3.5 months later I started at Nordcloud.

2. What is your role and core competence?

I’m a Cloud Architect, with infrastructure as code and automation as my core competences.

 

3. What do you like most about working at Nordcloud?

Interesting projects and a lot of freedom to do things that I find useful and meaningful.

By simply being interested and showing it, I’m getting the chance to contribute to areas that I personally feel like we should develop.

4. What is the most useful thing you have learned at Nordcloud?

Organising things in a distributed, flat company and navigating through the whole web of people in different countries to get things done.

5. What sets you on fire / what’s your favourite thing about public cloud?

There are a lot of tools available in the public cloud with very little effort, but sometimes there are gaps in functionality (for example between AWS and GCP).

By filling those gaps with engineering, we can achieve very large things with very little effort.

6. What do you do outside work?

Spending time with family, and cooking. I also enjoy making tangible things, like building furniture or fixing things.

7. Best Nordcloudian memory?

Conferences are always fun, with interesting talks and meetings, but one memory that professionally stands out is when we did a Well-Architected review for a client in Stockholm.

They had a really interesting and innovative product and the client here was like a kid in a candy shop!

He really understood the potential of what we could do for them, and we got to do cool things!

 

8. How would you describe our culture?

We get recognised for contribution; that’s very clear and open.

There is a high degree of trust in our engineering skills.

We are a very diverse group of different nationalities!

Nordcloud NL is also a very tight group, and in my opinion we have more casual workplace banter than the other countries.

We do work remotely a lot but when we are together we go for lunches and have a good time.

There is no power distance and we are a very flat organisation, so I can make fun of my manager and vice versa, and it’s all in good fun!

 

Would you fit in a team with freedom and lots of chances to influence? Well, Jonah is looking for more colleagues, so do get in touch! Click here for open vacancies in the Netherlands.


Getting from A to B with AWS Team Lead

CATEGORIES

Life at Nordcloud

This week’s Nordcloudian Story is shared by AWS Team Lead, Tamas Kiss, from our London office. Tamas has worked at Nordcloud since 2017 and is a big fan of cloud, Open Source, infrastructure as code, heavy configuration management and deep monitoring. Let’s hear his story!

 

1. Where are you from and how did you end up at Nordcloud?

“I am originally from Hungary. Prior to Nordcloud, I was working in DevOps but started to get a bit bored, so I was ready for new challenges with more variety. I looked at different consultancies and got approached by a headhunter. Even though I wasn’t yet actively looking for a new job, Nordcloud’s type of projects, flexibility, team size, quick recruitment process and new line of work attracted me to join!”

2. What is your role and core competence?

I wear many hats! My official title is AWS Team Lead, but I also work as a Senior Cloud Architect and as a Solution Architect supporting the sales team.

3. What do you like most about working at Nordcloud?

I really enjoy the initial phases of a project, where we figure out the customer’s goals and get to design and play around to get from A to B and achieve the business goals.

To borrow an example from aircraft: an Aériane Swift, a Cessna 150, a Magnus Fusion UL, an Airbus A380, an Antonov An-225, or something unusual like the P-791 or the Horten Ho 229 can all get you from A to B (with some luck), but each has something unique that makes it the best choice for a particular problem. I strongly believe in the right tool for the right job; having the whole landscape in front of me when picking is a lot of fun.

4. What is the most useful thing you have learned at Nordcloud?

“Every estimate must be multiplied by two. This means if you think you’ll be done in a week, count on two!

Project management is hard. At least for me it’s much harder than the technical stuff.

I also learned that having a good headset for calls is crucial, as every meeting starts with “can you hear me?!” 😁

5. What’s your favourite thing with public cloud?

“It’s a full time job to just stay up to date with the demand of changes and new features.

You can reinvent yourself every second week. Right now I’m learning about Kubernetes, serverless and CDK”.

6. What do you do outside work?

“I have a couple of open-source pet projects going on, and I enjoy spending time with my family.”

7. Best Nordcloudian memory?

“So this one time I took Ilkka, our new Cloud Engineer, to a client of ours. I was introduced as the Senior Engineer and walked in wearing a t-shirt and shorts (as I do).

The management joked around: “How much do we pay this guy again?”, as in “he must be good, as he’s got the confidence to walk in in shorts!”. Ilkka then walked in wearing a suit, and the client went, “He must be new.” 😂

Keen to hear more about our UK AWS team? We are growing and looking for senior architects for Tamas’ team.

Check out the opportunity HERE!

