What We Learned at Full Stack Fest 2019

Full Stack Fest is a 3-day single-track conference that focuses on the future of the web. This year, five Nordcloudians attended the event to learn about new trends and the best software development practices, and to soak up the energy and vibrancy of the tech community in one of the most beautiful cities in the world – Barcelona. The topics of the conference ranged from GraphQL, WebAssembly and JAMStack to automation testing, serverless and the P2P web. With such a wide choice of subjects, everyone was able to find something interesting and familiarize themselves with technologies they had never had the opportunity to touch before. You can find the full list of the videos here. In this article, we will highlight the most interesting and intriguing topics from our point of view. Perhaps this is not something that you can apply to your current project right now, but it definitely shapes the future of web development.

WebAssembly and Serverless

Two talks were dedicated to the increasingly popular WebAssembly. Lin Clark presented WASI – the WebAssembly System Interface. Wasmtime allows you to run wasm programs outside the web browser and lets them interact with OS interfaces in a safe way. Currently, it’s possible to write projects for WASI in C/C++ and Rust. The project is under active development and not production-ready yet. But as Solomon Hykes, co-founder of Docker, said,

If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing.

Evolving the idea of WebAssembly on the server, Steve Klabnik highlighted the services that already provide serverless environments for executing wasm files: Cloudflare and Fastly. WebAssembly aims to serve as an abstraction layer: any language with a wasm compiler can run in any environment that provides a WebAssembly runtime. An example use case would be improving the performance of calculation-heavy applications, including graphics and real-time streaming. At the moment, only strongly typed languages can be compiled into WebAssembly, but JS developers are not required to know C/C++ or Rust to use the full power of this technology. A simpler option could be AssemblyScript, which compiles TypeScript into WebAssembly.

The Future of Web Animation and CSS

In her talk, Sarah Drasner pointed out the next type of responsiveness, which could be called 3D responsiveness. It provides a 3D in-browser experience that can be especially interesting for VR headset owners. VR and 3D in general are becoming more and more popular in browsers thanks to open-source projects such as Three.js and A-Frame. Another popular trend nowadays is full-page transitions, made possible by combining CSS3 with JS libraries to produce compelling visual effects.

One more exciting technology highlighted at the conference is CSS Houdini, a new collection of browser APIs that allow developers to access the browser’s CSS engine and create custom styles or implement polyfills for CSS features that are not yet supported by browsers. At the moment, however, Houdini is still at an early stage of development and is not supported by the majority of browsers. CSS Houdini is probably not something that you will work with on a daily basis, but it certainly can be useful for library developers in the future.

And as a bonus, we would like to share a link to the amazing video that Sara Soueidan used in her presentation about Applied Accessibility. This video, in a gamified and very friendly manner, reminds us how important it is to think about the accessibility of web applications:


We are sure you will find other interesting topics among all the other Full Stack Fest 2019 talks that were not highlighted in this post. Thanks for reading 🙂

Get in Touch.

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

    Building Cloud-Based IoT Solutions and Serverless Web-Apps

    Our Cloud Application Architect Afaque Hussain has been on his cloud journey for some years already. At Nordcloud he builds cloud-native IoT solutions in our Data-Driven team. Here’s his story!


    1. Where are you from and how did you end up at Nordcloud?

    I’m from India and I’ve been living in Finland for the past 7 years. Prior to Nordcloud, I was working at Helvar, developing cloud-based, IoT-enabled lighting solutions. I’ve been excited about public cloud services ever since I got to know them, and I regularly attend cloud conferences and meetups. During one such conference, I met the Nordcloud team, who introduced me to the company and invited me for an interview, and my Nordcloud journey began.

    2. What is your core competence? On top of that, please also tell us briefly about your role and projects.

    My core competence is building cloud-based web services that act as an IoT platform to which IoT devices connect and exchange data. I generally prefer serverless computing and Infrastructure as Code, and I primarily use AWS and JavaScript (Node.js) in our projects.

    My current role is Cloud Application Architect, where I’m involved in designing and implementing end-to-end IoT solutions in our customer projects. In our current project, we’re building a web service with which our customer can connect, monitor and manage their large fleet of sensors and gateways. The CI/CD pipelines for our project have been built using AWS Developer Tools such as CodePipeline, CodeBuild & CodeDeploy. Our CI/CD pipelines have been implemented as Infrastructure as Code, which enables us to deploy another instance of our CI/CD pipelines in a short period of time. Cool!

    3. What sets you on fire / what’s your favourite thing technically with public cloud?

    The ever-increasing serverless offerings from public cloud vendors, which enable us to rapidly build web applications & services.

    4. What do you like most about working at Nordcloud?

    Apart from the opportunity to work on interesting projects, I like my peers. They’re very talented, knowledgeable and ready to offer help when needed.

    5. What is the most useful thing you have learned at Nordcloud?

    Although I’ve learnt a lot at Nordcloud, I believe the most useful thing has been learning the toolkit and best practices for cloud-based web-application development.

    6. What do you do outside work?

    I like doing sports and I generally play cricket, tennis or hit the gym. During the weekends, I generally spend time with my family, exploring the beautiful Finnish nature, meeting people or trying different cuisines.

    7. How would you describe Nordcloud’s culture in 3 words?

    Nurturing, collaborative & rewarding.

    8. Best Nordcloudian memory?

    Breakfast @ Nordcloud every Thursday. I always look forward to this day. I get to meet other Nordcloudians, exchange ideas or just catch up over a delicious breakfast!



      Serverless Days Helsinki



      Join Nordcloud at Serverless Days Helsinki that focuses on the reality of serverless based solutions.

      One day, one track, one community

      ServerlessDays Helsinki is a developer-oriented conference about serverless technologies. It takes place in Helsinki at Bio Rex on the 25th of April.

      You can find the detailed agenda here.


      April 25, 2019


      Bio Rex


        Serverless Days Amsterdam



        Join Nordcloud at Serverless Days Amsterdam that focuses on the reality of serverless based solutions.

        One day, one track, one community

        ServerlessDays Amsterdam is a developer-oriented conference about serverless technologies. It takes place in Amsterdam on the 29th of March.

        You can find the detailed agenda here.


        March 29, 2019


        Pakhuis de Zwijger


          Serverless Meetup at Nordcloud 26.2.



          Serverless Meetup Tuesday, February 26, 2019

          • Serverless with nameless-deploy-tools, Pasi Niemi
          • Appsync, Arto Liukkonen
          • Still room for a third topic if there are volunteer speakers.

          Date: 26.2.2019

          Time: 17.30-19.30

          Venue: Nordcloud, 4th floor, Antinkatu 1, Helsinki

          There is a waiting list for this event at the moment.




            Lambda layers for Python runtime



            AWS Lambda

            AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events such as DynamoDB streams, SNS notifications or HTTP requests without provisioning or managing any infrastructure. Lambda takes care of most of the things required to run your code and provides high availability. It allows you to execute even up to 1000 parallel functions at once! Using AWS Lambda you can build applications like:

            • Web APIs
            • Data processing pipelines
            • IoT applications
            • Mobile backends
            • and many many more…

            Creating an AWS Lambda function is super simple: you just need to create a zip file with your code and dependencies and upload it to an S3 bucket. There are also frameworks like Serverless or SAM that handle deploying AWS Lambda for you, so you don’t have to manually create and upload the zip file.
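            To make that packaging step concrete, here is a minimal Python sketch of building a deployment package by hand (the file names are hypothetical, and a real package would also bundle dependencies):

```python
import pathlib
import zipfile

# Hypothetical minimal handler file for this sketch
pathlib.Path('handler.py').write_text(
    "def lambda_handler(event, context):\n    return 'ok'\n"
)

# A Lambda deployment package is just a zip of your code (and dependencies)
with zipfile.ZipFile('function.zip', 'w') as zf:
    zf.write('handler.py')  # dependencies would be added the same way

print(zipfile.ZipFile('function.zip').namelist())
```

            The resulting function.zip is what you would upload to S3 or pass to the CLI.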

            There is, however, one problem.

            You have created a simple function which depends on a large number of other packages. AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

            AWS Lambda Layers

            Fast forward four years: at re:Invent 2018, AWS Lambda Layers were released. This feature allows you to centrally store and manage data that is shared across different functions, in a single or even multiple AWS accounts! It solves a number of issues:

            • You do not have to upload dependencies on every change of your code. Just create an additional layer with all the required packages.
            • You can create a custom runtime that supports any programming language.
            • You can adjust the default runtime by adding data required by your teams. For example, imagine a team of Cloud Architects that builds CloudFormation templates using the troposphere library. However, they are not developers and do not know how to manage Python dependencies… With an AWS Lambda layer you can create a custom environment with all the required packages so they can code in the AWS console.

            But how does the layer work?

            When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers. The order is really important, because layers with a higher order can override files from the previously mounted layers. When using the Python runtime you do not need to do any additional operations in your code, just import the library in the standard way. But how will my Python code know where to find my data?

            That’s super simple: /opt/bin is added to the $PATH environment variable. To check this, let’s create a very simple Python function:

            import os

            def lambda_handler(event, context):
                path = os.popen("echo $PATH").read()
                return {'path': path}

            The response is:

                "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
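            Python imports work the same way: the runtime extends sys.path with the layer directories (such as /opt/python), so layer-provided packages can be imported normally. A quick sketch to confirm this (the exact entries depend on the runtime, so outside Lambda you will simply see your local interpreter’s paths):

```python
import sys

def lambda_handler(event, context):
    # Inside the Lambda runtime, layer content such as /opt/python
    # shows up on sys.path, so plain imports find layer-provided packages.
    return {'sys_path': sys.path}

print(lambda_handler({}, None))
```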


            Existing pre-defined layers

            AWS Lambda Layers were released together with a single, publicly accessible layer for data processing containing 2 libraries: NumPy and SciPy. Once you have created your lambda, you can click `Add a layer` in the lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer, you can use these libraries in your code. Let’s do a simple test:

            import numpy as np
            import json

            def lambda_handler(event, context):
                matrix = np.random.randint(6, size=(2, 2))
                return {
                    'matrix': json.dumps(matrix.tolist())
                }

            The function response is:

              "matrix": "[[2, 1], [4, 2]]"


            As you can see it works without any effort.

            What’s inside?

            Now let’s check what is in the pre-defined layer. To check the mounted layer content, I prepared a simple script:

            import os

            def lambda_handler(event, context):
                directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
                return {
                    'directories': directories
                }

            In the function response you will receive the list of directories that exist in the /opt directory:

              "directories": [

            Ok, so it contains python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.
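            As a quick sanity check, the expected layout can be reproduced locally. This sketch (using a temporary directory) builds the directory skeleton a Python 3.6 layer zip should contain:

```python
import os
import tempfile

# Packages in a Python 3.6 layer zip live under
# python/lib/python3.6/site-packages/ so that Lambda mounts them
# under /opt and the runtime can import them.
root = tempfile.mkdtemp()
site_packages = os.path.join(root, 'python', 'lib', 'python3.6', 'site-packages')
os.makedirs(site_packages)

print(os.path.relpath(site_packages, root))
```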

            Create Your own layer!

            Our use case is to create an environment for our Cloud Architects to easily build Cloud Formation templates using troposphere and awacs libraries. The steps comprise:
            Create virtual env and install dependencies

            To manage the python dependencies we will use pipenv.

            Let’s create a new virtual environment and install there all required libraries:

            pipenv --python 3.6
            pipenv shell
            pipenv install troposphere
            pipenv install awacs

            It should result in the following Pipfile:

            [[source]]
            url = "https://pypi.org/simple"
            verify_ssl = true
            name = "pypi"

            [packages]
            troposphere = "*"
            awacs = "*"

            [requires]
            python_version = "3.6"

            Build a deployment package

            All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using the ls command:

            ls $VIRTUAL_ENV

            Now let’s prepare a simple script that creates a zipped deployment package:

            PY_DIR='build/python/lib/python3.6/site-packages'             #Directory structure expected by Python layers
            mkdir -p $PY_DIR                                              #Create temporary build directory
            pipenv lock -r > requirements.txt                             #Generate requirements file
            pip install -r requirements.txt --no-deps -t $PY_DIR          #Install packages into the target directory
            cd build
            zip -r ../tropo_layer.zip .                                   #Zip files
            cd ..
            rm -r build                                                   #Remove temporary directory

            When you execute this script it will create a zipped package that you can upload to AWS Layer.


            Create a layer and a test AWS function

            You can create a custom layer and an AWS Lambda function by clicking around in the AWS console. However, real experts use the CLI (Lambda Layers is a new feature, so you have to update your awscli to the latest version).

            To publish a new Lambda Layer you can use the following command (my zip file is named tropo_layer.zip):

            aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip

            In the response, you should receive the layer ARN and some other data:

            {
                "Content": {
                    "CodeSize": 14909144,
                    "CodeSha256": "qUz...",
                    "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
                },
                "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
                "Version": 1,
                "Description": "",
                "CreatedDate": "2018-12-01T22:07:32.626+0000",
                "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
            }

            The next step is to create the AWS Lambda function. Your lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

            from troposphere import Template
            import troposphere.ec2 as ec2
            import json

            def lambda_handler(event, context):
                t = Template()
                instance = ec2.Instance("myinstance")
                instance.ImageId = "ami-951945d0"
                instance.InstanceType = "t1.micro"
                t.add_resource(instance)    # Register the instance in the template
                return {"data": json.loads(t.to_json())}

            Now we have to create a zipped package that contains only our function:

            zip tropo_lambda.zip handler.py

            And create a new lambda using this file (I used an IAM role that already exists in my account; if you do not have a suitable role, you have to create one before creating the lambda):

            aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
            --handler handler.lambda_handler \
            --role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
            --zip-file fileb://tropo_lambda.zip

            In the response, you should get the newly created lambda details:

            {
                "TracingConfig": {
                    "Mode": "PassThrough"
                },
                "CodeSha256": "l...",
                "FunctionName": "tropo_function_test",
                "CodeSize": 356,
                "RevisionId": "...",
                "MemorySize": 128,
                "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
                "Version": "$LATEST",
                "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
                "Timeout": 3,
                "LastModified": "2018-12-01T22:22:43.665+0000",
                "Handler": "handler.lambda_handler",
                "Runtime": "python3.6",
                "Description": ""
            }

            Now let’s try to invoke our function:

            aws lambda invoke --function-name tropo_function_test --payload '{}' output
            cat output
            {"errorMessage": "Unable to import module 'handler'"}

            Oh no… It doesn’t work. In CloudWatch you can find the detailed log message: `Unable to import module ‘handler’: No module named ‘troposphere’`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:

            aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1

            When you invoke lambda again you should get the correct response:

              {
                "data": {
                  "Resources": {
                    "myinstance": {
                      "Properties": {
                        "ImageId": "ami-951945d0",
                        "InstanceType": "t1.micro"
                      },
                      "Type": "AWS::EC2::Instance"
                    }
                  }
                }
              }

            Add a local library to your layer

            We already know how to create a custom layer with python dependencies, but what if we want to include our local code? The simplest solution is to manually copy your local files to the /python/lib/python3.6/site-packages directory.

            First, let’s prepare the test module that will be pushed to the layer:

            $ find local_module
            $ cat local_module/echo.py
            def echo_hello():
                return "hello world!"

            To manually copy your local module to the correct path you just need to add the following line to the previously used script (before zipping package):

            cp -r local_module 'build/python/lib/python3.6/site-packages'

            This works; however, we strongly advise transforming your local library into a pip module and installing it in the standard way.

            Update Lambda layer

            To update a Lambda layer, run the same command you used to create the layer; publishing to an existing layer name creates a new version:

            aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip

            The request should return LayerVersionArn with incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

            Now update lambda configuration with the new layer version:

            aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2

            Now you should be able to import local_module in your code and use the echo_hello function.
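            A handler using the layer-provided module could then look like this (a sketch; the ImportError fallback is only there so the snippet also runs outside Lambda, where the layer is not mounted):

```python
try:
    # Provided by the layer at runtime
    from local_module.echo import echo_hello
except ImportError:
    # Fallback stub so this sketch also runs without the layer attached
    def echo_hello():
        return "hello world!"

def lambda_handler(event, context):
    return {'message': echo_hello()}

print(lambda_handler({}, None))
```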


            Serverless framework Layers support

            Serverless is a framework that helps you build applications based on the AWS Lambda service. It already supports deploying and using Lambda Layers. The configuration is really simple: in the serverless.yml file, you provide the path to the layer location on your disk (it has to be a path to a directory; you cannot use a zipped package, as zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda Layer or deploy it together with your application.

            We’ll show the second approach. However, if you want to benefit from all the Lambda Layers advantages, you should deploy it separately.

            service: tropoLayer

            package:
              individually: true

            provider:
              name: aws
              runtime: python3.6

            layers:
              tropoLayer:
                path: build             # Build directory contains all python dependencies
                compatibleRuntimes:     # supported runtime
                  - python3.6

            functions:
              tropo_test:
                handler: handler.lambda_handler
                package:
                  exclude:
                    - node_modules/**
                    - build/**
                layers:
                  - {Ref: TropoLayerLambdaLayer} # Ref to the created layer. You have to append the 'LambdaLayer' string to the end of the layer name to make it work

            I used the following script to create a build directory with all the python dependencies:

            PY_DIR='build/python/lib/python3.6/site-packages'             #Directory structure expected by Python layers
            mkdir -p $PY_DIR                                              #Create temporary build directory
            pipenv lock -r > requirements.txt                             #Generate requirements file
            pip install -r requirements.txt -t $PY_DIR                    #Install packages into the target directory

            This example packs the Lambda Layer and your lambda handler individually. The funny thing is that you have to convert your lambda layer name to TitleCase and add the `LambdaLayer` suffix if you want to refer to that resource.

            Deploy your lambda together with the layer, and test if it works:

            sls deploy -v --region eu-central-1
            sls invoke -f tropo_test --region eu-central-1


            It was a lot of fun to test Lambda Layers and investigate how it technically works. We will surely use it in our projects.

            In my opinion, AWS Lambda Layers is a really great feature that solves a lot of common issues in the serverless world. Of course, it is not suitable for all use cases. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because you do not need to manage additional layers.

            Read more on AWS Lambda in our blog!

            Notes from AWS re:Invent 2018 – Lambda@edge optimisation

            Running AWS Lambda@Edge code in edge locations

            Amazon SQS as a Lambda event source


              What is Amazon FreeRTOS and why should you care?



              At Nordcloud, we’ve been working with AWS IoT since Amazon released it

              We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.


              In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions

              First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


              What are microcontrollers?

              If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

              Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud based and edge based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out-of-the-box when you build your application on top of Amazon FreeRTOS.

              Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

              Use cases and scenarios

              Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

              Edge computing scenarios might require a more powerful, Linux based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communications protocols to your system.

              In general, microcontroller based applications have the benefit of being much more simple than Linux based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than a full operating system installation.

              How to try it out

              If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

              At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

              • STM32L4 Discovery Kit
              • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

              ST provides their own SystemWorkBench Ac6 IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

              Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

              Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.


              We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

              If you need any help in planning and implementing your IoT solutions, feel free to contact us.


                Cloud computing news #10: Serverless, next-level cloud tech



                This week we focus on serverless computing which continues to grow and enables agility, speed of innovation and lower cost to organizations.

                Serverless Computing Spurs Business Innovation

                According to Digitalist Magazine, serverless computing is outpacing conventional patterns of emerging technology adoption. Organizations across the globe see technology-driven innovation as essential to compete. Serverless computing promises to enable faster innovation at a lower cost and simplify the creation of responsive business processes.

                But what does “serverless computing” mean and how can companies benefit from it?

                1. Innovate faster and at a lower cost: Serverless computing is a cloud execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. This means that developers are able to focus on coding instead of managing deployment and runtime environments. Also, pricing is based on the actual amount of resources consumed by an application. Thus, with serverless computing, an organization can innovate faster and at a lower cost. Serverless computing eliminates the risk and cost of overprovisioning, as it can scale resources dynamically with no up-front capacity planning required.
                2. Enable responsive business processes: Serverless function services – function as a service (FaaS) – can automatically activate and run application logic that carries out simple tasks in response to specific events. If the task triggered by an incoming event involves data management, developers can leverage serverless backends as a service (BaaS) for data caching, persistence, and analytics services via standard APIs. With this event-driven application infrastructure in place, an organization can decide at any moment to execute a new task in response to a given event.
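                As a minimal illustration of the FaaS model described above, a function can hold just the business logic for one event, with no server management in sight (the event shape here is a hypothetical simplification):

```python
def handle_order_created(event, context=None):
    # Runs only when an "order created" event arrives; the platform
    # allocates resources, executes the logic, then scales back to zero.
    order = event.get('order', {})
    total = sum(item['price'] * item['quantity'] for item in order.get('items', []))
    return {'order_id': order.get('id'), 'total': total}

print(handle_order_created({'order': {'id': 42, 'items': [{'price': 10, 'quantity': 2}]}}))
```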

                Organizations also need the flexibility to develop and deploy their innovations where it makes the most sense for their business. Platforms that rely on open standards, deploy on all the major hyperscale public clouds, and offer portability between the hyperscaler IaaS foundations are really the ideal choice for serverless environments.

                Read more in Digitalist Magazine

                Nordcloud tech blog: Developing serverless cloud components

A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

                A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

                Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

                Check out our tech blog that takes a look at some new technologies for developing cloud components

                Nordcloud Case study: Developing on AWS services using a serverless architecture for Kemppi 

                Nordcloud helped Kemppi build the initial architecture based on AWS IoT Core, API Gateway, Lambda and other AWS services. We also designed and developed the initial Angular.js based user interface for the solution.

Developing on AWS services using a serverless architecture enabled Kemppi to develop the solution in half the time and cost compared to traditional, infrastructure-based architectures. The serverless expertise of Nordcloud was key to enabling a seamless ramp-up of development capabilities in the Kemppi development teams.

                Read more on our case study here

                Serverless at Nordcloud

Nordcloud has a long track record with serverless, being among the first companies to adopt services such as AWS Lambda and API Gateway for production projects already in 2015. Since then, Nordcloud has executed over 20 customer projects using serverless technologies for use cases such as web applications, IoT solutions, data platforms, and cloud infrastructure monitoring and automation.

                Nordcloud is an AWS Lambda, API Gateway and DynamoDB partner, a Serverless Framework partner, and a contributor to the serverless community via open source projects, events and initiatives such as the Serverless Finland meetup.

                How can we help you take your business to the next level with serverless?

                Get in Touch.

                Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.

                  Developing Serverless Cloud Components



                  A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

                  A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

                  Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

                  In the following sections we’ll take a look at some new technologies for developing cloud components.

                  AWS CDK

                  AWS CDK, short for Cloud Development Kit, is Amazon’s new framework for defining AWS cloud infrastructure with code. It currently supports TypeScript, JavaScript and Java with more language support coming later.

                  When developing with AWS CDK, you use code to define both infrastructure and business logic. These codebases are separate. You define your component’s deployment logic in one script file, and your Lambda function code in another script file. These files don’t have to be written in the same programming language.

                  AWS CDK includes the AWS Construct Library, which provides a selection of predefined cloud components to be used in applications. It covers a large portion of Amazon’s AWS cloud services, although not all of them.

                  These predefined constructs are the smallest building blocks available in AWS CDK. For instance, you can use the AWS DynamoDB construct to create a database table. The deployment process translates this construct into a CloudFormation resource, and CloudFormation creates the actual table.
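As a rough illustration of that translation step, the CloudFormation resource generated for a DynamoDB table construct looks something like the following. The table name, key schema and throughput values are hypothetical, and a real CDK deployment adds further generated properties:

```yaml
# Illustrative CloudFormation output for a DynamoDB table construct.
Resources:
  ItemsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: items
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
```

The point is that you never write this YAML yourself: the construct emits it during synthesis, and CloudFormation provisions the actual table.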

                  The real power of AWS CDK comes from the ability to combine the smaller constructs into larger reusable components. You can define an entire microservice, including all the cloud resources it needs, and use it as a component in a larger application.

                  This modularity can also help standardize multi-team deployments. When everybody delivers their service as an AWS CDK construct, it’s straightforward to put all the services together without spending lots of time writing custom deployment scripts.

AWS CDK may become very important for cloud application development if third parties start publishing their own Construct Libraries online. There could eventually be a very large selection of reusable cloud components available in an easily distributable and deployable format. Right now the framework is still pending a 1.0 release before freezing its APIs.

                  Serverless Components

                  Serverless Components is an ambitious new project by the makers of the hugely popular Serverless Framework. It aims to offer a cloud-agnostic way of developing reusable cloud components. These components can be assembled into applications or into higher order components.

                  The basic idea of Serverless Components is similar to AWS CDK. But while CDK uses a programming language to define components, Serverless has chosen a declarative YAML syntax instead. This results in simpler component definitions but you also lose a lot of flexibility. To remedy this, Serverless Components lets you add custom JavaScript files to perform additional deployment operations.
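A component definition in this declarative style might look roughly like the following YAML. The component types and input names here are illustrative and do not necessarily match the registry's exact schema:

```yaml
# serverless.yml -- illustrative sketch, not the registry's exact schema.
name: my-app

components:
  productsDb:
    type: aws-dynamodb
    inputs:
      name: products
  productsApi:
    type: aws-lambda
    inputs:
      code: ./api
      handler: index.handler
```

When the declarative syntax is not enough, this is where the custom JavaScript files mentioned above come in to perform additional deployment operations.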

                  The Serverless Components project has its own component registry. The registry includes some basic components for Amazon AWS, Google Cloud, Netlify and GitHub. Unlike in some other projects, developers are writing these components manually instead of auto-generating them from service definitions. It will probably take a while before all cloud features are supported.

                  One controversial design decision of Serverless Components is to bypass the AWS CloudFormation stack management service. The tool creates components directly on AWS and other cloud platforms. It writes their state to a local state.json file, which developers must share.

                  This approach offers speed, flexibility and multi-cloud support, but also requires Serverless Components to handle deployments flawlessly in every situation. Enterprise AWS users will probably be wary of adopting a solution that bypasses CloudFormation entirely.


Pulumi

                  Pulumi.io is a cloud component startup offering a SaaS service subscription combined with an open source framework. Essentially, Pulumi aims to replace AWS CloudFormation and other cloud deployment tools with its own stack management solution. Pulumi’s cloud service deploys the actual cloud applications to Amazon AWS, Microsoft Azure, Google Cloud, Kubernetes or OpenStack.

                  Pulumi supports a higher level of abstraction than the other component technologies discussed here. When you implement a serverless service using Pulumi’s JavaScript syntax, the code gets translated to a format suitable for the platform you are deploying on. You write your business logic as JavaScript handler functions for Express API endpoints. Pulumi’s tool extracts those handlers from the source code and deploys them as AWS Lambda functions, Azure Functions or Google Cloud Functions.

Writing completely cloud-agnostic code is challenging even with Pulumi’s framework. For certain things it offers cloud-agnostic abstractions like the cloud.Table component. When you use cloud.Table, your code automatically adapts to use either DynamoDB or Azure Table Storage depending on which cloud platform you deploy it on.

                  For many other things you have to write cloud-specific code. Or, you can write your own abstraction layer to complement Pulumi’s framework. Such abstraction layers tend to add complexity to applications, making it harder for developers to understand what the code is actually doing.

                  Ultimately it’s up to you to decide whether you want to commit to developing everything on top of an abstraction layer which everybody must learn. Also, as with Serverless Components, you can’t use AWS CloudFormation to manage your Pulumi-based stacks.


Conclusions

                  The main issue to consider when choosing a cloud component technology is whether or not you need multi-cloud support. Single-cloud development is arguably more productive and lets developers leverage higher-level cloud services. On the other hand, this results in increased vendor lock-in, which may or may not be a problem.

For developers focusing on Amazon AWS, the AWS CDK is a fairly obvious choice. AWS CDK is likely to become a de facto standard way of packaging AWS-based cloud components. As serverless applications get more and more popular, AWS CDK fills some important blank spots in the CloudFormation deployment process and in the reusability of components. And since AWS CDK still uses CloudFormation under the hood, adopters will be familiar with the underlying technology.

                  Developers that truly require multi-cloud will have to consider whether it’s acceptable to rely on Pulumi’s third party SaaS service for deployments. If the SaaS service goes down, deployed applications will keep working but you can’t update them. This is probably not a big problem for short periods of time. It will be more problematic if Pulumi ever shuts down the service permanently. For projects where this is not an issue, Pulumi may offer a very compelling multi-cloud scenario.

Multi-cloud developers that want to contribute to open source may want to check out the Serverless Components project. It’s too early to recommend the project for actual use cases, but it may have an interesting future ahead. It may attract many existing users if its developers can provide a clear migration path from Serverless Framework.

                  If you would like more information on how Nordcloud can help you with serverless technologies, contact us here.


                    Creating a business case for Reserved Instances



Elastic Compute Cloud (EC2) is one of the most used services on the AWS cloud. To give you a clear idea of how heavily this service is used within Nordcloud’s customer base: EC2 represents more than 60% of spend, providing businesses with many opportunities to reduce cost. Organisations that invest in Reserved Instances may see significant discounts of up to 75% compared to On-Demand pricing, and gain reserved capacity when the reservation is made in a specific Availability Zone.

                    On the surface, Reserved Instances look simple enough, and who doesn’t like to reduce their costs! However, getting down into the nitty-gritty of Reserved Instances (RIs) can sometimes feel overwhelming. Here is a summary of the different types you can invest in, and how you can create a solid business case.

                    The different types of Reserved Instances

                    Standard RIs: These provide the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage.

Regional RIs are just a change in the attributes applied to the tokens you purchase, moving the location attribute from the Availability Zone to the Region. This allows an instance of a given type, size and OS to be deployed anywhere within a region and always have RI coverage. Unlike a Standard RI, when purchasing a Regional RI you lose the capacity reservation, so you benefit only from the cost savings. Since March 2017, Regional RIs also provide instance size flexibility in addition to AZ flexibility: the Regional RI’s discounted rate automatically applies to usage of any size in the instance family, in any AZ.
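Instance size flexibility works through normalization factors: AWS assigns each instance size a unit weight (for example small = 1, large = 4, xlarge = 8, as documented by AWS), and a Regional RI's units spread across sizes in the same family. A quick sketch of the arithmetic:

```typescript
// Subset of AWS normalization factors for instance size flexibility;
// see the AWS documentation for the full table.
const normalizationUnits: Record<string, number> = {
  small: 1,
  medium: 2,
  large: 4,
  xlarge: 8,
  "2xlarge": 16,
};

// Number of running instances of `runningSize` that one Regional RI
// of `riSize` in the same instance family fully covers.
function instancesCovered(riSize: string, runningSize: string): number {
  return normalizationUnits[riSize] / normalizationUnits[runningSize];
}

// Example: an m5.2xlarge RI (16 units) fully covers four m5.large
// instances (4 units each), in any AZ of the region.
```

This is why a Regional RI bought for one size can still deliver savings after you resize workloads within the family.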

Convertible RIs require a three-year term but offer a lot more flexibility, so you need to be confident that you are going to be using at least the same number of EC2 instances (or more!) over the next three years. Consider this type of RI carefully: if you are planning to move workloads to PaaS or to re-architect and go serverless, these might not be for you.

Convertible RIs can be exchanged for any other type of RI (size, region, family, OS), which allows you to future-proof against changes in your infrastructure or against new instance families which AWS has not yet announced. If you initially purchased a Convertible RI for an m4.2xlarge, after two years you could swap it for a c4.2xlarge RI for the final year. When converting, the value of your RIs has to remain at least the same. You may not have a 1:1 cost match with your new RIs, so you might need to buy a little bit more each time to ensure that you maintain the value.

                    Why should I make a business case?

Evaluating your need correctly will allow you to make the biggest savings, and you’ll also be able to drastically reduce the risk associated with purchases simply by knowing how to analyse your usage. Reserved Instances are billed hourly, so to get the best results from your usage analysis, analyse the data in the same way AWS bills it. Try to analyse at least the last six months in order to get a bigger picture of your usage, note the differences, and find trends.
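One simple check in such an analysis is the break-even utilisation of a prospective RI: how much of the term an instance must actually run before the reservation beats On-Demand. The rates below are hypothetical, not current AWS prices:

```typescript
// Fraction of the term's hours an instance must actually run for a
// Reserved Instance to beat On-Demand pricing. Rates are in $/hour;
// the RI rate is the effective rate (amortised upfront + hourly fee).
function minUtilisationForRi(
  onDemandHourly: number,
  riEffectiveHourly: number
): number {
  if (onDemandHourly <= 0) {
    throw new Error("On-Demand rate must be positive");
  }
  return riEffectiveHourly / onDemandHourly;
}

// Example with hypothetical rates: an effective RI rate of $0.065/h
// versus On-Demand at $0.10/h means the instance must run at least
// 65% of the term's hours for the RI to pay off.
const breakEven = minUtilisationForRi(0.1, 0.065);
```

Running this check against six months of hourly usage data per instance type quickly separates steady-state workloads worth reserving from spiky ones best left On-Demand.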

If you aim for 100% coverage it is very easy to start losing money. EC2 usage can be unpredictable (especially if you’re using autoscaling), so with no On-Demand usage left you would probably end up paying for reserved capacity you don’t use – and paying more than if you had left some of your usage on On-Demand instances. If you keep track of utilisation, you’ll be able to see early on whether some of your purchased Reserved Instances are going unused (you can see coverage and utilisation data in the AWS Cost Explorer), and you can then react immediately. One way to repair this kind of situation is to change your projects’ usage to run on the instances you have bought. If you have bought a zonal Reserved Instance, you might want to change the zone or modify the instance to regional scope. You might also want to change the size of the instance to cover your current usage.

                    To find out everything you need to know about optimising Reserved Instances, we’ve created a helpful, in-depth guide.
