Minimizing AWS Lambda deployment package size in TypeScript

AWS Lambda package size matters for at least two reasons. The first is the size limits of the platform: at the time of writing, deployment packages are limited to 50 MB zipped and 250 MB unzipped, including layers.

The second reason is cold start time. AWS Lambda is a proprietary platform, so we cannot see exactly how function startup is implemented, but experiments show that functions with many dependencies can be 5-10 times slower to start. Although these numbers may change in the future due to internal AWS optimizations, they still give us food for thought and encourage minimizing the size of Lambda functions where possible.

Speaking of the future, Amazon has announced Provisioned Concurrency, a feature that ensures a Lambda function begins executing developers' code within double-digit milliseconds of being invoked.

In this article, we will decrease, step by step, the size of a simple GraphQL + DynamoDB Lambda function written in TypeScript. The other tools we are going to use are Serverless Framework and webpack. You can find the initial project on GitHub in the 'master' branch; each optimization is stored in a 'step-*' branch. Most of the concepts also apply to AWS Lambda functions written in JavaScript.

Initial Project

The simplest way to start developing Lambda functions in TypeScript is to use Serverless Framework with serverless-plugin-typescript. This is the content of the serverless.yml file of our initial project:
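The serverless.yml listing itself did not survive in this copy of the article. A minimal configuration matching the description (two functions, authorizer and handler, wired up with serverless-plugin-typescript) might look like the sketch below; the service name, runtime and file paths are assumptions, not the original values:

```yaml
service: graphql-dynamodb-service   # hypothetical service name

provider:
  name: aws
  runtime: nodejs10.x

plugins:
  - serverless-plugin-typescript

functions:
  authorizer:
    handler: src/authorizer.handle   # assumed file layout
  handler:
    handler: src/handler.handle
    events:
      - http:
          path: graphql
          method: post
          authorizer: authorizer
```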

From the serverless.yml file, we can see that we have two Lambda functions: authorizer and handler. The authorizer function is included just to demonstrate multiple functions within one project. In fact, it simply allows a request whenever it contains a non-empty 'Authorization' header.
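As a rough illustration of such a pass-through authorizer (a sketch only – the names and types below are our assumptions, not the project's actual code), the logic boils down to returning an Allow policy whenever a token is present:

```typescript
// Hypothetical sketch of the authorizer described above: allow any request
// that carries a non-empty Authorization header, deny otherwise.
interface AuthorizerEvent {
  authorizationToken?: string;
  methodArn: string;
}

interface AuthorizerResult {
  principalId: string;
  policyDocument: {
    Version: string;
    Statement: { Action: string; Effect: string; Resource: string }[];
  };
}

const authorize = (event: AuthorizerEvent): AuthorizerResult => {
  // A non-empty token means the caller is allowed through.
  const effect = event.authorizationToken ? 'Allow' : 'Deny';
  return {
    principalId: 'user',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        { Action: 'execute-api:Invoke', Effect: effect, Resource: event.methodArn },
      ],
    },
  };
};
```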

And here is the content of the package.json file:
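The package.json listing is also missing from this copy. Given the stack described (a GraphQL handler, aws-sdk deliberately kept in devDependencies), it plausibly resembled the following sketch; the package names and versions are illustrative assumptions:

```json
{
  "name": "lambda-package-size-demo",
  "dependencies": {
    "apollo-server-lambda": "^2.9.0",
    "graphql": "^14.5.0"
  },
  "devDependencies": {
    "aws-sdk": "^2.585.0",
    "serverless": "^1.60.0",
    "serverless-plugin-typescript": "^1.1.9",
    "typescript": "^3.7.2"
  }
}
```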

We have only a few dependencies. But even for such a simple project with a small number of dependencies, the size of the AWS Lambda package is 5.3 MB.

Also note that, by default, Serverless Framework creates a single package and deploys it to all of our Lambda functions. So this is what we get for the initial project after packaging our functions with the `sls package` command. You can also deploy the packages with `sls deploy` (if you are unfamiliar with Serverless Framework and AWS account configuration, read more here):

– handler package size: 5.3 MB
– authorizer package size: 5.3 MB

These deployment packages contain all npm dependencies (devDependencies are excluded) and JavaScript files transpiled from our TypeScript sources.

Step 1 – Introducing webpack

Webpack is a well-known tool for creating bundles of assets (code and files). Serverless Framework has a webpack plugin that integrates into the Serverless workflow and bundles the Lambda functions.

We can now delete `serverless-plugin-typescript` and install `webpack`, `serverless-webpack` and `ts-loader` – a loader that will transpile our TypeScript code into JavaScript:

`npm remove serverless-plugin-typescript && npm install --save-dev webpack serverless-webpack ts-loader`

Usually “webpack for backend” tutorials recommend installing and using a `webpack-node-externals` plugin. Let’s follow this advice and then analyze the results:

`npm install --save-dev webpack-node-externals`

Let’s replace `serverless-plugin-typescript` with `serverless-webpack` in the serverless.yml file.
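The change amounts to a one-line swap in the plugins section:

```yaml
plugins:
  - serverless-webpack   # was: serverless-plugin-typescript
```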

Now we can add the webpack configuration. By default the plugin will look for a webpack.config.js file in the project root directory.

Here is our webpack.config.js:
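The original listing is not reproduced in this copy of the article; a configuration along the following lines would match the three points discussed next (the output path and mode handling are our assumptions):

```javascript
const path = require('path');
const slsw = require('serverless-webpack');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  target: 'node',
  mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
  // One chunk per function defined in serverless.yml
  entry: slsw.lib.entries,
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [{ test: /\.ts$/, loader: 'ts-loader' }],
  },
  // Keep node_modules out of the bundle; serverless-webpack copies them instead
  externals: [nodeExternals()],
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: '[name].js',
  },
};
```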

There are three important things to mention here. First, the webpack plugin creates a chunk for each function defined in the serverless.yml file; this is achieved with the help of the `slsw.lib.entries` object. Second, the webpack rule applies ts-loader to our '*.ts' files.

Third, our npm dependencies are included in the bundle as externals (which means webpack does not process them). From the `serverless-webpack` docs:

“All modules stated in externals will be excluded from bundled files. If an excluded module is stated as dependencies in package.json and it is used by the webpack chunk, it will be packed into the Serverless artifact under the node_modules directory.”

`webpack-node-externals` scans the node_modules folder to create an array of modules and sub-modules that shouldn’t be bundled. So we only need to add this parameter to the serverless.yml file to make our new solution work:
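With serverless-webpack, that parameter is the `includeModules` flag, which tells the plugin to pack the excluded dependencies into the artifact:

```yaml
custom:
  webpack:
    includeModules: true
```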

Okay, we are now ready to check the size of the package again. Let's run `sls package` and… it's the same 5.3 MB. Technically, it actually became 3 KB bigger.

Let's analyze the size of our bundle. We can do this using the excellent webpack-bundle-analyzer plugin.
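One common way to wire it in (a sketch; the `analyzerMode: 'static'` choice, which writes an HTML report instead of starting a local server, is our assumption) is to add it to the plugins array of webpack.config.js:

```javascript
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...the rest of the configuration stays unchanged...
  plugins: [new BundleAnalyzerPlugin({ analyzerMode: 'static' })],
};
```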

Following the instructions in the plugin’s README, we can generate this image:

webpack-bundle-analyzer result without bundled packages

It shows that the size of the bundled files is only 6.11 KB for the handler function and 1.16 KB for the authorizer. This means that a significant part of our final package is taken up by node modules that were copied there without any processing. Interestingly, even though our own code is now minimized, we still have an extra 3 KB compared to the initial package: our package now contains the package-lock.json.

It's worth mentioning that if our project contained more of our own code, then even after this step we would see smaller package sizes compared to our starting point. But so far we have the same numbers:

– handler package size: still 5.3 MB
– authorizer package size: still 5.3 MB

Step 2 – Bundle node_modules (be extra careful!)

Okay, we now understand that node modules obviously account for most of our package's size. We did this intentionally by using `webpack-node-externals`. But do we really need it? As the documentation says:

“When bundling with Webpack for the backend, you usually don’t want to bundle its node_modules dependencies”

and it refers to an article Backend Apps with Webpack that provides a detailed explanation:

“Webpack will load modules from the node_modules folder and bundle them in. This is fine for frontend code, but backend modules typically aren’t prepared for this (i.e. using require in weird ways) or even worse are binary dependencies. We simply don’t want to bundle in anything from node_modules.”

As an example, the author mentions the express.js framework, which has some binary dependencies that can lead to errors when bundled.

But in our case we most probably don’t have any binary dependencies. So let’s try to bundle our project without `webpack-node-externals`.

After removing ‘externals’ from webpack.config.js and running the `sls package` command the size of our result zip file is 1.2 MB. Here is the image produced by webpack-bundle-analyzer plugin:

webpack-bundle-analyzer result with bundled packages

And here we see something interesting. Yes, all our npm dependencies are bundled, but among them we can spot `aws-sdk`, which is provided by the AWS Lambda environment and for that reason was purposely moved to devDependencies. With the current configuration, however, webpack doesn't know that it should ignore devDependencies. Let's add `aws-sdk` to the array of externals in webpack.config.js and package our functions one more time (again, 'externals' prevents certain imported packages from being bundled; these external dependencies are instead retrieved at runtime). Now `aws-sdk` has disappeared from the bundle:

webpack-bundle-analyzer result without aws-sdk
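The externals change itself is a one-liner in webpack.config.js (sketch; recall that `nodeExternals()` was already removed in this step):

```javascript
module.exports = {
  // ...the rest of the configuration stays unchanged...
  // aws-sdk is provided by the Lambda runtime, so never bundle it
  externals: ['aws-sdk'],
};
```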

And the size of our functions is:

– handler package size: 445 KB
– authorizer package size: 445 KB

This step is marked 'be extra careful', and the reason is that you should double-check that your bundled dependencies don't rely on any binaries; otherwise you will have trouble in production. One way to check that you're safe is to implement good end-to-end tests of your deployed Lambda functions. Note that unit tests won't help you here, because all node_modules will be in scope without webpack processing. If you happen to know that a specific npm package has binary dependencies, you can add it to the 'externals' block in the webpack config and still bundle all the other packages.

* Bundling the dependencies in our sample project causes two warnings: “Module not found: Error: Can't resolve 'bufferutil'” and “Module not found: Error: Can't resolve 'utf-8-validate'”. This is not a fault of our solution and definitely not a flaw in webpack. The reason is that one of our dependencies tries to import these modules, but they are not listed in any of the package.json files. If you want to understand the cause and find ways to get rid of the warnings, you can read this discussion on GitHub.

Step 3 – Package: individually

You have probably noticed that we always show the package size for two functions: handler and authorizer. So far we have always had one package deployed for both of them, so the numbers were the same. But this makes little sense, especially because the authorizer function is hundreds of times smaller than the handler. You can see it in the last picture of the bundle analyzer: the small violet rectangle displays the relative size of the authorizer bundle very well. To produce separate packages for the separate Lambda functions, we can simply add the following option to our serverless.yml file:
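In Serverless Framework this is the standard `package.individually` flag:

```yaml
package:
  individually: true
```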

And here we get our final numbers, which are ~10 times smaller than the initial one for the handler and ~7000 times smaller for the authorizer package:

– handler package size: 445 KB
– authorizer package size: 744 B

This step was very trivial and could probably have been the first one, but `serverless-plugin-typescript` ignores the `individually: true` option, so we delayed it until the webpack config was in place.

Conclusion

To summarize: when you are writing AWS Lambda functions in TypeScript, you can start with the convenient `serverless-plugin-typescript`. But once you need to optimize the size of the deployment packages, you will most probably need to tune your packaging process with webpack. You can start with individual packaging and continue by bundling not only your source code but also your npm dependencies. Just make sure those dependencies don't use any binaries that could be dropped by webpack during bundling, because this can lead to errors in production.

This article provided the basic configurations that served only one goal – showing how to minimize the size of AWS Lambda functions in TypeScript.

Webpack is a very powerful tool with many different configuration options that can help you to tune the bundle according to your needs, for instance, to add source maps or improve build process speed using the caching mechanism.

Blog

Starter for 10: Meet Jonna Iljin, Nordcloud’s Head of Design

When people start working with Nordcloud, they generally comment on 2 things. First, how friendly and knowledgeable everyone is. Second,...

Blog

Building better SaaS products with UX Writing (Part 3)

UX writers are not omniscient, and it’s best for them to resist the temptation to work in isolation, just as...

Blog

Building better SaaS products with UX Writing (Part 2)

The main purpose of UX writing is to ensure that the people who use any software have a positive experience.

Get in Touch

Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








    What We Learned at Full Stack Fest 2019

    Full Stack Fest is a 3-day, single-track conference that focuses on the future of the web. This year, five Nordcloudians attended to gain knowledge about new trends and the best software development practices, and to soak up the energy and vibrancy of the tech community in one of the most beautiful cities in the world – Barcelona. The topics of the conference ranged from GraphQL, WebAssembly and JAMstack to automation testing, serverless and the P2P web. With such a wide choice of subjects, everyone was able to find something interesting and get familiar with technologies they had never had the opportunity to touch before. You can find the full list of the videos here. In this article, we will highlight the most interesting and intriguing topics from our point of view. Perhaps this is not something you can apply to your current project right now, but it definitely shapes the future of web development.

    WebAssembly and Serverless

    Two talks were dedicated to the increasingly popular WebAssembly. Lin Clark presented WASI – the WebAssembly System Interface. Wasmtime allows you to run wasm programs outside web browsers and have them interact with OS interfaces in a safe way. Currently, it's possible to write projects for WASI in C/C++ and Rust. The project is under active development and not production-ready yet. But as Solomon Hykes, co-founder of Docker, said:

    If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing.

    Evolving the idea of WebAssembly on the server, Steve Klabnik pointed to services that already support serverless execution of wasm files: Cloudflare and Fastly. WebAssembly aims to serve as an abstraction that allows users to run any programming language with a wasm compiler in any environment that has a WebAssembly runtime. An example use case is improving the performance of calculation-heavy applications, including graphics and real-time streaming. At the moment, only strongly typed languages can be compiled into WebAssembly, but JS developers don't need to know C/C++ or Rust to use the full power of this technology: a simpler option is AssemblyScript, which compiles TypeScript into WebAssembly.

    The Future of Web Animation and CSS

    In her talk, Sarah Drasner pointed out a new type of responsiveness that could be called 3D responsiveness. It provides a 3D in-browser experience that can be especially interesting for VR headset owners. VR and 3D in general are becoming more and more popular in browsers thanks to open-source projects such as Three.js and A-Frame. Another popular trend nowadays is full-page transitions, made possible by combining CSS3 with JS libraries to produce compelling visual effects.

    One more exciting technology highlighted at the conference is CSS Houdini, a new collection of browser APIs that allow developers to access the browser's CSS engine and create custom styles or implement polyfills for CSS features that are not yet supported by browsers. At the moment, however, Houdini itself is still at an early stage of development and is not supported by the majority of browsers. CSS Houdini is probably not something you will work with on a daily basis, but it can certainly be useful for library developers in the future.

    And as a bonus, we would like to share a link to the amazing video that Sara Soueidan used in her presentation on applied accessibility. This video, in a gamified and very friendly manner, reminds us how important it is to think about the accessibility of web applications:

     

    We are sure you will find other interesting topics among all the other Full Stack Fest 2019 talks that were not highlighted in this post. Thanks for reading 🙂









      Building Cloud-Based IoT Solutions and Serverless Web-Apps

      Our Cloud Application Architect Afaque Hussain has been on his cloud journey for some years already. At Nordcloud he builds cloud-native IoT solutions in our Data-Driven team. Here’s his story!


      1. Where are you from and how did you end up at Nordcloud?

      I’ve been living in Finland for the past 7 years and I’m from India. Prior to Nordcloud, I was working at Helvar, developing cloud-based, IoT enabled, lighting solutions. I’ve been excited about public cloud services ever since I got to know them and I generally attend cloud conferences and meetups. During one such conference, I met the Nordcloud team who introduced me to the company and invited me for an interview and since then, my Nordcloud journey has begun.

      2. What is your core competence? On top of that, please also tell about your role and projects shortly.

      My core competence is building cloud-based web services that act as an IoT platform to which IoT devices connect and exchange data. I generally prefer serverless computing and Infrastructure as Code, and I primarily use AWS and JavaScript (Node.js) in our projects.

      My current role is Cloud Application Architect, and I'm involved in designing and implementing end-to-end IoT solutions in our customer projects. In our current project, we're building a web service with which our customer can connect, monitor and manage their large fleet of sensors and gateways. The CI/CD pipelines for our project have been built using AWS developer tools such as CodePipeline, CodeBuild & CodeDeploy. Our CI/CD pipelines are implemented as Infrastructure as Code, which enables us to deploy another instance of the pipelines in a short period of time. Cool!

      3. What sets you on fire / what’s your favourite thing technically with public cloud?

      The ever-increasing serverless offerings from public cloud vendors, which enable us to rapidly build web applications & services.

      4. What do you like most about working at Nordcloud?

      Apart from the opportunity to work on interesting projects, I like my peers. They’re very talented, knowledgeable and ready to offer help when needed.

      5. What is the most useful thing you have learned at Nordcloud?

      Although I’ve learnt a lot at Nordcloud, I believe the knowledge of  the toolkit and best practices for cloud-based web-application development has been the most useful thing I’ve learnt.

      6. What do you do outside work?

      I like doing sports and I generally play cricket, tennis or hit the gym. During the weekends, I generally spend time with my family, exploring the beautiful Finnish nature, people or different cuisines. 

      7. How would you describe Nordcloud’s culture in 3 words?

      Nurturing, collaborative & rewarding.

      8. Best Nordcloudian memory?

      Breakfast @ Nordcloud every Thursday. I always look forward to this day. I get to meet other Norcloudians, exchange ideas or just catch-up over a delicious breakfast!

       









        Serverless Days Helsinki


        Join Nordcloud at Serverless Days Helsinki, which focuses on the reality of serverless-based solutions.

        One day, one track, one community

        ServerlessDays Helsinki is a developer-oriented conference about serverless technologies. It takes place in Helsinki at Bio Rex on the 25th of April.

        You can find the detailed agenda here.

        Date

        April 25, 2019

        Location

        Bio Rex









          Serverless Days Amsterdam


          Join Nordcloud at Serverless Days Amsterdam, which focuses on the reality of serverless-based solutions.

          One day, one track, one community

          ServerlessDays Amsterdam is a developer-oriented conference about serverless technologies. It takes place in Amsterdam on the 29th of March.

          You can find the detailed agenda here.

          Date

          March 29, 2019

          Location

          Pakhuis de Zwijger









            Serverless Meetup at Nordcloud 26.2.


            Serverless Meetup Tuesday, February 26, 2019

             
            Agenda:
            • Serverless with nameless-deploy-tools, Pasi Niemi
            • Appsync, Arto Liukkonen
            • Still room for a third topic if there are voluntary speakers.

            Date: 26.2.2019

            Time: 17.30-19.30

            Venue: Nordcloud, 4.th floor, Antinkatu 1, Helsinki

            There is a waiting list for this event at the moment.

             

            Check availability from the waiting list









              Lambda layers for Python runtime


              AWS Lambda

              AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events like DynamoDB streams, SNS notifications or HTTP triggers, without your provisioning or managing any infrastructure. Lambda takes care of most of what is required to run your code and provides high availability. It allows you to execute up to 1,000 functions in parallel! Using AWS Lambda you can build applications like:

              • Web APIs
              • Data processing pipelines
              • IoT applications
              • Mobile backends
              • and many many more…

              Creating an AWS Lambda function is super simple: you just create a zip file with your code and dependencies and upload it to an S3 bucket. There are also frameworks like Serverless or SAM that handle deploying AWS Lambda for you, so you don't have to manually create and upload the zip file.

              There is, however, one problem.

              Say you have created a simple function which depends on a large number of other packages. AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

              AWS Lambda Layers

              Fast forward four years: at re:Invent 2018, AWS Lambda Layers were released. This feature allows you to centrally store and manage data that is shared across different functions in a single AWS account, or even across multiple accounts! It solves a number of issues:

              • You do not have to upload dependencies on every change of your code. Just create an additional layer with all required packages.
              • You can create custom runtime that supports any programming language.
              • Adjust the default runtime by adding data required by your colleagues. For example, say there is a team of cloud architects that builds CloudFormation templates using the troposphere library, but they are not developers and do not know how to manage Python dependencies. With an AWS Lambda layer you can create a custom environment with all the required data so they can code in the AWS console.

              But how does the layer work?

              When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers, and their order is important because layers with a higher order can override files from previously mounted layers. When using the Python runtime, you do not need to do any additional operations in your code – just import the library in the standard way. But how will your Python code know where to find the data?

              That’s super simple, /opt/bin is added to the $PATH environment variable. To check this let’s create a very simple Python function:

              
              import os
              def lambda_handler(event, context):
                  path = os.popen("echo $PATH").read()
                  return {'path': path}
              

              The response is:

               
              {
                  "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
              }
              

               

              Existing pre-defined layers

              AWS Layers were released together with a single, publicly accessible layer for data processing containing two libraries: NumPy and SciPy. Once you have created your Lambda you can click `Add a layer` in the Lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer, you can use these libraries in your code. Let's do a simple test:

              
              import numpy as np
              import json
              
              
              def lambda_handler(event, context):
                  matrix = np.random.randint(6, size=(2, 2))
                  
                  return {
                      'matrix': json.dumps(matrix.tolist())
                  }
              

              The function response is:

              {
                "matrix": "[[2, 1], [4, 2]]"
              }
              

               

              As you can see it works without any effort.

              What’s inside?

              Now let's check what is inside the pre-defined layer. To inspect the mounted layer content I prepared a simple script:

              
              import os
              def lambda_handler(event, context):
                  directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
                  return {
                      'directories': directories
                  }
              

              In the function response you will receive the list of directories that exist in the /opt directory:

              
              {
                "directories": [
                  "/opt/python",
                  "/opt/python/lib",
                  "/opt/python/lib/python3.6",
                  "/opt/python/lib/python3.6/site-packages",
                  "/opt/python/lib/python3.6/site-packages/numpy",
                  "/opt/python/lib/python3.6/site-packages/numpy-1.15.4.dist-info",
                  "/opt/python/lib/python3.6/site-packages/scipy",
                  "/opt/python/lib/python3.6/site-packages/scipy-1.1.0.dist-info"
                ]
              }
              

              OK, so it contains Python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.

              Create Your own layer!

              Our use case is to create an environment for our cloud architects to easily build CloudFormation templates using the troposphere and awacs libraries. The steps are as follows.

              Create a virtual env and install dependencies

              To manage the python dependencies we will use pipenv.

              Let’s create a new virtual environment and install there all required libraries:

              
              pipenv --python 3.6
              pipenv shell
              pipenv install troposphere
              pipenv install awacs
              

              It should result in the following Pipfile:

              
              [[source]]
              url = "https://pypi.org/simple"
              verify_ssl = true
              name = "pypi"
              [packages]
              troposphere = "*"
              awacs = "*"
              [dev-packages]
              [requires]
              python_version = "3.6"
              

              Build a deployment package

              All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using the ls command:

               
              ls $VIRTUAL_ENV
              

              Now let’s prepare a simple script that creates a zipped deployment package:

              
              PY_DIR='build/python/lib/python3.6/site-packages'
              mkdir -p $PY_DIR                                              #Create temporary build directory
              pipenv lock -r > requirements.txt                             #Generate requirements file
              pip install -r requirements.txt --no-deps -t $PY_DIR     #Install packages into the target directory
              cd build
              zip -r ../tropo_layer.zip .                                  #Zip files
              cd ..
              rm -r build                                                   #Remove temporary directory
              
              

              When you execute this script, it will create a zipped package that you can publish as an AWS Lambda Layer.
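              For reference, the zipping step of the script above can also be sketched in Python using only the standard library. This is an illustrative helper (the pip install step is still left to the shell), and build_layer_zip is our own name, not an AWS tool:

```python
import os
import zipfile


def build_layer_zip(site_packages_dir: str, zip_path: str,
                    prefix: str = "python/lib/python3.6/site-packages") -> None:
    """Zip an installed site-packages tree into a Lambda Layer archive.

    Every file must sit under python/lib/python3.6/site-packages inside
    the zip so the Lambda runtime can find the modules.
    """
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(site_packages_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, site_packages_dir)
                zf.write(full, os.path.join(prefix, rel))
```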

               

              Create a layer and a test AWS function

              You can create a custom layer and an AWS Lambda function by clicking through the AWS console. However, real experts use the CLI (Lambda Layers is a new feature, so you have to update your awscli to the latest version).

              To publish a new Lambda Layer, you can use the following command (my zip file is named tropo_layer.zip):

              
              aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
              

              In the response, you should receive the layer ARN and some other data:

              
              {
                  "Content": {
                      "CodeSize": 14909144,
                      "CodeSha256": "qUz...",
                      "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
                  },
                  "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
                  "Version": 1,
                  "Description": "",
                  "CreatedDate": "2018-12-01T22:07:32.626+0000",
                  "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
              }
              

              The next step is to create the AWS Lambda function. Your Lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

               
              from troposphere import Ref, Template
              import troposphere.ec2 as ec2
              import json
              def lambda_handler(event, context):
                  t = Template()
                  instance = ec2.Instance("myinstance")
                  instance.ImageId = "ami-951945d0"
                  instance.InstanceType = "t1.micro"
                  t.add_resource(instance)
                  return {"data": json.loads(t.to_json())}
              

              Now we have to create a zipped package that contains only our function:

              
              zip tropo_lambda.zip handler.py
              

              And create a new Lambda function using this file (I used an IAM role that already exists in my account; if you do not have a suitable role, you have to create one before creating the Lambda):

              
              aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
                --handler handler.lambda_handler \
                --role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
                --zip-file fileb://tropo_lambda.zip
              

              In the response, you should get the newly created lambda details:

              
              {
                  "TracingConfig": {
                      "Mode": "PassThrough"
                  },
                  "CodeSha256": "l...",
                  "FunctionName": "tropo_function_test",
                  "CodeSize": 356,
                  "RevisionId": "...",
                  "MemorySize": 128,
                  "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
                  "Version": "$LATEST",
                  "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
                  "Timeout": 3,
                  "LastModified": "2018-12-01T22:22:43.665+0000",
                  "Handler": "handler.lambda_handler",
                  "Runtime": "python3.6",
                  "Description": ""
              }
              

              Now let’s try to invoke our function:

              
              aws lambda invoke --function-name tropo_function_test --payload '{}' output
              cat output
              {"errorMessage": "Unable to import module 'handler'"}
              
              

              Oh no… it doesn’t work. In CloudWatch you can find the detailed log message: `Unable to import module ‘handler’: No module named ‘troposphere’`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:

              
              aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1
              

              When you invoke lambda again you should get the correct response:

              
              {
                "data": {
                  "Resources": {
                    "myinstance": {
                      "Properties": {
                        "ImageId": "ami-951945d0",
                        "InstanceType": "t1.micro"
                      },
                      "Type": "AWS::EC2::Instance"
                    }
                  }
                }
              }
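              If an import error like the one above is ever less obvious, a throwaway debug handler can show what the runtime actually sees – layers are extracted under /opt. This is an illustrative sketch, not part of the walkthrough’s code:

```python
import os
import sys


def lambda_handler(event, context):
    # sys.path shows where Python looks for modules; in Lambda it includes
    # /opt/python/... when a layer is attached. /opt lists the layer contents.
    return {
        "sys_path": sys.path,
        "opt_contents": os.listdir("/opt") if os.path.isdir("/opt") else [],
    }
```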
              

              Add a local library to your layer

              We already know how to create a custom layer with python dependencies, but what if we want to include our local code? The simplest solution is to manually copy your local files to the /python/lib/python3.6/site-packages directory.

              First, let’s prepare the test module that will be pushed to the layer:

              
              $ find local_module
              local_module
              local_module/__init__.py
              local_module/echo.py
              $ cat local_module/echo.py
              def echo_hello():
                  return "hello world!"
              

              To manually copy your local module to the correct path you just need to add the following line to the previously used script (before zipping package):

              
              cp -r local_module 'build/python/lib/python3.6/site-packages'
              

              This works; however, we strongly advise transforming your local library into a pip module and installing it in the standard way.

              Update Lambda layer

              To update a Lambda Layer, you have to run the same command you used to create the layer:

              
              aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip
              

              The request should return a LayerVersionArn with an incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

              Now update lambda configuration with the new layer version:

               
              aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2
              
              

              Now you should be able to import local_module in your code and use the echo_hello function.
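              A minimal handler using the layer module could look like the sketch below. The try/except fallback stub is only there so the example runs outside Lambda; in the deployed function, the layer provides the real local_module:

```python
# handler.py – assumes local_module from the layer is on sys.path
try:
    from local_module.echo import echo_hello
except ImportError:
    # Fallback stub for running outside Lambda; the layer supplies the real module.
    def echo_hello():
        return "hello world!"


def lambda_handler(event, context):
    return {"message": echo_hello()}
```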

               

              Serverless framework Layers support

              Serverless is a framework that helps you build applications based on the AWS Lambda service. It already supports deploying and using Lambda Layers. The configuration is really simple – in the serverless.yml file, you provide the path to the layer location on your disk (it has to be a path to a directory – you cannot point to a zipped package; zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda Layer or deploy it together with your application.

              We’ll show the second example. However, if you want to benefit from all the Lambda Layers advantages you should deploy it separately.

              
              service: tropoLayer
              package:
                individually: true
              provider:
                name: aws
                runtime: python3.6
              layers:
                tropoLayer:
                  path: build             # Build directory contains all python dependencies
                  compatibleRuntimes:     # supported runtime
                    - python3.6
              functions:
                tropo_test:
                  handler: handler.lambda_handler
                  package:
                    exclude:
                     - node_modules/**
                     - build/**
                  layers:
                  - {Ref: TropoLayerLambdaLayer } # Ref to the created layer. You have to append the
                                                  # 'LambdaLayer' string to the end of the layer name to make it work
              

              I used the following script to create a build directory with all the python dependencies:

              
              PY_DIR='build/python/lib/python3.6/site-packages'
              mkdir -p $PY_DIR                                              #Create temporary build directory
              pipenv lock -r > requirements.txt                             #Generate requirements file
              pip install -r requirements.txt -t $PY_DIR                    #Install packages into the target directory
              

              This example individually packs a Lambda Layer with dependencies and your lambda handler. The funny thing is that you have to convert your lambda layer name to be TitleCased and add the `LambdaLayer` suffix if you want to refer to that resource.
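              The naming rule can be captured in a tiny helper. This is a simplified sketch of the convention, not Serverless Framework code, and layer_logical_id is our own name:

```python
def layer_logical_id(layer_name: str) -> str:
    # Title-case the first letter of the layer name and append the
    # 'LambdaLayer' suffix, mirroring the naming rule described above.
    return layer_name[:1].upper() + layer_name[1:] + "LambdaLayer"
```

So a layer declared as tropoLayer is referenced as TropoLayerLambdaLayer.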

              Deploy your lambda together with the layer, and test if it works:

              
              sls deploy -v --region eu-central-1
              sls invoke -f tropo_test --region eu-central-1
              

              Summary

              It was a lot of fun to test Lambda Layers and investigate how it technically works. We will surely use it in our projects.

              In my opinion, AWS Lambda Layers is a really great feature that solves a lot of common issues in the serverless world. Of course, it is not suitable for all use cases. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because you do not need to manage additional layers.

              Read more on AWS Lambda in our blog!

              Notes from AWS re:Invent 2018 – Lambda@edge optimisation

              Running AWS Lambda@Edge code in edge locations

              Amazon SQS as a Lambda event source

              Blog

              Starter for 10: Meet Jonna Iljin, Nordcloud’s Head of Design

              When people start working with Nordcloud, they generally comment on 2 things. First, how friendly and knowledgeable everyone is. Second,...

              Blog

              Building better SaaS products with UX Writing (Part 3)

              UX writers are not omniscient, and it’s best for them to resist the temptation to work in isolation, just as...

              Blog

              Building better SaaS products with UX Writing (Part 2)

              The main purpose of UX writing is to ensure that the people who use any software have a positive experience.

              Get in Touch

              Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.








                What is Amazon FreeRTOS and why should you care?

                CATEGORIES

                Blog, Tech Community

                At Nordcloud, we’ve been working with AWS IoT since Amazon released it

                We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.

                 

                In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions

                First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


                What are microcontrollers?

                If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

                Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud based and edge based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out-of-the-box when you build your application on top of Amazon FreeRTOS.

                Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

                Use cases and scenarios

                Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

                Edge computing scenarios might require a more powerful, Linux based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communications protocols to your system.

                In general, microcontroller based applications have the benefit of being much more simple than Linux based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than a full operating system installation.

                How to try it out

                If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

                At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

                • STM32L4 Discovery Kit
                • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

                ST provides their own Ac6 System Workbench IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

                Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

                Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.

                Conclusion

                We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

                If you need any help in planning and implementing your IoT solutions, feel free to contact us.









                  Cloud computing news #10: Serverless, next-level cloud tech

                  CATEGORIES

                  Blog

                  This week we focus on serverless computing which continues to grow and enables agility, speed of innovation and lower cost to organizations.

                  Serverless Computing Spurs Business Innovation

                  According to Digitalist Magazine, serverless computing is outpacing conventional patterns of emerging technology adoption. Organizations across the globe see technology-driven innovation as essential to compete. Serverless computing promises to enable faster innovation at a lower cost and simplify the creation of responsive business processes.

                  But what does “serverless computing” mean and how can companies benefit from it?

                  1. Innovate faster and at a lower cost: Serverless computing is a cloud computing execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. This means that developers are able to focus on coding instead of managing deployment and runtime environments. Also, pricing is based on the actual amount of resources consumed by an application. Thus, with serverless computing, an organization can innovate faster and at a lower cost. Serverless computing eliminates the risk and cost of overprovisioning, as it can scale resources dynamically with no up-front capacity planning required.
                  2. Enable responsive business processes: Serverless function services – function as a service (FaaS) – can automatically activate and run application logic that carries out simple tasks in response to specific events. If the task triggered by an incoming event involves data management, developers can leverage serverless backends as a service (BaaS) for data caching, persistence, and analytics services via standard APIs. With this event-driven application infrastructure in place, an organization can decide at any moment to execute a new task in response to a given event.
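                  To make the pay-per-use point concrete, here is a back-of-the-envelope cost estimate in Python. The rates below are illustrative assumptions, not current AWS prices, and monthly_lambda_cost is our own helper name:

```python
# Illustrative rates only – check the AWS pricing page for real numbers.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute rate
PRICE_PER_REQUEST = 0.0000002        # assumed per-invocation rate


def monthly_lambda_cost(invocations: int, avg_duration_ms: int, memory_mb: int) -> float:
    """Estimate monthly cost from actual usage – the pay-per-use model above."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST


# e.g. one million invocations of 200 ms at 512 MB:
cost = monthly_lambda_cost(1_000_000, 200, 512)
```

With no traffic the bill is zero – there is no idle server to pay for.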

                  Organizations also need the flexibility to develop and deploy their innovations where it makes the most sense for their business. Platforms that rely on open standards, deploy on all the major hyperscale public clouds, and offer portability between the hyperscaler IaaS foundations are really the ideal choice for serverless environments.

                  Read more in Digitalist Magazine

                  Nordcloud tech blog: Developing serverless cloud components

                  A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

                  A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

                  Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

                  Check out our tech blog that takes a look at some new technologies for developing cloud components

                  Nordcloud Case study: Developing on AWS services using a serverless architecture for Kemppi 

                  Nordcloud helped Kemppi build the initial architecture based on AWS IoT Core, API Gateway, Lambda and other AWS services. We also designed and developed the initial Angular.js based user interface for the solution.

                  Developing on AWS services using a serverless architecture enabled Kemppi to develop the solution in half the time and at half the cost compared to traditional, infrastructure-based architectures. The serverless expertise of Nordcloud was key to enabling a seamless ramp-up of development capabilities in the Kemppi development teams.

                  Read more on our case study here

                  Serverless at Nordcloud

                  Nordcloud has a long track record with serverless, being among the first companies to adopt services such as AWS Lambda and API Gateway for production projects as early as 2015. Since then, Nordcloud has executed over 20 customer projects using serverless technologies for use cases such as web applications, IoT solutions, data platforms, and cloud infrastructure monitoring and automation.

                  Nordcloud is an AWS Lambda, API Gateway and DynamoDB partner, a Serverless Framework partner, and a contributor to the serverless community via open source projects, events and initiatives such as the Serverless Finland meetup.

                  How can we help you take your business to the next level with serverless?









                    Developing Serverless Cloud Components

                    CATEGORIES

                    Blog

                    A cloud component contains both your code and the necessary platform configuration to run it. The concept is similar to Docker containers, but here it is applied to serverless applications. Instead of wrapping an entire server in a container, a cloud component tells the cloud platform what services it depends on.

                    A typical cloud component might include a REST API, a database table and the code needed to implement the related business logic. When you deploy the component, the necessary database services and API services are automatically provisioned in the cloud.

                    Developers can assemble cloud applications from cloud components. This resembles the way they would compose traditional applications from software modules. The benefit is less repeated work to implement the same features in every project over and over again.

                    In the following sections we’ll take a look at some new technologies for developing cloud components.

                    AWS CDK

                    AWS CDK, short for Cloud Development Kit, is Amazon’s new framework for defining AWS cloud infrastructure with code. It currently supports TypeScript, JavaScript and Java with more language support coming later.

                    When developing with AWS CDK, you use code to define both infrastructure and business logic. These codebases are separate. You define your component’s deployment logic in one script file, and your Lambda function code in another script file. These files don’t have to be written in the same programming language.

                    AWS CDK includes the AWS Construct Library, which provides a selection of predefined cloud components to be used in applications. It covers a large portion of Amazon’s AWS cloud services, although not all of them.

                    These predefined constructs are the smallest building blocks available in AWS CDK. For instance, you can use the AWS DynamoDB construct to create a database table. The deployment process translates this construct into a CloudFormation resource, and CloudFormation creates the actual table.

                    The real power of AWS CDK comes from the ability to combine the smaller constructs into larger reusable components. You can define an entire microservice, including all the cloud resources it needs, and use it as a component in a larger application.

                    This modularity can also help standardize multi-team deployments. When everybody delivers their service as an AWS CDK construct, it’s straightforward to put all the services together without spending lots of time writing custom deployment scripts.

                    AWS CDK may become very important for cloud application development if third parties start publishing their own Construct Libraries online. There could eventually be a very large selection of reusable cloud components available in an easily distributable and deployable format. Right now the framework is still pending a 1.0 release before freezing its APIs.

                    Serverless Components

                    Serverless Components is an ambitious new project by the makers of the hugely popular Serverless Framework. It aims to offer a cloud-agnostic way of developing reusable cloud components. These components can be assembled into applications or into higher order components.

                    The basic idea of Serverless Components is similar to AWS CDK. But while CDK uses a programming language to define components, Serverless has chosen a declarative YAML syntax instead. This results in simpler component definitions but you also lose a lot of flexibility. To remedy this, Serverless Components lets you add custom JavaScript files to perform additional deployment operations.

                    The Serverless Components project has its own component registry. The registry includes some basic components for Amazon AWS, Google Cloud, Netlify and GitHub. Unlike in some other projects, developers are writing these components manually instead of auto-generating them from service definitions. It will probably take a while before all cloud features are supported.

                    One controversial design decision of Serverless Components is to bypass the AWS CloudFormation stack management service. The tool creates components directly on AWS and other cloud platforms. It writes their state to a local state.json file, which developers must share.

                    This approach offers speed, flexibility and multi-cloud support, but also requires Serverless Components to handle deployments flawlessly in every situation. Enterprise AWS users will probably be wary of adopting a solution that bypasses CloudFormation entirely.

                    Pulumi

                    Pulumi.io is a cloud component startup offering a SaaS service subscription combined with an open source framework. Essentially Pulumi aims to replace AWS CloudFormation and other cloud deployment tools with its own stack management solution. Pulumi’s cloud service deploys the actual cloud applications to Amazon AWS, Microsoft Azure, Google Cloud, Kubernetes or OpenStack.

                    Pulumi supports a higher level of abstraction than the other component technologies discussed here. When you implement a serverless service using Pulumi’s JavaScript syntax, the code gets translated to a format suitable for the platform you are deploying on. You write your business logic as JavaScript handler functions for Express API endpoints. Pulumi’s tool extracts those handlers from the source code and deploys them as AWS Lambda functions, Azure Functions or Google Cloud Functions.

                    Writing completely cloud-agnostic code is challenging even with Pulumi’s framework. For certain things it offers cloud-agnostic abstractions like the cloud.Table component. When you use cloud.Table, your code automatically adapts to use either DynamoDB or Azure Table Storage depending on which cloud platform you deploy it on.

                    For many other things you have to write cloud-specific code. Or, you can write your own abstraction layer to complement Pulumi’s framework. Such abstraction layers tend to add complexity to applications, making it harder for developers to understand what the code is actually doing.

                    Ultimately it’s up to you to decide whether you want to commit to developing everything on top of an abstraction layer which everybody must learn. Also, as with Serverless Components, you can’t use AWS CloudFormation to manage your Pulumi-based stacks.

                    Conclusion

                    The main issue to consider in choosing a cloud component technology is whether you need multi-cloud support or not. Single-cloud development is arguably more productive and lets developers leverage higher-level cloud services. On the other hand, this results in increased vendor lock-in, which may or may not be a problem.

                    For developers focusing on Amazon AWS, the AWS CDK is a fairly obvious choice. AWS CDK is likely to become a de-facto standard way of packaging AWS-based cloud components. As serverless applications get more and more popular, AWS CDK fills some important blank spots in the CloudFormation deployment process and in the reusability of components. And since AWS CDK still uses CloudFormation under the hood, adopters will be familiar with the underlying technology.

                    Developers that truly require multi-cloud will have to consider whether it’s acceptable to rely on Pulumi’s third party SaaS service for deployments. If the SaaS service goes down, deployed applications will keep working but you can’t update them. This is probably not a big problem for short periods of time. It will be more problematic if Pulumi ever shuts down the service permanently. For projects where this is not an issue, Pulumi may offer a very compelling multi-cloud scenario.

                    Multi-cloud developers that want to contribute to open source may want to check out the Serverless Components project. It’s too early to recommend using this project for actual use cases yet, but it may have an interesting future ahead. The project may attract a lot of existing users if the developers are able to provide a clear migration path from Serverless Framework.

                    If you would like more information on how Nordcloud can help you with serverless technologies, contact us here.
