The close relationship between SAP and AWS provides benefits

Categories: Blog

The public cloud seems to be the promised land where IT grows flexibly with the business and costs stay low. This increasingly applies to on-premise SAP ERP systems as well. There are different types of cloud available, so why do many companies choose Amazon Web Services (AWS)?

A better competitive position due to a lower TCO

Costs play an important part in making a service attractive to customers, and a low price is essential to staying competitive. That is why many companies choose the AWS public cloud. They eliminate replacement costs for their own hardware and only pay for the resources they actually use, which can be scaled up and down dynamically and automatically. Running SAP in the cloud can cut costs by up to 71%. In addition, organizations get a better overview of their cost structure when they hold fewer licenses.

Ability to respond faster to market developments

With an on-premise solution, you have to purchase and administer the hardware yourself, which is costly and requires specific in-house IT competence. Hardware replacement cycles take months, or even longer depending on individual requirements.

The private cloud offers faster connections, lower latency, and more speed and flexibility than on-premise, but it still has a few disadvantages, such as the increased IT complexity of adopting new management tools. In the public cloud, this complexity is handled by the cloud provider, enabling you to configure and scale with a few mouse clicks. AWS offers a wide range of cloud-based products and services, including IoT (Internet of Things), machine learning (ML), mobile, developer tools and security. These services help companies innovate faster at lower IT cost, creating an agile and cost-efficient platform for testing new business ideas.

The close relationship between SAP and AWS provides benefits

AWS and SAP have been working closely together since 2011. This relationship is why tens of thousands of organizations in more than 190 countries trust the AWS public cloud. AWS has worked closely with SAP to test and certify the AWS Cloud for SAP solutions, enabling organizations to get more out of both their SAP systems and AWS. Additionally, AWS offers a huge range of traditional and cloud-native technologies that are easy to connect to your SAP landscape. Think of IoT services that help you harvest data, while analytics and AI software translate that IoT data into valuable insights, new business models and strategic benefits. The great thing about the AWS Marketplace is that it offers a continuous stream of new solutions.

Choosing the right partner

Migrating your company’s core system is quite a process. You can move to the cloud on your own, but most organizations opt for guidance from an experienced cloud technology partner. You need a partner who can help with the strategy, build a solid cloud foundation, and assist with the actual migration. You will have to review the competences of various partners: some only employ a few certified specialists, while others have broader competencies and are an AWS Premier Consulting Partner. Nordcloud is one such partner: born in the cloud in 2011, it has seven years of experience with hundreds of customers.

Would you like to know why more and more organizations migrate their SAP to AWS?

Download our free whitepaper, “A comparison of SAP on-premise, private cloud and public cloud”.

How containerized applications boost business growth

Categories: Blog, Tech

When it comes to applications and other software, businesses need to balance innovation, security and user experience against the cost of development.

However, too many are stuck with legacy software that isn’t suitable for their needs and legacy development processes that increase the cost of building software without increasing its value.

This is why more and more companies are embracing containerised applications to improve development productivity, time to market and innovation without sacrificing security or reliability.

In this blog, we will not only explain what containers are, but show you how they can benefit your business.

What are containers?

Containers, in this context, are just another way to build and deploy applications. Instead of building monolithic applications that run in their entirety on servers, containerised applications are made up of Lego-like building blocks called containers. Each container is a standalone package of software that embodies an element of the overall application.

Containerised applications take advantage of the scalability of cloud infrastructure, coping with peaks and troughs in demand. They also move quickly and reliably from one computing environment to another, for example moving from a development environment to a full-scale production environment.

How do containerised applications benefit your business?

Software is not simple. To work properly, it relies on everything doing its job at the right time. With multiple applications running on the same operating system, you may find that some don’t mesh together well. Containerisation reduces this risk by letting you build, test, update and integrate individual containers separately. In this way containerisation supports a DevOps approach to development.

In the past, this was done through virtual machines. But they take up a lot of storage, aren’t so portable and are difficult to update. What’s more they don’t provide the same level of continuous integration and delivery of service that containers do.

Here are three other ways that containerisation can benefit your business:

1. Agility

Containers make it easier to manage application operations and updates. For the best results, you need to work with a specialist IT partner to provide automated container orchestration. This means you can reap the benefits of your containerised applications quickly and effectively. While containerisation can sound complicated, with the right help, it can revolutionise the way you think about IT.

2. Scalability

You can meet growing demand and business needs by scaling all or parts of your containerised applications almost instantly. You can do this with solutions like OpenShift on Azure, a managed container orchestration service hosted on Azure that allows you to scale your application infrastructure efficiently.
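To make that concrete, here is a rough sketch (not tied to any specific product setup) of how one containerised component might be told to scale automatically on an OpenShift or Kubernetes cluster; the deployment name and thresholds are invented for the example.

```yaml
# Hypothetical autoscaling rule for one containerised component
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: orders-api                  # example component name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-api                # the deployment running the container
  minReplicas: 2                    # baseline capacity
  maxReplicas: 20                   # ceiling for peak demand
  targetCPUUtilizationPercentage: 70
```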

3. Portability

One of the greatest benefits of containerisation is portability. You can reduce or avoid integration issues that slow your day-to-day business activities in a traditional computer environment. You can even stay productive if you move your application onto another device, cloud platform or operating system.

Make your business more productive and scalable with OpenShift on Azure

With OpenShift on Azure, you can manage and scale your containerised applications easily. This secure, managed service helps you accelerate containerised application development and boost your DevOps ambitions. So, if you want to develop and scale your IT infrastructure with confidence, look no further than OpenShift on Azure.

Feel free to contact us if you need help in planning or executing your container workload.


Notes from AWS Chalk session at AWS re:Invent 2018 – Lambda@Edge optimisations

Categories: Tech

Lambda@Edge makes it possible to run Lambda code in Edge locations to modify viewer/origin requests. This can be used to modify HTTP headers, change content based on user-agent and more. We’ve written about it previously, so feel free to read this blog post if you want an introduction: https://nordcloud.com/aws-lambdaedge-running-lambda-code-in-edge-locations/
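As a quick illustration of the kind of logic involved, here is a minimal sketch of a viewer-request handler; it assumes a Python runtime (Lambda@Edge also supports Node.js), and the mobile-path rewrite is a made-up example rather than something from the original post.

```python
def handler(event, context):
    # CloudFront hands the request to the function inside the event record.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # Example: serve a different path to mobile clients based on the User-Agent header.
    user_agent = headers.get("user-agent", [{"value": ""}])[0].get("value", "")
    if "Mobile" in user_agent:
        request["uri"] = "/mobile" + request["uri"]

    # Returning the (possibly modified) request lets CloudFront continue processing it.
    return request
```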

There are quite a few limitations for Lambda@Edge, and they depend on which request event you are responding to. For example, the maximum size of a response generated by the Lambda function differs depending on whether it is a viewer or origin response (40 KB vs 1 MB). The function itself also has limits, such as a maximum of 3 GB of memory allocation and a 50 MB zipped deployment package size.

This means that most use cases call for optimisation. First things first: evaluate whether you really need Lambda@Edge at all. CloudFront already offers a lot of functionality you can take advantage of before reinventing the wheel – caching depending on device, selecting which headers to base caching on, regional blocks with WAF, and so on. Even your origin can sometimes handle header rewrites and other header manipulation, so there is no need to spend time building it yourself. Use Lambda@Edge only when you know CloudFront can’t do it and there is a clear benefit to rendering or serving your content at the edge.

Optimise before the function

If you’ve decided to use Lambda@Edge, first look into the optimisations you can make before the function is ever invoked. CloudFront does a lot of optimisation for you. It collapses concurrent requests for the same object, so instead of forwarding all of them to the origin it sends a single GET and shares the response. Note also that CloudFront is a multi-layered CDN: on a cache miss in a specific region it will try to fetch the object from the closest CloudFront location, so there is no need to build multi-region caching yourself. Another thing to look at in CloudFront is the paths that trigger the event – perhaps the function only needs to react to a very specific HTTP path. If possible, it is also better to let the function react to origin events instead of viewer events: there are far fewer of them, and the limits for function size, response time and resource allocation are higher.

Coding optimisations

When writing the function, try to use global variables as much as possible, since they are re-used between invocations and cached on the workers for a couple of hours. Small things such as keeping TCP connections open for re-use, or perhaps using UDP instead of TCP, can make a difference, especially since Lambda@Edge is invoked synchronously.
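Here is a rough sketch of what that looks like in practice, assuming a Python runtime; the DynamoDB table and attribute names are invented for the example. The point is that the client lives at module scope, so the same worker re-uses it (and its TCP connection) across invocations:

```python
import boto3

# Initialised once per worker and re-used across invocations,
# so the connection stays warm instead of being rebuilt every time.
dynamodb = boto3.client("dynamodb")

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Hypothetical per-path lookup that benefits from the re-used client.
    result = dynamodb.get_item(
        TableName="edge-config",                  # example table name
        Key={"path": {"S": request["uri"]}},
    )
    if "Item" in result:
        request["headers"]["x-edge-config"] = [
            {"key": "X-Edge-Config", "value": result["Item"]["value"]["S"]}
        ]
    return request
```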

Deployment testing

When deploying the function, look at minimising the code with tools such as browserify. Also note that Lambda@Edge functions can be deployed with different memory allocations, so test which size gives you the best bang for the buck – sometimes raising the memory allocation from 128 MB to 256 MB gives much faster responses without costing much more.

S3 performance

If you are fetching content from S3, try using S3 Select to retrieve just the subset of data you need from an object using simple SQL expressions. Even better, serve cached content from CloudFront instead of fetching it from S3 or other origins whenever the data can be cached.
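For example, with the AWS SDK for Python the call might look roughly like this; the bucket, key and query are placeholders, not values from the post:

```python
import boto3

s3 = boto3.client("s3")

# Pull only the matching rows/columns from a CSV object instead of the whole file.
response = s3.select_object_content(
    Bucket="example-bucket",
    Key="catalog/products.csv",
    ExpressionType="SQL",
    Expression="SELECT s.sku, s.price FROM S3Object s WHERE s.category = 'shoes'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"JSON": {}},
)

for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```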

Last but not least: remove the function when it is no longer in use. Don’t keep a Lambda@Edge function around if you no longer need it.

If you’d like to learn more about moving your business to the Cloud, please contact us here.


Microsoft Ignite Berlin

Categories: Events

Join Nordcloud at Microsoft Ignite in Berlin on December 6–7.

Microsoft Ignite | The Tour Berlin

Get the latest insights and skills from technology leaders and practitioners shaping the future of cloud, data, business intelligence, teamwork, and productivity.

Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with the Microsoft community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence.

100+ deep-dive sessions and workshops, and 350+ experts. See the complete list of sessions here.

Follow us for news from Microsoft Ignite

Follow us on Twitter and LinkedIn for news and inspiration from Microsoft Ignite!

 

Date

December 6 – 7, 2018

Location

Messe Berlin, Messe Berlin GmbH, Messedamm 22, 14055 Berlin, Germany

For transportation options and directions, visit Messe Berlin.


Nordcloud at AWS re:Invent 2018

Categories: Events

We are attending AWS re:Invent Nov. 26-30, 2018 in Las Vegas.

AWS re:Invent 2018 is expecting some 40,000 attendees on 26–30 November 2018 in Las Vegas, USA. This year, re:Invent will feature sessions covering topics also seen in past years, such as databases, analytics & big data, security & compliance, enterprise, machine learning, and compute. You can cross-search these topics in more detail in the session catalog.

We picked some interesting sessions to check out at re:Invent 2018:

  1. Optimizing Costs as You Scale on AWS
  2. The Future of Enterprise Applications is Serverless
  3. Driving DevOps Transformation in Enterprises
  4. How HSBC Uses Serverless to Process Millions of Transactions in Real Time
  5. Build, Train, and Deploy Machine Learning for the Enterprise with Amazon SageMaker
  6. Managing Security of Large IoT Fleets
  7. Meeting Enterprise Security Requirements with AWS Native Security Services

Follow our social media postings from Las Vegas

Join the conversations on your expectations from re:Invent 2018.

Here is our AWS guru Miguel’s video wish list:

Also check out Mikael’s (Data Driven Business Lead and AWS guru) wish list:

Also, make sure you watch our AWS Alliance Lead Niko’s expectations for the event:


Nordcloudians’ international opportunities

Categories: Life at Nordcloud

Nordcloud supports international career ambitions, whether that means a few days’, weeks’ or months’ project work abroad or relocating to a new country.

Now it is time for an interview with another Nordcloudian, who has moved back to his homeland of Sweden after years in the UK. Without further ado, let me introduce our Senior Cloud Architect, Hans Christoffersson!

1. Where are you from and how did you end up at Nordcloud?

I am originally from Sweden and moved to the UK nine years ago. I worked at a company that did business with Nordcloud. Nordcloud ran a workshop for that company, after which I became keen to join NC! I had an AWS Architect certification and was offered a job at Nordcloud 2.5 years ago. When I joined, I was given the chance to start the Azure UK team as it was booming, and I have been working with Azure ever since. I have now relocated back to Sweden to work with our Swedish team!

2. What is your role and core competence?

Senior Cloud Architect in Azure; less implementing, and more architecture and defining project deliveries, tech pre-sales, and guiding and supporting junior architects. When I do more hands-on work, I like to go on projects that have more “exotic” features or deliveries, as I like figuring things out rather than “knowing it already”.

3. What do you like most about working at Nordcloud? 

Being able to work with different technologies, customers and colleagues and always learning.

4. What is the most useful thing you have learned at Nordcloud? 

That the ability and eagerness to quickly learn something new is more important than experience.

5. What sets you on fire/ what’s your favourite thing technically with public cloud? 

It is not something I get to do as much of anymore; I like scripting and writing code. It is like solving a puzzle and I find it very stimulating.

6. What do you do outside work?

I play with gadgets a lot; I am interested in IoT home automation and electric skateboards, and I am also into photography.

7. What’s your best memory at working as a Nordcloudian? 

Those moments when my amazing colleagues and I have pulled through a challenging situation.

Sounds good Hans!

If you feel like you could be a good fit for the Swedish team (Stockholm/Malmö/Gothenburg), please have a look at our open vacancies here:

Nordcloud Careers

/Anna


Web application development with .NET Core and Azure DevOps

Categories: Tech

Initial setup of the environment

In my previous post, “Azure DevOps Services – cloud based platform for collaborating on code development from Microsoft”, I presented basic information about Azure DevOps capabilities. Now I will focus on the details and build a full CI/CD pipeline for automated builds and deployments of a sample .NET Core MVC web application to Azure App Service.

I assume that, based on my previous post, a new project has already been created in Azure DevOps and that you have access to it :). Before the fun begins, let’s set up a service connection between our CI/CD server and Azure. Service connections can be scoped to the subscription or resource group level; in our sample case we will scope it to a resource group, so first create a new resource group in the Azure portal.

Ok, the resource group is ready. Now navigate to the settings panel in Azure DevOps, select the “Service connection” sub-category, and add a new “Azure Resource Manager” service connection.

After confirmation, the provided values will be validated. If everything was entered correctly, the newly created service connection will be saved and ready to use in the CI/CD pipeline configuration.

The first important thing is ready; now let’s focus on access to the Azure Repos Git repository. In your Azure DevOps user profile you can easily add an existing SSH public key. To do this, navigate to the “SSH public keys” panel in your profile and add your key.

Next, clone our new, empty repository to your local computer. In the “Repos” section of the Azure DevOps portal you can find the “Clone” button, which shows a direct link for cloning. Copy the link to the clipboard, open your local Git Bash console, and run the “git clone” command.

A warning appears, but that is fine – our repository is new and empty 🙂 Now our environment is ready, so we will create a new project and commit its code to the repo.

 

Creation of a development project and repository initialization

I will use Visual Studio 2017, but we are not limited to a specific version here. When the IDE is ready, go to the project creation dialog, then select “Cloud” and “ASP.NET Core Web Application”.

In the next step, we can specify the framework version and target template. I will use the ASP.NET Core 2.1 framework and the MVC template.

After a few seconds a new sample ASP.NET Core MVC project is generated. For our purposes it is enough to start the fun of CI/CD pipeline configuration on the Azure DevOps side; this application simply serves as a working example, since implementing application logic is not the purpose of this post. When the project is ready, you can run it locally and, after testing, commit the code to our repository.

Build the pipeline (CI) by visual designer

Our source code is in the repository, so let’s create a build pipeline for our solution. Navigate to the “Pipelines” section, then “Builds”, and click “Create new”. In this example we will create our build pipeline from scratch with the visual designer. If the pipeline were committed to the repo as a YAML file (pipeline as code), we could also load the ready pipeline from that file.

The full process contains three steps. First, we must specify where our code is located; the default option is the internal Git repository of our Azure DevOps project, which is correct in our case. In the second step, we can use one of the pre-defined build templates. Our application is based on .NET Core, so I will select the ASP.NET Core template. After that, a pre-defined agent job is created for us. The job contains steps such as restoring NuGet packages, building the solution, running tests (our app has none) and finally publishing the solution as a package that will be used in the release pipeline. Here we can also export the pipeline to a YAML file.
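For readers who prefer the pipeline-as-code route mentioned above, the exported YAML for a build like this would look roughly as follows; the task versions, branch name and paths are illustrative assumptions rather than a copy of the generated file:

```yaml
# azure-pipelines.yml – illustrative sketch only
trigger:
  - master                 # enable CI on the main branch

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Restore NuGet packages'
    inputs:
      command: 'restore'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2
    displayName: 'Build solution'
    inputs:
      command: 'build'
      projects: '**/*.csproj'
      arguments: '--configuration Release'

  - task: DotNetCoreCLI@2
    displayName: 'Publish web app as a package'
    inputs:
      command: 'publish'
      publishWebProjects: true
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'

  - task: PublishBuildArtifacts@1
    displayName: 'Publish build artifacts for the release pipeline'
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```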


On the “Triggers” tab, the “Continuous integration” feature is not activated by default, so let’s enable it manually.

 

After saving all changes in the pipeline, our CI build pipeline setup is ready. The new “CI build” runs immediately after new changes to the application code are committed, and email notifications are sent to all subscribers. In the reason field we can see “Continuous integration”, and the icons in the email and in the build panel in the portal are an easy way to see the status of the build.

Release pipeline (CD)

Now we will take care of the automatic release of our application to Azure App Service. First, we need to create an App Service instance in the resource group that was selected for the service connection.

When the deployment is finished, we can start creating the release pipeline in Azure DevOps. First we have to decide which template from the pre-defined list to use; for our solution, “Azure App Service deployment” is a good choice.

Next we must specify the source of the artifacts. In our case, we will use the output of the build pipeline created in the previous steps.

Stages in the pipeline can also be renamed. This is good practice in large pipelines, as it makes it easier to see what is happening in each stage.

In the general settings for our new stage’s agent job, we must fill in three very important parameters: first, which service connection the release pipeline will use; second, the type of application deployed to the App Service; and third, the name of the existing App Service instance, which we already created at the beginning of this part of the post.

The next configuration step is to select the location of the published application package produced by the build pipeline.

Just as in the CI pipeline, we need to enable the trigger in our newly created CD pipeline. To do that, click the lightning icon on the artifact source and enable the continuous deployment trigger for the build.

That was the last step in the configuration. It looks like we are ready to commit some changes and check the end-to-end functionality 🙂

 

Final check

Let’s test our new full CI/CD pipeline then:

1. Add new code to the application: a small change in the controller and a corresponding change in the view.

2. Commit the changes to the repository.

3. Our CI build pipeline started automatically and completed successfully.

4. I received an email confirming a successful build.

5. The CD release pipeline started automatically after the successful CI build, and the deployment completed.

6. The changes have been deployed to Azure App Service and are visible in the running application.

As we can see, configuring a full CI/CD pipeline for an ASP.NET Core MVC web application is pretty easy. However, you do need some knowledge of how to configure these pieces on the Azure DevOps side.

We hope you enjoyed this post and that this step-by-step guide will be useful in your future work with Azure DevOps!

***

This post is the 2nd part in our Azure DevOps series. Check out the other posts:

#1: Azure DevOps Services – cloud based platform for collaborating on code development from Microsoft

#3: Azure Cosmos DB – Multi model, Globally Distributed Database Service


What is Amazon FreeRTOS and why should you care?

Categories: Tech

At Nordcloud, we’ve been working with AWS IoT since Amazon released it

We’ve enabled some great customer success stories by leveraging the high-level features of AWS IoT. We combine those features with our Serverless development expertise to create awesome cloud applications. Our projects have ranged from simple data collection and device management to large-scale data lakes and advanced edge computing solutions.

 

In this article we’ll take a look at what Amazon FreeRTOS can offer for IoT solutions

First released in November 2017, Amazon FreeRTOS is a microcontroller (MCU) operating system. It’s designed for connecting lightweight microcontroller-based devices to AWS IoT and AWS Greengrass. This means you can have your sensor and actuator devices connect directly to the cloud, without having smart gateways acting as intermediaries.


What are microcontrollers?

If you’re unfamiliar with microcontrollers, you can think of them as a category of smart devices that are too lightweight to run a full Linux operating system. Instead, they run a single application customized for some particular purpose. We usually call these applications firmware. Developers combine various operating system components and application components into a firmware image and “burn” it on the flash memory of the device. The device then keeps performing its task until a new firmware is installed.

Firmware developers have long used the original FreeRTOS operating system to develop applications on various hardware platforms. Amazon has extended FreeRTOS with a number of features to make it easy for applications to connect to AWS IoT and AWS Greengrass, which are Amazon’s solutions for cloud based and edge based IoT. Amazon FreeRTOS currently includes components for basic MQTT communication, Shadow updates, AWS Greengrass endpoint discovery and Over-The-Air (OTA) firmware updates. You get these features out-of-the-box when you build your application on top of Amazon FreeRTOS.

Amazon also runs a FreeRTOS qualification program for hardware partners. Qualified products have certain minimum requirements to ensure that they support Amazon FreeRTOS cloud features properly.

Use cases and scenarios

Why should you use Amazon FreeRTOS instead of Linux? Perhaps your current IoT solution depends on a separate Linux based gateway device, which you could eliminate to cut costs and simplify the solution. If your ARM-based sensor devices already support WiFi and are capable of running Amazon FreeRTOS, they could connect directly to AWS IoT without requiring a separate gateway.

Edge computing scenarios might require a more powerful, Linux based smart gateway that runs AWS Greengrass. In such cases you can use Amazon FreeRTOS to implement additional lightweight devices such as sensors and actuators. These devices will use MQTT to talk to the Greengrass core, which means you don’t need to worry about integrating other communications protocols to your system.

In general, microcontroller based applications have the benefit of being much more simple than Linux based systems. You don’t need to deal with operating system updates, dependency conflicts and other moving parts. Your own firmware code might introduce its own bugs and security issues, but the attack surface is radically smaller than a full operating system installation.

How to try it out

If you are interested in Amazon FreeRTOS, you might want to order one of the many compatible microcontroller boards. They all sell for less than $100 online. Each board comes with its own set of features and a toolchain for building applications. Make sure to pick one that fits your purpose and requirements. In particular, not all of the compatible boards include support for Over-The-Air (OTA) firmware upgrades.

At Nordcloud we have tried out two Amazon-qualified boards at the time of writing:

  • STM32L4 Discovery Kit
  • Espressif ESP-WROVER-KIT (with Over-The-Air update support)

ST provides its own System Workbench (Ac6) IDE for developing applications on STM32 boards. You may need to navigate the websites a bit, but you’ll find versions for Mac, Linux and Windows. Amazon provides instructions for setting everything up and downloading a preconfigured Amazon FreeRTOS distribution suitable for the device. You’ll be able to open it in the IDE, customize it and deploy it.

Espressif provides a command line based toolchain for developing applications on ESP32 boards which works on Mac, Linux and Windows. Amazon provides instructions on how to set it up for Amazon FreeRTOS. Once the basic setup is working and you are able to flash your device, there are more instructions for setting up Over-The-Air updates.

Both of these devices are development boards that will let you get started easily with any USB-equipped computer. For actual IoT deployments you’ll probably want to look into more customized hardware.

Conclusion

We hope you’ll find Amazon FreeRTOS useful in your IoT applications.

If you need any help in planning and implementing your IoT solutions, feel free to contact us.


Cloud Computing News #13: Coming soon: AWS re:Invent 2018 and two new regions

Categories: Blog, News

This week we focus on the latest news from our partner AWS: re:Invent and the launch of two new regions.

AWS re:Invent 2018 – 6 Days to Go

AWS re:Invent 2018 is expecting some 40,000 attendees on 26–30 November 2018 in Las Vegas, USA. This year, re:Invent will feature sessions covering topics also seen in past years, such as databases, analytics & big data, security & compliance, enterprise, machine learning, and compute. You can cross-search these topics in more detail in the session catalog.

We picked some interesting sessions to check out at re:Invent 2018:

  1. Optimizing Costs as You Scale on AWS
  2. The Future of Enterprise Applications is Serverless
  3. Driving DevOps Transformation in Enterprises
  4. How HSBC Uses Serverless to Process Millions of Transactions in Real Time
  5. Build, Train, and Deploy Machine Learning for the Enterprise with Amazon SageMaker
  6. Managing Security of Large IoT Fleets
  7. Meeting Enterprise Security Requirements with AWS Native Security Services

Our team is also attending the event. Follow our postings from Las Vegas, and join the conversations on your expectations from re:Invent 2018.

Here is our AWS guru Miguel’s video wish list:

Also check out Mikael’s (Data Driven Business Lead and AWS guru) wish list:

Coming soon: an AWS region in Milan, Italy, and a new region in Sweden set to launch later this year

Last week AWS announced that they are building a new AWS Region in Milan, Italy and plan to open it up in early 2020. The upcoming Europe Region will have three Availability Zones and will be AWS’s sixth region in Europe, joining the existing regions in France, Germany, Ireland, the UK, and the new region in Sweden that is set to launch later this year.

AWS currently has 57 Availability Zones in 19 geographic regions worldwide, and another 15 Availability Zones across five regions in the works for launch between now and the first half of 2020 (check out the AWS Global Infrastructure page for more info).

Read more on the AWS blog

Nordcloud has been an AWS Premier Consulting Partner since 2014 and an AWS Managed Service Provider since 2015

At Nordcloud we know the AWS cloud, and we can help you take advantage of all the benefits Amazon Web Services has to offer.

How can we help you take your business to the next level? 


10 examples of AI in manufacturing to inspire your smart factory

Categories: Blog, Tech

AI in manufacturing promises massive leaps forward in productivity, environmental friendliness and quality of life, but research shows that while 58 percent of manufacturers are actively interested, only 12 percent are implementing it.

We’ve gathered 10 examples of AI at work in smart factories to bridge the gap between research and implementation, and to give you an idea of some of the ways you might use it in your own manufacturing.

1. Quality checks

Factories creating intricate products like microchips and circuit boards are making use of ‘machine vision’, which equips AI with incredibly high-resolution cameras. The technology is able to pick out minute details and defects far more reliably than the human eye. When integrated with a cloud-based data processing framework, defects are instantly flagged and a response is automatically coordinated.
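As a simplified illustration of the idea (not any specific vendor’s pipeline), a defect check might load a pre-trained image-classification model and flag frames whose defect score crosses a threshold; the model file, image size and threshold below are assumptions for the sketch.

```python
import numpy as np
import tensorflow as tf

# Hypothetical pre-trained defect classifier (file name is an example).
model = tf.keras.models.load_model("defect_classifier.h5")

def check_frame(image_path: str, threshold: float = 0.8) -> bool:
    """Return True if the frame should be flagged as defective."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    batch = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img) / 255.0, axis=0)

    defect_score = float(model.predict(batch)[0][0])  # model outputs a defect probability
    if defect_score >= threshold:
        # In a cloud-based pipeline this is where an alert/response would be triggered.
        print(f"Defect detected in {image_path} (score={defect_score:.2f})")
        return True
    return False
```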

2. Maintenance

Smart factories like those operated by LG are making use of Azure Machine Learning to detect and predict defects in their machinery before issues arise. This allows for predictive maintenance that can cut down on unexpected delays, which can cost tens of thousands of pounds.

3. Faster, more reliable design

AI is being used by companies like Airbus to create thousands of component designs in the time it takes to enter a few numbers into a computer. Using what’s called ‘generative design’, AI giant Autodesk is able to massively reduce the time it takes for manufacturers to test new ideas.

4. Reduced environmental impact

Siemens outfits its gas turbines with hundreds of sensors that feed into an AI-operated data processing system, which adjusts fuel valves in order to keep emissions as low as possible.

5. Harnessing useful data

Hitachi has been paying close attention to the productivity and output of its factories using AI. Previously unused data is continuously gathered and processed by their AI, unlocking insights that were too time-consuming to analyse in the past.

6. Supply chain communication

The aforementioned data can also be used to communicate with the links in the supply chain, keeping delays to a minimum as real-time updates and requests are instantly available. Fero Labs is a frontrunner in predictive communication using machine learning.

7. Cutting waste

The steel industry uses Fero Labs’ technology to cut down on ‘mill scaling’, which results in 3 percent of steel being lost. The AI was able to reduce this loss by 15 percent, saving millions of dollars in the process.

8. Integration

Cloud-based machine learning – like Azure’s Cognitive Services – is allowing manufacturers to streamline communication between their many branches. Data collected on one production line can be interpreted and shared with other branches to automate material provision, maintenance and other previously manual undertakings.

9. Improved customer service

Nokia is leading the charge in implementing AI in customer service, creating what it calls a ‘holistic, real-time view of the customer experience’. This allows them to prioritise issues and identify key customers and pain points.

10. Post-production support

Finnish elevator and escalator manufacturer KONE is using its ‘24/7 Connected Services’ to monitor how its products are used and to provide this information to its clients. This allows them not only to predict defects, but to show clients how their products are being used in practice.

AI in manufacturing is reaching a wider and wider level of adoption, and for good reason. McKinsey predicts that ‘smart factories’ will drive $37 trillion in new value by 2025, giving rise to research projects like Reboot Finland IoT Factory, which involves organisations as diverse as Nokia and GE Healthcare. The technology is here and the research is ready – how will AI revolutionise your industry?

Check out our whitepaper: “Industry 4.0: 7 steps to implement smart manufacturing”

DOWNLOAD THE WHITEPAPER HERE

The uses of AI in future manufacturing technologies are varied. Contact us to discuss the possibilities and see how we can help you take the next steps towards the future.
