Counting Faces with AWS DeepLens and IoT Analytics

CATEGORIES

Tech

It’s pretty easy to detect faces with AWS DeepLens. Amazon provides a pre-trained machine learning model for face detection so you won’t have to deal with any low-level algorithms or training data. You just deploy the ML model and a Lambda function to your DeepLens device and it starts automatically sending data to the cloud.

In the cloud you can leverage AWS IoT and IoT Analytics to collect and process the data received from DeepLens. No programming is needed. All you need to do is orchestrate the services to work together and enter one SQL query that calculates daily averages of the faces seen.

Connecting DeepLens to the cloud

We’ll assume that you have been able to obtain a DeepLens device. They are currently only being sold in the US, so if you live in another country, you may need to get creative.

Before you can do anything with your DeepLens, you must connect it to the Amazon cloud. You can do this by opening the DeepLens service in AWS Console and following the instructions to register your device. We won’t go through the details here since AWS already provides pretty good setup instructions.

Deploying a DeepLens project

To deploy a machine learning application on DeepLens, you need to create a project. Amazon provides a sample project template for face detection. When you create a DeepLens project based on this template, AWS automatically creates a Lambda function and attaches the pre-trained face detection machine learning model to the project.

The default face detection model is based on MXNet. You can also import your own machine learning models developed with TensorFlow, Caffe and other deep learning frameworks. You’ll be able to train these models with the AWS SageMaker service or using a custom solution. For now, you can just stick with the pre-trained model to get your first application running.

Once the project has been created, you can deploy it to your DeepLens device.  DeepLens can run only one project at a time, so your device will be dedicated to running just one machine learning model and Lambda function continuously.

After a successful deployment, you will start receiving AWS IoT MQTT messages from the device. The sample application sends messages continuously, even if no faces are detected.

You probably want to optimize the Lambda function by adding an “if” clause to only send messages when one or more faces are actually detected. Otherwise you’ll be sending empty data every second. This is fairly easy to change in the Python code, so we’ll leave it as an exercise for the reader.
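If you want a head start on that change, a conditional publish might look roughly like this (a minimal sketch with illustrative names – the actual variables and topic handling in Amazon's template differ):

import json
import os

import greengrasssdk

# Illustrative setup: the DeepLens sample template normally wires these up for you.
client = greengrasssdk.client('iot-data')
iot_topic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])

def publish_detections(detections, threshold=0.5):
    # Keep only detections above the confidence threshold.
    faces = [d for d in detections if d.get('prob', 0) >= threshold]
    # Publish only when at least one face was found, instead of sending
    # an empty message every second.
    if faces:
        client.publish(topic=iot_topic,
                       payload=json.dumps({'faces': len(faces)}))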

At this point, take note of your DeepLens infer topic. You can find the topic by going to the DeepLens Console and finding the Project Output view under your Device. Use the Copy button to copy it to your clipboard.

Setting up AWS IoT Analytics

You can now set up AWS IoT Analytics to process your application data. Keep in mind that because DeepLens currently only works in the Northern Virginia region (us-east-1), you also need to create your AWS IoT Analytics resources in this region.

First you’ll need to create a Channel. You can choose any Channel ID and keep most of the settings at their defaults.

When you’re asked for the IoT Core topic filter, paste the topic you copied earlier from the Project Output view. Also, use the Create new IAM role button to automatically create the necessary role for this application.

Next you’ll create a Pipeline. Select the previously created Channel and choose Actions / Create a pipeline from this channel.

AWS Console will ask you to select some Attributes for the pipeline, but you can ignore them for now and leave the Pipeline activities empty. These activities can be used to preprocess messages before they enter the Data Store. For now, we just want the messages to be passed through as they are.

At the end of the pipeline creation, you’ll be asked to create a Data Store to use as the pipeline’s output. Go ahead and create it with the default settings and choose any name for it.

Once the Pipeline and the Data Store have been created, you will have a fully functional AWS IoT Analytics application. The Channel will start receiving incoming DeepLens messages from the IoT topic and sending them through the Pipeline to the Data Store.

The Data Store is basically a database that you can query using SQL. We will get back to that in a moment.
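If you prefer scripting this setup over clicking through the console, the same Channel, Pipeline and Data Store can also be created with boto3. This is a minimal sketch under the assumption that a pass-through pipeline is enough; the resource names are illustrative:

import boto3

iota = boto3.client('iotanalytics', region_name='us-east-1')

# A Channel to receive the DeepLens messages and a Data Store for the results.
iota.create_channel(channelName='deeplens_faces_channel')
iota.create_datastore(datastoreName='deeplensfaces')

# A pass-through pipeline: messages flow from the channel to the data store unchanged.
iota.create_pipeline(
    pipelineName='deeplens_faces_pipeline',
    pipelineActivities=[
        {'channel': {'name': 'from_channel',
                     'channelName': 'deeplens_faces_channel',
                     'next': 'to_datastore'}},
        {'datastore': {'name': 'to_datastore',
                       'datastoreName': 'deeplensfaces'}},
    ],
)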

Reviewing the auto-created AWS IoT Rule

At this point it’s a good idea to take a look at the AWS IoT Rule that AWS IoT Analytics created automatically for the Channel you created.

You will find IoT Rules in the AWS IoT Core Console under the Act tab. The rule will have one automatically created IoT Action, which forwards all messages to the IoT Analytics Channel you created.
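For reference, that auto-created rule is roughly equivalent to the following boto3 call. This is a hedged sketch only – the rule name, topic and role ARN below are placeholders, and in practice the console generates the real values for you:

import boto3

iot = boto3.client('iot', region_name='us-east-1')

iot.create_topic_rule(
    ruleName='deeplens_to_iotanalytics',
    topicRulePayload={
        # Forward every message from the DeepLens infer topic (placeholder topic).
        'sql': "SELECT * FROM '$aws/things/deeplens_xxxx/infer'",
        'actions': [{
            'iotAnalytics': {
                'channelName': 'deeplens_faces_channel',
                'roleArn': 'arn:aws:iam::123456789012:role/iot-analytics-role',
            },
        }],
    },
)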

Querying data with AWS IoT Analytics

You can now proceed to create a Data Set in IoT Analytics. The Data Set will execute a SQL query over the data in the Data Store you created earlier.

Find your way to the Analyze / Data sets section in the IoT Analytics Console. Select Create and then Create SQL.

The console will ask you to enter an ID for the Data Set. You’ll also need to select the Data Store you created earlier to use as the data source.

The console will then ask you to enter this SQL query:

SELECT DATE_TRUNC('day', __dt) AS Day, COUNT(*) AS Faces
FROM deeplensfaces
GROUP BY DATE_TRUNC('day', __dt)
ORDER BY DATE_TRUNC('day', __dt) DESC

Note that "deeplensfaces" is the name of the Data Store you created earlier and selected as the data source. Make sure you use the same name consistently. Our screenshots may have different identifiers.

The Data selection window can be left as None.

Use the Frequency setting to set up a schedule for your SQL query. Select Daily so that the SQL query runs automatically every day and replaces the previous results in the Data Set.

Finally, use Actions / Run Now to execute the query. You will see a preview of the current face count results, aggregated as daily total sums. These results will be automatically updated every day according to the schedule you defined.
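The same Data Set can also be created and scheduled programmatically. Here is a minimal boto3 sketch that assumes the Data Store is named deeplensfaces and reuses that name for the Data Set, matching the article's example; the cron expression is illustrative:

import boto3

iota = boto3.client('iotanalytics', region_name='us-east-1')

iota.create_dataset(
    datasetName='deeplensfaces',
    actions=[{
        'actionName': 'daily_face_counts',
        'queryAction': {
            'sqlQuery': (
                "SELECT DATE_TRUNC('day', __dt) AS Day, COUNT(*) AS Faces "
                "FROM deeplensfaces "
                "GROUP BY DATE_TRUNC('day', __dt) "
                "ORDER BY DATE_TRUNC('day', __dt) DESC"
            ),
        },
    }],
    # Run the query once a day (cron expression in UTC).
    triggers=[{'schedule': {'expression': 'cron(0 3 * * ? *)'}}],
)

# Equivalent of Actions / Run Now in the console.
iota.create_dataset_content(datasetName='deeplensfaces')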

Accessing the Data Set from applications

Congratulations! You now have IoT Analytics all set up and it will automatically refresh the face counts every day.

To access the face counts from your own applications, you can write a Lambda function and use the AWS SDK to retrieve the current Data Set content. This example uses Node.js:

const AWS = require('aws-sdk')
const iotanalytics = new AWS.IoTAnalytics()

iotanalytics.getDatasetContent({
  datasetName: 'deeplensfaces',
}).promise().then(function (response) {
  // Each entry contains a pre-signed URI pointing to the CSV results
  // Download response.entries[0].dataURI
})

Each entry in the response contains a signed dataURI which points to the S3 object with the actual results in CSV format. Once you download the content, you can do whatever you wish with the CSV data.
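If you would rather stay in Python, a roughly equivalent sketch with boto3 could look like this. The dataset name follows the Node.js snippet above, and the Day and Faces columns come from the SQL query aliases:

import csv
import io
import urllib.request

import boto3

iota = boto3.client('iotanalytics', region_name='us-east-1')

content = iota.get_dataset_content(datasetName='deeplensfaces')
data_uri = content['entries'][0]['dataURI']  # pre-signed S3 URL to the CSV results

# Download the CSV and parse it into dictionaries keyed by column name.
with urllib.request.urlopen(data_uri) as response:
    rows = list(csv.DictReader(io.TextIOWrapper(response, encoding='utf-8')))

for row in rows:
    print(row['Day'], row['Faces'])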

Conclusion

This has been a brief look at how to use DeepLens and IoT Analytics to count the number of faces detected by the DeepLens camera.

There’s still room for improvement. Amazon’s default face detection model detects faces in every video frame, but it doesn’t keep track of whether the same face has already been seen in previous frames.

It gets a little more complicated to enhance the system to detect individual persons, or to keep track of faces entering and exiting frames. We’ll leave all that as an exercise for now.

If you’d like some help in developing machine learning applications, please feel free to contact us.

Nordcloud is proud to support the AWS Europe (Stockholm) Region Launch

CATEGORIES

Insights, News

Nordcloud started serving customers in the Nordics in 2011, and in 2013 we were awarded Amazon Web Services (AWS) Premier Partner status in the AWS Partner Network (APN), becoming the first company based in Northern Europe to gain this accreditation. Since then we’ve been looking forward to an AWS Region in the Nordics to provide our customers with lower latency, higher availability and more bandwidth.

Now it has happened.

The AWS Europe (Stockholm) Region enables Swedish and Nordic customers to benefit from decreased latency and local data sovereignty, to move the rest of their applications to AWS, and to enjoy cost and agility advantages across their entire workload.

Nordcloud is happy to announce a special offer to AWS Europe (Stockholm) Region customers to facilitate onboarding to this new Region. The offer includes a number of Cloud Onboarding and Cloud Governance workshops targeted at helping customers get full benefits from AWS Region Stockholm and the public cloud. Read more about our special offer here.

As an APN AWS Premier Consulting Partner and AWS Managed Service Provider, Nordcloud brings together the power of more than 100 AWS certified architects and engineers and experience of more than 500 successfully completed projects.

“Our customers in the Nordics have benefited from AWS products for years. The new AWS Region Stockholm announcement is not only about launch momentum, but really about on-going customer interest, migration and business benefits,” says Niko Nihtilä, Nordcloud Alliance Lead for AWS.

Here are the main benefits of the new AWS Region Stockholm:

Latency

Latency plays a huge part in customers’ perception of a high-quality experience and has been shown to noticeably affect user behaviour, with lower latency generating more engagement.

Data sovereignty

AWS Region in Stockholm gives customers the ability to store their data in Sweden with the assurance that their content will not move unless they move it.

Cost

Choosing an AWS Region is one of the first decisions you have to make when you set up your AWS components. Proximity to AWS customers or to their end users is one of the first criteria that impacts cost.

Agility

If you distribute your instances across multiple Availability Zones within the same region and one instance fails, you can design your application so that an instance in another Availability Zone handles requests instead – a sort of emergency failover without using an actual load balancer.

If you’d like to use the Region in Sweden for your workloads, we’d be happy to hear from you.

Public Cloud as key enabler for innovation

CATEGORIES

News

  • Already today, 19 percent of German companies are in active public cloud operation, 55 percent are preparing their deployment at full speed
  • Over two-thirds of companies plan to spend at least 10 to 20 percent of their annual infrastructure budget on the public cloud by 2020
  • The public cloud market leaders AWS and Microsoft Azure share the majority of the current public cloud budget in German companies with 60 percent
  • 79 percent of companies make use of the services and know-how of external partners both in consulting and in operations

Nordcloud and the external IT research company Crisp Research have conducted a recent study to examine the strategies and mindset for public clouds in the German midmarket and to analyze the demands on managed public cloud providers. German companies are increasingly focusing on the public cloud and are dependent on the external expertise of IT service and consulting providers.

The study, which surveyed a total of 160 IT, digitization and business decision-makers, shows that the majority of German companies are already involved with the public cloud today. 19 percent are already actively using the public cloud in the form of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), while a full 55 percent are in the implementation or active preparation phase. Half of the companies prefer a phased relocation of their application and infrastructure portfolios, 33 percent design the cloud migration “all in”, i.e. the entire application portfolio is planned as a large-scale transformation project.

 

Trend towards dual and multi vendor strategy

Since the functional scope, complexity and speed of innovation of public cloud platforms are now very high, more than half (53 percent) of the companies surveyed specifically rely on two strategic cloud providers. In addition, a total of 18 percent stated that they use more than two providers. As reasons for a dual or multi vendor strategy, 34 percent cited minimizing the risk of vendor lock-in, while 41 percent also valued the global coverage of data center locations as an enabler for international rollouts of their new digital services.

 

Investment in the public cloud on the rise

In addition, more than two-thirds (72 percent) of respondents said they would invest at least 10 to 20 percent of their infrastructure budget for IaaS and PaaS in the public cloud by 2020. Around a quarter (26 percent) even plan to shift between 20 and 50 percent of their infrastructure budget to public clouds. On average, 27 percent of the cloud budget is allocated to pure cloud operation. At 60 percent, the public cloud majors AWS and Microsoft Azure share the majority of the current public cloud budget for IaaS and PaaS in companies, although providers such as Google Cloud Platform, IBM and Alibaba also play an important role in German-speaking countries.

 

Managed Public Cloud Providers in Demand as Experts

Since both the transformation into the public cloud and the operation of these systems are in part highly complex, the vast majority of the companies surveyed (79 percent) rely on the support of external service providers. 52 percent said they worked with one, 19 percent with two managed public cloud providers. The external expertise is used in almost all areas, from the cloud strategy (64 percent), monitoring and management (54 percent) and the development of the API strategy (51 percent) to the development of DevOps operating concepts (51 percent). Managed Public Cloud Providers thus have a high strategic relevance.

“In order to remain competitive in times of digital transformation, companies should increasingly rely on cloud computing technologies,” comments Uli Baur, Country Manager Nordcloud DACH. “Our study results make it clear that more and more German companies are aware of this necessary step and are therefore already preparing the implementation and planning to use more resources for cloud operation and all associated processes in the future. Through partners like Nordcloud, the numerous changes can also be mastered by companies that themselves lack the necessary specialist staff and know-how.”

Download the Nordcloud study in German here

Public Cloud as Key Enabler for Innovation

CATEGORIES

Blog, Insights, News

Nordcloud and the external IT research company Crisp Research have conducted a recent study to examine the strategies and mindset for public clouds in the German midmarket and to analyze the demands on managed public cloud providers. German companies are increasingly focusing on the public cloud and are dependent on the external expertise of IT service and consulting providers.

The study, which surveyed a total of 160 IT, digitization and business decision-makers, shows that the majority of German companies are already involved with the public cloud today. 19 percent are already actively using the public cloud in the form of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), while a full 55 percent are in the implementation or active preparation phase. Half of the companies prefer a phased relocation of their application and infrastructure portfolios, while 33 percent design the cloud migration “all in”, i.e. the entire application portfolio is planned as one large-scale transformation project.

Trend towards dual and multi vendor strategy

Since the functional scope, complexity and speed of innovation of public cloud platforms are now very high, more than half (53 percent) of the companies surveyed specifically rely on two strategic cloud providers. In addition, a total of 18 percent stated that they use more than two providers. As reasons for a dual or multi vendor strategy, 34 percent cited minimizing the risk of vendor lock-in, while 41 percent also valued the global coverage of data center locations as an enabler for international rollouts of their new digital services.

Investment in the public cloud on the rise

More than two-thirds (72 percent) of respondents also said they would invest at least 10 to 20 percent of their infrastructure budget for IaaS and PaaS in the public cloud by 2020. Around a quarter (26 percent) even plan to shift between 20 and 50 percent of their infrastructure budget to public clouds. On average, 27 percent of the cloud budget is allocated to pure cloud operation. At 60 percent, the public cloud majors AWS and Microsoft Azure share the majority of the current public cloud budget for IaaS and PaaS in companies, although providers such as Google Cloud Platform, IBM and Alibaba also play an important role in German-speaking countries.

Managed Public Cloud Providers in demand as experts

Since both the transformation into the public cloud and the operation of these systems are in part highly complex, the vast majority of the companies surveyed (79 percent) rely on the support of external service providers. 52 percent said they work with one managed public cloud provider, 19 percent with two. The external expertise is used in almost all areas, from cloud strategy (64 percent), monitoring and management (54 percent) and the development of the API strategy (51 percent) to the development of DevOps operating concepts (51 percent). Managed public cloud providers thus have a high strategic relevance.

“In order to remain competitive in times of digital transformation, companies should increasingly rely on cloud computing technologies,” comments Uli Baur, Country Manager Nordcloud DACH. “Our study results make it clear that more and more German companies are aware of this necessary step and are therefore already preparing the implementation and planning to use more resources for cloud operation and all associated processes in the future. Through partners like Nordcloud, the numerous changes can also be mastered by companies that themselves lack the necessary specialist staff and know-how.”

To read the new Nordcloud study in full, please click here

 

Integrating quickly to Nordcloud culture

CATEGORIES

Life at Nordcloud

Nordcloud

Getting to know new people

It’s always interesting to get to know new people in the 8+ Nordcloudian offices and to see some of the cultural differences (and similarities!).

Our growth story is pretty inspiring, and many people often ask me how different the cultures and teams are from each other in different countries. Our Cloud Architect from the Amsterdam office, Martijn Scholten, gave me some answers (it’s really not that different!).

 

Here is his story:

  1. Where are you from and how did you end up at Nordcloud? I grew up in the east of Netherlands, next to the border of Germany in a small town called Denekamp. In 2014 I moved to Amersfoort for a job where I started as a Software Engineer and gradually found my passion in AWS. I joined Nordcloud in August 2018, as I was actively looking for a next challenge where I could work with a big team of cloud architects.
  2. What is your core competence? That would be AWS because I’m a double pro-certified AWS Cloud Architect and I’m also a skilled programmer in Java, Scala, Python, Bash, Javascript (React). By combining these two disciplines I’m able to design and implement good infrastructural designs using Infrastructure-as-Code.
  3. What do you like most about working at Nordcloud? Flexibility (like when and where to work) and open & flat organisation with great colleagues. Also the fact that I get the possibility to travel and to work with the latest technologies makes it an awesome company to work for.
  4. What is the most useful thing you have learned at Nordcloud? I think so far it’s the business side of things like proposals and I’ve realised how well our colleagues work together even though we are in different countries.
  5. What sets you on fire/ What’s your favourite thing technically with public cloud? Helping people out is something I really enjoy; this could be a customer or a co-worker with a difficult technical question. The favourite thing about public cloud is that it moves so fast that I keep learning new things everyday.
  6. What do you do outside work? One of my hobbies is Brazilian jiu-jitsu as I’ve practised martial arts for 13 years from Taekwondo, Kickboxing to MMA. I also like travelling to distant countries to experience different cultures. The highlights so far have been Vietnam, Indonesia and Morocco. I also enjoy watching movies, playing video games, hanging out with friends and family of course, and every now and then drinking a beer.
  7. Best memory of working as a Nordcloudian? Friday afternoon beers together. Those are great moments to wind down from a busy week and share thoughts together.

I hope that gave a good overall picture of a Nordcloudian who’s relatively new to the company and has already integrated into the team with great responsibilities, freedom and trust and, of course, keeps learning new things every day. If a team like this sounds like a match to you, don’t hesitate to get in touch!

/Anna

We are hiring a Cloud Architect for our Dutch team, so we’d love to hear from you. Click the button for the job description and guidelines on how to apply!

Apply now!

 

Want to become a part of the team of cloud superheroes at Nordcloud Germany?

Join us for a nice evening on the 11th December at Microsoft’s office in Munich

The European leader in public cloud solutions and services invites you to a get-together on the 11th of December, hosted in partnership with Microsoft. During the evening our experts will share their stories on what it’s like to work for Europe’s leader in public cloud infrastructure solutions and cloud-native application services. What does their normal day look like? What kind of projects and amazing technologies do they work with?

When, where and how?

You can stop by whenever you want after 5:30 pm, but we would recommend arriving before 6 pm, as our CTO Pasi Katajainen will then introduce you to Nordcloud as a company and explain the pivotal role our Cloud Architects play in helping our customers achieve success.

DATE
11th December 2018

CITY
Munich

VENUE
Microsoft Office

ADDRESS
Walter-Gropius-Straße 5, 80807 Munich

Sign up for the event here: https://www2.nordcloud.com/nordcloud-ms-cloud-insights-bootcamp

AWS re:Invent 2018: our recap

CATEGORIES

Blog, Insights

Here are some updates and experiences from AWS re:Invent 2018 Las Vegas

What a week it was! Those were not well-rested people I saw on the flight back home to Finland.

Mainly three topics were covered last week by the Nordcloud team: hosting clients, tuning in on new launches and updates, and personal skills development.

A plethora of new things were launched during the week – one week didn’t seem enough to fit everything in. A comprehensive list of all announcements can be found online, so in this post I’ll focus on sharing our experiences and highlights from the event.

AWS has both the biggest market share and growth numbers*

It is safe to say that AWS is the biggest player in the public cloud market (with 51.8% market share) and has the biggest growth in absolute numbers ($2.1B). More than half of all Windows workloads in the public cloud (57.7%) run on AWS. There are a total of 86 premier tier partners globally, and Nordcloud is one of them. The number of premier tier partners keeps growing because public cloud usage is constantly increasing.

AWS estimates that $2T is spent annually on datacenter maintenance, i.e. keeping the lights on. This explains why Jeff Bezos is interested enough to have migrations reported to him on a monthly basis. From a development point of view, the future is serverless.

 

Our top 3 picks of new service launches

New service launches were announced during re:Invent. Our CTO Ilja Summala picked his top 3 announcements: Lambda Layers, AWS Security Hub and AWS Outposts.

Lambda Layers allows code to be packaged and deployed across multiple functions – this helps code reuse and service management. Security Hub will enable large organisations to centralise their control in multi-account environments. The AWS Outposts launch means that AWS is entering the hybrid competition – you can have AWS services in your own on-premises datacenter. This will open new opportunities for clients who don’t yet want to migrate to the public cloud.

Being a premier partner is becoming even more premium

The requirements for different partner tiers are going to be changed somewhat during the first half of 2019, and they are going to be a bit harder to achieve. As the only premier partner in the Nordics, we are going to continue to serve our AWS customers in the Nordics and all across Europe. Nordcloud is also an AWS Competency partner in DevOps. This program expanded with Containers Competency this year.

Stay tuned for more news on AWS – the new AWS Stockholm Region launches this month.

*Andy Jassy ‘Keynote’ AWS re:Invent; Las Vegas, USA: 26-30 November 2018

 

At Nordcloud we know the AWS cloud, and we can help you take advantages of all the benefits Amazon Web Services has to offer.

 

How can we take your business to the next level with AWS?

 

Looking ahead: what’s next for AI in manufacturing?

CATEGORIES

Blog, Tech

AI and manufacturing have been on an exciting journey together. It’s a combination that is fast changing the world of manufacturing: 92 percent of senior manufacturing executives believe that the ‘Smart Factory’ will empower their staff to work smarter and increase productivity.

How does AI benefit manufacturers?

Some of the biggest companies are already adopting AI. Why? A big reason is increased uptime and productivity through predictive maintenance. AI enables industrial technology to track its own performance and spot trends and looming problems that humans might miss. This gives the operator a better chance of planning critical downtime and avoiding surprises.

But what’s the next big thing? Let’s look to the immediate future, to what is on the horizon and a very real possibility for manufacturers.

Digital twinning

According to Deloitte, ‘a digital twin is an evolving digital profile of the historical and current behaviour of a physical object or process that helps optimize business performance.’

Digital twinning will be effective in the manufacturing industry because it could replace computer-aided design (CAD). CAD is highly effective in computer-simulated environments and has shown some success in modelling complex environments, yet its limitations lie in the interactions between components and the full lifecycle processes.

The power of a digital twin is in its ability to provide a real-time link between the digital and physical world of any given product or system. A digital twin is capable of providing more realistic measurements of unpredictability. The first steps in this direction have been taken by cloud-based building information modelling (BIM), within the AEC industry. It enables a manufacturer to make huge design and process changes ahead of real-life occurrences.

Predictive maintenance

Take a wind farm. You’re manufacturing the turbines that will stand in a wind farm for hundreds of years. With the help of a CAD design you might be able to ‘guesstimate’ the long-term wear, tear and stress that those turbines might encounter in different weather conditions. But a digital twin will use predictive machine learning to show the likely effects of varying environmental events, and what impact that will have on the machinery.

This will then affect future designs and real-time manufacturing changes. The really futuristic aspect will be the incredible increases in accuracy as the AI is ‘trained.’

Smart factories

An example of a digital twin in a smart factory setting would be to create a virtual replica of what is happening on the factory floor in (almost) real time. Using thousands or even millions of sensors to capture real-time performance data, artificial intelligence can assess (over a period of time) the performance of a process, machine or even a person. Cloud-based AI, such as the technologies offered by Microsoft Azure, has the flexibility and capacity to process this volume of data.

This would enable the user to uncover unacceptable trends in performance. Decision-making around changes and training will be based on data, not gut feeling. This will enhance productivity and profitability.

The uses of AI in future manufacturing technologies are varied. Contact us to discuss the possibilities and see how we can help you take the next steps towards the future.

Lambda layers for Python runtime

CATEGORIES

Tech

AWS Lambda

AWS Lambda is one of the most popular serverless compute services in the public cloud, released in November 2014. It runs your code in response to events like DynamoDB, SNS or HTTP triggers without provisioning or managing any infrastructure. Lambda takes care of most of the things required to run your code and provides high availability. It allows you to execute up to 1000 parallel functions at once! Using AWS Lambda you can build applications like:

  • Web APIs
  • Data processing pipelines
  • IoT applications
  • Mobile backends
  • and many many more…

Creating an AWS Lambda function is super simple: you just need to create a zip file with your code and dependencies, then upload it to an S3 bucket. There are also frameworks like Serverless or SAM that handle deploying AWS Lambda for you, so you don’t have to manually create and upload the zip file.

There is, however, one problem.

You have created a simple function which depends on a large number of other packages. AWS Lambda requires you to zip everything together. As a result, you have to upload a lot of code that never changes, which increases your deployment time, takes up space, and costs more.

AWS Lambda Layers

Fast forward four years to re:Invent 2018, where AWS Lambda Layers were released. This feature allows you to centrally store and manage data that is shared across different functions in a single AWS account, or even across multiple accounts! It solves a number of issues:

  • You do not have to upload dependencies on every change of your code. Just create an additional layer with all required packages.
  • You can create custom runtime that supports any programming language.
  • Adjust the default runtime by adding data required by your employees. For example, say there is a team of Cloud Architects that builds CloudFormation templates using the troposphere library. However, they are not developers and do not know how to manage Python dependencies… With an AWS Lambda layer you can create a custom environment with all the required data so they can code in the AWS console.

But how does the layer work?

When you invoke your function, all the AWS Lambda layers are mounted to the /opt directory in the Lambda container. You can add up to 5 different layers. The order is really important, because layers with a higher order can override files from previously mounted layers. When using the Python runtime you do not need to do any additional operations in your code, just import the library in the standard way. But how will my Python code know where to find my data?

That’s super simple: /opt/bin is added to the $PATH environment variable. To check this, let’s create a very simple Python function:


import os
def lambda_handler(event, context):
    path = os.popen("echo $PATH").read()
    return {'path': path}

The response is:

 
{
    "path": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\n"
}

 

Existing pre-defined layers

AWS Layers were released together with a single, publicly accessible layer for data processing containing two libraries: NumPy and SciPy. Once you have created your Lambda you can click `Add a layer` in the Lambda configuration. You should be able to see and select the AWSLambda-Python36-SciPy1x layer. Once you have added the layer you can use these libraries in your code. Let’s do a simple test:


import numpy as np
import json


def lambda_handler(event, context):
    matrix = np.random.randint(6, size=(2, 2))
    
    return {
        'matrix': json.dumps(matrix.tolist())
    }

The function response is:

{
  "matrix": "[[2, 1], [4, 2]]"
}

 

As you can see it works without any effort.

What’s inside?

Now let’s check what is in the pre-defined layer. To check the mounted layer content I prepared a simple script:


import os
def lambda_handler(event, context):
    directories = os.popen("find /opt/* -type d -maxdepth 4").read().split("\n")
    return {
        'directories': directories
    }

In the function response you will receive the list of directories that exist in the /opt directory:


{
  "directories": [
    "/opt/python",
    "/opt/python/lib",
    "/opt/python/lib/python3.6",
    "/opt/python/lib/python3.6/site-packages",
    "/opt/python/lib/python3.6/site-packages/numpy",
    "/opt/python/lib/python3.6/site-packages/numpy-1.15.4.dist-info",
    "/opt/python/lib/python3.6/site-packages/scipy",
    "/opt/python/lib/python3.6/site-packages/scipy-1.1.0.dist-info"
  ]
}

Ok, so it contains python dependencies installed in the standard way and nothing else. Our custom layer should have a similar structure.

Create Your own layer!

Our use case is to create an environment for our Cloud Architects to easily build CloudFormation templates using the troposphere and awacs libraries. The steps comprise:

Create a virtual env and install dependencies

To manage the python dependencies we will use pipenv.

Let’s create a new virtual environment and install there all required libraries:


pipenv --python 3.6
pipenv shell
pipenv install troposphere
pipenv install awacs

It should result in the following Pipfile:


[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
troposphere = "*"
awacs = "*"
[dev-packages]
[requires]
python_version = "3.6"

Build a deployment package

All the dependent packages have been installed in the $VIRTUAL_ENV directory created by pipenv. You can check what is in this directory using ls command:

 
ls $VIRTUAL_ENV

Now let’s prepare a simple script that creates a zipped deployment package:


PY_DIR='build/python/lib/python3.6/site-packages'
mkdir -p $PY_DIR                                              #Create temporary build directory
pipenv lock -r > requirements.txt                             #Generate requirements file
pip install -r requirements.txt --no-deps -t $PY_DIR     #Install packages into the target directory
cd build
zip -r ../tropo_layer.zip .                                  #Zip files
cd ..
rm -r build                                                   #Remove temporary directory

When you execute this script it will create a zipped package that you can upload to AWS Layer.

 

Create a layer and a test AWS function

You can create a custom layer and an AWS Lambda function by clicking around in the AWS console. However, real experts use the CLI (Lambda Layers is a new feature, so you have to update your awscli to the latest version).

To publish a new Lambda Layer you can use the following command (my zip file is named tropo_layer.zip):


aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip

In the response, you should receive the layer ARN and some other data:


{
    "Content": {
        "CodeSize": 14909144,
        "CodeSha256": "qUz...",
        "Location": "https://awslambda-eu-cent-1-layers.s3.eu-central-1.amazonaws.com/snapshots..."
    },
    "LayerVersionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1",
    "Version": 1,
    "Description": "",
    "CreatedDate": "2018-12-01T22:07:32.626+0000",
    "LayerArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test"
}

The next step is to create the AWS Lambda function. Your Lambda will be a very simple script that generates a CloudFormation template to create an EC2 instance:

 
from troposphere import Ref, Template
import troposphere.ec2 as ec2
import json
def lambda_handler(event, context):
    t = Template()
    instance = ec2.Instance("myinstance")
    instance.ImageId = "ami-951945d0"
    instance.InstanceType = "t1.micro"
    t.add_resource(instance)
    return {"data": json.loads(t.to_json())}

Now we have to create a zipped package that contains only our function:


zip tropo_lambda.zip handler.py

And create a new Lambda using this file (I used an IAM role that already exists in my account; if you do not have a role you can use, you have to create one before creating the Lambda):


aws lambda create-function --function-name tropo_function_test --runtime python3.6 \
--handler handler.lambda_handler \
--role arn:aws:iam::xxxxxxxxxxxx:role/service-role/some-lambda-role \
--zip-file fileb://tropo_lambda.zip

In the response, you should get the newly created lambda details:


{
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "l...",
    "FunctionName": "tropo_function_test",
    "CodeSize": 356,
    "RevisionId": "...",
    "MemorySize": 128,
    "FunctionArn": "arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:function:tropo_function_test",
    "Version": "$LATEST",
    "Role": "arn:aws:iam::xxxxxxxxx:role/service-role/some-lambda-role",
    "Timeout": 3,
    "LastModified": "2018-12-01T22:22:43.665+0000",
    "Handler": "handler.lambda_handler",
    "Runtime": "python3.6",
    "Description": ""
}

Now let’s try to invoke our function:


aws lambda invoke --function-name tropo_function_test --payload '{}' output
cat output
{"errorMessage": "Unable to import module 'handler'"}

Oh no… It doesn’t work. In CloudWatch you can find a detailed log message: `Unable to import module 'handler': No module named 'troposphere'`. This error is obvious: the default python3.6 runtime does not contain the troposphere library. Now let’s add the layer we created in the previous step to our function:


aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:1

When you invoke lambda again you should get the correct response:


{
  "data": {
    "Resources": {
      "myinstance": {
        "Properties": {
          "ImageId": "ami-951945d0",
          "InstanceType": "t1.micro"
        },
        "Type": "AWS::EC2::Instance"
      }
    }
  }
}

Add a local library to your layer

We already know how to create a custom layer with python dependencies, but what if we want to include our local code? The simplest solution is to manually copy your local files to the /python/lib/python3.6/site-packages directory.

First, let’s prepare the test module that will be pushed to the layer:


$ find local_module
local_module
local_module/__init__.py
local_module/echo.py
$ cat local_module/echo.py
def echo_hello():
    return "hello world!"

To manually copy your local module to the correct path, you just need to add the following line to the previously used script (before zipping the package):


cp -r local_module 'build/python/lib/python3.6/site-packages'

This works; however, we strongly advise transforming your local library into a pip module and installing it in the standard way, as sketched below.
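For example, a minimal setup.py along these lines (illustrative, not from the original post) would let you install local_module with pip like any other dependency, e.g. with pip install . -t $PY_DIR in the build script:

# setup.py – minimal packaging sketch for local_module
from setuptools import setup, find_packages

setup(
    name='local-module',
    version='0.1.0',
    packages=find_packages(include=['local_module', 'local_module.*']),
)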

Update Lambda layer

To update a Lambda layer you have to run the same command you used to create the layer:


aws lambda publish-layer-version --layer-name tropo_test --zip-file fileb://tropo_layer.zip

The request should return LayerVersionArn with incremented version number (arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2 in my case).

Now update lambda configuration with the new layer version:

 
aws lambda update-function-configuration --function-name tropo_function_test --layers arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:layer:tropo_test:2

Now you should be able to import local_module in your code and use the echo_hello function.

 

Serverless framework Layers support

Serverless is a framework that helps you build applications based on the AWS Lambda service. It already supports deploying and using Lambda Layers. The configuration is really simple – in the serverless.yml file, you provide the path to the layer location on your disk (it has to be a path to a directory – you cannot use a zipped package; zipping is done automatically). You can either create a separate serverless.yml configuration for deploying the Lambda Layer or deploy it together with your application.

We’ll show the second approach. However, if you want to benefit from all the advantages of Lambda Layers, you should deploy it separately.


service: tropoLayer
package:
  individually: true
provider:
  name: aws
  runtime: python3.6
layers:
  tropoLayer:
    path: build             # Build directory contains all python dependencies
    compatibleRuntimes:     # supported runtime
      - python3.6
functions:
  tropo_test:
    handler: handler.lambda_handler
    package:
      exclude:
       - node_modules/**
       - build/**
    layers:
      - {Ref: TropoLayerLambdaLayer } # Ref to the created layer. You have to append the 'LambdaLayer' string to the end of the layer name to make it work

I used the following script to create a build directory with all the python dependencies:


PY_DIR='build/python/lib/python3.6/site-packages'
mkdir -p $PY_DIR                                              #Create temporary build directory
pipenv lock -r > requirements.txt                             #Generate requirements file
pip install -r requirements.txt -t $PY_DIR                   #Install packages into the target directory

This example individually packs a Lambda Layer with dependencies and your lambda handler. The funny thing is that you have to convert your lambda layer name to be TitleCased and add the `LambdaLayer` suffix if you want to refer to that resource.

Deploy your lambda together with the layer, and test if it works:


sls deploy -v --region eu-central-1
sls invoke -f tropo_test --region eu-central-1

Summary

It was a lot of fun to test Lambda Layers and investigate how it technically works. We will surely use it in our projects.

In my opinion, AWS Lambda Layers is a really great feature that solves a lot of common issues in the serverless world. Of course, it is not suitable for every use case. If you have a simple app that does not require a huge number of dependencies, it’s easier to keep everything in a single zip file, because you do not need to manage additional layers.

Read more on AWS Lambda in our blog!

Notes from AWS re:Invent 2018 – Lambda@edge optimisation

Running AWS Lambda@Edge code in edge locations

Amazon SQS as a Lambda event source

Four compelling reasons to use Azure Kubernetes Service (AKS)

CATEGORIES

Blog, Tech

Management overhead, inflexibility and lack of automation all stifle application development. Containers help by moving applications and their dependencies between environments, and Kubernetes orchestrates containerisation effectively.

But there’s another piece to the puzzle.

Azure Kubernetes Service (AKS) is the best way to simplify and streamline Kubernetes so you can scale your app development with real confidence and agility.

Read on to discover more key benefits and why AKS is the advanced technology tool you need to supercharge your IT department, drive business growth and give your company a competitive edge over its rivals.

Why worry about the complexity of container orchestration, when you can use AKS?

1. Accelerated app development

75 percent of developers’ time is typically spent on bug-fixing. AKS removes much of the time-sink (and headache) of debugging by handling the following aspects of your development infrastructure:

  • Auto upgrades
  • Patching
  • Self-healing

Through AKS, container orchestration is simplified, saving you time and enabling your developers to remain more productive. It’s a way to breathe life into your application development by combatting one of developers’ biggest time-sinks.

2. Supports agile project management

As this PwC report shows, agile projects yield strong results and are typically 28 percent more successful than traditional projects.

This is another key benefit of AKS – it supports agile development practices such as continuous integration (CI), continuous delivery/continuous deployment (CD) and DevOps. This is done through integration with Azure DevOps, ACR, Azure Active Directory and Monitoring. An example of this is a developer who puts a container into a repository, moves the builds into Azure Container Registry (ACR) and then uses AKS to launch the workload.

3. Security and compliance done right

Cyber security must be a priority for all businesses moving forward. Last year, almost half of UK businesses suffered a cyber-attack and, according to IBM’s study, 60 percent of data breaches were caused by insiders. The threat is large, and it often comes from within.

AKS protects your business by enabling administrators to tailor access using Azure Active Directory (AD) user and group identities. When people only have the access they need, the threat from internal teams is greatly reduced.

You can also rest assured that AKS is totally compliant. AKS meets the regulatory requirements of System and Organisation Controls (SOC), as well as being compliant with ISO, HIPAA and HITRUST.

4. Use only the resources you need

AKS is a fully flexible system that adapts to use only the resources you need. Additional processing power is available via graphics processing units (GPUs) for processor-intensive operations such as scientific computations. If you need more resources, it’s as simple as clicking a button and letting the elasticity of Azure Container Instances do the rest.

When you only use the resources you need, your software (and your business) enjoys the following benefits:

  • Reduced cost – no extra GPUs need to be bought and integrated onsite.
  • Faster start-up speed compared to onsite hardware and software which takes time to set-up.
  • Easier scaling – get more done now without worrying about how to manage resources.

Scale at speed with AKS

The world of applications moves fast. For example, around 6,140 Android apps were released every day in the first quarter of 2018. Ambitious companies can’t afford the risk of slowing down. Free up time and simplify containerisation by implementing AKS, and take your software development to the next level.

To find out how we get things done, check out Nordcloud’s approach to DevOps and agile application delivery.

Feel free to contact us if you need help in planning or executing your container workload.
