AWSome Day Oslo: We Had An Awesome day!


We know what you’re thinking, but that’s not a typo.

AWSome Days (a play on the "AWS" in Amazon Web Services) are hosted around the world and take you through a step-by-step deep dive into AWS core services such as Compute, Storage, Database, and Networking.

Nordcloud has been a proud sponsor since the first Nordic AWSome Day in Helsinki back in 2014, where we showcased our AWS Authorized Training Partner and AWS Premier Consulting Partner statuses and our ongoing, dedicated partnership. Our long-standing collaboration with AWS has helped us accelerate cloud transformation for our customers, from migrating to multiple cloud technologies to assisting with cloud-based innovation.

 

As an AWS APN Authorized Training Partner, we provide official AWS training covering the most up-to-date AWS services, with certified training engineers like Olle Sundqvist, Michaela Vikman, and Juho Jantunen teaching the next wave of cloud architects. We currently host the following training sessions: Technical Essentials, Architecting on AWS, SysOps on AWS, Developing on AWS, Security Operations on AWS, and DevOps Engineering on AWS. We always have public and dedicated training going on, so keep an eye on our scheduled courses.

We’re still running our AWSOME discount, giving a huge 25% off the courses until March 16th. Be sure to have a look at what’s on offer, and don’t forget to register to get the discount.

Nordcloud helps organizations use cloud services from AWS and other cloud providers to improve their productivity and efficiency. We look forward to attending many more AWSome Days in the coming months as we continue to grow our partnership with AWS and deliver the best advantages for our customers!

Hope to see you all at the next Nordic AWSome Days event in Helsinki this week!

Finally, a big shout-out to our Nintendo Switch winner Mehrdad and the two Raspberry Pi winners: Sturla and Leszek.









New Machine Learning Services Announced at the re:Invent Keynote


    Last Wednesday, AWS’s CEO Andy Jassy held his traditional keynote at AWS re:Invent, and on the machine learning front, there were several interesting announcements. Here’s a summary of what they were and why you should care…

    Amazon SageMaker – What is it?

SageMaker is a fully managed service for the implementation, training, automatic hyperparameter tuning, and deployment of machine learning models.

    Why should you care?

SageMaker includes a hosted Jupyter environment that doesn’t limit you to a particular machine learning framework – TensorFlow, Caffe, MXNet, CNTK, Keras, Gluon and other major frameworks are all supported. This is in contrast to other cloud vendors’ fully managed ML offerings, which typically give you only a single ML framework to work with.

    In addition, SageMaker automatically provisions EC2 instances for training and tears them down when the training is complete. This is really handy because up to this point, you had to handle instance provisioning a) manually or b) by implementing your own automation. This annoyance is now a thing of the past.

SageMaker also does automatic hyperparameter tuning (no more manual trial-and-error) and model deployment, giving you auto-scaling inference endpoints with very little hassle.
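To make that concrete, here’s a minimal sketch using the SageMaker Python SDK. The role, bucket and container image below are placeholders, and exact parameter names may vary slightly between SDK versions:

```python
# Minimal SageMaker sketch: train a model and deploy an auto-scaling endpoint.
# The role ARN, S3 paths and container image are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri="<training-container-image>",   # any supported framework container
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",             # instances are provisioned and torn down for you
    output_path="s3://my-bucket/model-artifacts/",
    sagemaker_session=session,
)

# Training data lives in S3; SageMaker handles the EC2 lifecycle.
estimator.fit({"train": "s3://my-bucket/training-data/"})

# Deploy the trained model behind a managed, scalable HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```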

    AWS DeepLens – What is it?

    DeepLens is a deep learning enabled video camera and associated software toolkit.

    Why should you care?

    DeepLens includes an onboard graphics processor and over 100 GFLOPS of compute power. What this means in practice is that you can deploy a computer vision model on the device itself and run predictions/inference locally, without a round trip to the Cloud. DeepLens is fully programmable using the AWS Lambda serverless programming model. The models themselves even run as part of a Lambda function. All deep learning frameworks are supported, just like in SageMaker.

    Amazon Rekognition Video – What is it?

    Rekognition Video does object recognition for video files. Rekognition Video complements the original Rekognition service, which works on image data.

    Why should you care?

    Object recognition from video previously required you to extract frames from video, convert them to images and then feed them to Rekognition. This process was unwieldy, introducing latency that made it impossible to do near real-time inference. With Rekognition Video, you can do real-time recognition for video, which enables a lot of different use cases. Rekognition Video can detect faces, filter inappropriate content, detect activities and even track people, which is something that other cloud vendors’ object recognition services do not provide out-of-the-box.
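For a stored video, kicking off a label-detection job is just a couple of boto3 calls. A rough sketch, with placeholder bucket and file names:

```python
# Sketch: asynchronous label detection on a video stored in S3.
import boto3

rekognition = boto3.client("rekognition")

# Start an asynchronous job against a video file in S3 (placeholder bucket/key).
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "my-video-bucket", "Name": "clips/demo.mp4"}}
)

# Poll for results; in production you would use the SNS completion notification instead.
result = rekognition.get_label_detection(JobId=job["JobId"])
for label in result.get("Labels", []):
    print(label["Timestamp"], label["Label"]["Name"], label["Label"]["Confidence"])
```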

    Amazon Kinesis Video Streams – What is it?

Kinesis Video Streams is a fully managed, secure video ingestion and storage service.

    Why should you care?

    Streaming video to the Cloud is tricky business, typically requiring you to implement your own solution with sufficient protection, scalability and failover mechanisms. It’s a huge hassle, and it’s only a means to an end. A fully managed service that handles all of this is extremely welcome, and in true AWS fashion, it integrates seamlessly with other AWS services.
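A minimal boto3 sketch of creating a stream and looking up its ingestion endpoint (the stream name is a placeholder):

```python
# Sketch: create a Kinesis Video Stream and look up its ingestion endpoint.
import boto3

kvs = boto3.client("kinesisvideo")

# Create a stream that retains 24 hours of video (placeholder name).
kvs.create_stream(StreamName="front-door-camera", DataRetentionInHours=24)

# Producers (e.g. the Kinesis Video Streams Producer SDK on a device) push media
# to the endpoint returned for the PUT_MEDIA API.
endpoint = kvs.get_data_endpoint(StreamName="front-door-camera", APIName="PUT_MEDIA")
print(endpoint["DataEndpoint"])
```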

    Amazon Transcribe – What is it?

Amazon Transcribe is a machine learning-powered automatic speech recognition and transcription service.

    Why should you care?

Transcription typically requires you to hire a transcription service, which may be prohibitively expensive depending on the use case. Amazon Transcribe does transcription without manual work, adding punctuation and, crucially, providing granular timestamps for each uttered word. As with other ready-made AI services, it will get better (more accurate) over time without you having to do anything.
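A quick boto3 sketch of starting a transcription job (job name and S3 URI are placeholders):

```python
# Sketch: transcribe an audio file stored in S3.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="meeting-2017-11-29",          # placeholder job name
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://my-audio-bucket/meeting.mp3"},
)

# The job runs asynchronously; poll until it completes, then fetch the transcript URI.
job = transcribe.get_transcription_job(TranscriptionJobName="meeting-2017-11-29")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```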

    Amazon Translate – What is it?

    Amazon Translate is a machine learning-powered language translation service.

    Why should you care?

Translation services are provided by other cloud vendors, but until now, AWS hasn’t had its own. Amazon Translate is useful because, as usual, it’s well integrated with other AWS services. It also increases competition in the translation space, which is a win for end users.
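Usage really is a one-liner – a small boto3 sketch:

```python
# Sketch: translate a short piece of text with Amazon Translate.
import boto3

translate = boto3.client("translate")

response = translate.translate_text(
    Text="Hello from re:Invent!",
    SourceLanguageCode="en",
    TargetLanguageCode="fi",
)
print(response["TranslatedText"])
```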

    Amazon Comprehend – What is it?

Amazon Comprehend is a natural language processing (NLP) service that identifies key phrases, topics, places, people, brands and events in text. It also does sentiment analysis.

    Why should you care?

Entity recognition is, in general, a hard machine learning problem – rolling your own model takes massive amounts of data, careful algorithm selection and long training times. A ready-made solution allows you to focus on implementing your use case.
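A small boto3 sketch of entity and sentiment detection (the sample text is ours, not from AWS):

```python
# Sketch: entity recognition and sentiment analysis with Amazon Comprehend.
import boto3

comprehend = boto3.client("comprehend")
text = "Andy Jassy announced Amazon Comprehend at re:Invent in Las Vegas."

entities = comprehend.detect_entities(Text=text, LanguageCode="en")
for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], entity["Score"])

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])
```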

    If you’d like to know more about these tools, and how best to use them, please contact us here.









      Day 2 at Re:Invent – Builders & Musicians Come Together


When Werner Vogels makes bold statements, expectations are set high. So when Vogels tweeted 15 minutes before the start of re:Invent’s day 2 keynote, we had to wonder what was coming.

      And how right we were. The close to 3 hours spent in the Venetian hotel in Las Vegas was an experience in itself.

Andy Jassy opened the keynote with a long list of customers and partners, alongside the latest business figures. AWS are currently running at an $18 billion annual run rate with an incredible 42% year-on-year growth. With millions of active customers – defined as accounts that have used AWS in the last 30 days – the platform is by far the most used on the planet.

As per Gartner’s 2016 Worldwide Market Segment Share analysis, the company (successfully led by Jassy) achieved a 44.1% market share in 2016, up from 39% in 2015 – more than everyone else combined. That dominance was easy to see as AWS unveiled an entire catalogue of new services throughout the keynote. The general stance Jassy took this year was that AWS are trying to serve their customers exactly what they have asked for in new products. The mission of AWS is nothing short of fixing the IT industry in favour of end users and customers.

First on stage was a live ‘house’ band, performing a segment of ‘Everything Is Everything’ by Lauryn Hill, with its chorus line ‘after winter, must come spring’. Presumably, AWS was referring to the world of IT still being in a kind of eternal ‘winter’. The message here was that AWS would not stop building out their portfolio, and that they want to offer all the tools their ‘builders’ and customers need.

      AWS used Jassy’s keynote for some big announcements (of course, set to music), with themes across the following areas:

      • Compute
      • Database
      • Data Analytics
      • Machine Learning and
      • IoT

      The Compute Revolution Goes On

Starting in the compute services area, an overview of the vast number of compute instance types and families was shown, with special emphasis given to the Elastic GPU options. A few announcements had also been made on the Tuesday night, including Bare Metal Instances and Streamlined Access to Spot Capacity & Hibernation, making it easier for you to get up to 90% savings on normal pricing. There were also M5 instances, which offer better price performance than their predecessors, and H1 instances, offering fast and dense storage for Big Data applications.

However, with the arrival of Kubernetes in the industry, it was the release of the Elastic Container Service for Kubernetes (EKS) that was the most eagerly anticipated. Not only have AWS recognised that their customers want Kubernetes on AWS, they also realise that there’s a lot of manual labour involved in maintaining and managing the servers that run ECS & EKS.

To solve this particular problem, AWS announced AWS Fargate, a fully managed service for both ECS & EKS, meaning no more server management and therefore a better ROI when running containers on the platform. This is available for ECS now and will be available for EKS in early 2018.
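To give a feel for what this looks like in practice, here’s a rough boto3 sketch of launching a task on Fargate – the cluster, task definition, subnet and security group IDs are placeholders:

```python
# Sketch: run a container task on Fargate – no EC2 instances to manage.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="my-cluster",                      # placeholder cluster name
    taskDefinition="my-web-app:1",             # placeholder task definition
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```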

      Having started with servers and containers, Jassy then moved on to the next logical evolution of infrastructure services: Serverless. With a 300% usage growth, it’s fair to say that if you’re not running something on Lambda yet, you will be soon. Jassy reiterated that AWS are building services that integrate with the rest of the AWS platform to ensure that builders don’t have to compromise. They want to make progress and get things done fast. Ultimately, this is what AWS compute will mean to the world: faster results. Look out for a dedicated EKS blog post coming soon!

      Database Freedom

The next section of the keynote must have had some of AWS’s lawyers on the edge of their seats, and also the founder of a certain database vendor… AWS seem to have a clear goal to put an end to the historically painful ‘lock-in’ some customers experience, referring frequently to ‘database freedom’. There are a lot of cool things happening with databases at the moment, and many of the great services and solutions shown at re:Invent are built using AWS database services. Out of all of these, Aurora is by far the fastest growing – in fact, it’s the fastest-growing service in the entire history of AWS.

People love Aurora because it can scale out to millions of reads per second. It can also autoscale new read replicas and offers seamless recovery from read replica failures. People want to be able to do this faster, which is why AWS launched a new Aurora feature, Aurora Multi-Master. This allows for zero application downtime due to any write node failure (previously, AWS suggested this took around 30 seconds), and zero downtime due to an availability zone failure. During 2018 AWS will also introduce the ability to have multi-region masters, allowing customers to easily scale their applications across regions while keeping a single, consistent data source.

Last, and certainly not least, was the announcement of Aurora Serverless, an on-demand, auto-scaling, serverless version of Aurora. Users pay by the second – an unbelievably powerful feature for many use cases.

Finally, Jassy turned his focus to the DynamoDB service, which scaled to ~12.9 million requests per second at its peak during the last Amazon Prime Day. Just let that sink in for a moment! DynamoDB is used by a huge number of major global companies, powering mission-critical workloads of all kinds. The reason for this, from our perspective, is that it’s very easy to access and use as a service. What was announced today was a new feature, DynamoDB Global Tables, which enables users to build high-performance, globally distributed applications.

The final database feature released for DynamoDB was managed backup & restore, providing on-demand backups and point-in-time recovery (over the past 35 days), and allowing backups of hundreds of terabytes to be taken for data archival or regulatory requirements with no interruption.
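A rough boto3 sketch of the on-demand backup API (table and backup names are placeholders):

```python
# Sketch: take an on-demand backup of a DynamoDB table and restore it later.
import boto3

dynamodb = boto3.client("dynamodb")

# Create a full backup with no impact on table performance or availability.
backup = dynamodb.create_backup(TableName="orders", BackupName="orders-2017-11-29")

# Restore into a new table from the backup ARN when needed.
dynamodb.restore_table_from_backup(
    TargetTableName="orders-restored",
    BackupArn=backup["BackupDetails"]["BackupArn"],
)
```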

      Jassy wrapped up the database section of his keynote by announcing Amazon Neptune, a fully managed graph database which will make it easy to build and run applications that work with highly connected data sets.

      Analytics

Next, Jassy turned to Analytics, commenting that people want to use S3 as their data lake. Athena allows for easy querying of structured data within S3; however, most analytics jobs involve processing only a subset of the data stored within S3 objects, and Athena requires the whole object to be processed. To ease the pain, AWS released S3 Select, allowing applications (including Athena) to retrieve a subset of data from an S3 object using simple SQL expressions – AWS claim drastic performance increases, possibly up to 400%.
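Here’s roughly what S3 Select looks like from boto3, pulling only the matching rows out of a CSV object (bucket, key and column names are placeholders):

```python
# Sketch: retrieve a filtered subset of a CSV object with S3 Select.
import boto3

s3 = boto3.client("s3")

response = s3.select_object_content(
    Bucket="my-data-lake",                     # placeholder bucket
    Key="sales/2017/orders.csv",               # placeholder key
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE s.country = 'NO'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The result comes back as an event stream; 'Records' events carry the data.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```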

Many of our customers are required by regulation to store logs for up to 7 years, and as such ship them to Glacier to reduce the cost of storage. This becomes problematic if you need to query that data, though. ‘How great would it be if this could become part of your data lake?’ Jassy asked, before announcing Glacier Select. Glacier Select allows queries to be run directly on data stored in Glacier, extending your data lake into Glacier while reducing your storage costs.

      Machine Learning

The house band introduced Machine Learning with ‘Let It Rain’ by Eric Clapton. Dr Matt Wood made an appearance and highlighted how important machine learning is to Amazon itself. The company uses a lot of it, from personal recommendations on Amazon.com to fulfilment automation and inventory management in its warehouses.

Jassy highlighted that AWS only invests in building technology that its customers need (and remember, Amazon.com is a customer!), not because it is cool or funky. Jassy described three tiers of Machine Learning: Frameworks and Interfaces, Platform Services, and Application Services.

At the Frameworks and Interfaces tier, emphasis was placed on the broad range of frameworks that can be used on AWS, recognising that one shoe does not fit every foot and the best results come when using the correct tool for the job. Moving to the Platform Services tier, Jassy highlighted that most companies do not have expert machine learning practitioners (yet) – it is, after all, a complex beast. To make this easier for developers, Amazon SageMaker was announced – a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale.

      Also at the platform tier, AWS launched DeepLens, a deep learning enabled wireless video camera designed to help developers grow their machine learning skills. This integrates directly with SageMaker giving developers an end-to-end solution to learn, develop and test machine learning applications. DeepLens will ship in early 2018, available on Amazon.com for $249.

The machine learning announcements did not stop there! As Jassy moved into the Application Services tier, AWS launched Amazon Rekognition Video, Amazon Kinesis Video Streams, Amazon Transcribe, Amazon Translate and Amazon Comprehend – all covered in more detail in our keynote announcements post above.

      IoT

Finally, Jassy turned to IoT – identifying five ‘frontiers’, each with its own release, either available now or coming in early 2018:

1. Getting into the game – AWS IoT 1-Click (in preview) will make it easy for simple devices to trigger AWS Lambda functions that execute a specific action.
2. Device Management – AWS IoT Device Management will provide fleet management of connected devices, including onboarding, organisation, monitoring and remote management throughout a device’s lifetime.
      3. IoT Security – AWS IoT Device Defender (early 2018) will provide security management to your fleet of IoT devices, including auditing to ensure your fleet meets best practice.
      4. IoT Analytics – AWS IoT Analytics, making it easy to cleanse, process, enrich, store, and analyze IoT data at scale.
      5. Smaller Devices – Amazon FreeRTOS, an operating system for microcontrollers.

Over the coming days and weeks, the Nordcloud team will be diving deeper into these new announcements (including our first thoughts after getting our hands on the new releases), and we’ll publish our take on how they can benefit you.

It should be noted that, compared to previous years, AWS are announcing more outside the keynotes, in sessions and on their Twitch channel, so there are many new releases which are not getting the attention they might deserve. Examples include T2 Unlimited, Inter-Region VPC Peering and Launch Templates for EC2 – as always, the best place to keep up to date is the AWS ‘What’s New’ page.

      If you would like to discuss how any of today’s announcements could benefit your business, please get in touch.
