When Werner Vogels makes bold statements, expectations are set high. So when Vogels tweeted 15 minutes before the start of re:Invent’s day 2 keynote, we had to wonder what was coming.
And how right we were. The nearly three hours spent in the Venetian hotel in Las Vegas were an experience in themselves.
Andy Jassy opened the keynote with a long list of customers and partners, alongside the latest business figures. AWS are currently running at an $18 billion annual run rate with an incredible 42% year-on-year growth. With millions of active customers – defined as accounts that have used AWS in the last 30 days – the platform is by far the most used on the planet.
As per Gartner’s 2016 Worldwide Market Segment Share analysis, the company (successfully led by Jassy) achieved a 44.1% market share in 2016, up from 39% in 2015 – more than everyone else combined. This became easily noticeable when AWS displayed an entire catalogue of new services throughout the keynote. The general stance Jassy took this year was that AWS are trying to serve their customers exactly what they ask for in terms of new products. The mission of AWS is nothing short of fixing the IT industry in favour of end-users and customers.
The first on stage was a live ‘house’ band, performing a segment of ‘Everything is Everything’ by Lauryn Hill, the chorus rhyming with ‘after winter must come spring’. Presumably, AWS was referring to the world of IT still being in a kind of eternal ‘winter’. The concept we also heard here was that AWS would not stop building their portfolio and that they want to offer all the tools their ‘builders’ and customers need.
AWS used Jassy’s keynote for some big announcements (of course, set to music), with themes across the following areas:
Starting in the compute services area, an overview of the vast number of compute instance types and families was shown, with special emphasis given to the Elastic GPU options. A few announcements were also made on the Tuesday night, including Bare Metal Instances and Streamlined Access to Spot Capacity & Hibernation, making it easier to get up to 90% savings on normal pricing. There were also M5 instances, which offer better price-performance than their predecessors, and H1 instances, offering fast and dense storage for Big Data applications.
However, with the arrival of Kubernetes in the industry, it was the release of the Elastic Container Service for Kubernetes (EKS) that was the most eagerly anticipated. Not only have AWS recognised that their customers want Kubernetes on AWS, but they also realise that there’s a lot of manual labour involved in maintaining and managing the servers that run ECS & EKS.
To solve this particular problem, AWS announced AWS Fargate, a fully managed service for both ECS & EKS meaning no more server management and therefore increasing the ROI in running containers on the platform. This is available for ECS now and will be available for EKS in early 2018.
Having started with servers and containers, Jassy then moved on to the next logical evolution of infrastructure services: Serverless. With 300% usage growth, it’s fair to say that if you’re not running something on Lambda yet, you will be soon. Jassy reiterated that AWS are building services that integrate with the rest of the AWS platform to ensure that builders don’t have to compromise. They want to make progress and get things done fast. Ultimately, this is what AWS compute will mean to the world: faster results. Look out for a dedicated EKS blog post coming soon!
The next section of the keynote must have had some of AWS’s lawyers on the edge of their seats, and also the founder of a certain database vendor… AWS seem to have a clear goal to put an end to the historically painful ‘lock-in’ some customers experience, referring frequently to ‘database freedom’. There are a lot of cool things happening with databases at the moment, and many of the great services and solutions shown at re:Invent are built using AWS database services. Out of all of these, Aurora is growing by far the fastest – in fact, it is the fastest-growing service in the entire history of AWS.
People love Aurora because it can scale out for millions of reads per second. It can also autoscale new read replicas and offers seamless recovery from read replica failures. People want to be able to do this faster, which is why AWS launched a new Aurora feature, Multi-Master. This allows for zero application downtime due to any write node failure (previously, AWS suggested this took around 30 seconds), and zero downtime due to an availability zone failure. During 2018 AWS will also introduce the ability to have multi-region masters – this will allow customers to easily scale their applications across regions and have a single, consistent data source.
Lastly, and certainly not least, was the announcement of Aurora Serverless, which is an on-demand, auto-scaling, serverless version of Aurora. Users pay by the second – an unbelievably powerful feature for many use cases.
Finally, Jassy turned his focus to the DynamoDB service, which scaled to ~12.9 million requests per second at its peak during the last Amazon Prime Day. Just let that sink in for a moment! The DynamoDB service is used by a huge number of major global companies, powering mission-critical workloads of all kinds. The reason for this, from our perspective, is that it’s very easy to access and use as a service. What was announced today was a new feature, DynamoDB Global Tables. This enables users to build high-performance, globally distributed applications.
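As a sketch of how this looks in practice – assuming boto3 and hypothetical table and region names – a Global Table is created by grouping identical regional tables into one replication group:

```python
# Sketch: request parameters for DynamoDB Global Tables. The table name and
# regions are hypothetical; identical tables (with streams enabled) must
# already exist in each listed region before the call succeeds.

params = {
    "GlobalTableName": "user-sessions",
    "ReplicationGroup": [
        {"RegionName": "eu-west-1"},
        {"RegionName": "us-east-1"},
    ],
}

# The actual call requires boto3 and AWS credentials:
# import boto3
# dynamodb = boto3.client("dynamodb", region_name="eu-west-1")
# dynamodb.create_global_table(**params)
```

After creation, writes to either regional table are replicated to the other, which is what makes the globally distributed application pattern possible.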
The final database feature released for DynamoDB was managed backup & restore, allowing for on-demand backups and point-in-time recovery (within the past 35 days). Backups of hundreds of TB can be taken with no interruption, supporting data archival and regulatory requirements.
Jassy wrapped up the database section of his keynote by announcing Amazon Neptune, a fully managed graph database which will make it easy to build and run applications that work with highly connected data sets.
Next Jassy turned to Analytics, commenting that people want to use S3 as their data lake. Athena allows for easy querying of structured data within S3; however, most analytics jobs involve processing only a subset of the data stored within S3 objects, and Athena requires the whole object to be processed. To ease the pain, AWS released S3 Select – allowing applications (including Athena) to retrieve a subset of data from an S3 object using simple SQL expressions. AWS claim drastic performance increases – possibly up to 400%.
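To illustrate, an S3 Select request is expressed through the `SelectObjectContent` API. The sketch below builds such a request with boto3-style parameters; the bucket, key and column names are hypothetical:

```python
# Sketch of an S3 Select request. The bucket, key and column names are
# placeholders; the actual call (commented out) requires boto3 and credentials.

def build_select_params(bucket, key, expression):
    """Build a SelectObjectContent request pulling a subset of a CSV object."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"CSV": {}},
    }

params = build_select_params(
    "my-data-lake",                 # hypothetical bucket
    "logs/2017/11/requests.csv",    # hypothetical key
    "SELECT s.user_id FROM S3Object s WHERE s.status = '500'",
)

# import boto3
# s3 = boto3.client("s3")
# response = s3.select_object_content(**params)
# for event in response["Payload"]:
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```

The performance gain comes from only the matching rows leaving S3, rather than the whole object being transferred and filtered client-side.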
Many of our customers are required by regulation to store logs for up to 7 years, and as such ship them to Glacier to reduce the cost of storage. This becomes problematic if you need to query this data, though. “How great would it be if this could become part of your data lake?” Jassy asked, before announcing Glacier Select. Glacier Select allows queries to be run directly on data stored in Glacier, extending your data lake into Glacier while reducing your storage costs.
The house band introduced Machine Learning with ‘Let it Rain’ by Eric Clapton. Dr Matt Wood made an appearance and highlighted how important machine learning is to Amazon itself. The company uses a lot of it, from personal recommendations on Amazon.com to fulfilment automation & inventory management in its warehouses.
Jassy highlighted that AWS only invests in building technology that its customers need (and remember, Amazon.com is a customer!), not because it is cool or funky. Jassy described three tiers of Machine Learning: Frameworks and Interfaces, Platform Services & Application Services.
At the Frameworks and Interfaces tier, emphasis was placed on the broad range of frameworks that can be used on AWS, recognising that one shoe does not fit every foot and that the best results come from using the correct tool for the job. Moving to the Platform Services tier, Jassy highlighted that most companies do not yet have access to expert machine learning practitioners – it is, after all, a complex beast. To make this easy for developers, Amazon SageMaker was announced – a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale.
Also at the platform tier, AWS launched DeepLens, a deep learning enabled wireless video camera designed to help developers grow their machine learning skills. This integrates directly with SageMaker giving developers an end-to-end solution to learn, develop and test machine learning applications. DeepLens will ship in early 2018, available on Amazon.com for $249.
The machine learning announcements did not stop there! As Jassy moved into the Application Services tier AWS launched:
Finally, Jassy turned to IoT – identifying five ‘frontiers’, each with its own release, either available now or in early 2018:
Over the coming days and weeks, the Nordcloud team will be diving deeper into these new announcements (including our first thoughts after getting hands-on with the new releases), and we’ll publish our thoughts on how they can benefit you.
It should be noted that, compared to previous years, AWS are announcing more outside the keynotes, in sessions and on their Twitch channel, and so there are many new releases which are not gaining the attention they might deserve. Examples include T2 Unlimited, Inter-Region VPC Peering and Launch Templates for EC2. As always, the best place to keep up to date is the AWS ‘What’s New’ page.
If you would like to discuss how any of today’s announcements could benefit your business, please get in touch.
Let’s discuss how we can help with your cloud journey. Our experts are standing by to talk about your migration, modernisation, development and skills challenges.
AWS re:Invent kicked off in Vegas yesterday, and a number of the Nordcloud team have travelled across the pond to attend what is probably the biggest Cloud event on the planet.
In 2016, Jaakko documented the sheer scale involved in running re:Invent, and this year AWS have managed to scale it up again. With 44k+ attendees (30k+ in 2016), the campus now spreads across 7 hotels along the length of the Strip – that’s a 50-minute walk end to end (luckily there are shuttle buses!). The partner expo has doubled in size and is now across two locations, making the scale of the conference hard to comprehend.
Sitting in the 16,800-capacity arena for the Partner Keynote on the first ‘formal’ day of the event was a little awe-inspiring. Werner Vogels’ keynote will be in the same location, and we fully expect the atmosphere to be electric when it is filled to capacity on Thursday.
Today’s keynote was led by Terry Wise, AWS’ VP of Global Alliances, Ecosystems and Channels, who highlighted both the growth of AWS (1,300 new releases so far this year, and 70 on the Monday of re:Invent alone) and the growth of the AWS Partner Community. AWS have clear evidence that customers who work with companies within their partner ecosystem (such as Nordcloud) are able to adopt the cloud faster and more effectively. AWS is therefore committed to providing partners with the training and tools to help them do that (launching several within the keynote).
AWS recognise that their customers want skilled, specialised partners, and support this through the AWS Partner Competency Programme: AWS audits partners on their skillset and ensures that they have completed referenceable real-world projects. Today, AWS announced Networking and Machine Learning Competencies, and coming in 2018: Blockchain, Containers, End User Computing & Cloud Management Tools. Nordcloud already holds the DevOps Competency, and we are Managed Services, Lambda, DynamoDB and API Gateway Partners. We will, of course, be looking to add some of these new competencies to our lineup.
Wise believes that “Cloud is the foundation for innovation” and to help demonstrate this, he invited a number of people to the stage:
The Nordcloud team are in Las Vegas until Saturday morning, meeting with partners and customers & attending sessions. If you would like to discuss anything in this blog post, or how Nordcloud can help, please get in touch!
For a few years now, I’ve been engaged in a personal passion project of explicating what the increasing abundance of data can do for design. My most recent definition of data-driven design is that it means digitalisation and automation of design research. In future, data-driven design will possibly reach out to decision making and generative design. But we’re not there yet.
As I’ve written over the years about the concept and tools of data-driven design, my musing around the topic has been somewhat limited. As I’m operating in a digital design and development company context, design has referred to interaction design: user interface design decisions and how to best implement certain features.
I have given several design domains little attention. In this article, I will venture a bit beyond my home turf. I’ll change the question and think about what we should build, instead of how we create it. This question takes a step to a higher abstraction level, one commonly associated with service design. In the following, I’ll consider what the big data world could offer for service design.
Service design is a bit of a niche area of its own, originating in the 1980s. Starting from the design of banking services, it has since slowly grown into a recognised profession serving the development of many physical touch points. But nowadays, professionals calling themselves service designers also regularly deal with digital touch points.
In the few visual depictions of the overall field of design visualised below, service design is totally missing from the left one (based on Dan Saffer), which illustrates UX design, and occupies only a small segment of human-centred design. But I assure you, it still exists, even though it is clearly far out of the spotlight of the more recent disciplines of digital design.
How about the use of data in this domain? Public examples of data-driven service design are rare. For instance, the Netherlands chapter of the global Service Design Network was apparently among the first to host a session specifically aimed at sharing experiences with data in service design.
The short story written about the data-driven service design event gives an opinion I can readily agree with: quantitative data must complement, challenge and give a foundation for qualitative data.
The long-term experience design specialist Kerry Bodine puts it as “service design requires a mix of research inputs.” She has expressed great concern about over-reliance on big data methods without complementary qualitative insights. This relationship has previously been highlighted by Pamela Pavliscak under the terms big and thick data, in order to highlight their complementary nature.
In other words, data-driven design means using more data, particularly quantitative, in the design process.
A side note: a term that may seem relevant to data-driven service design is service analytics. Service analytics, in my opinion, is a subset of traditional analytics areas: web analytics, market intelligence, and business intelligence. For instance, in Sumeet Wadhwa’s article on the topic, service analytics is presented foremost as a tool to quantify, track and manage service design efforts, not so much to inspire or help find new design opportunities. Thus it is not a creative driver for the design process.
Data can’t solve, or even easily be used to support, all design decisions. Given that people are naturally resistant to change, defending any major change using backward-looking data is not going to be easy. In a recent post, frog founder Hartmut Esslinger provided strong criticism of misinterpretations of “big” data.
His examples very neatly illustrate the conservative interpretation bias of data. For instance, in a 2001 Motorola case, the company discarded a touchscreen smartphone concept (resembling what later became the iPhone) because market intelligence data clearly showed people wanted to buy phones akin to those designed by Nokia! Clearly, the data-based insight was inferior to a “designer-based” insight about what you should create.
Solving this challenge is not easy. I’ve personally helped to articulate one user acceptance testing approach called resonance testing, originating from the American design company Continuum. This method presents a quite specific procedure to quantitatively investigate consumers’ reactions to ‘what’ questions. However, it depends on face-to-face interactions and thus does not really fall within the domain of data-driven design as defined at the start.
The data-driven or data-informed design does not identify any particular design approach. However, I see that it requires a certain prototypical process to support it. First and foremost, it always requires real data. Representative data must be collected, analysed, inferences made and brought to bear upon design decisions and new designs.
What kind of data and which tools of analysis will help service designers to decide what needs to be created? In my previous writing, I’ve proposed a taxonomy of the different types of tools available for data-driven design. Starting from there, we can observe that we have three categories of tools that hold a promise in this direction. They are active data collection solutions, user recordings, and heat maps.
Once more, the origin of these tools is within the digital domain – web and mobile apps – but it is important to bear in mind that they are heavily oriented towards the revision or assessment of existing features. They can give a glimpse of what else your customers might love, what they fail to achieve or which parts of the service they neglect.
Passive records from use sessions on digital or physical touch points can be revealing, but active data collection – from co-design to all manner of classical qualitative research – has been the core of service design research. But are there any qualitative research methods that can scale, to provide the automation aspect I attach to the data-driven design approach?
Different types of surveys naturally scale well. Digital environments especially offer unprecedented opportunities to target and trigger surveys, making them much more powerful than they were in the past. Of course, they are limited by the structure of their insight. But free, open-ended questions can be very intuitive and applicable in data-driven design if we can also provide the tools that automate the analysis of the inputs, not just their collection. Sentiment analysis alone, as criticised by Bodine above, is a weak method. Segmentation and automated summaries can add value beyond aggregate figures alone. This is a bit futuristic but already feasible (see also Zendesk’s approach to data in automating customer service).
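To make the automation idea concrete, here is a deliberately crude sketch of it – keyword-based sentiment plus per-segment aggregation – in plain Python. The word lists and sample answers are illustrative only, not a production lexicon:

```python
from collections import Counter

# Toy sentiment lexicon; a real pipeline would use a trained model instead.
POSITIVE = {"love", "great", "easy", "fast", "helpful"}
NEGATIVE = {"slow", "confusing", "broken", "frustrating"}

def sentiment(answer: str) -> int:
    """Score one open-ended answer: +1 per positive word, -1 per negative word."""
    words = answer.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def segment_summary(answers):
    """Aggregate average sentiment per respondent segment, not one global figure."""
    totals, counts = Counter(), Counter()
    for segment, text in answers:
        totals[segment] += sentiment(text)
        counts[segment] += 1
    return {seg: totals[seg] / counts[seg] for seg in counts}

answers = [
    ("new_users", "The signup was easy and fast"),
    ("new_users", "Checkout felt confusing"),
    ("long_term", "I love the new dashboard, great work"),
]
summary = segment_summary(answers)  # e.g. {"new_users": 0.5, "long_term": 2.0}
```

The point is not the toy scoring but the shape of the pipeline: segmenting responses before aggregating avoids the single weak sentiment figure criticised above.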
I had a chance to talk with Petteri Hertto, a long-term specialist in quantitative research, about the topic. He is a service designer currently working at Palmu agency in Helsinki, Finland. He says that too many projects feel obliged to gather quantitative data without good reasons. They end up with data that is non-actionable from a design point of view.
Petteri has personally transformed from a quantitative data specialist to a designer that sees value in both types of data. “The best uses of quantitative data lie in proofing new ideas and verifying a business case around it,” he believes. Petteri has documented a model of value measurement his agency prefers in a Touchpoint article (Touchpoint magazine is the journal published by Service Design Network).
Are there any new tools specifically for data-driven service design?
I further pressed Petteri on whether any (quantitative) design research tools have appeared in the past 10 years that would resemble my definition of data-driven design.
He recounted that there are few radically new developments. In the design approach favoured by their agency, they use the same tools as UX designers, including those data-intensive ones. However, he named one novel survey tool made possible by mobile technologies. It addresses several deficiencies of validity in traditional research.
Crowst is a Finnish startup which provides surveys targeted on verified user behaviour in the physical world, improving the quality of input.
Then again, this is an incremental improvement over existing tools, not a radically novel approach with unforeseen data masses, new level of insight or scalability.
Are we back to square one in terms of answering the question of what does the customer want? Yes and no. I believe a thoughtful analysis of big data can serve three purposes in service design:
* difficult to validate without a detailed implementation and answering the how question
However, the data about yesterday can’t really tell us what is going to happen tomorrow. We have to more or less make the future available today through scenarios and prototypes which can generate the data that illustrates the future.
Data-driven design at the user interface level is progressing at good speed, but the need for qualitative insight still dominates service design. Contemporary service designers acknowledge the potential – and danger – in big data, but the tools to transform that potential into a revolution in ways of working are still missing.
It is evident that service designers must be comfortable working with data as big as it comes. However, ready-made tools and methods are far fewer than in user interface design. Answering the fundamental question “what to design” is notoriously difficult with data that describes things of the past.
I believe it is possible, and in a couple of years will be possible to an even greater extent than we can imagine today. Join the revolution today!
Sumo Logic Forges Its First European Managed Services Partnership with Nordcloud to Help Improve Security and Incident Management and Deliver Continuous Intelligence for Modern Applications.
Las Vegas, 27 November 2017. Nordcloud, a leading public cloud managed services provider in Europe, has teamed up with Sumo Logic, the cloud-native machine data analytics service. Through this partnership, customers can leverage Sumo Logic’s cloud-native machine data analytics platform as part of Nordcloud’s Managed Cloud service for real-time visibility of operational and security insights on Amazon Web Services and Microsoft Azure.
Nordcloud runs Managed Cloud with best-in-class partner software. “We use our expertise to pick the best solutions available so our customers don’t have to spend time searching for the right tools,” says Vesa Tiihonen, Nordcloud’s Head of Managed Cloud. “We’re absolutely delighted to be able to offer and recommend Sumo Logic’s industry-leading service. Its automated security and cloud audits, and Sumo’s proactive analytics powered by machine learning, give our customers a fantastic operational intelligence resource, all available as a fully managed pay-per-use service.”
“We are really excited to solidify our first managed services provider partnership in Europe. Nordcloud is a great match for Sumo Logic given its roots in the cloud and its multi-cloud approach supporting public cloud providers such as AWS and Azure,” says Jabari Norton, Vice President of Global Partner Sales for Sumo Logic. “We look forward to working with Nordcloud to help European customers get real-time operational and security insights into their modern applications while delivering the continuous intelligence needed to tackle their toughest security challenges such as the upcoming EU GDPR deadline and the Payment Card Industry (PCI) compliance in the public cloud.”
Nordcloud’s Tiihonen says it will also meet users’ expectations of robust security and compliance. “As more and more of our enterprise customers are moving their regulated workloads to the public cloud, there’s an increasing demand for a compliant, cloud-native, scalable solution. This is where Sumo Logic’s offering is unparalleled in delivering security compliance like PCI DSS.”
Nordcloud’s staff are already trained and experienced in Sumo Logic services to support customers’ business needs and provide the excellent customer service they expect.
Sumo Logic is the leading cloud-native, machine data analytics platform delivering real-time continuous intelligence, from structured, semi-structured and unstructured data across the entire application lifecycle and stack. More than 1,500 customers around the globe rely on Sumo Logic for the analytics and insights to build, run and secure their modern applications and cloud infrastructures. With Sumo Logic, customers gain a multi-tenant, service-model advantage to accelerate their shift to continuous innovation, increasing competitive advantage, business value and growth. Founded in 2010, Sumo Logic is a privately held company based in Redwood City, CA and is backed by Accel Partners, DFJ, Greylock Partners, IVP, Sapphire Ventures, Sequoia Capital and Sutter Hill Ventures. For more information, visit www.sumologic.com.
In this blog post, I will be picking up on what my colleague Sandip discussed in his latest blog post, ‘Innovating by Making a Difference’. Building on that, I want to take the opportunity to talk about how Nordcloud Germany have managed to stay on top of the industry for the last year or two. It’s been about focussing on the right things at the right time. For example, we haven’t worked in the Private Cloud space, and we haven’t been involved in the SaaS world of productivity, collaboration or CRM. We have stayed focussed purely on the leading Public Cloud platforms – AWS, Azure & Google – to deliver full-stack consultancy and services.
At Nordcloud, we’re able to keep our customers – not just ourselves – on top of the game, by understanding everything we can, identifying what is most valuable for our customers and then adopting the latest services of each provider. These include, for example, services around containers (Kubernetes, for instance), serverless (Lambda), and also the Internet of Things and Machine Learning. Our work with companies of all industries and sizes is the foundation of being able to filter the different technologies for what matters most. In this sense, our customers are the ones who teach us how to help them best, and we can then pick the best technologies to do just that.
We were recently screened by the leading Cloud market analyst in Germany on how we deliver state-of-the-art managed Cloud services. Check out CRISP’s perspective here (in German).
We’re proud to be recognised as a leading provider in the Cloud consulting and service industry, who stands out amongst a vast number of peers in the market. If there is one thing we have realised throughout the years – both as a company and as individuals – it’s that you shouldn’t stop innovating and questioning. To stay on top, it’s not enough to just do the basics well. You have to keep going forward and step beyond your comfort zone at all times. At the same time, you shouldn’t be running after each new hype, but picking your game wisely and then building up expertise and concepts around that area.
Congratulations if you distribute content to customers in Finland, and a special congratulations if you are already doing it through Amazon CloudFront!
Amazon Web Services have just announced the opening of the first Amazon CloudFront Edge location in Finland. This means that you can now distribute your content to your customers in Finland faster, with greater bandwidth and with the simplicity that AWS can offer.
“As an AWS Consulting Partner originating in Finland, we’re thrilled how this new Edge location delivers faster service to the Finnish end users. Better yet, our current Finnish customers will get these improvements without any action or additional costs,” says Jaakko Kontiainen, Nordcloud Alliance Lead for AWS.
Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS – both the physical locations that are directly connected to the AWS global infrastructure, and software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and AWS Lambda to run custom code close to your viewers.
Some of the key benefits include extensive integration with other AWS services, ease of use, a cost-effective pay-as-you-go pricing scheme, and the growing global distribution network exhibited by the latest Edge location.
If you would like to talk to us further about Amazon CloudFront, other Amazon Web Service offerings, or migrating your business onto the AWS Cloud, please contact us here.
Nordcloud has appointed Jan Kritz as CEO as of 1 January 2018.
Jan has significant experience in leading multi-million euro businesses and implementing large IT deliveries on a global scale. These skills are vital to Nordcloud, its partners and its customers as the company enters its next phase of growth.
Jan joins Nordcloud from Capgemini, where he has served in senior positions with business and delivery responsibility since 2014. He has also held leading roles at Atos, Siemens and Nokia.
“I’m excited to join the European cloud pioneer Nordcloud and work together with our world-class cloud experts and global partners to deploy cloud technologies that generate maximum value for our customers – without them having to consider the constraints of a legacy integrator,” says Kritz.
Nordcloud has seen more than 70% year-on-year growth for the past five consecutive years – making it one of the fastest-growing companies in Europe (Deloitte EMEA 500 in 2016). Nordcloud currently operates in eight countries with circa 250 people representing more than 30 nationalities.
“We believe the cloud is transformational for our customers and we wanted to strengthen our leadership as we scale across Europe. We are very happy to have Jan join the team to take us into this next phase of growth,” says Nordcloud chairman of the board Fernando Herrera.
Nordcloud continues to build upon its strong public cloud expertise and will keep investing in growing the capabilities of its teams and services to the benefit of its customers. The company remains fully committed to the public cloud as the best solution for its customers’ business needs and will continue to work in close collaboration with the global cloud leaders AWS, Microsoft and Google.
Nordcloud would like to thank outgoing CEO Esa Kinnunen for his strong leadership over the first part of the company’s journey. Esa is now moving into a role on the Nordcloud board where the company can continue to benefit from his expertise.
Update [16:00 UTC]: AWS were quick to release a fix (aws-cfn-bootstrap-1.4-26), and -25 is still in the yum repositories. Unless you were unlucky and froze your environment today, the problem should solve itself.
The latest version of the aws-cfn-bootstrap package, aws-cfn-bootstrap-1.4-25.17.amzn1.noarch, signed on November 2 around 21:00 UTC, changed how cfn-signal works. cfn-signal now picks up the instance profile role’s API keys and tries to sign the request by default. This causes the signal to fail if the instance’s IAM role does not have the cloudformation:SignalResource permission.
cfn-signal has always supported signed requests, but if access keys were not provided, the following authentication method was used (quoting the AWS documentation):
cfn-signal does not require credentials, so you do not need to use the --access-key, --secret-key, --role, or --credential-file options. However, if no credentials are specified, AWS CloudFormation checks for stack membership and limits the scope of the call to the stack that the instance belongs to.
This will only affect users that either build AMIs or update system packages on boot-up. If you normally do a yum update, replace it with yum -y upgrade --security or yum -y upgrade --exclude=aws-cfn-bootstrap.
You could also add the IAM policy statement below to your instance role.
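As a sketch, a statement along these lines would grant the needed permission. This is an illustration rather than AWS’s exact recommended policy – ideally, scope the resource down to your own stack’s ARN instead of the wildcard shown here:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudformation:SignalResource"],
      "Resource": "arn:aws:cloudformation:*:*:stack/*"
    }
  ]
}
```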
Please contact Nordcloud for more information on CloudFormation.