re:Invent 2019: “pre:Invent” roundup

By ACG Technical Editors Team  |  December 02, 2019  |   AWS re:Invent   |  

It’s the Monday after Thanksgiving, and that means it’s time for AWS re:Invent. Right now, sponsors are setting up booths, attendees are pouring into McCarran International Airport, and a whole S3 bucketful of AWS news is about to drop.

Before it does, we thought we’d share some of the more interesting AWS feature and update announcements of the past few weeks. Think of it as a “pre:Invent”: a chance to get wind of some truly useful additions before they’re completely buried in the avalanche of AWS announcements we’re sure to get this week.

And be sure to check back with us throughout the week, as we’ll be sharing all the coolest new stuff that gets announced out in Las Vegas.

1. RDS SQL logs can now be delivered to CloudWatch

What is it?

RDS, the managed database service, can now deliver SQL error logs directly to CloudWatch. Previously this was only available through the command line.

Why’s this awesome?

Database administrators (DBAs) can now access logs from the CloudWatch console and set up alerts for SQL events, which makes RDS a more appealing option. Many DBAs have been reluctant to move to RDS, since until now it’s meant losing visibility compared to on-premises SQL Servers or SQL Server running in EC2.

What’s next?

SQL agent and error logging to CloudWatch is available right now in almost all regions and can be enabled on new and existing instances.
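Enabling the export is a one-call change. Here’s a minimal sketch of the parameters you might pass to boto3’s RDS `modify_db_instance` API; the instance identifier is a placeholder:

```python
# Sketch: turning on SQL Server agent and error log delivery to CloudWatch
# Logs for an existing RDS instance. Only the parameter dict is built here;
# the actual call (commented below) needs AWS credentials.

params = {
    "DBInstanceIdentifier": "my-sqlserver-db",  # hypothetical instance name
    "CloudwatchLogsExportConfiguration": {
        # For SQL Server, the exportable log types include "agent" and "error"
        "EnableLogTypes": ["agent", "error"],
    },
    "ApplyImmediately": True,
}

# With credentials configured, this would apply the change:
#   import boto3
#   boto3.client("rds").modify_db_instance(**params)
```

Once enabled, the logs land in CloudWatch Logs groups where you can set metric filters and alarms like any other log stream.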

2. Lambda errors no longer kick you back to the bottom of the hill!

What is it?

Lambda now has failure handling for Kinesis and DynamoDB batches!

Why’s this awesome?

Resilience is always awesome! With failure handling, the days of a Lambda function processing a shard, hitting an error, and having to start completely over are gone.

Now Lambda can handle errors, and with the addition of Lambda Destinations, can route information about an execution state to a destination (another Lambda function, SNS, SQS, or EventBridge) without the hassle of extra coding.

So hooray for more resilience for data processing functions!
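To make the knobs concrete, here’s a sketch of the new failure-handling settings as you might pass them to boto3’s `update_event_source_mapping`; the mapping UUID and SQS queue ARN are placeholders:

```python
# Sketch: failure-handling options on a Kinesis/DynamoDB event source mapping.
# Only the parameter dict is built here; the actual call needs AWS credentials.

failure_handling = {
    "UUID": "00000000-0000-0000-0000-000000000000",  # event source mapping ID
    "MaximumRetryAttempts": 2,           # give up on a bad batch after 2 retries
    "MaximumRecordAgeInSeconds": 3600,   # skip records older than an hour
    "BisectBatchOnFunctionError": True,  # split failing batches to isolate the bad record
    "DestinationConfig": {
        # Where to send metadata about batches that still fail
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failed-batches"},
    },
}

# With credentials configured:
#   import boto3
#   boto3.client("lambda").update_event_source_mapping(**failure_handling)
```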

3. AWS launches Tag Policies

What is it?

Tag Policies let you define rules on how tags can be used on AWS resources across the accounts in your AWS Organization. With Tag Policies, you can easily adopt a standardized approach to tagging AWS resources, enforce compliance when resources are created, and even define the allowed values of tags.

Why’s this awesome?

Cloud governance is key to managing your cloud resources, and resource tagging plays a major role in ensuring compliance for security, cost management, and reporting. Until now, tagging required CloudFormation scripts, Lambda functions, or slogging through the AWS Console. Tag compliance reporting also relied on custom Lambda functions, and they required updating with every tag change. Fun, right?

Tag Policies greatly simplify tagging. You can now define your tagging strategy, naming, and values, and use Tag Policies to enforce compliance at the AWS Organization level.

Or you could keep updating those custom Lambda functions if that’s your thing.
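For a feel of what a policy looks like, here’s a small sketch of a tag policy document (the kind of JSON you attach through AWS Organizations), expressed as a Python dict; the tag key, values, and resource type are hypothetical:

```python
import json

# Sketch of a tag policy: standardize the "CostCenter" key's capitalization,
# restrict its allowed values, and enforce it on EC2 instances.
tag_policy = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},         # canonical capitalization
            "tag_value": {"@@assign": ["Finance", "IT"]},  # allowed values
            "enforced_for": {"@@assign": ["ec2:instance"]},  # block non-compliant tagging
        }
    }
}

print(json.dumps(tag_policy, indent=2))
```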

What’s next?

Faster, smoother cloud adoption. Governance is a huge challenge for many organizations, with strategy, implementation, deployment, and compliance taking many months. Tag Policies fast-track resource tagging at the AWS Organization level and reduce management overhead by enforcing compliance on their own. The upshot? Companies will be able to sustain momentum on their cloud initiatives, rather than getting bogged down in governance issues.

4. Centrally manage access to AWS for Azure AD users with AWS Single Sign-On (SSO)

What is it?

Customers can now connect Azure Active Directory (Azure AD) to AWS Single Sign-On (SSO) once, manage permissions to AWS centrally in AWS SSO, and enable users to sign in using Azure AD to access assigned AWS accounts and applications.

Why’s this awesome?

To date, AWS SSO has only been available for on-premises ADFS identity environments and hasn’t supported organizations that have started to leverage Azure AD as their authoritative identity store. SSO support for Azure AD enables organizations transitioning to Azure AD, and those already on Office 365, to leverage Azure AD to centrally control access to their AWS environments.

Organizations can transition from on-premises AD to Azure AD knowing they can easily manage AWS access, and potentially retire their on-premises ADFS infrastructure.

What’s next?

Many large organizations use both AWS and Azure cloud services. This addition to AWS SSO shows that AWS is acknowledging this reality and moving to retain customers by simplifying access management for their AWS environments. We wouldn’t be surprised to see more moves toward interoperability between different cloud services in the coming year.

5. DynamoDB adaptive capacity puts eggs in different baskets

What is it?

AWS will now automatically isolate frequently accessed items in DynamoDB to smooth out resource use, provide better performance, and reduce the need for constant tuning.

Why’s this awesome?

In a nutshell, you’ll now have AWS automation re-tuning and rebalancing your NoSQL DB to deliver a smoother, more predictable service for less effort.

DynamoDB is AWS’s flagship NoSQL database offering, and it’s long been an attractive option for fast-moving development and evolving apps. As an application grows, however, its tables can become harder to manage efficiently and cost-effectively. Good performance has always required well-designed partition keys, and if your environment or customer behavior changed, you might have needed to redesign your keys and recode to maintain that performance.

In 2018 AWS released adaptive capacity, which allowed sharing of WCU/RCU capacity between partitions to avoid capacity exceeded errors. Now AWS has added to its adaptive capacity feature by rebalancing which keys live on which partitions, to reduce the chance that frequently accessed (hot) keys are co-located on the same partition.

What’s all this mean? While you still need to be thoughtful of your key design, and still need to monitor performance, you now have AWS dynamically rebalancing the load in the backend to give you smoother performance, greater flexibility, and lower operational costs.

What’s next?

We’re seeing this same underlying partition design popping up in more AWS data services. This is a good sign that AWS plans to evolve its underlying capabilities to deliver a more feature-rich service.

6. Put a canary in your cloud mine

What is it?

CloudWatch now allows you to run ‘synthetic’ transactions to test your applications deployed in AWS. This goes well beyond running ping tests for latency or monitoring metrics - it actually exercises your application as an end user would. Running these scripted checks is known as canary testing. For example, you could write a script that has CloudWatch add items to a shopping basket, try to complete the transaction, and report any problems along the way. You can run canaries 24x7 and receive alerts on unexpected behavior.

Why’s this awesome?

It makes synthetic testing much more accessible. No more integrating third-party tooling or spending money on extra procurement. These synthetic transactions are baked directly into the AWS ecosystem. Just write your own tests as Node.js scripts using the Puppeteer library and host your end-user monitoring in AWS. You only pay for what you use.

What’s next?

This puts some heat on third-party monitoring tools that previously served a segment of the market where AWS wasn’t a player. Pricing could be a concern, however, when you look at estimated charges. Say you run your canary tests every few minutes, 24x7 - there is no free tier allowance for CloudWatch Synthetics, so the costs could add up fast. For now, it may still be cheaper to go with a third-party option that’s less integrated with AWS.
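A quick back-of-envelope calculation shows how an always-on canary adds up. The per-run price here is an assumption for illustration - check current CloudWatch Synthetics pricing for your region:

```python
# Rough monthly cost of one canary running around the clock.
PRICE_PER_RUN = 0.0012   # assumed USD per canary run (verify against current pricing)
interval_minutes = 5
runs_per_month = (60 // interval_minutes) * 24 * 30  # runs/hour * hours/day * days

monthly_cost = runs_per_month * PRICE_PER_RUN
print(f"{runs_per_month} runs/month -> ${monthly_cost:.2f}")
```

At that assumed rate, a single 5-minute canary lands around ten dollars a month - reasonable for one, but multiply by dozens of canaries across environments and it’s real money.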

7. EC2 instance metadata adds security enhancements

What is it?

EC2 instances have the ability to access certain information about themselves from AWS, including potentially sensitive information like privileged AWS credentials. Previously, this information could be accessed by anyone on the instance. Now, there are additional protections for this metadata, as well as the ability to disable it entirely.

Why’s this awesome?

Instance metadata is designed to be accessible only to users with access to the instance itself, but Server-Side Request Forgery (SSRF) attacks can trick an instance into revealing privileged credentials to a third party. With this new enhancement, you get defense-in-depth against unauthorized metadata access and a greater degree of control.
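The hardened access method is session-based: you first request a short-lived token with an HTTP PUT, then present that token on every metadata read. Here’s a minimal sketch of the two requests using the standard library; they’re only constructed, not sent, since the metadata endpoint only resolves from inside an EC2 instance, and the token value is a placeholder:

```python
import urllib.request

# Step 1: a PUT request for a session token, with a TTL header (6 hours here).
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)

# On an instance, you'd read the token from the PUT response body.
token = "EXAMPLE_TOKEN"  # placeholder

# Step 2: metadata reads carry the token; requests without it can be rejected
# once the instance is configured to require token-based access.
metadata_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
)

print(token_req.get_method(), metadata_req.full_url)
```

Because a simple GET no longer returns credentials on its own, the classic SSRF trick of bouncing a forged GET off the metadata endpoint gets much harder.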

What’s next?

Reducing your attack surface is critical to good perimeter security. Organizations should consider where they can disable the Instance Metadata Service entirely, or require the new token-based access method. This, coupled with the principle of least privilege applied to IAM policies, should go a long way toward limiting damage and keeping organizations safe from this type of credential compromise.

8. Application Load Balancers now support weighted target groups

What is it?

Application Load Balancers now support attaching multiple target groups and weighting traffic between them.

Why’s this awesome?

This allows for canary-style deployments (e.g. directing 10% of traffic to a new target group of instances). Pulling this off previously meant creating a second load balancer and setting up weighted routing in Route 53. With fewer services and hoops to jump through, this new ALB feature saves both cost and time when performing canary deployments.
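The split is expressed as a weighted forward action on the listener. Here’s a sketch of the structure you might pass to boto3’s elbv2 `modify_listener`; the target group ARNs are placeholders:

```python
# Sketch: a 90/10 canary split between two target groups on one ALB listener.
canary_actions = [{
    "Type": "forward",
    "ForwardConfig": {
        "TargetGroups": [
            {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/stable/aaa", "Weight": 90},
            {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/canary/bbb", "Weight": 10},
        ],
    },
}]

# Weights are relative, so the canary share is its weight over the total.
groups = canary_actions[0]["ForwardConfig"]["TargetGroups"]
total = sum(tg["Weight"] for tg in groups)
print(f"canary share: {groups[1]['Weight'] / total:.0%}")
```

To promote the canary, you’d just shift the weights (say 50/50, then 0/100) on the same listener - no new load balancer or DNS change required.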

What’s next?

This update may not be groundbreaking, but it’ll certainly make life easier for a lot of Development/DevOps teams.

9. Most outstanding! ALBs can now use the Least Outstanding Requests algorithm

What is it?

Application Load Balancers now support the Least Outstanding Requests (LOR) routing algorithm, which sends each new request to the target with the fewest requests in flight. This was previously the default behavior on Classic Load Balancers, but with the move to ALBs, round-robin routing became the only option.

Why’s this awesome?

Application Load Balancers - which previously could only round-robin between instances - can now direct traffic to the least busy instances in your Target Groups. This matters because with uneven workloads, a server that’s too busy to service requests can keep receiving new ones, which then queue up even while other servers sit available to take them.

What’s next?

LOR can be enabled on Application Load Balancers right now. By enabling LOR you can help ensure that overloaded instances in your Target Groups have time to recover from unusually expensive requests.
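Switching algorithms is a target group attribute change. Here’s a sketch of the attributes you might pass to boto3’s elbv2 `modify_target_group_attributes`; the target group ARN is a placeholder:

```python
# Sketch: switching a target group from round robin to least outstanding requests.
lor_attributes = {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
    "Attributes": [
        # The routing algorithm is set per target group, not per load balancer.
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"},
    ],
}

# With credentials configured:
#   import boto3
#   boto3.client("elbv2").modify_target_group_attributes(**lor_attributes)
```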

Keep up with all the re:Invent goodness right here

We’ll be posting updates throughout the week as AWS undoubtedly rolls out all kinds of new services and features at this year’s re:Invent. Be sure to check back for the latest.

If you’re going to be attending re:Invent this year, we’d love for you to swing by our booth and say hi! We always love hearing from our students. Here’s a handy roundup of where we’ll be throughout re:Invent.

We’d also recommend that you check out Mark Nunnikhoven’s AWS re:Invent Ultimate Guide for absolutely anything you might need to know about getting the most out of your time in Vegas.

