pre:Invent Round-up 2023

Written by Alex Kearns, AWS Ambassador & Community Builder, Principal Solutions Architect at Ubertas Consulting

re:Invent is upon us!

I’m here in Las Vegas with Ubertas co-founders John Lacey and Steven Crowley, ready for a week of wall-to-wall AWS announcements and innovations. We’re all very much looking forward to meeting up with AWS contacts and customers past, present, and future, and hearing all about what AWS has been cooking up over the last twelve months.

Entering re:Invent week signals the close of the period fondly known as pre:Invent. AWS somehow manages to pack the weeks before re:Invent full of exciting announcements while holding the big-bang features back for the keynote stage.

I’ve kept a keen eye on the AWS What’s New page over the last month or so, noting down any particularly special announcements. There are far too many to mention in a single blog post, so I’ve whittled the list down to my top ten to get you excited for the coming week.

pre:Invent announcements

Cost Explorer improvements in granularity

Every customer I speak to has cost on their mind in one way or another. Very few organisations have a limitless pot of money to spend on cloud. Even if they do, an unlimited budget doesn’t last forever, so it’s essential to be cost-optimised.

Cost Explorer has gained three key improvements: longer daily cost history, longer monthly cost history, and new resource-level granularity. It’s now possible to see up to 14 months of cost history at a daily granularity, 38 months at a monthly granularity, and daily resource-level costs for the last 14 days. This makes it much easier to draw comparisons across different periods.
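
Assuming the new limits apply to the GetCostAndUsage API as well as the console preferences (the announcement focuses on the latter), pulling a long window of daily costs with boto3 looks something like this; the dates are illustrative, and Cost Explorer must already be enabled in the account:

```python
import boto3

# The Cost Explorer API is served from us-east-1 regardless of where
# your workloads run.
ce = boto3.client("ce", region_name="us-east-1")

# Daily costs for a multi-month window (dates are illustrative).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-10-01", "End": "2023-11-27"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)

# Results may span multiple pages; follow NextPageToken for the rest.
for day in response["ResultsByTime"]:
    print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"])
```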

Full details here.

PartyRock

Generative AI (GenAI) has without a doubt been the dominant theme in technology this year. With the explosion of tools like ChatGPT, AI has become accessible to the masses.

Earlier this year, AWS released Amazon Bedrock, a fully managed service for interacting with Generative AI foundation models. Customers only pay for what they use, making it possible to experiment with GenAI for cents rather than tens of thousands of dollars.

Now, PartyRock has been released to showcase the capabilities of Bedrock to prospective users and offer an accessible platform for those who want to learn about GenAI. Essentially, PartyRock is a playground for Amazon Bedrock that lets users build AI-powered applications without writing code, and without even needing an AWS account.

My colleague Josh gave it a go and posted his thoughts on LinkedIn. PartyRock is great fun! I’d wholeheartedly recommend giving it a go.

Full details here.

Searching for resources across accounts

As an AWS estate grows, finding resources can become more difficult, especially when multiple accounts are involved (as is best practice). AWS Resource Explorer has been around for a little while, but it has always been constrained to the account from which you’re running the search.

This change increases the scope of Resource Explorer to search across multiple accounts and can be enabled either for a whole AWS Organization or a specific Organizational Unit. It will make life so much easier when you’re trying to track down a resource with just a name and don’t want to dig through accounts to find it.
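
Under the hood, searches run against a Resource Explorer view, and an organisation-scoped view can now return matches from member accounts. Here’s a hedged sketch with boto3, assuming a multi-account view already exists and is the default in the Region; the query string and Region are made up:

```python
import boto3

client = boto3.client("resource-explorer-2", region_name="eu-west-1")

# Free-text search for a resource by name fragment; with an
# organisation-scoped view this can match resources in other accounts.
response = client.search(QueryString="payments-queue", MaxResults=50)

for resource in response["Resources"]:
    print(resource["ResourceType"], resource["OwningAccountId"], resource["Arn"])
```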

Full details here.

Step Functions recovery improvements

AWS Step Functions is up there with my very favourite services, so I’m always intrigued when a new announcement comes around.

This time, the focus is on how it recovers after experiencing a failure. Step Functions already had Catch functionality that would allow you to control which state to pass error details to, but you’d then have to restart the state machine from the beginning.

This new feature adds redrive functionality to state machines: when an execution fails, Step Functions can restart it from the point of failure rather than making you re-run the whole thing from the beginning.
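
Redriving is a one-liner if you’re scripting your recovery. A minimal sketch with boto3, assuming you already have the ARN of a failed execution (the ARN below is hypothetical):

```python
import boto3

sfn = boto3.client("stepfunctions")

# Restart a failed execution from its failed state; states that have
# already succeeded are not re-run. The ARN is hypothetical.
sfn.redrive_execution(
    executionArn=(
        "arn:aws:states:eu-west-1:123456789012:"
        "execution:order-pipeline:8c1f4a2e"
    )
)
```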

Full details here.

Aurora to Redshift zero-ETL

I’m particularly excited about this one. How data is stored, used and moved is becoming increasingly important with the growth of areas like Generative AI, so any feature that seeks to make it easier interests me.

At its core, this feature replicates your transactional (OLTP) data stored in Amazon Aurora into Amazon Redshift, ready for analytical workloads (OLAP). The beauty of it is that you don’t have to build any pipelines. You just tell it where to take data from and where to put it, and it’ll do the rest.
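
Setting one up is essentially a single API call once the source and target exist. A hedged sketch with boto3; both ARNs are hypothetical, and the Redshift side also needs the appropriate resource policy in place:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Create a zero-ETL integration from an Aurora cluster to a Redshift
# Serverless namespace. Both ARNs are hypothetical.
rds.create_integration(
    IntegrationName="orders-to-analytics",
    SourceArn="arn:aws:rds:eu-west-1:123456789012:cluster:orders-cluster",
    TargetArn=(
        "arn:aws:redshift-serverless:eu-west-1:123456789012:"
        "namespace/11111111-2222-3333-4444-555555555555"
    ),
)
```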

Technically, this isn’t a new feature, as it’s been out in public preview for a while (it was announced at re:Invent 2022), but I’m glad to see it finally enter general availability. While in public preview, I gave it a live demo on an episode of ‘Let’s talk about data’ on the AWS Twitch channel, the recording of which can be viewed here.

Full details here.

Upgrading MySQL 5.7 snapshots to 8.0

Amazon RDS for MySQL can already do in-place major upgrades from MySQL 5.7 to 8.0, but the cluster needs to be running to do so. Many organisations have months or years of database snapshots held for compliance reasons, and some of these are likely to have been created on an older major version of MySQL.

Database snapshots can only be restored to a cluster of the same specification (e.g. MySQL 5.7 to 5.7), which presents a problem if you need to restore an old snapshot whose version may be end-of-life – an immediate upgrade would be required, adding precious time to disaster recovery situations.

You can now upgrade RDS MySQL 5.7 snapshots to 8.0 without launching a cluster first. Whilst this seems like a minor feature, it’ll be a real quality-of-life improvement for some.

Full details here.

Breaking things better

I’m a big advocate of chaos engineering to ensure that workloads are built to a Well-Architected standard, being as resilient and reliable as possible. While moving to AWS will make it easier to limit downtime, no service on AWS comes with a 100% uptime guarantee, so it’s important to consider worst-case scenarios.

AWS Fault Injection Service is a managed service for simulating failures in your AWS account. It’s always been very capable, allowing you to implement specific actions such as ‘block network traffic in and out of this subnet’. Sometimes, the level of detail available can be overwhelming; this is where the new feature, Scenarios, comes in.

Fault Injection Service Scenarios are AWS-managed templates for common experiments you may want to run to test reliability and resilience. Currently, the scenario library only supports EC2 and EKS, but I suspect others will follow shortly.
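
For context on the kind of setup the scenarios pre-package, here’s what a hand-built FIS experiment template looks like with boto3, using the simple ‘stop an EC2 instance’ action; the role ARN and tag values are hypothetical:

```python
import boto3

fis = boto3.client("fis")

# A hand-rolled experiment template of the sort the new scenarios
# pre-package: stop one tagged EC2 instance for a resilience test.
fis.create_experiment_template(
    description="Stop a random instance in the checkout service",
    roleArn="arn:aws:iam::123456789012:role/fis-experiment-role",
    targets={
        "checkout-instances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"service": "checkout"},
            "selectionMode": "COUNT(1)",  # pick one matching instance
        }
    },
    actions={
        "stop-instance": {
            "actionId": "aws:ec2:stop-instances",
            "targets": {"Instances": "checkout-instances"},
        }
    },
    stopConditions=[{"source": "none"}],
)
```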

Read about the new Fault Injection Service features here.

Lock ’em up and throw away the key

Don’t worry, I’m only talking about S3 objects! S3 Object Lock isn’t new, but until now you’ve only been able to turn it on at the point of bucket creation. Now, you can enable it on existing buckets. It’s a small addition to the functionality, but with great power comes great responsibility.
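
Enabling it on an existing bucket is a couple of API calls; note that Object Lock requires versioning. A minimal sketch with boto3 (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-existing-bucket"  # hypothetical

# Object Lock requires versioning, so enable that first.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Turn on Object Lock for the existing bucket; previously this was
# only possible at bucket creation.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={"ObjectLockEnabled": "Enabled"},
)
```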

S3 Object Lock can retain your objects indefinitely through the ‘Legal Hold’ functionality. This is very useful for some customers, but can also be extremely destructive. Imagine you have many PBs of data in an S3 bucket, your AWS account gets compromised, and an attacker applies a Legal Hold to every object. Until the holds are removed (which requires the s3:PutObjectLegalHold permission), you’d be unable to delete any of those objects and would continue to pay for them.

I’d recommend applying measures like Service Control Policies at an organisation level to prevent anyone from applying holds to existing buckets, granting access to do so only by exception and on a temporary basis.
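
As a sketch of that kind of guardrail, the SCP below denies the Object Lock write actions across the organisation; the policy name is made up, and you’d attach it to the appropriate root or OU (SCPs must be enabled in your Organization first):

```python
import json

import boto3

orgs = boto3.client("organizations")

# Deny applying legal holds and retention settings organisation-wide;
# exceptions can then be granted temporarily by amending or detaching
# the policy. The policy name and structure are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "s3:PutObjectLegalHold",
                "s3:PutObjectRetention",
                "s3:PutBucketObjectLockConfiguration",
            ],
            "Resource": "*",
        }
    ],
}

orgs.create_policy(
    Name="deny-s3-object-lock",
    Description="Prevent legal holds and Object Lock changes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```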

Find out about this announcement here.

Exporting a Lambda to Application Composer

I’m very happy to see functionality in the AWS console that supports turning resources into infrastructure-as-code. When Application Composer was launched, I was sceptical about how good it would be, but I’ve learned to really like it.

This new feature for AWS Lambda integrates the two, supporting the workflow where you prototype a function in the console and then want to bring it into a production system, with the correct ways of working, via infrastructure-as-code. I’m looking forward to trying this one out.

Read more from AWS here and see my presentation featuring Application Composer here.

Unblocking EventBridge Pipes

EventBridge Pipes is another service that I’m particularly fond of. Having a single service to integrate many is appealing. When it was first launched, it immediately enabled patterns such as triggering Step Functions state machines from an SQS queue in a much cleaner way than was possible previously.

The one thing that’s always been lacking is observability into the pipe itself. Pipes have always been a bit of a black box: you could see things entering, but not necessarily leaving.

EventBridge Pipes can now log items passing through them to CloudWatch Logs, Amazon Kinesis Data Firehose or S3, making debugging significantly easier. I’d recommend enabling this as soon as possible.
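
Turning it on for an existing pipe is a single update call. A hedged sketch with boto3, logging at INFO level to a CloudWatch log group; the pipe name, role and log group ARN are all hypothetical:

```python
import boto3

pipes = boto3.client("pipes")

# Enable INFO-level logging to a CloudWatch log group for an existing
# pipe. The pipe name, role ARN and log group ARN are hypothetical.
pipes.update_pipe(
    Name="sqs-to-stepfunctions",
    RoleArn="arn:aws:iam::123456789012:role/pipe-execution-role",
    LogConfiguration={
        "CloudwatchLogsLogDestination": {
            "LogGroupArn": (
                "arn:aws:logs:eu-west-1:123456789012:"
                "log-group:/aws/pipes/demo"
            )
        },
        "Level": "INFO",
    },
)
```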

It’s worth noting that if the throughput of the pipe is particularly high, a large volume of logs will be generated. If you’re storing them in CloudWatch, be sure to set a sensible retention period. If you’re storing them in S3, make use of the cheaper storage classes and lifecycle policies to delete files once they’re no longer of use.

Find out more about this announcement here.

Ubertas Consulting at re:Invent

If you’re also attending re:Invent, please feel free to reach out and we’ll find time to chat! You can find me on PeerTalk in AWS Events or online here by searching for my name (Alex Kearns). Alternatively, I’m in the AWS Ambassadors carousel on the front screen.

There’ll be plenty of content to follow throughout the week of re:Invent and afterwards (once I’ve slept off the jet lag!).

Alex Kearns
Principal Solutions Architect, Ubertas Consulting

LinkedIn