We share a few insights from the AWS re:Invent 2019 conference, focusing on new features that address common pain points and help you build on existing solutions.
This year was my second time attending AWS re:Invent. What a wealth of information provided by AWS and its partners! With more than 60,000 attendees, 3,000 technical sessions and 75 service launches, it’s nearly impossible to soak up all the information during the week.
Can you believe that today AWS has 175 distinct services, spanning Compute, Security, Content Delivery, Internet of Things, Machine Learning, Robotics, and even Quantum Technologies, to name a few?
As someone who has his head in the cloud every day, I cannot begin to keep up with every aspect of even a single cloud provider’s services, let alone the offerings of multiple cloud providers. AWS re:Invent is an excellent opportunity to take a break from doing what you do every day. You can:
- Learn about newly launched AWS services
- Hear from others about how they use AWS to solve business problems
- Uncover features the AWS team has released to solve common pain points
Top Takeaways
Many blogs and posts hit the highlights and the big announcements from re:Invent 2019. I’m going to focus this blog on a few of the new features and services that solve pain points that previously required many of us to artistically mold a workable solution out of electronic baling wire. Here are the top four new nuggets that will help many customers build or improve on existing solutions on the AWS platform.
1. AWS Transit Gateway supports Inter-Region Peering
AWS announced Transit Gateway last year at re:Invent 2018. It was a lifesaver for organizations of all sizes, especially larger organizations that previously managed hundreds or thousands of peer-to-peer network connections to connect their VPCs. Transit Gateway let you significantly simplify the network by creating a single connection from each VPC or on-prem network to a central gateway in a hub-and-spoke model. This update was great! Unfortunately, there was no way to connect Transit Gateways across multiple regions.
The Fix: Transit Gateway now supports peering connections between Transit Gateways in different regions. With the new peering functionality, you can create a Transit Gateway in each region and peer the gateways, creating a truly global transit network between all VPCs, regardless of where they live. The new feature lets you simplify the network topology when connecting, for example, a primary and DR region with an on-prem network. For additional details, see AWS Transit Gateway.
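To get a feel for the workflow, here is a minimal sketch using boto3 that requests a peering attachment from a Transit Gateway in one region to a Transit Gateway in another, then accepts it from the peer side. The gateway IDs, account ID and region names are hypothetical placeholders.

```python
import boto3

# Hypothetical regions for a primary and a DR Transit Gateway.
use1 = boto3.client("ec2", region_name="us-east-1")
usw2 = boto3.client("ec2", region_name="us-west-2")

# Request a peering attachment from the us-east-1 gateway to the us-west-2 gateway.
attachment = use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaa1111bbb2222cc",      # hypothetical primary-region gateway
    PeerTransitGatewayId="tgw-0ddd3333eee4444ff",  # hypothetical DR-region gateway
    PeerAccountId="111122223333",                  # hypothetical account ID
    PeerRegion="us-west-2",
)["TransitGatewayPeeringAttachment"]

# The peer side must accept the attachment before traffic can flow.
# In practice, wait until the attachment reaches the pendingAcceptance state first.
usw2.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"]
)
```

Keep in mind that routes are not propagated automatically across a peering attachment; you add them to each Transit Gateway route table as static routes.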
As a bonus, AWS has also announced AWS Transit Gateway Multicast, which lets you migrate or build multicast applications on AWS. Multicast is a preferred protocol for streaming multimedia, and we commonly use it for clustering technologies such as Red Hat Cluster Server.
2. S3 Access Points
S3 is one of the most popular AWS services, used by almost every organization. Initially, it was a simple storage service for storing and retrieving data, and it was relatively easy to manage security using S3 bucket policies, ACLs or IAM policies. However, as more services access shared S3 buckets, such as Redshift, Athena, EMR and Lake Formation, access patterns and the overhead required to manage the access policies have evolved and become more complicated. A single S3 bucket policy may be hundreds of lines long and challenging to maintain, understand and audit.
The Fix: AWS launched a new intermediary service to manage the security of S3 buckets that are accessed by multiple resources or groups of people. S3 Access Points are unique hostnames with dedicated access policies that describe how data can be accessed through that specific endpoint. With S3 Access Points, you no longer need to manage a single, monolithic policy document on a bucket. Now you can add multiple unique access points for individual applications or teams, each with its own access policy. Each Access Point has a unique DNS name you can assign to an application or team. For additional details, see S3 Access Points.
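As an illustration of the pattern, here is a minimal sketch with boto3 that creates an access point on a shared bucket and attaches a policy scoped to that access point only. The account ID, bucket name, access point name and IAM role are hypothetical.

```python
import json

import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

account_id = "111122223333"          # hypothetical account ID
bucket = "shared-analytics-bucket"   # hypothetical shared bucket

# Create an access point dedicated to one team; each team or application
# gets its own access point (and hostname) on the same underlying bucket.
s3control.create_access_point(
    AccountId=account_id,
    Name="finance-reports",
    Bucket=bucket,
)

# Attach a policy scoped to this access point, instead of growing the
# single bucket policy on the shared bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/FinanceTeam"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/finance-reports/object/reports/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id,
    Name="finance-reports",
    Policy=json.dumps(policy),
)
```

Each additional team or application gets its own access point and policy, so auditing who can do what no longer means untangling one giant bucket policy.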
3. Amazon RDS Proxy
Relational databases were originally designed for more monolithic architectures, where a smaller number of long-running compute instances connect to the database. They don’t necessarily work well with a high number of ephemeral compute instances, such as Lambda functions, where numerous connections open and close frequently. This pattern can stress the database’s memory and easily overwhelm a database with too many connections, making it inaccessible.
The Fix: RDS Proxy is a new service that sits between your application and the database to pool and share established database connections, improving database efficiency and application scalability. Once the proxy is established, you point your application at the proxy instead of the database. Additionally, the proxy handles automatic failover for Multi-AZ RDS and Aurora databases, and the failover is advertised as 66% faster than standard DNS failover. For additional details, see Amazon RDS Proxy.
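Because the proxy presents a normal database endpoint, the application change is usually just the hostname. Here is a minimal sketch, assuming a MySQL-compatible RDS database behind a proxy and the PyMySQL driver; the endpoint names and credentials are hypothetical.

```python
import pymysql

# Hypothetical endpoints: the application previously connected directly to the
# RDS instance; now it connects to the RDS Proxy endpoint instead.
# DB_ENDPOINT = "orders-db.abc123xyz.us-east-1.rds.amazonaws.com"
PROXY_ENDPOINT = "orders-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com"

connection = pymysql.connect(
    host=PROXY_ENDPOINT,          # the only change: point at the proxy, not the database
    user="app_user",              # hypothetical credentials; IAM auth is also an option
    password="example-password",
    database="orders",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())
```

The proxy keeps a warm pool of connections to the database, so a burst of short-lived Lambda invocations reuses existing connections instead of exhausting the database's connection limit.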
4. EKS on AWS Fargate
By eliminating the need to provision and manage infrastructure for running containerized Docker applications, Fargate for ECS was a big hit with many organizations. However, organizations that standardized on Kubernetes still had the additional overhead of managing the underlying worker node infrastructure.
The Fix: As of December 2019, you can run Kubernetes pods on AWS Fargate through Amazon EKS, eliminating the need to create or manage EC2 instances. You no longer have to worry about patching, securing or scaling a cluster of EC2 instances to run Kubernetes applications in the cloud. Like ECS, you define and pay for resources at the pod level. Along with this announcement, AWS highlighted eksctl, a command-line utility for managing EKS clusters. For additional details, see Amazon EKS on AWS Fargate.
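To see what scheduling pods onto Fargate involves, here is a minimal boto3 sketch that creates a Fargate profile on an existing EKS cluster; the profile tells EKS which pods (selected by namespace, and optionally labels) should run on Fargate rather than on EC2 worker nodes. The cluster name, role ARN and subnet IDs are hypothetical.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# A Fargate profile maps pod selectors to Fargate. All names, ARNs and
# subnet IDs below are hypothetical placeholders.
eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/EKSFargatePodExecutionRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],   # private subnets only
    selectors=[{"namespace": "default"}],             # pods in this namespace run on Fargate
)
```

eksctl offers the same operation from the command line via `eksctl create fargateprofile`, which is handy if you already manage your cluster with that tool.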
Conclusion
I hope these four AWS re:Invent takeaways give you a better look at what’s to come in AWS. The potential for advancement is unreal, and I find it incredibly exciting to be at the forefront!