Businesses move to the AWS cloud to save time and money and to free their technologists to do the work that delivers value to their customers. But mismanaging AWS resources can quickly dilute your cost savings, put you at risk of security breaches, and cause problems in your cloud environment that require expensive, labor-intensive resolutions. To help you avoid such a fate, we’ve created a list of common mistakes companies make in managing their AWS resources and how to avoid them.
- Too Many Powerful Users: Having too many users with permissions they don’t need leaves your AWS environment vulnerable to catastrophes resulting from human error, configuration issues, and security threats. Be conservative when setting up your permissions, especially as your teams grow. Not everyone needs to be an admin. Follow the Principle of Least Privilege: use policies and roles to restrict access and provide developers the fewest permissions necessary to do their jobs.
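As a sketch of what least privilege looks like in practice, here’s a minimal IAM policy document granting read-only access to a single S3 bucket, built in Python with just the standard library. The bucket name and action list are illustrative; scope yours to the narrowest set a developer actually needs.

```python
import json

def least_privilege_policy(bucket_name):
    """Build an IAM policy document granting only read access to one bucket.

    The bucket name and actions here are illustrative; tailor both to the
    narrowest set of permissions a developer actually needs.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",       # the bucket itself
                    f"arn:aws:s3:::{bucket_name}/*",     # objects within it
                ],
            }
        ],
    }

policy = least_privilege_policy("example-team-bucket")
print(json.dumps(policy, indent=2))
```

Notice what the policy does *not* contain: no `"Action": "*"`, no `"Resource": "*"`. Attaching documents like this to roles, rather than handing out admin, is the Principle of Least Privilege in action.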
- Not Prioritizing Training and Certification: Managing AWS resources appropriately is easier if the entire IT organization speaks the same language when it comes to managing its cloud resources. Cloud computing is a culture, and certifications help ensure that your technologists share a common vocabulary as they evolve your stack. A Cloud Guru is an AWS Partner and recognized leader in cloud certification training for teams and individuals.
- Running Anything as Root: The AWS root account should never be used for day-to-day work — by anyone. Instead, create an Admin group for your power users and adhere to the Principle of Least Privilege for all other users and groups, assigning role-based access to the services a developer needs to do her job. Access Keys should be used sparingly, rotated frequently, and protected diligently. Multi-Factor Authentication should always be enabled on your root accounts.
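Rotating access keys “frequently” is easier to enforce with a simple audit. Here’s a minimal sketch, assuming a 90-day rotation threshold; in practice the key data would come from IAM’s list-access-keys call, but it’s supplied inline here so the example runs without AWS credentials.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation threshold

def keys_needing_rotation(keys, now=None):
    """Given (key_id, created_at) pairs, return the IDs past the threshold."""
    now = now or datetime.now(timezone.utc)
    return [key_id for key_id, created in keys if now - created > MAX_KEY_AGE]

# Hypothetical keys: one stale, one recently rotated.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = [
    ("AKIAOLDKEY", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("AKIANEWKEY", datetime(2024, 5, 15, tzinfo=timezone.utc)),
]
print(keys_needing_rotation(keys, now))  # only the stale key is flagged
```

Running a check like this on a schedule turns “rotate frequently” from a policy statement into something your team actually does.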
- Not Using AWS CloudTrail: CloudTrail is an invaluable administrative and auditing service, logging all actions performed via the AWS Management Console, the AWS SDKs, command line tools, and other AWS services. It delivers — to the S3 bucket you specify — a history of all AWS API calls, providing insight that fuels both your auditing and compliance efforts. It also enables you to track changes to your resources and lets you see which users made which changes.
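CloudTrail records are plain JSON once they land in your S3 bucket, so answering “who changed what, and when?” is a short script. A minimal sketch, using a trimmed-down, illustrative record in CloudTrail’s documented shape:

```python
def summarize_event(record):
    """Pull the who/what/when out of a single CloudTrail record."""
    return {
        "user": record.get("userIdentity", {}).get("userName", "unknown"),
        "action": record["eventName"],
        "time": record["eventTime"],
    }

# A trimmed-down sample record; real records carry many more fields.
sample = {
    "eventTime": "2024-06-01T12:00:00Z",
    "eventName": "TerminateInstances",
    "userIdentity": {"userName": "alice"},
}

summary = summarize_event(sample)
print(summary)
```

Aggregating summaries like this across a trail gives you the per-user change history the bullet above describes.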
- Leaving Connections Wide Open: Many admins leave their ports open to the world, jeopardizing their environment’s security by making it possible for any machine, anywhere, to connect to their AWS resources. According to Fahmida Rashid, senior writer at CSO, one-third of the top 30 common AWS configuration mistakes identified by Saviynt involve open ports. She recommends giving your security groups the narrowest focus possible and using different AWS security groups as a source or destination to ensure only instances and load balancers in a specific group can communicate with another group.
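Auditing for wide-open rules can be automated. Here’s a minimal sketch that flags ingress rules open to 0.0.0.0/0, assuming data shaped like the IpPermissions structure EC2’s describe-security-groups call returns; the rules are supplied inline so the example runs without an AWS account.

```python
def wide_open_rules(permissions):
    """Return (port, cidr) pairs for ingress rules open to the world."""
    flagged = []
    for perm in permissions:
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                flagged.append((perm.get("FromPort"), ip_range["CidrIp"]))
    return flagged

# Hypothetical rules: SSH open to the world, HTTPS scoped to a private VPC range.
permissions = [
    {"FromPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},
]
print(wide_open_rules(permissions))  # flags the SSH rule only
```

Anything this check flags should be narrowed to a specific CIDR block or, better, to another security group, per Rashid’s advice above.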
- Not Taking Advantage of CloudWatch Alarms: CloudWatch is natively integrated with more than 70 AWS services — such as Amazon EC2, Amazon DynamoDB, Amazon S3, Amazon ECS, AWS Lambda, and Amazon API Gateway — that automatically publish detailed 1-minute metrics, and it supports custom metrics with up to 1-second granularity. CloudWatch allows users to monitor resource use, application performance, operational issues, and constraints. Users can set CloudWatch Alarms that alert them when a metric crosses a specified threshold, allowing them to take quick action when resource use needs to be adjusted.
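Here’s a minimal sketch of an alarm definition, shaped like the parameters you’d pass to CloudWatch’s put-metric-alarm call. The alarm name, threshold, and evaluation window are illustrative, and the dict is built locally rather than sent to AWS.

```python
def cpu_alarm(instance_id, threshold=80.0):
    """Build an alarm definition for sustained high CPU on an EC2 instance."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",   # illustrative naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 60,               # evaluate 1-minute metrics
        "EvaluationPeriods": 3,     # must breach for 3 consecutive minutes
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = cpu_alarm("i-0123456789abcdef0")
print(alarm["AlarmName"])
```

Requiring three consecutive breaching periods, as above, is a common way to avoid paging on momentary spikes.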
- Overestimating AWS Responsibility and Support: Remember: AWS is responsible for the security of the cloud, and you are responsible for your security in the cloud. That means that AWS is responsible only for the infrastructure they provide — you’re responsible for all aspects of what you do upon that secure infrastructure. Security, maintenance, and troubleshooting are completely up to your engineering team. While some organizations (particularly small boutique development groups) prefer to outsource their cloud maintenance needs — which can be more cost effective and provide for better, faster service — all managers overseeing a cloud infrastructure should be intimately familiar with the AWS Shared Responsibility Model.
- Not Using Auto Scaling: Auto Scaling lets you build scaling policies for resources across multiple services - including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas - all from a single interface. Auto Scaling also gives you recommendations for optimizing performance, optimizing costs, or striking a balance between the two.
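A scaling policy can be as simple as a target value on a predefined metric. Here’s a minimal sketch of a target-tracking configuration for an EC2 Auto Scaling group; the 50% CPU target is illustrative, and the configuration is built locally rather than applied to a real group.

```python
def target_tracking_policy(target_cpu=50.0):
    """Build a target-tracking configuration for an Auto Scaling group.

    Auto Scaling adds or removes instances to keep the group's average
    CPU near the target value; 50% is an illustrative choice.
    """
    return {
        "TargetValue": target_cpu,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
    }

policy = target_tracking_policy()
print(policy)
```

With a policy like this in place, capacity follows demand automatically instead of waiting for an engineer to notice a saturated fleet.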
- Ignoring Trusted Advisor: AWS Trusted Advisor monitors customer configurations and advises cloud administrators on how to provision resources according to best practices in four categories: cost optimization, security, fault tolerance, and performance improvement. Your Trusted Advisor dashboard will notify you of improvements you can make, and you can also enable weekly emails alerting you to new or resolved issues.
- Not Using Spot Instances: Spot Instances offer spare compute capacity available in the AWS cloud at steep discounts. They can be interrupted any time the market price for spot instances exceeds your bid price (the amount you’re willing to pay for this intermittent compute service), so spot instances should not be used for mission-critical workloads. However, they’re a great way to save money when running workloads like large-scale data analytics projects or other workloads that aren’t time-sensitive.
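The savings math is worth doing before committing a workload to Spot. A rough back-of-the-envelope sketch — the prices below are illustrative placeholders, not current AWS quotes:

```python
def spot_savings(on_demand_hourly, spot_hourly, hours):
    """Compare on-demand vs. spot cost for an interruption-tolerant job.

    Returns (dollars saved, percent discount). Prices are caller-supplied;
    always check current pricing for your instance type and region.
    """
    saved = (on_demand_hourly - spot_hourly) * hours
    discount_pct = (1 - spot_hourly / on_demand_hourly) * 100
    return saved, discount_pct

# Hypothetical prices for a 100-hour batch analytics run.
saved, pct = spot_savings(on_demand_hourly=0.096, spot_hourly=0.029, hours=100)
print(f"${saved:.2f} saved ({pct:.0f}% discount)")
```

For a long-running, restartable batch job, a discount in this range can dominate the cost picture — which is exactly why interruption-tolerant workloads belong on Spot.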
Managing AWS resources to maximize cost savings while experiencing optimal security and service takes some trial and error. Avoiding these mistakes will help you strike that balance sooner and reap the benefits of the cloud.
Not an ACG for Business member yet?
We provide everything you need to level-up your team’s skills, establish a cloud culture, prepare your business for the future, and get the absolute most out of each and every license.