Identity and access management
Year after year, locking down the human element continues to be the most important and most challenging security control. It is imperative to control who has access to organizational resources and how much access individuals have. Within AWS, several best practices relating to IAM can be implemented to prevent unauthorized or excessive access to data residing within the cloud. They include:
- Administering the admins. The root account should not be used for day-to-day functions; instead, individual accounts should be created for each administrator, and each should be assigned to an administrative group with the proper privileges. Administrative accounts should also be held to stronger security requirements, including adequate password length and complexity and account lockouts, to prevent guessing and brute-force attempts.
- Reviewing account access. AWS IAM Access Analyzer can be used to review users' permissions and remove those that are unused. Users and their roles and groups should be reviewed routinely to make sure excessive access is not provisioned and that users retain only the minimum permissions needed to complete their job functions.
- Key management. Access keys should be rotated every 90 days to reduce the likelihood of keys becoming compromised or reused. Additionally, users should be limited to one access key per account to avoid confusion during rotation. Existing access keys should be reviewed to identify users who might not be using command line access; if users do not require it for their job function, the unnecessary access should be removed to reduce the likelihood of the account becoming compromised. Temporary credentials paired with multifactor authentication (MFA) devices should be used to require authentication for use of the AWS Command Line Interface (CLI). Both the rotation and one-key checks can be automated, as sketched below.
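The following minimal sketch, in Python with boto3, implements that audit: it flags access keys older than 90 days and users holding more than one key. It assumes credentials with read-only IAM permissions are already configured, and it only reports findings rather than disabling anything.

```python
# Minimal sketch: audit IAM access key age and count.
# Assumes boto3 credentials with iam:ListUsers and iam:ListAccessKeys.
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90  # the rotation threshold discussed above

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        keys = iam.list_access_keys(UserName=name)["AccessKeyMetadata"]
        if len(keys) > 1:
            print(f"{name}: more than one access key provisioned")
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                print(f"{name}: key {key['AccessKeyId']} is {age} days old")
```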
Infrastructure hardening and data protection
AWS is attractive to organizations because of its scalability and the ease with which it helps organizations stand up resources virtually. However, it’s important to secure resources to protect sensitive data and reduce the potential attack surface area. The more that can be done to compartmentalize systems within the organization, the more isolated a potential compromise becomes.
Much like segmenting an internal network, access to cloud resources should be limited to the minimum that users and other systems require, to reduce the impact of an incident. Locking down the communication between systems, in addition to hardening individual resources, can quickly increase the security of the cloud infrastructure. The following practices are initial steps that can have a big impact on securing resources hosted within an AWS environment:
- Networking. Network access control lists and security groups can be configured to prevent access from the internet (0.0.0.0/0) to management ports (a security group audit is sketched after this list). Systems should be reviewed to validate that private hosts are not assigned public IPv4 or IPv6 addresses. Additionally, Amazon Elastic Compute Cloud (EC2) instances should be restricted from automatically being assigned public addresses. Subnets and virtual private clouds (VPCs) can be used to restrict systems and logically segment the infrastructure. Components of systems should be isolated in their own VPCs and resources to reduce the impact of a compromise and limit an attacker's ability to pivot through the environment.
- Virtual computing. A catalog of preconfigured virtual images should be used so that strong configuration management and hardening practices are in place the moment a new asset is deployed (see the launch sketch below). Cloud automation tools can deploy EC2 instances with predefined attributes and security configuration settings, and services such as AWS Config can monitor resources and generate alerts when they do not adhere to those attributes. Load balancing and auto-scaling groups should be configured to allow the infrastructure to adapt to increases in traffic or use.
- Cloud storage. Amazon Simple Storage Service (S3) buckets with potentially sensitive information should be locked down by enabling encryption, access and object logging, and versioning to secure and audit access to data (see the bucket-hardening sketch below). Amazon Elastic Block Store (EBS) volumes should be encrypted, and snapshots should be created to secure and back up data used by EC2 instances. Additionally, EBS volumes should not remain attached to EC2 instances when they are not in use.
- Database management. Amazon Relational Database Service (RDS) instances should not be publicly accessible and should be encrypted to secure data at rest, while multiple Availability Zones are used to maintain access to data. Automatic minor version upgrades should also be enabled to keep database engines up to date (see the configuration sketch below).
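To make the networking guidance concrete, the following Python/boto3 sketch reports security groups that expose common management ports to the internet. It assumes credentials with ec2:DescribeSecurityGroups; the port list is an illustrative starting point.

```python
# Minimal sketch: find security groups that expose management ports
# (SSH/RDP) to 0.0.0.0/0 or ::/0.
import boto3

MGMT_PORTS = {22, 3389}            # extend with any other admin ports
OPEN_CIDRS = {"0.0.0.0/0", "::/0"}

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
            if from_port is None:  # protocol "-1" rules cover all ports
                exposed = MGMT_PORTS
            else:
                exposed = {p for p in MGMT_PORTS if from_port <= p <= to_port}
            sources = {r["CidrIp"] for r in rule.get("IpRanges", [])}
            sources |= {r["CidrIpv6"] for r in rule.get("Ipv6Ranges", [])}
            if exposed and sources & OPEN_CIDRS:
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"ports {sorted(exposed)} open to the internet")
```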
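Deploying from a preapproved image catalog can likewise be scripted so that every new instance starts hardened. The sketch below launches an EC2 instance from a catalog image without a public IP and with IMDSv2 required; the AMI, subnet, and security group IDs are hypothetical placeholders.

```python
# Minimal sketch: launch an instance from a hardened catalog image.
# Assumes boto3 credentials with ec2:RunInstances; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hardened image from the catalog
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",   # private subnet
        "Groups": ["sg-0123456789abcdef0"],
        "AssociatePublicIpAddress": False,        # keep the host private
    }],
    MetadataOptions={"HttpTokens": "required"},   # require IMDSv2
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Baseline", "Value": "hardened-v1"}],
    }],
)
```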
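The S3 controls above can also be applied programmatically. This sketch enables default encryption, versioning, and access logging on a single bucket; the bucket names are hypothetical, and the log bucket is assumed to exist and already be locked down.

```python
# Minimal sketch: baseline hardening for one S3 bucket.
# Assumes boto3 credentials with the relevant s3:Put* permissions.
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data"      # hypothetical
log_bucket = "example-access-logs"     # hypothetical, pre-created

# Encrypt new objects by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Versioning, so objects can be recovered and MFA delete layered on later.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# Server access logging to a separate bucket.
s3.put_bucket_logging(
    Bucket=bucket,
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": log_bucket, "TargetPrefix": f"{bucket}/"}
    },
)
```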
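Finally, the database settings described above map directly onto RDS instance attributes. The sketch below disables public accessibility and enables Multi-AZ and automatic minor version upgrades on an existing instance; note that storage encryption must be chosen at creation time and cannot simply be toggled afterward. The instance identifier is hypothetical.

```python
# Minimal sketch: tighten an existing RDS instance's configuration.
# Assumes boto3 credentials with rds:ModifyDBInstance.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-db",   # hypothetical
    PubliclyAccessible=False,            # no direct internet exposure
    MultiAZ=True,                        # maintain access across AZ failures
    AutoMinorVersionUpgrade=True,        # keep the engine patched
    ApplyImmediately=False,              # apply in the maintenance window
)
```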
Logging and monitoring
Logging and monitoring are paramount in producing a clear picture of activities within the cloud environment. While resource health and utilization logging are important for understanding performance and optimization, it is also critical to monitor user activity to detect anomalous behavior potentially indicative of a security event and to maintain compliance with regulatory requirements. AWS offers the CloudTrail and CloudWatch solutions to accomplish these goals, but understanding the differences between them is important when configuring logging and monitoring in a cloud environment.
- User activity. CloudTrail illuminates user activity within AWS: it provides insight into application programming interface (API) calls to services and resources as well as what users are doing within the cloud environment and on the command line. CloudTrail is the audit trail of who did what, when, and from where; it can best be thought of as a ledger recording the changes made to the environment, tracking user account history, and identifying potential security events.
- Performance. CloudWatch can help an organization collect information on use, scalability, and overall performance. CloudWatch can ingest logs from a wide variety of AWS services as well as custom logs from applications and on-premises resources. The ability to configure logs, metrics, and alarms within CloudWatch makes it a strong logging and monitoring tool for identifying what is happening throughout the environment and for enabling quick adaptation and response to changes (an alarm sketch follows this list).
- Best practices. Applying trails to all AWS Regions helps an organization make sure auditing takes place throughout the environment (a trail-creation sketch also follows this list). CloudTrail and CloudWatch can send logs to S3 buckets for storage and review. As with any data being secured, those buckets should not be publicly accessible, and they should be encrypted and have log file validation enabled to maintain integrity. Strong permission settings should govern who can access and change logging and monitoring settings, and MFA delete and versioning should be configured to prevent logs from being removed.
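These trail settings can be established in a few API calls. The following Python/boto3 sketch creates a multi-Region trail with log file validation enabled; it assumes the destination S3 bucket already exists with a policy that allows CloudTrail to write to it, and the names are hypothetical.

```python
# Minimal sketch: a multi-Region CloudTrail trail with integrity validation.
# Assumes boto3 credentials with CloudTrail permissions and a pre-created,
# non-public S3 bucket that CloudTrail is allowed to write to.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",                  # hypothetical
    S3BucketName="example-cloudtrail-logs",  # hypothetical, pre-created
    IsMultiRegionTrail=True,                 # audit every AWS Region
    EnableLogFileValidation=True,            # detect tampering with log files
)

# Trails do not record events until logging is started explicitly.
cloudtrail.start_logging(Name="org-audit-trail")
```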
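On the CloudWatch side, a simple metric alarm illustrates the logs-metrics-alarms workflow mentioned above. This sketch raises an alert when an instance's CPU stays above 80 percent; the instance ID and SNS topic ARN are hypothetical.

```python
# Minimal sketch: a CloudWatch alarm on sustained EC2 CPU utilization.
# Assumes boto3 credentials plus an existing instance and SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # five-minute samples
    EvaluationPeriods=3,        # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```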
Cost optimization
Because the cloud can scale rapidly and provide virtually unlimited computing power, it is important to use only what is required to meet demand. AWS offers several tools to evaluate current costs and trim excess resources to minimize how much an organization spends to maintain its infrastructure.
- Reporting. Cost Explorer allows an organization to review reports that break down costs and their sources. The tool supports comparing costs from month to month, as well as specifics such as cost per hour and which resources cost the most. From the Cost Explorer dashboard, analysts can filter, group, and tag resources to increase savings by adjusting pricing or purchasing a Compute Savings Plan. (A sketch of pulling these reports programmatically appears after the tool list below.)
- Service-specific tools. Several services within AWS offer tools to identify resource waste or allow for automatic optimization. Here are some tools native to common AWS services that can be used to reduce costs:
- S3 analytics analyzes storage access patterns to identify infrequently accessed data in Amazon S3 buckets, which can then be transitioned to lower-cost storage classes to reduce what the organization spends on storage.
- The underutilized EBS volumes check can identify volumes with low activity so that unused volumes and snapshots can be eliminated, reducing the amount of storage kept in EBS.
- The idle database instance check can help reduce costs by identifying unused database instances deployed within Redshift so they can be stopped.
- EC2 Instance Scheduler helps schedule stop and start times for instances that are not required to be powered on and in use 24/7.
- EC2 Operations Conductor can help adjust the size and computing power of Amazon EC2 instances based on AWS Cost Explorer recommendations.
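The same cost data behind the Cost Explorer dashboard is reachable programmatically. The sketch below pulls month-by-month unblended cost grouped by service; it assumes boto3 credentials with ce:GetCostAndUsage, and the date range is illustrative.

```python
# Minimal sketch: month-over-month cost by service via the Cost Explorer API.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # illustrative
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"  {service}: ${amount:,.2f}")
```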
Moving up to the cloud
Migrating to the cloud presents a variety of challenges in the form of security, design, and cost. The benefits of using IaaS are plentiful, but the security gap between the rapid adoption of new services and organizations' understanding of how to implement them securely needs to be addressed proactively.
Strong governance and security controls must be put in place to address new problems that inevitably will arise. Failure to adopt proper security controls around IaaS solutions could result in cloud-hosted assets becoming less secure and more expensive than their traditional counterparts. However, cloud computing is the future of organizational infrastructure. With strong security controls in place, it offers far more benefits than traditional on-premises infrastructure.