In 2014, an attacker compromised the Amazon Web Services (AWS) account that Code Spaces used to run its commercial code-hosting service. The attacker gained access to the company’s control panel and demanded money. When Code Spaces refused, the attacker began systematically deleting Code Spaces’ resources hosted on AWS, including all EBS snapshots, S3 buckets, AMIs, some EBS volumes, and a number of machine instances. While Code Spaces maintained backups of its resources, those too were controlled from the same panel and were permanently erased. The attack was so devastating that it forced Code Spaces, a thriving company, to shut down for good.
With the emergence of cloud services such as AWS, the threat landscape has evolved, but with the right preparation any company can implement security practices in AWS that significantly reduce the potential impact of a cyber-attack.
While the story of Code Spaces is perhaps the worst-case scenario of what can happen when a hacker successfully attacks an organization’s AWS environment, an incident that results in downtime of even a few hours can have a sizable impact. For example, in August 2016, a six-hour application outage at Delta Airlines delayed flights for hundreds of thousands of passengers and is estimated to have cost the company tens of millions of dollars. Below is a set of best practices that every organization should implement to protect its AWS environment and the applications deployed in it. You can also download our ebook to learn how a CASB can protect your AWS environment and the custom applications running in AWS.
AWS infrastructure security best practices
1) Familiarize yourself with AWS’s shared responsibility model for security
Like most cloud providers, Amazon operates under a shared responsibility model. Amazon takes responsibility for the security of its infrastructure, and has made platform security a priority in order to protect customers’ critical information and applications. Amazon detects fraud and abuse, and responds to incidents by notifying customers. The customer, however, is responsible for ensuring that their AWS environment is configured securely, that data is not shared with anyone it shouldn’t be shared with inside or outside the company, that misuse of AWS by users is identified, and that compliance and governance policies are enforced.
- Amazon’s responsibility – Since it has little control over how AWS is used by its customers, Amazon has focused on the security of AWS infrastructure, including protecting its computing, storage, networking, and database services against intrusions. Amazon is responsible for the security of the software, hardware, and the physical facilities that host AWS services. Amazon also takes responsibility for the security configuration of its managed services such as Amazon DynamoDB, RDS, Redshift, Elastic MapReduce, WorkSpaces, etc.
- Customer’s responsibility – AWS customers are responsible for secure usage of AWS services that are considered unmanaged. For example, while Amazon has built several layers of security features, including multifactor authentication, to prevent unauthorized access to AWS, it is the responsibility of the customer to make sure multifactor authentication is turned on for users, particularly for those with the most extensive IAM permissions in AWS.
2) Tighten CloudTrail security configurations
CloudTrail is an AWS service that generates log files of all API calls made within AWS, including the AWS management console, SDKs, command line tools, etc. This capability allows organizations to continuously monitor activities in AWS for compliance auditing and post-incident forensic investigations. The generated log files are stored in an S3 bucket. If a cyber attacker gains access to an AWS account, one of the first things they’ll do is disable CloudTrail and delete the log files. To get the full benefit of CloudTrail, organizations must:
- Enable CloudTrail across all geographic regions and AWS services to prevent activity monitoring gaps.
- Turn on CloudTrail log file validation so that any changes made to a log file after it has been delivered to the S3 bucket can be detected, ensuring log file integrity.
- Enable access logging for the CloudTrail S3 bucket so that you can track access requests and identify potentially unauthorized or unwarranted access attempts.
- Require multifactor authentication (MFA) to delete CloudTrail S3 buckets, and encrypt all CloudTrail log files in flight and at rest.
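As a sketch of how these settings map onto the CloudTrail API, the snippet below builds the parameters a boto3 `create_trail` call would take, along with the S3 bucket policy CloudTrail needs in order to deliver log files. The trail name, bucket, account ID, and KMS key ARN are placeholders, not values from this article.

```python
import json

def cloudtrail_trail_params(trail_name, bucket, kms_key_arn):
    """Parameters for CloudTrail's CreateTrail API (boto3:
    cloudtrail.create_trail), covering the checklist above:
    multi-region coverage, log file validation, and at-rest
    encryption of the delivered log files."""
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,       # capture activity in every region
        "EnableLogFileValidation": True,  # tamper-evident digest files
        "KmsKeyId": kms_key_arn,          # encrypt log files at rest
    }

def cloudtrail_bucket_policy(bucket, account_id):
    """S3 bucket policy that lets the CloudTrail service write log files
    while the bucket owner retains full control of each object."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    })
```

Keeping these settings in code makes them easy to audit and to re-apply quickly if a trail is ever disabled or tampered with.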
3) Follow Identity and Access Management (IAM) best practices
IAM is an AWS service that provides user provisioning and access control capabilities for AWS users. AWS administrators can use IAM to create and manage AWS users and groups and apply granular permission rules to users and groups of users to limit access to AWS APIs and resources. To make the most of IAM, organizations should:
- Attach IAM policies to groups or roles rather than individual users, to minimize the risk of an individual user accidentally acquiring excessive or unnecessary permissions or privileges.
- Provision access to a resource using IAM roles instead of providing an individual set of credentials for access to ensure that misplaced or compromised credentials don’t lead to unauthorized access to the resource.
- Ensure IAM users are given the minimal access privileges to AWS resources that still allow them to fulfill their job responsibilities.
- As a last line of defense against a compromised account, ensure all IAM users have multifactor authentication activated for their individual accounts, and limit the number of IAM users with administrative privileges.
- Rotate IAM access keys regularly, and standardize on a selected number of days for password expiration, to ensure that data cannot be accessed with a lost or stolen key.
- Enforce a strong password policy requiring a minimum of 14 characters containing at least one number, one upper-case letter, and one symbol. Apply a password reset policy that prevents users from reusing any of their last 24 passwords.
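As an illustration, the password rules above map directly onto parameters of IAM’s UpdateAccountPasswordPolicy API (boto3: `iam.update_account_password_policy`). The 90-day expiration window below is an assumed example; the article does not prescribe a specific number of days.

```python
def iam_password_policy():
    """Parameters for IAM's UpdateAccountPasswordPolicy API
    (boto3: iam.update_account_password_policy) matching the
    password rules described above."""
    return {
        "MinimumPasswordLength": 14,          # at least 14 characters
        "RequireNumbers": True,               # at least one number
        "RequireUppercaseCharacters": True,   # at least one upper-case letter
        "RequireSymbols": True,               # at least one symbol
        "PasswordReusePrevention": 24,        # block reuse of the last 24 passwords
        "MaxPasswordAge": 90,                 # assumed expiration window, in days
        "AllowUsersToChangePassword": True,
    }

# Applying it would look like:
#   boto3.client("iam").update_account_password_policy(**iam_password_policy())
```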
4) Follow security best practices when using AWS database and data storage services
Amazon offers several database services to its customers, including Amazon RDS (relational DB), Aurora (MySQL-compatible relational DB), DynamoDB (NoSQL DB), Redshift (petabyte-scale data warehouse), and ElastiCache (in-memory cache). Amazon also provides data storage services with their Elastic Block Store (EBS) and S3 services. Below are some best practices around AWS database and data storage security:
- Turn on Redshift audit logging in order to support auditing and post-incident forensic investigations for a given database.
- Encrypt data stored in EBS as an added layer of security.
- Encrypt Amazon RDS as an added layer of security.
- Enable the require_ssl parameter in all Redshift clusters to minimize the risk of man-in-the-middle attacks.
- Restrict access to RDS instances to decrease the risk of malicious activities such as brute force attacks, SQL injections, or DoS attacks.
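One common way to restrict access to an RDS instance is a narrowly scoped security group rule. Under the assumption of a PostgreSQL database that should be reachable only from an application subnet, the sketch below builds the parameters for EC2’s AuthorizeSecurityGroupIngress API (boto3: `ec2.authorize_security_group_ingress`); the group ID and CIDR are placeholders.

```python
def rds_ingress_params(security_group_id, app_cidr):
    """Parameters for EC2's AuthorizeSecurityGroupIngress API
    (boto3: ec2.authorize_security_group_ingress) that open the
    database port only to a specific application subnet."""
    return {
        "GroupId": security_group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 5432,  # PostgreSQL; use 3306 for MySQL
            "ToPort": 5432,
            "IpRanges": [{
                "CidrIp": app_cidr,  # app tier only -- never 0.0.0.0/0
                "Description": "app tier to RDS",
            }],
        }],
    }
```

Keeping the database off the public internet this way cuts off the network path that brute-force, SQL-injection, and DoS attempts would otherwise use.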
Custom applications security best practices
5) Inventory and categorize all existing custom applications deployed in AWS
Inventory applications by the types of data stored in them, their compliance requirements, and possible threats they may face. Visibility into sensitive data enables the security team to identify which internal and external regulations apply to an app and its data, and what kind of security controls must be in place to protect it.
6) Involve IT security teams throughout the application development lifecycle
DevOps teams should involve IT security throughout the lifecycle, inviting them to apply their own application testing tools and methodologies as code is pushed to production, without slowing down the release process. IT security should also ensure that application end users are using the app in a secure manner.
7) Grant the fewest privileges possible for application users
Unrestricted or overly permissive user accounts increase the risk and damage of an external or internal threat. Application administrators should limit a user’s permissions to a level where they can only do what’s necessary to accomplish their job duties.
8) Enforce a single set of data loss prevention policies
Enforce consistent policies across custom applications and all other cloud services. By using a single DLP policy engine, incident reporting, and remediation workflow, an organization can drive greater operational efficiency while also preventing policy enforcement gaps between cloud services.
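To make the idea concrete, here is a minimal, illustrative sketch of a single policy engine: one shared rule set evaluated against content from any service, so detections and incident records stay uniform. The rules and record format are invented for illustration, not taken from any particular DLP product.

```python
import re

# One rule set shared by every connected service: changing a rule here
# changes enforcement everywhere at once, instead of per-service copies
# drifting apart and leaving enforcement gaps.
DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16-digit card number
}

def scan(service, text):
    """Return one uniform incident record per matching rule, regardless
    of which cloud service or custom app the content came from."""
    return [{"service": service, "rule": name}
            for name, pattern in DLP_RULES.items() if pattern.search(text)]
```

Because every incident record has the same shape, the same reporting and remediation workflow can consume findings from custom AWS apps and sanctioned SaaS services alike.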
9) Encrypt highly sensitive data such as protected health information (PHI) or personally identifiable information (PII) using customer-controlled keys.
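As one illustrative approach, S3 objects can be encrypted under a customer-managed KMS key by setting the server-side encryption parameters on S3’s PutObject call (boto3: `s3.put_object`); the bucket, object key, and key ARN below are placeholders.

```python
def encrypted_put_params(bucket, key, body, cmk_arn):
    """Parameters for S3's PutObject API (boto3: s3.put_object) that
    force server-side encryption under a customer-managed KMS key."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # SSE-KMS, not S3-managed keys
        "SSEKMSKeyId": cmk_arn,             # customer-managed key you can rotate or revoke
    }
```

The advantage of a customer-managed key is control: the organization, not the provider alone, decides who can use the key and can revoke or rotate it, which limits what an attacker with account access can read.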