Encryption Controls That SOC 2 Auditors Check in Your Cloud Infrastructure
Understand exactly which encryption controls SOC 2 auditors test, what passing versus failing looks like for each, the most common encryption gaps in AWS and cloud environments, and how to verify your encryption controls are audit-ready before the observation period starts.
Introduction
Encryption is one of the few SOC 2 control categories where auditors can test with precision. Unlike access review processes or change management procedures where they rely heavily on documentation and sampling, encryption controls leave clear, verifiable technical evidence. Either TLS is enforced on your API endpoints or it is not. Either your production database has encryption at rest enabled or it does not. Either your key management policy is being followed or there are keys that have never been rotated sitting in production.
That precision cuts both ways. Organizations with genuinely well-configured encryption pass this category cleanly. Organizations that assumed their cloud provider handled everything, or that enabled encryption in some places but not consistently across the full environment, tend to discover gaps at exactly the wrong moment.
This guide covers what SOC 2 auditors actually test when they examine your encryption controls, what a passing result looks like versus a finding for each control category, where the most common gaps appear in AWS and cloud environments, and the specific checks you should run before your observation period starts to confirm your encryption posture is audit-ready.
Why Encryption Gets Its Own Attention in SOC 2
The Security criterion in SOC 2 addresses encryption through multiple control points, but the core requirement is straightforward: customer data must be protected from unauthorized access, and encryption is a foundational mechanism for that protection. Without encryption at rest, a compromised storage system exposes data directly. Without encryption in transit, data moving between services or between your application and customers can be intercepted.
Auditors approach encryption controls with a specific mindset. They are not asking whether you have an encryption policy. They are asking whether encryption is consistently implemented across every system and communication channel in scope, whether the cryptographic standards in use are current and appropriate, and whether the keys that protect encrypted data are themselves managed with appropriate controls.
The distinction matters because encryption that exists in some places but not others is not a partially passing control. It is a gap in the security boundary, and the weakest point in that boundary is the one that matters for a breach scenario. Auditors assess completeness, not average coverage.
Data at Rest Encryption: What Auditors Test
Data at rest encryption protects stored data from unauthorized access if the underlying storage media is compromised, accessed without authorization, or physically removed. For cloud environments, this primarily means database storage, object storage, disk volumes attached to compute instances, and backup storage.
What Auditors Look For
Auditors testing data at rest encryption will request configuration evidence from each storage category in your in-scope environment. They are looking for confirmation across four dimensions.
First, that encryption is enabled. This sounds obvious but is the starting point. For each storage type, the auditor wants to see that encryption is turned on, not just that it is available as an option.
Second, that encryption covers all in-scope storage. A production database with encryption enabled but unencrypted read replicas, backup snapshots that are not encrypted, or a data export pipeline that writes unencrypted files to object storage represents an incomplete implementation. Auditors look for coverage gaps specifically.
Third, that the encryption standard in use is appropriate. AES-256 for symmetric encryption is the current baseline for data at rest. Weaker standards, such as AES-128 in some legacy setups, or deprecated algorithms lingering in older systems, will draw questions. Auditors also verify that the encryption is applied at the storage layer rather than relying solely on application-layer encryption, which is harder to verify consistently.
Fourth, that encryption keys are managed appropriately. This connects to key management controls discussed separately below, but at the data at rest level, auditors want to confirm that encryption keys are not stored alongside the data they protect.
What Passing Looks Like
For Amazon RDS, a passing configuration shows storage encryption enabled at instance creation with a KMS key specified. For Aurora, encryption enabled at the cluster level with encryption propagating to all instances and automated backups in the cluster. For DynamoDB, server-side encryption enabled using either AWS owned keys or KMS customer managed keys. For S3, default encryption configured at the bucket level using SSE-S3 or SSE-KMS, with bucket policies that deny PutObject requests that do not specify encryption where required.
For compute instances, EBS volumes attached to EC2 instances are encrypted, and account-level settings that enforce EBS encryption by default are enabled. For container environments, persistent volumes used by stateful workloads are encrypted at the storage class level.
Evidence format that auditors typically accept: configuration exports from the AWS Console or CLI, Terraform state showing encryption parameters, or CSPM tool reports that map encryption status per resource. Screenshots can supplement but should not be the only evidence for a control as technically verifiable as this one.
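One way to turn a configuration export into per-resource evidence is a small script over the `describe-db-instances` JSON. A minimal sketch, assuming the field names follow the AWS CLI output shape (the sample data is illustrative, not from a real account):

```python
import json

def rds_encryption_report(describe_output: str) -> list[dict]:
    """Summarize storage-encryption status per RDS instance.

    Expects the JSON emitted by `aws rds describe-db-instances`.
    """
    data = json.loads(describe_output)
    return [
        {
            "identifier": inst.get("DBInstanceIdentifier"),
            "encrypted": bool(inst.get("StorageEncrypted", False)),
            "kms_key": inst.get("KmsKeyId"),  # present only when encrypted
        }
        for inst in data.get("DBInstances", [])
    ]

# Illustrative export: one compliant instance, one finding.
sample = json.dumps({"DBInstances": [
    {"DBInstanceIdentifier": "prod-db", "StorageEncrypted": True,
     "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd"},
    {"DBInstanceIdentifier": "legacy-db", "StorageEncrypted": False},
]})
findings = [r for r in rds_encryption_report(sample) if not r["encrypted"]]
print(findings)  # the unencrypted legacy-db instance surfaces as a finding
```

Storing the dated JSON export alongside the script output gives you exactly the kind of reproducible, timestamped evidence auditors prefer over screenshots.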
What Failing Looks Like
Common findings in this category include: RDS instances with encryption not enabled because they were created before the organization had an encryption policy and were never migrated to encrypted instances. S3 buckets with no default encryption configured because they predate S3's default encryption rollout and were never updated. EBS volumes attached to older EC2 instances that were launched before account-level encryption enforcement was implemented. Database backup snapshots that are unencrypted even when the source instance is encrypted, because the snapshot configuration was not explicitly set.
Each of these is a finding with a specific remediation path, and auditors will identify them through configuration analysis rather than by taking your word that encryption is in place.
Data in Transit Encryption: What Auditors Test
Data in transit encryption protects data moving between systems, whether between your application and customers, between internal services, or between your environment and third-party vendors. For SOC 2 purposes, this means TLS configuration on all channels where customer data flows.
What Auditors Look For
Auditors testing data in transit encryption focus on two areas: external-facing communication channels and internal service-to-service communication.
For external-facing channels, the test is whether TLS is enforced and whether the TLS configuration meets current cryptographic standards. Auditors check that HTTP traffic is redirected to HTTPS rather than served in plaintext. They check TLS version support to confirm that TLS 1.2 at minimum, and preferably TLS 1.3, is enforced and that TLS 1.0 and TLS 1.1 have been disabled. They check cipher suite configuration to confirm that weak or deprecated ciphers are not supported. They verify that certificates are valid, issued by a trusted Certificate Authority, and not expired or near expiration.
For internal service-to-service communication, the test is whether services communicating within your production environment use TLS, particularly for channels where customer data passes. This includes database connections from application servers to databases, API calls between microservices, and connections to managed services such as caching layers and message queues.
What Passing Looks Like
For external-facing endpoints, a passing configuration enforces HTTPS, returns a valid certificate with appropriate chain of trust, supports TLS 1.2 and TLS 1.3, and rejects connections using TLS 1.0 or 1.1. Tools like SSL Labs provide a standardized assessment of TLS configuration that auditors recognize, and an SSL Labs rating of A or A+ for customer-facing endpoints is a strong evidence artifact.
Load balancer configurations for AWS Application Load Balancers should show a security policy that enforces TLS 1.2 minimum, such as ELBSecurityPolicy-TLS13-1-2-2021-06 or current equivalent. Listener configurations should redirect HTTP to HTTPS with a 301 redirect. HTTPS listeners should use certificates from AWS Certificate Manager or a validated external CA.
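A listener review of this kind can be scripted against the `describe-listeners` export. A sketch, assuming the AWS CLI output shape; the deprecated-policy set below is illustrative, not an exhaustive list:

```python
# Older ELB security policies that still accept TLS 1.0/1.1.
# Illustrative subset; extend it from AWS's published policy list.
DEPRECATED_POLICIES = {
    "ELBSecurityPolicy-2016-08",
    "ELBSecurityPolicy-TLS-1-0-2015-04",
    "ELBSecurityPolicy-TLS-1-1-2017-01",
}

def listener_findings(listeners: list[dict]) -> list[str]:
    """Flag weak ALB listeners (shape follows `aws elbv2 describe-listeners`)."""
    findings = []
    for lst in listeners:
        if lst["Protocol"] == "HTTPS" and lst.get("SslPolicy") in DEPRECATED_POLICIES:
            findings.append(
                f"port {lst['Port']}: policy {lst['SslPolicy']} permits TLS 1.0/1.1")
        if lst["Protocol"] == "HTTP" and not any(
                a.get("Type") == "redirect" for a in lst.get("DefaultActions", [])):
            findings.append(f"port {lst['Port']}: HTTP served without HTTPS redirect")
    return findings

sample = [
    {"Port": 443, "Protocol": "HTTPS",
     "SslPolicy": "ELBSecurityPolicy-TLS13-1-2-2021-06"},
    {"Port": 80, "Protocol": "HTTP", "DefaultActions": [{"Type": "forward"}]},
]
print(listener_findings(sample))  # the port-80 listener is the finding
```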
For database connections, the RDS SSL/TLS parameter requiring encrypted connections should be enforced at the parameter group level, not left as optional for clients to negotiate. For services connecting to RDS, the connection string should specify SSL mode required rather than preferred or disabled.
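Verifying the parameter group setting can be done against the `describe-db-parameters` export. A sketch, assuming the AWS CLI output shape; note the relevant parameter name differs by engine (`rds.force_ssl` for PostgreSQL, `require_secure_transport` for MySQL/MariaDB):

```python
def ssl_required(parameters: list[dict], engine: str) -> bool:
    """Check whether the parameter group forces TLS connections.

    `parameters` follows `aws rds describe-db-parameters`.
    """
    name = "rds.force_ssl" if engine == "postgres" else "require_secure_transport"
    values = {p["ParameterName"]: p.get("ParameterValue") for p in parameters}
    return values.get(name) in ("1", "ON")

# A parameter group leaving TLS optional for clients: a finding.
params = [{"ParameterName": "rds.force_ssl", "ParameterValue": "0"}]
print(ssl_required(params, "postgres"))  # False
```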
For internal service communication in containerized environments, service mesh configurations or application-level TLS termination evidence demonstrates that internal traffic is encrypted. In environments using AWS PrivateLink or VPC endpoints for service connectivity, documentation of that network architecture supports the transit encryption control.
What Failing Looks Like
The most common data in transit findings include: HTTP endpoints that serve content without redirecting to HTTPS, either for legacy internal tools or for endpoints that were not added to the reverse proxy configuration when they were created. Support for TLS 1.0 or TLS 1.1 on customer-facing endpoints, which remains enabled in some environments that have not updated their load balancer security policies from older defaults. Database connections configured with SSL mode set to preferred or disabled rather than required, meaning clients can connect over unencrypted channels. Expired or self-signed certificates on internal services that handle customer data.
The internal service communication gap is the one most organizations do not catch until an auditor looks. Teams are rigorous about HTTPS on customer-facing APIs but less rigorous about the connection from the API service to the database or from an internal processing service to its dependencies.
Certificate Management as Part of Transit Encryption
Certificate management is tested as part of data in transit controls because a certificate that expires or is replaced incorrectly breaks the TLS chain, producing either a service outage or a downgrade to an insecure fallback. Auditors look for evidence of a certificate inventory and renewal process.
A passing certificate management posture includes: certificates inventoried with expiration dates tracked, automated renewal in place for certificates managed through AWS Certificate Manager or Let's Encrypt, and an alert or monitoring process that flags certificates approaching expiration (typically 30 days out). Evidence of this control can be ACM configuration exports showing auto-renewal status, monitoring alert configurations, or records of certificate renewals completed during the observation period.
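The expiration-window check at the heart of this control is simple enough to sketch directly. The inventory format below is a stand-in for whatever your certificate tracking produces:

```python
from datetime import date, timedelta

def expiring(inventory: list[tuple[str, date]], today: date,
             window_days: int = 30) -> list[str]:
    """Return certificates expiring within the alert window (or already expired)."""
    cutoff = today + timedelta(days=window_days)
    return [name for name, not_after in inventory if not_after <= cutoff]

# Hypothetical inventory entries.
inventory = [
    ("api.example.com", date(2025, 9, 1)),
    ("internal.example.com", date(2026, 3, 1)),
]
print(expiring(inventory, today=date(2025, 8, 20)))  # flags api.example.com
```

Wiring a check like this into a scheduled job with alerting, and retaining the alert configuration, is the kind of evidence that demonstrates the renewal process operated throughout the observation period.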
A failing certificate management posture is one where certificates expire without detection, where renewal is entirely manual with no tracking, or where wildcard certificates are reused across environments in ways that blur the scope boundary.
Key Management: What Auditors Test
Encryption is only as strong as the protection of the keys used to encrypt. Key management is the control category that auditors treat as the next layer of scrutiny once they have confirmed that encryption is enabled. Weak key management can make nominally encrypted data effectively unprotected.
What Auditors Look For
Auditors testing key management focus on four areas: where keys are stored, who can access them, how they are rotated, and what happens when they need to be revoked.
Key storage. Encryption keys should be stored separately from the data they protect and managed through a dedicated key management service rather than embedded in application code, stored in source control, or hardcoded in configuration files. For AWS environments, AWS Key Management Service is the standard. For GCP, Cloud KMS. For Azure, Azure Key Vault. Auditors verify that keys are in dedicated management services rather than in application code or configuration repositories.
Key access controls. KMS keys should have access policies that limit which principals can perform cryptographic operations, which can manage the key, and which can administer key policy. The separation between users who can use a key to encrypt or decrypt data and users who can administer the key policy is a control that auditors specifically examine. Broad key policies that grant all AWS principals in the account access to a key are a common finding.
Key rotation. Keys should be rotated on a documented schedule. For AWS KMS customer managed keys, automatic annual key rotation is supported and should be enabled for most keys. For keys managed outside KMS, a documented manual rotation process with evidence of rotation history is required. Auditors look at whether rotation is enabled and, for Type II audits, whether it has actually occurred during the observation period.
Key usage logging. AWS CloudTrail logs all KMS API calls including Encrypt, Decrypt, GenerateDataKey, and key management operations. Auditors look for evidence that KMS API calls are being logged and that the logs are retained appropriately. Unusual patterns in key usage, such as Decrypt calls from unexpected principals or at unexpected times, should be detectable through the logging configuration.
What Passing Looks Like
For AWS KMS, a passing configuration shows customer managed keys created for distinct purposes, key policies that limit access to specific IAM roles or services on a need-to-use basis, automatic annual rotation enabled, and CloudTrail logging capturing KMS API calls delivered to a central log destination. Key aliases are used to reference keys in application configurations rather than key ARNs that might encourage developers to hardcode key references.
Evidence that auditors accept includes KMS key policy exports, CloudTrail configuration showing logging of KMS events, key rotation status from the AWS Console or CLI, and access records showing which principals used which keys during the observation period.
What Failing Looks Like
The most common key management findings include: KMS keys with overly permissive key policies that allow broad access rather than restricting to specific services and roles. Automatic rotation not enabled for customer managed keys. Application code or infrastructure configuration files containing hardcoded encryption keys or secrets rather than referencing keys through a managed service. Keys created by developers for one-off use that were never cleaned up and still have active permissions. No documented process for key revocation or replacement in the event of a compromise.
The Most Common Encryption Gaps in AWS and Cloud Environments
Beyond the category-by-category findings above, certain encryption gaps appear consistently across cloud environments during SOC 2 assessment. Knowing these patterns lets you check for them specifically before the observation period begins.
S3 Buckets Created Before Default Encryption Was Enforced
AWS made S3 default encryption a platform-level behavior in January 2023, meaning new buckets get server-side encryption with Amazon S3 managed keys by default. However, buckets created before that change, or before your organization implemented a stricter encryption policy, may not have encryption enabled or may use weaker key management than your policy requires.
Run an audit of all S3 buckets in in-scope accounts and verify the encryption configuration explicitly for each one. Do not assume that because new buckets have default encryption, all buckets do.
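A bucket-by-bucket audit reduces to collecting the `get-bucket-encryption` result for each bucket and flagging the gaps. A sketch, where the input dict stands in for those collected results (`None` marks buckets where the call returned no configuration):

```python
def buckets_missing_default_encryption(configs: dict) -> list[str]:
    """`configs` maps bucket name to the SSE algorithm in its default
    encryption rule ("AES256" or "aws:kms"), or None where
    `aws s3api get-bucket-encryption` returned no configuration."""
    return sorted(name for name, algo in configs.items() if algo is None)

# Hypothetical collected results across in-scope buckets.
configs = {
    "prod-uploads": "aws:kms",
    "legacy-exports": None,   # created pre-2023, never updated
    "logs": "AES256",
}
print(buckets_missing_default_encryption(configs))  # flags legacy-exports
```

If your policy requires KMS keys for sensitive buckets, the same pass can also flag buckets using "AES256" where "aws:kms" is mandated.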
Unencrypted Database Snapshots
RDS automated backups and manual snapshots inherit the encryption status of the source instance, but there are scenarios where unencrypted snapshots exist: snapshots taken before encryption was enabled on the instance, manual snapshots created from unencrypted read replicas, or snapshots shared across accounts where encryption keys are not available in the destination account.
Audit your RDS snapshot inventory for unencrypted snapshots and either delete snapshots that contain data you no longer need or migrate the data to encrypted storage.
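The snapshot audit is a filter over the `describe-db-snapshots` export. A sketch, assuming the AWS CLI output shape with illustrative snapshot identifiers:

```python
def unencrypted_snapshots(describe_output: dict) -> list[str]:
    """Flag unencrypted snapshots (shape follows `aws rds describe-db-snapshots`)."""
    return [s["DBSnapshotIdentifier"]
            for s in describe_output.get("DBSnapshots", [])
            if not s.get("Encrypted", False)]

snapshots = {"DBSnapshots": [
    {"DBSnapshotIdentifier": "prod-2024-01-01", "Encrypted": True},
    {"DBSnapshotIdentifier": "pre-encryption-backup", "Encrypted": False},
]}
print(unencrypted_snapshots(snapshots))  # flags pre-encryption-backup
```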
ElastiCache Without Encryption in Transit
ElastiCache for Redis supports both encryption at rest and encryption in transit, but neither is enabled by default. Organizations that deployed ElastiCache clusters for session storage, caching, or rate limiting without explicitly configuring transit encryption are passing customer data through an unencrypted channel between application servers and the cache.
Check ElastiCache cluster configurations for transit encryption (the in-transit encryption parameter). Note that enabling transit encryption on an existing cluster typically requires creating a new cluster, which should be planned as a change management event.
Secrets in Environment Variables and Parameter Store Without Encryption
AWS Systems Manager Parameter Store supports both standard parameters and SecureString parameters. SecureString parameters use KMS for encryption. Standard parameters are stored in plaintext. Organizations that store database credentials, API keys, or other secrets as standard parameters rather than SecureString parameters have effectively stored credentials in plaintext in a central location.
Audit your Parameter Store parameters for any that should be SecureString but are stored as standard parameters. Similarly, check your Lambda function environment variables and ECS task definitions for secrets that should be in Secrets Manager or Parameter Store but are stored as plaintext environment variable values.
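A first-pass screen over the `describe-parameters` export can surface the likeliest offenders. A sketch, assuming the AWS CLI output shape; the name pattern is a heuristic you would tune to your own naming scheme:

```python
import re

# Name fragments that usually indicate a secret; illustrative, not exhaustive.
SECRET_HINT = re.compile(r"password|secret|token|api[-_]?key|credential", re.IGNORECASE)

def plaintext_secret_candidates(params: list[dict]) -> list[str]:
    """Flag non-SecureString parameters with secret-like names.

    `params` follows `aws ssm describe-parameters` (Name, Type per parameter).
    """
    return [p["Name"] for p in params
            if p["Type"] != "SecureString" and SECRET_HINT.search(p["Name"])]

# Hypothetical parameter inventory.
params = [
    {"Name": "/prod/db/password", "Type": "String"},
    {"Name": "/prod/api/token", "Type": "SecureString"},
    {"Name": "/prod/feature-flags", "Type": "String"},
]
print(plaintext_secret_candidates(params))  # flags /prod/db/password
```

A name-based screen will miss secrets with innocuous names, so treat a clean result as a starting point rather than proof.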
TLS Configuration on Internal Load Balancers
Internal Application Load Balancers serving traffic between microservices are sometimes left with less rigorous TLS configurations than external ALBs, on the assumption that internal traffic is trusted. This assumption does not hold in environments with broad network access, and it does not satisfy the SOC 2 requirement for encryption of data in transit for customer data regardless of whether that transit is internal or external.
Check the listener configurations and security policies on internal load balancers that handle customer data, not just external-facing ones.
CloudFront Distributions Allowing HTTP Origins
AWS CloudFront can be configured to serve HTTPS to clients while connecting to origin servers over HTTP, which means data in transit between CloudFront and your origin is unencrypted despite the customer connection being encrypted. The origin protocol policy for in-scope distributions should be set to HTTPS only, not HTTP only or match viewer.
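Checking origin protocol policies can be done against the `get-distribution-config` export. A sketch, assuming the AWS CLI output shape; S3 REST origins have no CustomOriginConfig and are skipped here:

```python
def insecure_origins(distribution_config: dict) -> list[str]:
    """Flag custom origins not locked to https-only.

    Shape follows `aws cloudfront get-distribution-config`.
    """
    flagged = []
    for origin in distribution_config.get("Origins", {}).get("Items", []):
        custom = origin.get("CustomOriginConfig")
        if custom and custom.get("OriginProtocolPolicy") != "https-only":
            flagged.append(f"{origin['Id']}: {custom['OriginProtocolPolicy']}")
    return flagged

# Hypothetical distribution with one non-compliant custom origin.
config = {"Origins": {"Items": [
    {"Id": "app-origin", "CustomOriginConfig": {"OriginProtocolPolicy": "match-viewer"}},
    {"Id": "assets-s3", "S3OriginConfig": {}},
]}}
print(insecure_origins(config))  # flags app-origin
```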
KMS Keys Without Rotation Enabled
Automatic annual rotation is a straightforward configuration setting in KMS, but it is opt-in: it is not enabled by default for customer managed keys and must be turned on explicitly for each key. Organizations that created KMS keys early in their AWS journey frequently have keys that have never been rotated.
Export your KMS key inventory and filter for customer managed keys with automatic rotation disabled. For keys used to protect production data, enable rotation and document the policy.
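The filter itself is simple once the inventory is exported. A sketch, assuming each entry combines `aws kms describe-key` output (KeyId, KeyManager) with the result of `aws kms get-key-rotation-status`:

```python
def rotation_gaps(keys: list[dict]) -> list[str]:
    """Flag customer managed keys without automatic rotation enabled."""
    return [k["KeyId"] for k in keys
            if k.get("KeyManager") == "CUSTOMER" and not k.get("KeyRotationEnabled")]

# Hypothetical combined inventory.
keys = [
    {"KeyId": "1111-aaaa", "KeyManager": "CUSTOMER", "KeyRotationEnabled": True},
    {"KeyId": "2222-bbbb", "KeyManager": "CUSTOMER", "KeyRotationEnabled": False},
    {"KeyId": "3333-cccc", "KeyManager": "AWS"},  # AWS managed; AWS handles rotation
]
print(rotation_gaps(keys))  # flags 2222-bbbb
```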
How to Verify Encryption Controls Are Audit-Ready
Running through the specific checks below before your observation period begins gives you a clear view of your encryption posture and time to remediate gaps before they become audit findings.
The Pre-Observation Encryption Checklist
For data at rest, verify the following across all in-scope accounts and regions.
Confirm that all RDS instances, including read replicas and multi-AZ standby instances, have storage encryption enabled. Run the AWS CLI command or console query that lists instances and their encryption status. Confirm that RDS automated backups and manual snapshots associated with encrypted instances are also encrypted. Check that DynamoDB tables storing customer data have server-side encryption enabled. Verify that all S3 buckets in scope have bucket-level encryption configured and that bucket policies deny unencrypted PutObject requests for buckets containing sensitive data. Confirm that EBS volumes attached to in-scope EC2 instances are encrypted and that account-level EBS encryption defaults are enabled in all regions where production workloads run. Check ElastiCache clusters for encryption at rest configuration.
For data in transit, run the following verifications.
Test all customer-facing endpoints using an SSL/TLS analysis tool to confirm TLS 1.2 minimum, no support for TLS 1.0 or 1.1, and a valid certificate chain. Check Application Load Balancer security policies to confirm they enforce current TLS standards. Verify that HTTP listeners redirect to HTTPS rather than serving content directly. Confirm that RDS parameter groups enforce SSL connections at the database level rather than making SSL optional. Check ElastiCache transit encryption configuration. Verify that CloudFront origin protocol policies are set to HTTPS only for in-scope distributions.
For key management, run through the following.
Export your KMS key inventory and confirm that customer managed keys used for production data have automatic annual rotation enabled. Review key policies for each CMK to confirm that access is restricted to specific IAM roles and services rather than broad account-level access. Verify that CloudTrail is logging KMS API events and that logs are being delivered and retained per your policy. Audit your Parameter Store parameters and Lambda environment variables for plaintext secrets that should be managed through Secrets Manager or SecureString parameters.
Documenting the Results
The pre-observation checklist is not just an internal exercise. The results of these checks, documented and dated, are the starting point for your encryption controls evidence. An auditor's request for evidence that your encryption controls were in place at the start of the observation period can be answered with the output of this verification process, provided you documented it with timestamps and exported configuration data rather than just noting that you checked.
Run the checks, export the configuration outputs, and store them in your evidence repository alongside the date the verification was performed. This gives you a baseline evidence artifact that predates the start of the observation period and demonstrates that controls were in place before the clock started.
Encryption Policy: The Document Behind the Controls
Technical encryption controls without a supporting policy document create an evidence problem. Auditors want to see that your encryption implementation reflects a deliberate policy decision rather than a collection of independent configuration choices made by different engineers at different times.
Your encryption policy should specify at minimum: the encryption standards required for data at rest and data in transit, the key management requirements including rotation frequency and access controls, the process for handling exceptions when a system cannot meet the standard, and the ownership and review cycle for the policy itself.
A policy that accurately describes what your environment actually does is far more valuable than an aspirational policy that describes what you intend to do. Auditors who read a policy stating that all data at rest must be encrypted using AES-256 with customer managed keys and then find RDS instances using AWS managed keys without documentation of a risk acceptance for the deviation will flag the inconsistency. The policy and the implementation need to align.
If your current implementation uses a mix of AWS managed keys and customer managed keys based on data sensitivity, your policy should reflect that distinction rather than prescribing a single standard that your environment does not uniformly follow.
Conclusion
Encryption controls are among the most technically precise elements of a SOC 2 audit. They are also among the most fixable in advance because the gaps are identifiable through systematic configuration review rather than process observation.
The organizations that pass encryption controls cleanly are not the ones that have done the most sophisticated cryptographic engineering. They are the ones that have been consistent: encryption enabled everywhere in scope, TLS enforced on every channel that carries customer data, keys managed through dedicated services with access controls and rotation, and a policy document that accurately reflects what the environment actually does.
Run the checklist before your observation period begins. Find the gaps while you have time to remediate them. Document the verification so you have evidence that the controls were in place from the start. And when the auditor arrives and asks about your encryption posture, your answer will be a verified configuration record rather than a confident assumption.