An organization must store thousands of sensitive audio and video files in an Amazon S3 bucket.
Organizational security policies require that all data written to this bucket be encrypted.
How can compliance with this policy be ensured?
A. Use AWS Lambda to send notifications to the security team if unencrypted objects are put in the bucket.
B. Configure an Amazon S3 bucket policy to prevent the upload of objects that do not contain the x-amz-server-side-encryption header.
C. Create an Amazon CloudWatch event rule to verify that all objects stored in the Amazon S3 bucket are encrypted.
D. Configure an Amazon S3 bucket policy to prevent the upload of objects that contain the x-amz-server-side-encryption header.
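For reference, the bucket policy described in option B might look like the following sketch (the bucket name example-media-bucket is hypothetical); the Null condition key denies any PutObject request that omits the x-amz-server-side-encryption header:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny any upload that does not carry the x-amz-server-side-encryption
# header. The bucket name below is a hypothetical placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-media-bucket/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-media-bucket", Policy=json.dumps(policy))
```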
A company wants to containerize an existing three-tier web application and deploy it to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. The application is using session data to keep track of user activities. The company needs a solution to store the session data.
Which solution will meet these requirements with the HIGHEST throughput?
A. Provision an Amazon ElastiCache for Redis cluster. Store the session data in the cluster.
B. Create a session table in Amazon Redshift. Store the session data in the table.
C. Place the web application behind a Network Load Balancer with session affinity (sticky sessions) enabled. Store the session data in the containers.
D. Create a new Amazon S3 bucket with S3 Versioning enabled. Store the session data in the S3 bucket.
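As a minimal sketch of option A (the Redis endpoint and key scheme are hypothetical), session data can be written to ElastiCache for Redis with an expiry so idle sessions age out on their own:

```python
import json

import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="sessions.example.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # SETEX writes the value and an expiry in one call, so idle
    # sessions are evicted automatically.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```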
A company is creating a REST service using an Amazon API Gateway with AWS Lambda integration. The service must run different versions for testing purposes. What would be the BEST way to accomplish this?
A. Use an X-Version header to denote which version is being called and pass that header to the Lambda function(s)
B. Create an API Gateway Lambda authorizer to route API clients to the correct API version
C. Create an API Gateway resource policy to isolate versions and provide context to the Lambda function(s)
D. Deploy the API versions as unique stages with unique endpoints and use stage variables to provide further context
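To illustrate option D: with a Lambda proxy integration, API Gateway passes the stage name and any stage variables in the invocation event, so one function can behave differently per stage. A rough sketch (the stage variable lambdaAlias is hypothetical):

```python
# Sketch of a handler that reads stage context from the proxy event.
def handler(event, context):
    stage = event["requestContext"]["stage"]         # e.g. "test" or "v2"
    variables = event.get("stageVariables") or {}
    alias = variables.get("lambdaAlias", "default")  # hypothetical variable
    return {
        "statusCode": 200,
        "body": f"Handled by stage '{stage}' with alias '{alias}'",
    }
```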
In a move toward using microservices, a company's Management team has asked all Development teams to build their services so that API requests depend only on that service's data store. One team is building a Payments service which has its own database; the service needs data that originates in the Accounts database. Both are using Amazon DynamoDB.
What approach will result in the simplest, decoupled, and reliable method to get near-real time updates from the Accounts database?
A. Use AWS Glue to perform frequent ETL updates from the Accounts database to the Payments database.
B. Use Amazon ElastiCache in Payments, with the cache updated by triggers in the Accounts database.
C. Use Amazon Kinesis Data Firehose to deliver all changes from the Accounts database to the Payments database.
D. Use Amazon DynamoDB Streams to deliver all changes from the Accounts database to the Payments database.
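A minimal sketch of option D, assuming the Accounts table's stream is configured to emit new images and a Lambda function is subscribed to it (the replica table name is hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Each stream record describes one change to the Accounts table.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is already in DynamoDB attribute-value format,
            # so it can be written to the Payments-side copy as-is.
            dynamodb.put_item(
                TableName="PaymentsAccountsReplica",  # hypothetical table
                Item=record["dynamodb"]["NewImage"],
            )
```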
A Developer has created an S3 bucket s3://mycoolapp and has enabled server access logging that points to the folder s3://mycoolapp/logs. The Developer moved 100 KB of Cascading Style Sheets (CSS) documents to the folder s3://mycoolapp/css, and then stopped work. When the Developer came back a few days later, the bucket had grown to 50 GB.
What is the MOST likely cause of this situation?
A. The CSS files were not compressed and S3 versioning was enabled.
B. S3 replication was enabled on the bucket.
C. Logging into the same bucket caused exponential log growth.
D. An S3 lifecycle policy has moved the CSS files to S3 Infrequent Access.
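The remedy implied by option C is to point server access logging at a separate bucket, so that delivering a log record does not itself generate another log record. A sketch, with a hypothetical log bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs to a *different* bucket to break the feedback loop.
# The target bucket must grant the S3 log delivery service write access.
s3.put_bucket_logging(
    Bucket="mycoolapp",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "mycoolapp-access-logs",  # hypothetical bucket
            "TargetPrefix": "mycoolapp/",
        }
    },
)
```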
What is required to trace Lambda-based applications with AWS X-Ray?
A. Send logs from the Lambda application to an S3 bucket; trigger a Lambda function from the bucket to send data to AWS X-Ray.
B. Trigger a Lambda function from the application logs in Amazon CloudWatch to submit tracing data to AWS X-Ray.
C. Use an IAM execution role to give the Lambda function permissions and enable tracing.
D. Update and add AWS X-Ray daemon code to relevant parts of the Lambda function to set up the trace.
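Option C amounts to two settings: turn on active tracing and ensure the execution role can send trace data (for example via the AWSXRayDaemonWriteAccess managed policy). A sketch with a hypothetical function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Enable active tracing; Lambda then runs the X-Ray daemon for the
# function and samples incoming invocations automatically.
lambda_client.update_function_configuration(
    FunctionName="my-traced-function",  # hypothetical name
    TracingConfig={"Mode": "Active"},
)
```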
A company is launching a polling application. The application will store the results of each poll in an Amazon DynamoDB table. Management wants to remove poll data after a few days and store an archive of those records in Amazon S3. Which approach would allow the application to archive each poll's data while keeping complexity to a MINIMUM?
A. Enable Time to Live (TTL) on the DynamoDB table. Enable DynamoDB Streams on the table and store the records removed from the stream in Amazon S3.
B. Schedule an AWS Lambda function to periodically scan the DynamoDB table. Use the BatchWriteItem operation to delete the results of the scan. Enable DynamoDB Streams on the table and store the records removed from the stream in Amazon S3.
C. Enable DynamoDB Streams on the table. Configure the stream as a trigger for AWS Lambda. Save records to Amazon S3 when records on the stream are modified.
D. Enable cross-Region replication on the S3 bucket to archive the poll data.
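To sketch the archival half of option A (bucket name and key prefix are hypothetical, and the stream is assumed to emit old images): TTL deletions arrive on the stream as REMOVE events attributed to the dynamodb.amazonaws.com service principal, so a stream-triggered Lambda function can save just those records to Amazon S3:

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        identity = record.get("userIdentity", {})
        # TTL-expired items show up as REMOVE events deleted by the
        # DynamoDB service principal rather than by an application caller.
        if (record["eventName"] == "REMOVE"
                and identity.get("principalId") == "dynamodb.amazonaws.com"):
            s3.put_object(
                Bucket="poll-archive-bucket",  # hypothetical bucket
                Key=f"polls/{record['eventID']}.json",
                Body=json.dumps(record["dynamodb"]["OldImage"]),
            )
```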
A developer has code stored in an Amazon S3 bucket. The code must be deployed as an AWS Lambda function across multiple accounts in the same Region as the S3 bucket. The Lambda function will be deployed using an AWS CloudFormation template that is run for each account. What is the MOST secure approach to allow access to the Lambda code in the S3 bucket?
A. Grant the CloudFormation execution role S3 list and get permissions. Add a bucket policy to Amazon S3 with the Principal of "AWS": [account numbers].
B. Grant the CloudFormation execution role S3 get permissions. Add a bucket policy to Amazon S3 with the Principal of "*".
C. Use a service-based link to grant the Lambda function S3 list and get permissions by explicitly adding the S3 bucket's account number in the resource
D. Use a service-based link to grant the Lambda function S3 get permissions and add a Resource of "*" to allow access to the S3 bucket.
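A bucket policy of the kind option A describes might look like this sketch (the bucket name and account IDs are hypothetical placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")

# Restrict read access to the named accounts only, rather than to "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": [
                "arn:aws:iam::111111111111:root",  # hypothetical accounts
                "arn:aws:iam::222222222222:root",
            ]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::lambda-code-bucket",    # hypothetical bucket
                "arn:aws:s3:::lambda-code-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket="lambda-code-bucket", Policy=json.dumps(policy))
```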
A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table.
What is the MOST likely cause of this issue?
A. The Lambda function's concurrency limit has been exceeded
B. The DynamoDB table requires a global secondary index (GSI) to support writes.
C. The Lambda function does not have IAM permissions to write to DynamoDB.
D. The DynamoDB table is not running in the same Availability Zone as the Lambda function.
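The fix suggested by option C is an execution-role policy that allows writes to the table. A minimal sketch (role, policy, table, Region, and account ID are hypothetical):

```python
import json

import boto3

iam = boto3.client("iam")

# Attach an inline policy granting write access to the target table.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:BatchWriteItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/MyTable",
        }
    ],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",  # hypothetical role
    PolicyName="AllowDynamoDBWrites",
    PolicyDocument=json.dumps(policy),
)
```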
A developer is running an application on an Amazon EC2 instance. When the application tries to read an Amazon S3 bucket, the application fails. The developer notices that the associated IAM role is missing the S3 read permission. The developer needs to give the application the ability to read the S3 bucket.
Which solution will meet this requirement with the LEAST application disruption?
A. Add the permission to the role. Terminate the existing EC2 instance. Launch a new EC2 instance
B. Add the permission to the role so that the change will take effect automatically
C. Add the permission to the role. Hibernate and restart the existing EC2 instance.
D. Add the permission to the S3 bucket. Restart the EC2 instance.
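Option B relies on the fact that credentials delivered through the instance profile are refreshed automatically, so the running application picks up the new permission without a relaunch. As a sketch (the role name is hypothetical; the managed policy shown is one way to grant read access):

```python
import boto3

iam = boto3.client("iam")

# Adding the permission to the role takes effect on the running instance
# without restarting or relaunching it.
iam.attach_role_policy(
    RoleName="my-app-instance-role",  # hypothetical role
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```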