AWS Solutions Architect Test 2 Exam
Questions And Answers
A financial services company is looking to transition its IT infrastructure from on-premises to
AWS Cloud. They are moving towards a single log processing model for all their log files
(consisting of system logs, application logs, database logs, etc.) that can be processed in a
serverless fashion and then durably stored for downstream analytics. They want to use an AWS
managed service that automatically scales to match the throughput of the log data and requires
no ongoing administration.
As a solutions architect, which of the following AWS services would you recommend to solve this
problem? - ANS -Kinesis Data Firehose
(Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data
stores, and analytics tools. It scales automatically to match the THROUGHPUT of the incoming
log data and requires no ongoing administration.)
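A minimal sketch of how the log producers could push records into a Firehose delivery stream with boto3; the region, stream name, and log payload are hypothetical, and the delivery stream (with its S3 destination) is assumed to already exist.

import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # region is an assumption

def ship_log(log_line: str) -> None:
    # Firehose buffers records and delivers them to the configured destination
    # (for example S3) with no servers to manage; it scales with throughput.
    firehose.put_record(
        DeliveryStreamName="central-log-stream",  # hypothetical stream name
        Record={"Data": (json.dumps({"message": log_line}) + "\n").encode("utf-8")},
    )

ship_log("db backup completed")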
One of the largest biotechnology companies in the world uses Amazon S3 to store and protect
terabytes of critical research data for its AWS based Drug Discovery application, which allows
thousands of universities to collaborate. The engineering team wants to publish an event into an
SQS queue whenever a new research paper is uploaded on S3.
Which of the following statements are true regarding this functionality? - ANS -Only the Standard
SQS queue is allowed as an Amazon S3 event notification destination; the FIFO SQS queue is
not allowed
(The Amazon S3 notification feature enables you to receive notifications when certain events
happen in your bucket. Currently, only the Standard SQS queue is allowed as an Amazon S3
event notification destination; the FIFO SQS queue is not allowed.)
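A sketch of wiring the bucket to a Standard SQS queue with boto3; the bucket name and queue ARN are placeholders, and the queue's access policy is assumed to already allow s3.amazonaws.com to send messages.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="research-papers-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                # Must be a Standard queue; a FIFO queue ARN is not accepted here.
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-paper-events",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".pdf"}]}
                },
            }
        ]
    },
)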
A financial services company has to retain the activity logs for each of their customers to meet
regulatory and compliance guidelines. Depending on the business line, the company wants to
retain the logs for 5-10 years in highly available and durable storage on AWS. The overall data
size is expected to be in PetaBytes. In case of an audit, the data would need to be accessible
within a timeframe of up to 48 hours.
Which AWS storage option is the MOST cost-effective for the given compliance requirements? -
ANS -Amazon S3 Glacier Deep Archive
(S3 Glacier Deep Archive is up to 75% less expensive than S3 Glacier and provides retrieval
within 12 hours using the Standard retrieval speed. You may also reduce retrieval costs by
selecting Bulk retrieval, which will return data within 48 hours.)
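A sketch (with a hypothetical bucket, prefix, and key) of sending the logs straight to S3 Glacier Deep Archive with a lifecycle rule, and of a Bulk restore for an audit, which is the retrieval tier that returns data within 48 hours.

import boto3

s3 = boto3.client("s3")
BUCKET = "customer-activity-logs"  # hypothetical bucket

# Transition everything under logs/ to Deep Archive as soon as it lands.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

# During an audit, restore a temporary copy using the cheaper Bulk tier (up to 48 hours).
s3.restore_object(
    Bucket=BUCKET,
    Key="logs/2021/customer-123.json.gz",  # hypothetical key
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)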
Your company has a monthly big data workload, running for about 2 hours, which can be
efficiently distributed across various servers of various sizes, with a variable number of CPU,
and that can withstand server failures.
Which is the MOST cost-optimal solution for this workload? - ANS -Run the workload on a
Spot Fleet
(The Spot Fleet selects the Spot Instance pools that meet your needs and launches Spot
Instances to meet the target capacity for the fleet; you define your capacity and instance
requirements IN ADVANCE and the fleet handles SELECTING the pools.)
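A minimal Spot Fleet request sketch with boto3; the fleet role ARN, AMI, and instance types are placeholders, and a real request would usually list more launch specifications so the fleet can diversify across pools.

import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",  # placeholder
        "TargetCapacity": 20,
        "AllocationStrategy": "lowestPrice",
        "Type": "request",
        "LaunchSpecifications": [
            # Offering multiple sizes lets the fleet pick the cheapest pools and
            # ride out interruptions, which suits this fault-tolerant monthly job.
            {"ImageId": "ami-0abcdef1234567890", "InstanceType": "m5.large"},
            {"ImageId": "ami-0abcdef1234567890", "InstanceType": "m5.xlarge"},
        ],
    }
)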
You would like to mount a network file system on Linux instances, where files will be stored and
accessed frequently at first, and then infrequently. What solution is the MOST cost-effective? -
ANS -EFS IA
(Amazon EFS Infrequent Access (EFS IA) is a storage class that provides price/performance
cost-optimized for files that are not accessed every day, with storage prices up to 92% lower
compared to Amazon EFS Standard. Note that you CAN'T MOUNT an NFS file system ON S3,
which rules out S3 for this use case.)
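A sketch of enabling the lifecycle policy that moves files into EFS IA once they stop being accessed; the file system ID is a placeholder and AFTER_30_DAYS is just one of the allowed transition windows.

import boto3

efs = boto3.client("efs")

# Files not read for 30 days move to the cheaper EFS IA storage class;
# they remain mountable over NFS and are served transparently when accessed.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)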
A social photo-sharing web application is hosted on EC2 instances behind an Elastic Load
Balancer. The app gives the users the ability to upload their photos and also shows a
leaderboard on the homepage of the app. The uploaded photos are stored in S3 and the
leaderboard data is maintained in DynamoDB. The EC2 instances need to access both S3 and
DynamoDB for these features.
As a solutions architect, which of the following solutions would you recommend as the MOST
secure option? - ANS -Attach the appropriate IAM role to the EC2 instance profile so that
the instance can access S3 and DynamoDB
(You should use an IAM role to manage temporary credentials for applications that run on an
EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as
a username and password or access keys) to the EC2 instance.)
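A sketch of what the application code on the instance looks like once the role is attached: boto3 picks up the temporary role credentials from the instance profile automatically, so no keys appear in the code. The bucket and table names are hypothetical.

import boto3

# No access keys anywhere: boto3 resolves temporary credentials
# from the instance profile attached to this EC2 instance.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

photos = s3.list_objects_v2(Bucket="photo-sharing-uploads")   # hypothetical bucket
leaderboard = dynamodb.Table("Leaderboard").get_item(         # hypothetical table
    Key={"board": "global"}
)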
An e-commerce website hosted on an EC2 instance consumes messages from an SQS queue
which has records for pending orders. The SQS queue has a visibility timeout of 30 minutes.
The EC2 instance sends out an email once an order has been processed. The development
team observes that 12 emails have been sent but only 4 orders have been placed.
As a solutions architect, which of the following options would you choose to describe this issue?
- ANS -Because of a configuration issue, the consumer application is not deleting the
messages in the SQS queue after it has processed them
(It is the consumer application's responsibility to process the messages from the queue and
delete them once processing is done. Otherwise, each message becomes visible again after the
30-minute visibility timeout expires and is processed repeatedly by consumer applications. The
SQS queue itself will not delete any messages until the default retention period of 4 days is
over.)
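A sketch of the consumer loop the answer describes: receive, process, then explicitly delete each message using its receipt handle. The queue URL and the process_order/send_email helpers are hypothetical.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pending-orders"  # placeholder

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process_order(msg["Body"])   # hypothetical business logic
        send_email(msg["Body"])      # hypothetical notification
        # Without this call, the message reappears after the 30-minute
        # visibility timeout and the order is processed (and emailed) again.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])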
A startup has just developed a video backup service hosted on a fleet of EC2 instances. The
EC2 instances are behind an Application Load Balancer and the instances are using EBS
volumes for storage. The service provides authenticated users the ability to upload videos that
are then saved on the EBS volume attached to a given instance. On the first day of the beta
launch, users start complaining that they can see only some of the videos in their uploaded
videos backup. Every time the users log into the website, they claim to see a different subset of
their uploaded videos.
Which of the following are the MOST optimal solutions to make sure that users can view all the
uploaded videos? (Select two) - ANS -Write a one-time job to copy the videos from all EBS
volumes to S3 and then modify the application to use Amazon S3 Standard for storing the
videos
-Mount EFS on all EC2 instances. Write a one-time job to copy the videos from all EBS volumes
to EFS. Modify the application to use EFS for storing the videos
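A sketch of what the S3-based option looks like in the application: uploads go to a shared bucket instead of the local EBS volume, so every instance behind the load balancer sees the same set of videos. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
BUCKET = "video-backup-uploads"  # placeholder bucket shared by all instances

def save_video(user_id: str, local_path: str, filename: str) -> None:
    # Stored once in S3, visible from whichever instance serves the next request.
    s3.upload_file(local_path, BUCKET, f"{user_id}/{filename}")

def list_videos(user_id: str) -> list:
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{user_id}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]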
What does this IAM policy do?
{ "Version": "2012-10-17",
"Statement": [
{ "Sid": "Mystery Policy",
"Action": [ "ec2:RunInstances" ],
"Effect": "Allow",
"Resource": "*",
"Condition": { "StringEquals":
{ "aws:RequestedRegion": "eu-west-1" } } } ] } - ANS -It allows running EC2 instances only
in the eu-west-1 region, and the API call can be made from anywhere in the world
(aws:RequestedRegion represents the target of the API call. So in this example, we can only
launch EC2 instances in eu-west-1, and we can do this API call from anywhere.
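A sketch illustrating the effect of the condition with boto3: RunInstances succeeds against a client configured for eu-west-1 but is denied for any other region, regardless of where the caller sits. The AMI and instance type are placeholders.

import boto3

# Allowed: the API call targets eu-west-1, so aws:RequestedRegion matches.
eu_west_1 = boto3.client("ec2", region_name="eu-west-1")
eu_west_1.run_instances(
    ImageId="ami-0abcdef1234567890", InstanceType="t3.micro", MinCount=1, MaxCount=1
)

# Denied under this policy: aws:RequestedRegion would be us-east-1.
us_east_1 = boto3.client("ec2", region_name="us-east-1")
# us_east_1.run_instances(...)  # -> UnauthorizedOperation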
The infrastructure team at a company maintains 5 different VPCs (let's call these VPCs A, B, C,
D, E) for resource isolation. Due to the changed organizational structure, the team wants to
interconnect all VPCs together. To facilitate this, the team has set up VPC peering connections
between VPC A and all other VPCs in a hub and spoke model with VPC A at the center.
However, the team has still failed to establish connectivity between all VPCs.
As a solutions architect, which of the following would you recommend as the MOST
resource-efficient and scalable solution? - ANS -Use a transit gateway to interconnect the
VPCs
(Instead of using VPC peering, you can use an AWS Transit Gateway, which acts as a network
transit hub to interconnect your VPCs or connect your VPCs with on-premises networks.)
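A sketch of the transit gateway setup with boto3: create the gateway once, then attach each of the five VPCs to it instead of maintaining pairwise peering. All VPC and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for VPCs A-E")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC replaces the full mesh of peering connections.
vpcs = {  # placeholder VPC and subnet IDs; one entry per VPC A-E in practice
    "vpc-aaaa1111": ["subnet-aaaa1111"],
    "vpc-bbbb2222": ["subnet-bbbb2222"],
}
for vpc_id, subnet_ids in vpcs.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids
    )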
You would like to store a database password in a secure place, and enable automatic rotation of
that password every 90 days. What do you recommend? - ANS -Secrets Manager
(AWS Secrets Manager helps you protect secrets needed to access your applications, services,
and IT resources. The service enables you to easily rotate, manage, and retrieve database
credentials, API keys, and other secrets throughout their lifecycle.)
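A sketch of retrieving the password and turning on 90-day automatic rotation with boto3; the secret name and the rotation Lambda ARN are placeholders (Secrets Manager invokes a rotation Lambda to change the database password).

import boto3

sm = boto3.client("secretsmanager")

# Application code fetches the current password at runtime instead of hard-coding it.
secret = sm.get_secret_value(SecretId="prod/app/db-password")  # placeholder name
password = secret["SecretString"]

# Rotate automatically every 90 days via a rotation Lambda (placeholder ARN).
sm.rotate_secret(
    SecretId="prod/app/db-password",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-password",
    RotationRules={"AutomaticallyAfterDays": 90},
)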
Your application is deployed on EC2 instances fronted by an Application Load Balancer.
Recently, your infrastructure has come under attack. Attackers perform over 100 requests per
second, while your normal users only make about 5 requests per second.
How can you efficiently prevent attackers from overwhelming your application? - ANS -Use
a Web Application Firewall (AWS WAF) and set up a rate-based rule
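A sketch of a WAFv2 web ACL with a rate-based rule (the ACL would then be associated with the Application Load Balancer). Names and the limit are illustrative: rate-based rules count requests per source IP over a 5-minute window, so a limit of 2,000 blocks the ~100 req/s attackers while normal users at ~5 req/s (about 1,500 per 5 minutes) stay under it.

import boto3

wafv2 = boto3.client("wafv2")

wafv2.create_web_acl(
    Name="app-protection",             # illustrative name
    Scope="REGIONAL",                  # REGIONAL scope for an ALB
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            # Block any source IP exceeding 2,000 requests per 5 minutes.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIp",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AppProtection",
    },
)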