Why Should You Prepare For Your AWS Certified DevOps Engineer - Professional With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic Amazon DOP-C02 Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual AWS Certified DevOps Engineer - Professional test. Whether you’re targeting Amazon certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified DOP-C02 Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the DOP-C02 AWS Certified DevOps Engineer - Professional exam, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The DOP-C02
You can instantly access downloadable PDFs of DOP-C02 practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Amazon Exam with confidence.
Smart Learning With Exam Guides
Our structured DOP-C02 exam guide focuses on the AWS Certified DevOps Engineer - Professional's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass The DOP-C02 Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you prepare with MyCertsHub’s exam dumps and still do not pass the AWS Certified DevOps Engineer - Professional exam, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the DOP-C02 exam dumps.
MyCertsHub – Your Trusted Partner For Amazon Exams
Whether you’re preparing for AWS Certified DevOps Engineer - Professional or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your DOP-C02 exam has never been easier thanks to our tried-and-true resources.
Amazon DOP-C02 Sample Question Answers
Question # 1
A company has a mission-critical application on AWS that uses automatic scaling. The company wants the deployment lifecycle to meet the following parameters:
• The application must be deployed one instance at a time to ensure the remaining fleet continues to serve traffic.
• The application is CPU intensive and must be closely monitored.
• The deployment must automatically roll back if the CPU utilization of the deployment instance exceeds 85%.
Which solution will meet these requirements?
A. Use AWS CloudFormation to create an AWS Step Functions state machine and Auto Scaling lifecycle hooks to move one instance at a time into a wait state. Use AWS Systems Manager Automation to deploy the update to each instance and move it back into the Auto Scaling group using the heartbeat timeout.
B. Use AWS CodeDeploy with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Use the CodeDeployDefault.OneAtATime configuration as a deployment strategy. Configure automatic rollbacks within the deployment group to roll back the deployment if the alarm thresholds are breached.
C. Use AWS Elastic Beanstalk for load balancing and AWS Auto Scaling. Configure an alarm tied to the CPU utilization metric. Configure rolling deployments with a fixed batch size of one instance. Enable enhanced health to monitor the status of the deployment and roll back based on the alarm previously created.
D. Use AWS Systems Manager to perform a blue/green deployment with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Deploy updates one at a time. Configure automatic rollbacks within the Auto Scaling group to roll back the deployment if the alarm thresholds are breached.
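The alarm-based rollback described in option B is configured on the CodeDeploy deployment group itself. A minimal sketch of the relevant API parameters follows; the application, group, and alarm names are hypothetical placeholders, not values from the question:

```python
import json

# Hypothetical parameter shape for codedeploy.create_deployment_group (boto3).
deployment_group = {
    "applicationName": "my-app",
    "deploymentGroupName": "prod-fleet",
    # Deploy to one instance at a time so the rest of the fleet keeps serving traffic.
    "deploymentConfigName": "CodeDeployDefault.OneAtATime",
    # Tie the deployment to a CloudWatch alarm on CPU utilization (e.g. > 85%).
    "alarmConfiguration": {
        "enabled": True,
        "alarms": [{"name": "HighCPUAlarm"}],
    },
    # Roll back automatically when the alarm fires during the deployment.
    "autoRollbackConfiguration": {
        "enabled": True,
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],
    },
}
print(json.dumps(deployment_group, indent=2))
```

The `DEPLOYMENT_STOP_ON_ALARM` event is what links the CPU alarm to an automatic rollback; without it, a breached alarm only stops the deployment.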
Question # 2
A company has 20 service teams. Each service team is responsible for its own microservice. Each service team uses a separate AWS account for its microservice and a VPC with the 192.168.0.0/22 CIDR block. The company manages the AWS accounts with AWS Organizations.
Each service team hosts its microservice on multiple Amazon EC2 instances behind an Application Load Balancer. The microservices communicate with each other across the public internet. The company's security team has issued a new guideline that all communication between microservices must use HTTPS over private network connections and cannot traverse the public internet.
A DevOps engineer must implement a solution that fulfills these obligations and minimizes the number of changes for each service team.
Which solution will meet these requirements?
A. Create a new AWS account in AWS Organizations. Create a VPC in this account and use AWS Resource Access Manager to share the private subnets of this VPC with the organization. Instruct the service teams to launch a new Network Load Balancer (NLB) and EC2 instances that use the shared private subnets. Use the NLB DNS names for communication between microservices.
B. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use AWS PrivateLink to create VPC endpoints in each AWS account for the NLBs. Create subscriptions to each VPC endpoint in each of the other AWS accounts. Use the VPC endpoint DNS names for communication between microservices.
C. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Create VPC peering connections between each of the microservice VPCs. Update the route tables for each VPC to use the peering links. Use the NLB DNS names for communication between microservices.
D. Create a new AWS account in AWS Organizations. Create a transit gateway in this account and use AWS Resource Access Manager to share the transit gateway with the organization. In each of the microservice VPCs, create a transit gateway attachment to the shared transit gateway. Update the route tables of each VPC to use the transit gateway. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use the NLB DNS names for communication between microservices.
Answer: B
Explanation:
AWS PrivateLink is the best option because every VPC uses the same 192.168.0.0/22 CIDR block, and neither VPC peering nor AWS Transit Gateway supports overlapping CIDR ranges. PrivateLink exposes each microservice's NLB as an endpoint service, so traffic stays on the AWS private network without any routing between the overlapping VPCs.
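The PrivateLink pattern from option B has two sides: the provider exposes its NLB as an endpoint service, and each consumer creates an interface VPC endpoint to it. A sketch of the boto3 parameter shapes follows; all ARNs, IDs, and the service name are hypothetical placeholders:

```python
# Provider side: expose the microservice's NLB as a VPC endpoint service
# (ec2.create_vpc_endpoint_service_configuration parameter shape).
endpoint_service_params = {
    "AcceptanceRequired": False,
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/svc-a/abc123"
    ],
}

# Consumer side: create an interface VPC endpoint in another account's VPC
# (ec2.create_vpc_endpoint parameter shape).
vpc_endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0consumer",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0example",
    "SubnetIds": ["subnet-0a", "subnet-0b"],
    "PrivateDnsEnabled": False,
}

# Because traffic flows through the endpoint's elastic network interfaces,
# the overlapping 192.168.0.0/22 CIDRs never need routes to each other.
print(vpc_endpoint_params["ServiceName"])
```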
Question # 3
A security team is concerned that a developer can unintentionally attach an Elastic IP address to an Amazon EC2 instance in production. No developer should be allowed to attach an Elastic IP address to an instance. The security team must be notified if any production server has an Elastic IP address at any time.
How can this task be automated?
A. Use Amazon Athena to query AWS CloudTrail logs to check for any associate-address attempts. Create an AWS Lambda function to disassociate the Elastic IP address from the instance, and alert the security team.
B. Attach an IAM policy to the developers' IAM group to deny associate-address permissions. Create a custom AWS Config rule to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team.
C. Ensure that all IAM groups associated with developers do not have associate-address permissions. Create a scheduled AWS Lambda function to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team if an instance has an Elastic IP address associated with it.
D. Create an AWS Config rule to check that all production instances have EC2 IAM roles that include deny associate-address permissions. Verify whether there is an Elastic IP address associated with any instance, and alert the security team if an instance has an Elastic IP address associated with it.
Answer: B
Explanation:
To prevent developers from unintentionally attaching an Elastic IP address to an Amazon
EC2 instance in production, the best approach is to use IAM policies and AWS Config
rules. By attaching an IAM policy that denies the associate-address permission to the
developers’ IAM group, you ensure that developers cannot perform this action. Additionally,
creating a custom AWS Config rule to check for Elastic IP addresses associated with
instances tagged as production provides ongoing monitoring. If the rule detects an Elastic
IP address, it can trigger an alert to notify the security team. This method is proactive and
enforces the necessary permissions while also providing a mechanism for detection and
notification.
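The deny policy described in the explanation can be sketched as an IAM policy document; the statement ID is arbitrary and the document is a minimal illustration, not a production-ready policy:

```python
import json

# Minimal IAM policy document denying attachment of Elastic IP addresses.
# An explicit Deny overrides any Allow the developers may otherwise have.
deny_associate_address = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyElasticIPAttach",
            "Effect": "Deny",
            "Action": ["ec2:AssociateAddress"],
            "Resource": "*",
        }
    ],
}
print(json.dumps(deny_associate_address, indent=2))
```

Attached to the developers' IAM group, this prevents the action; the AWS Config rule then provides the detection and notification side.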
Question # 4
A company is using AWS CodePipeline to deploy an application. According to a new guideline, a member of the company's security team must sign off on any application changes before the changes are deployed into production. The approval must be recorded and retained.
Which combination of actions will meet these requirements? (Select TWO.)
A. Configure CodePipeline to write actions to Amazon CloudWatch Logs.
B. Configure CodePipeline to write actions to an Amazon S3 bucket at the end of each pipeline stage.
C. Create an AWS CloudTrail trail to deliver logs to Amazon S3.
D. Create a CodePipeline custom action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage CodePipeline custom actions.
E. Create a CodePipeline manual approval action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Answer: C,E
Explanation:
To meet the new guideline for application deployment, the company can use a combination
of AWS CodePipeline and AWS CloudTrail. A manual approval action in CodePipeline
allows the security team to review and approve changes before they are deployed. This
action can be configured to pause the pipeline until approval is granted, ensuring that no
changes move to production without the necessary sign-off. Additionally, by creating an
AWS CloudTrail trail, all actions taken within CodePipeline, including approvals, are
recorded and delivered to an Amazon S3 bucket. This provides an audit trail that can be
retained for compliance and review purposes.
References:
AWS CodePipeline’s manual approval action provides a way to ensure that a member of the security team can review and approve changes before they are deployed.
AWS CloudTrail integration with CodePipeline allows for the recording and retention of all pipeline actions, including approvals, which can be stored in Amazon S3 for record-keeping.
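A manual approval action in a pipeline definition uses the `Approval`/`Manual` action type. The fragment below sketches the JSON structure the CodePipeline API expects; the stage name, SNS topic, and message are hypothetical:

```python
# Sketch of a CodePipeline stage containing a manual approval action,
# expressed as the JSON structure used by the CodePipeline API.
approval_stage = {
    "name": "SecurityApproval",
    "actions": [
        {
            "name": "SecuritySignOff",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "configuration": {
                # Optional SNS topic that notifies the security team.
                "NotificationArn": "arn:aws:sns:us-east-1:111111111111:approvals",
                "CustomData": "Review the change before the production deploy.",
            },
            "runOrder": 1,
        }
    ],
}
print(approval_stage["actions"][0]["actionTypeId"]["provider"])
```

The pipeline pauses at this stage until someone with `codepipeline:PutApprovalResult` permission approves or rejects, and that API call is what CloudTrail records for retention.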
Question # 5
A company has an AWS CodeDeploy application. The application has a deployment group that uses a single tag group to identify instances for the deployment of ApplicationA. The single tag group configuration identifies instances that have Environment=Production and Name=ApplicationA tags for the deployment of ApplicationA.
The company launches an additional Amazon EC2 instance with Department=Marketing, Environment=Production, and Name=ApplicationB tags. On the next CodeDeploy deployment of ApplicationA, the additional instance has ApplicationA installed on it. A DevOps engineer needs to configure the existing deployment group to prevent ApplicationA from being installed on the additional instance.
Which solution will meet these requirements?
A. Change the current single tag group to include only the Environment=Production tag. Add another single tag group that includes only the Name=ApplicationA tag.
B. Change the current single tag group to include the Department=Marketing, Environment=Production, and Name=ApplicationA tags.
C. Add another single tag group that includes only the Department=Marketing tag. Keep the Environment=Production and Name=ApplicationA tags with the current single tag group.
D. Change the current single tag group to include only the Environment=Production tag. Add another single tag group that includes only the Department=Marketing tag.
Answer: A
Explanation:
To prevent ApplicationA from being installed on the additional instance, the deployment
group configuration needs to be more specific. By changing the current single tag group to
include only the Environment=Production tag and adding another single tag group that
includes only the Name=ApplicationA tag, the deployment process will target only the
instances that match both tag groups. This ensures that only instances intended for
ApplicationA with the correct environment and name tags will receive the deployment, thus
excluding the additional instance with
the Department=Marketing and Name=ApplicationB tags.
References:
AWS CodeDeploy Documentation: Working with instances for CodeDeploy
AWS CodeDeploy Documentation: Stop a deployment with CodeDeploy
Stack Overflow Discussion: CodeDeploy Deployment failed to stop Application
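The two-tag-group configuration from option A corresponds to CodeDeploy's `ec2TagSet` structure: tags inside one group are alternatives (OR), while an instance must match every group (AND). A sketch of the parameter shape, plus a tiny matcher that reproduces the semantics:

```python
# CodeDeploy ec2TagSet parameter shape (as used by update_deployment_group).
ec2_tag_set = {
    "ec2TagSetList": [
        [{"Key": "Environment", "Value": "Production", "Type": "KEY_AND_VALUE"}],
        [{"Key": "Name", "Value": "ApplicationA", "Type": "KEY_AND_VALUE"}],
    ]
}

def matches(instance_tags, tag_set):
    """True only if the instance satisfies every tag group (AND across groups,
    OR within a group)."""
    return all(
        any(instance_tags.get(t["Key"]) == t["Value"] for t in group)
        for group in tag_set["ec2TagSetList"]
    )

app_a = {"Environment": "Production", "Name": "ApplicationA"}
app_b = {"Department": "Marketing", "Environment": "Production", "Name": "ApplicationB"}
# The ApplicationB instance matches only the Environment group, so it is excluded.
print(matches(app_a, ec2_tag_set), matches(app_b, ec2_tag_set))  # True False
```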
Question # 6
A company uses an organization in AWS Organizations to manage its AWS accounts. The company recently acquired another company that has standalone AWS accounts. The acquiring company's DevOps team needs to consolidate the administration of the AWS accounts for both companies and retain full administrative control of the accounts. The DevOps team also needs to collect and group findings across all the accounts to implement and maintain a security posture.
Which combination of steps should the DevOps team take to meet these requirements? (Select TWO.)
A. Invite the acquired company's AWS accounts to join the organization. Create an SCP that has full administrative privileges. Attach the SCP to the management account.
B. Invite the acquired company's AWS accounts to join the organization. Create the OrganizationAccountAccessRole IAM role in the invited accounts. Grant permission to the management account to assume the role.
C. Use AWS Security Hub to collect and group findings across all accounts. Use Security Hub to automatically detect new accounts as the accounts are added to the organization.
D. Use AWS Firewall Manager to collect and group findings across all accounts. Enable all features for the organization. Designate an account in the organization as the delegated administrator account for Firewall Manager.
E. Use Amazon Inspector to collect and group findings across all accounts. Designate an account in the organization as the delegated administrator account for Amazon Inspector.
Answer: B,C
Explanation: The correct answer is B and C. Option B is correct because inviting the
acquired company’s AWS accounts to join the organization and creating the
OrganizationAccountAccessRole IAM role in the invited accounts allows the management
account to assume the role and gain full administrative access to the member accounts.
Option C is correct because using AWS Security Hub to collect and group findings across
all accounts enables the DevOps team to monitor and improve the security posture of the
organization. Security Hub can automatically detect new accounts as the accounts are
added to the organization and enable Security Hub for them. Option A is incorrect because
creating an SCP that has full administrative privileges and attaching it to the management
account does not grant the management account access to the member accounts. SCPs are used to restrict the permissions of the member accounts, not to grant permissions to
the management account. Option D is incorrect because using AWS Firewall Manager to
collect and group findings across all accounts is not a valid use case for Firewall Manager.
Firewall Manager is used to centrally configure and manage firewall rules across the
organization, not to collect and group security findings. Option E is incorrect because using
Amazon Inspector to collect and group findings across all accounts is not a valid use case
for Amazon Inspector. Amazon Inspector is used to assess the security and compliance of
applications running on Amazon EC2 instances, not to collect and group security findings
across accounts. References:
Inviting an AWS account to join your organization
Enabling and disabling AWS Security Hub
Service control policies
AWS Firewall Manager
Amazon Inspector
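The organization-wide Security Hub setup from option C can be sketched as the sequence of boto3 calls involved; the delegated-administrator account ID is a hypothetical placeholder:

```python
# Sketch of the Security Hub organization setup, shown as the boto3 call
# names and parameter shapes rather than live calls.
setup_calls = [
    # From the Organizations management account: delegate a Security Hub admin.
    ("securityhub.enable_organization_admin_account",
     {"AdminAccountId": "222222222222"}),
    # From the delegated admin account: auto-enable Security Hub for new
    # member accounts as they join the organization.
    ("securityhub.update_organization_configuration",
     {"AutoEnable": True}),
]
for name, params in setup_calls:
    print(name, params)
```

With `AutoEnable` set, newly invited accounts (such as the acquired company's) are picked up without per-account work, which is what lets Security Hub "automatically detect new accounts."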
Question # 7
A company has an application and a CI/CD pipeline. The CI/CD pipeline consists of an AWS CodePipeline pipeline and an AWS CodeBuild project. The CodeBuild project runs tests against the application as part of the build process and outputs a test report. The company must keep the test reports for 90 days.
Which solution will meet these requirements?
A. Add a new stage in the CodePipeline pipeline after the stage that contains the CodeBuild project. Create an Amazon S3 bucket to store the reports. Configure an S3 deploy action type in the new CodePipeline stage with the appropriate path and format for the reports.
B. Add a report group in the CodeBuild project buildspec file with the appropriate path and format for the reports. Create an Amazon S3 bucket to store the reports. Configure an Amazon EventBridge rule that invokes an AWS Lambda function to copy the reports to the S3 bucket when a build is completed. Create an S3 Lifecycle rule to expire the objects after 90 days.
C. Add a new stage in the CodePipeline pipeline. Configure a test action type with the appropriate path and format for the reports. Configure the report expiration time to be 90 days in the CodeBuild project buildspec file.
D. Add a report group in the CodeBuild project buildspec file with the appropriate path and format for the reports. Create an Amazon S3 bucket to store the reports. Configure the report group as an artifact in the CodeBuild project buildspec file. Configure the S3 bucket as the artifact destination. Set the object expiration to 90 days.
Answer: B
Explanation: The correct solution is to add a report group in the AWS CodeBuild project
buildspec file with the appropriate path and format for the reports. Then, create an Amazon
S3 bucket to store the reports. You should configure an Amazon EventBridge rule that
invokes an AWS Lambda function to copy the reports to the S3 bucket when a build is
completed. Finally, create an S3 Lifecycle rule to expire the objects after 90 days. This
approach allows for the automated transfer of reports to long-term storage and ensures
they are retained for the required duration without manual intervention.
References:
AWS CodeBuild User Guide on test reporting.
AWS CodeBuild User Guide on working with report groups.
AWS Documentation on using AWS CodePipeline with AWS CodeBuild.
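The 90-day retention step in option B maps to an S3 Lifecycle expiration rule. A sketch of the configuration shape follows; the rule ID and key prefix are hypothetical:

```python
# S3 Lifecycle configuration (put_bucket_lifecycle_configuration shape) that
# expires copied test reports after 90 days. Prefix is a placeholder.
lifecycle_config = {
    "Rules": [
        {
            "ID": "ExpireTestReports",
            "Status": "Enabled",
            "Filter": {"Prefix": "test-reports/"},
            # Objects are deleted 90 days after creation, meeting the
            # retention requirement with no manual cleanup.
            "Expiration": {"Days": 90},
        }
    ]
}
print(lifecycle_config["Rules"][0]["Expiration"]["Days"])  # 90
```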
Question # 8
An ecommerce company uses a large number of Amazon Elastic Block Store (Amazon EBS) backed Amazon EC2 instances. To decrease manual work across all the instances, a DevOps engineer is tasked with automating restart actions when EC2 instance retirement events are scheduled.
How can this be accomplished?
A. Create a scheduled Amazon EventBridge rule to run an AWS Systems Manager Automation runbook that checks once a week whether any EC2 instances are scheduled for retirement. If an instance is scheduled for retirement, the runbook will hibernate the instance.
B. Enable EC2 Auto Recovery on all of the instances. Create an AWS Config rule to limit the recovery to occur during a maintenance window only.
C. Reboot all EC2 instances during an approved maintenance window that is outside of standard business hours. Set up Amazon CloudWatch alarms to send a notification in case any instance is failing EC2 instance status checks.
D. Set up an AWS Health Amazon EventBridge rule to run AWS Systems Manager Automation runbooks that stop and start the EC2 instance when a retirement scheduled event occurs.
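The AWS Health rule in option D matches scheduled-change events for EC2. A sketch of an EventBridge event pattern along these lines follows; the exact `eventTypeCode` string should be checked against the AWS Health documentation for your case:

```python
import json

# EventBridge event pattern matching AWS Health scheduled EC2 retirement
# events. The eventTypeCode value is illustrative; confirm the exact code
# in the AWS Health event catalog.
event_pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],
        "eventTypeCode": ["AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED"],
    },
}
print(json.dumps(event_pattern, indent=2))
```

The rule's target would be a Systems Manager Automation runbook that stops and starts the affected instance, which migrates an EBS-backed instance off the retiring hardware.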
Question # 9
A DevOps engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.
What is likely causing this issue?
A. The two affected instances failed to fetch the new deployment.
B. A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
C. The CodeDeploy agent was not installed on the two affected instances.
D. EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
Answer: B
Explanation:
When AWS CodeDeploy performs an in-place deployment, it updates the instances with the new application revision one at a time, as specified by the deployment configuration CodeDeployDefault.OneAtATime. If a lifecycle event hook, such as AfterInstall, fails during the deployment, CodeDeploy will attempt to roll back to the previous version on the affected instances. This is likely what happened with the two instances that still have the previous application revision deployed. The failure of the AfterInstall lifecycle event hook triggered the rollback mechanism, resulting in those instances reverting to the previous application revision.
References:
AWS CodeDeploy documentation on redeployment and rollback procedures.
Stack Overflow discussions on re-deploying older revisions with AWS CodeDeploy.
AWS CLI reference guide for deploying a revision.
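An AfterInstall hook of the kind the explanation refers to is declared in the application's appspec.yml. Below, that YAML structure is mirrored as a Python dict for illustration; the file paths and script name are hypothetical:

```python
# The CodeDeploy appspec.yml hooks section, mirrored as a Python structure.
# If the AfterInstall script exits non-zero, the lifecycle event fails and
# CodeDeploy rolls the instance back to the previous revision.
appspec = {
    "version": 0.0,
    "os": "linux",
    "files": [{"source": "/", "destination": "/opt/app"}],
    "hooks": {
        "AfterInstall": [
            {"location": "scripts/validate_install.sh",
             "timeout": 300,
             "runas": "root"}
        ]
    },
}
print(appspec["hooks"]["AfterInstall"][0]["location"])
```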
Question # 10
A company is examining its disaster recovery capability and wants the ability to switch over its daily operations to a secondary AWS Region. The company uses AWS CodeCommit as a source control tool in the primary Region.
A DevOps engineer must provide the capability for the company to develop code in the secondary Region. If the company needs to use the secondary Region, developers can add an additional remote URL to their local Git configuration.
Which solution will meet these requirements?
A. Create a CodeCommit repository in the secondary Region. Create an AWS CodeBuild project to perform a Git mirror operation of the primary Region's CodeCommit repository to the secondary Region's CodeCommit repository. Create an AWS Lambda function that invokes the CodeBuild project. Create an Amazon EventBridge rule that reacts to merge events in the primary Region's CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
B. Create an Amazon S3 bucket in the secondary Region. Create an AWS Fargate task to perform a Git mirror operation of the primary Region's CodeCommit repository and copy the result to the S3 bucket. Create an AWS Lambda function that initiates the Fargate task. Create an Amazon EventBridge rule that reacts to merge events in the CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
C. Create an AWS CodeArtifact repository in the secondary Region. Create an AWS CodePipeline pipeline that uses the primary Region's CodeCommit repository for the source action. Create a cross-Region stage in the pipeline that packages the CodeCommit repository contents and stores the contents in the CodeArtifact repository when a pull request is merged into the CodeCommit repository.
D. Create an AWS Cloud9 environment and a CodeCommit repository in the secondary Region. Configure the primary Region's CodeCommit repository as a remote repository in the AWS Cloud9 environment. Connect the secondary Region's CodeCommit repository to the AWS Cloud9 environment.
Answer: A
Explanation: The best solution to meet the disaster recovery capability and allow developers to switch over to a secondary AWS Region for code development is option A. This involves creating a CodeCommit repository in the secondary Region and setting up an AWS CodeBuild project to perform a Git mirror operation of the primary Region's CodeCommit repository to the secondary Region's repository. An AWS Lambda function is then created to invoke the CodeBuild project. Additionally, an Amazon EventBridge rule is configured to react to merge events in the primary Region's CodeCommit repository and invoke the Lambda function. This setup ensures that the secondary Region's repository is always up to date with the primary repository, allowing for a seamless transition in case of a disaster recovery event.
References:
AWS CodeCommit User Guide on resilience and disaster recovery.
AWS Documentation on monitoring CodeCommit events in Amazon EventBridge and Amazon CloudWatch Events.
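The EventBridge rule from option A matches CodeCommit repository state changes, such as a merge pushed to the main branch. A sketch of the event pattern follows; the repository ARN and branch name are hypothetical:

```python
import json

# EventBridge event pattern for CodeCommit repository state changes.
# referenceUpdated fires when a branch reference moves, e.g. after a merge.
merge_event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "detail": {
        "event": ["referenceUpdated"],
        "referenceName": ["main"],
    },
    "resources": ["arn:aws:codecommit:us-east-1:111111111111:primary-repo"],
}
print(json.dumps(merge_event_pattern, indent=2))
```

The rule's target is the Lambda function, which starts the CodeBuild project that runs the `git clone --mirror` / `git push --mirror` sequence into the secondary Region's repository.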
Question # 11
A company has a single developer writing code for an automated deployment pipeline. The developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow developers to automatically deploy to both environments when code is changed in the repository.
What is the MOST efficient way to meet these requirements?
A. Create an AWS CodeCommit repository for each project, use the main branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code to testing and main branches.
B. Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between testing and production buckets. Enable versioning on all buckets to prevent code conflicts.
C. Create an AWS CodeCommit repository for each project, and use the main branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.
D. Enable versioning and branching on each S3 bucket, use the main branch for production code, and create a testing branch for code deployed to testing. Have developers use each branch for developing in each environment.
Answer: A
Explanation:
Creating an AWS CodeCommit repository for each project, using the main branch for
production code, and creating a testing branch for code deployed to testing will meet the
requirements. AWS CodeCommit is a managed revision control service that hosts Git
repositories and works with all Git-based tools1. By using feature branches to develop new
features and pull requests to merge code to testing and main branches, the developers can
avoid code conflicts and lost work, and also implement code reviews and approvals. Option
B is incorrect because creating another S3 bucket for each project for testing code and
using an AWS Lambda function to promote code changes between testing and production
buckets will not provide the benefits of revision control, such as tracking changes,
branching, merging, and collaborating. Option C is incorrect because using the main
branch for production and test code with different deployment pipelines for each
environment will not allow the developers to test their code changes before deploying them to production. Option D is incorrect because enabling versioning and branching on each S3
bucket will not work with Git-based tools and will not provide the same level of revision
control as AWS CodeCommit. References:
AWS CodeCommit
Certified DevOps Engineer - Professional (DOP-C02) Study Guide (page 182)
Question # 12
A company is using AWS to run digital workloads. Each application team in the company has its own AWS account for application hosting. The accounts are consolidated in an organization in AWS Organizations.
The company wants to enforce security standards across the entire organization. To avoid noncompliance because of security misconfiguration, the company has enforced the use of AWS CloudFormation. A production support team can modify resources in the production environment by using the AWS Management Console to troubleshoot and resolve application-related issues.
A DevOps engineer must implement a solution to identify in near real time any AWS service misconfiguration that results in noncompliance. The solution must automatically remediate the issue within 15 minutes of identification. The solution also must track noncompliant resources and events in a centralized dashboard with accurate timestamps.
Which solution will meet these requirements with the LEAST development overhead?
A. Use CloudFormation drift detection to identify noncompliant resources. Use drift detection events from CloudFormation to invoke an AWS Lambda function for remediation. Configure the Lambda function to publish logs to an Amazon CloudWatch Logs log group. Configure an Amazon CloudWatch dashboard to use the log group for tracking.
B. Turn on AWS CloudTrail in the AWS accounts. Analyze CloudTrail logs by using Amazon Athena to identify noncompliant resources. Use AWS Step Functions to track query results on Athena for drift detection and to invoke an AWS Lambda function for remediation. For tracking, set up an Amazon QuickSight dashboard that uses Athena as the data source.
C. Turn on the configuration recorder in AWS Config in all the AWS accounts to identify noncompliant resources. Enable AWS Security Hub with the --no-enable-default-standards option in all the AWS accounts. Set up AWS Config managed rules and custom rules. Set up automatic remediation by using AWS Config conformance packs. For tracking, set up a dashboard on Security Hub in a designated Security Hub administrator account.
D. Turn on AWS CloudTrail in the AWS accounts. Analyze CloudTrail logs by using Amazon CloudWatch Logs to identify noncompliant resources. Use CloudWatch Logs filters for drift detection. Use Amazon EventBridge to invoke the Lambda function for remediation. Stream filtered CloudWatch logs to Amazon OpenSearch Service. Set up a dashboard on OpenSearch Service for tracking.
Answer: C
Explanation:
The best solution is to use AWS Config and AWS Security Hub to identify and remediate
noncompliant resources across multiple AWS accounts. AWS Config enables continuous
monitoring of the configuration of AWS resources and evaluates them against desired
configurations. AWS Config can also automatically remediate noncompliant resources by
using conformance packs, which are a collection of AWS Config rules and remediation
actions that can be deployed as a single entity. AWS Security Hub provides a
comprehensive view of the security posture of AWS accounts and resources. AWS
Security Hub can aggregate and normalize the findings from AWS Config and other AWS
services, as well as from partner solutions. AWS Security Hub can also be used to create a
dashboard for tracking noncompliant resources and events in a centralized location.
The other options are not optimal because they either require more development overhead,
do not provide near real time detection and remediation, or do not provide a centralized
dashboard for tracking.
Option A is not optimal because CloudFormation drift detection is not a near real time
solution. Drift detection has to be manually initiated on each stack or resource, or
scheduled using a cron expression. Drift detection also does not provide remediation
actions, so a custom Lambda function has to be developed and invoked. CloudWatch Logs
and dashboard can be used for tracking, but they do not provide a comprehensive view of
the security posture of the AWS accounts and resources.
Option B is not optimal because CloudTrail logs analysis using Athena is not a near real
time solution. Athena queries have to be manually run or scheduled using a cron
expression. Athena also does not provide remediation actions, so a custom Lambda
function has to be developed and invoked. Step Functions can be used to orchestrate the
query and remediation workflow, but it adds more complexity and cost. QuickSight
dashboard can be used for tracking, but it does not provide a comprehensive view of the
security posture of the AWS accounts and resources.
Option D is not optimal because CloudTrail logs analysis using CloudWatch Logs is not a
near real time solution. CloudWatch Logs filters have to be manually created or updated for
each resource type and configuration change. CloudWatch Logs also does not provide
remediation actions, so a custom Lambda function has to be developed and invoked.
EventBridge can be used to trigger the Lambda function, but it adds more complexity and
cost. OpenSearch Service dashboard can be used for tracking, but it does not provide a
comprehensive view of the security posture of the AWS accounts and resources.
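The conformance pack approach above can be sketched with a minimal template, shown here as a Python dict that mirrors the YAML you would pass to the put-conformance-pack API. The rule name and the managed S3 rule are illustrative placeholders, not part of the exam scenario.

```python
# Minimal conformance pack template expressed as a dict that mirrors the YAML.
# A conformance pack bundles one or more AWS Config rules (and optional
# remediation actions) so they deploy as a single entity.
conformance_pack_template = {
    "Resources": {
        "S3PublicReadProhibited": {  # logical name is arbitrary
            "Type": "AWS::Config::ConfigRule",
            "Properties": {
                "ConfigRuleName": "s3-bucket-public-read-prohibited",
                "Source": {
                    # An AWS-managed rule; custom rules would point at a Lambda.
                    "Owner": "AWS",
                    "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
                },
            },
        }
    }
}
```

In practice the YAML equivalent is uploaded to Amazon S3 and deployed with the put-conformance-pack API or across an organization with put-organization-conformance-pack.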
References:
AWS Config conformance packs
Introducing AWS Config conformance packs
Managing conformance packs across all accounts in your organization
Question # 13
A DevOps engineer manages a company's Amazon Elastic Container Service (Amazon ECS) cluster. The cluster runs on several Amazon EC2 instances that are in an Auto Scaling group. The DevOps engineer must implement a solution that logs and reviews all stopped tasks for errors. Which solution will meet these requirements?
A. Create an Amazon EventBridge rule to capture task state changes. Send the event to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to investigate stopped tasks.
B. Configure tasks to write log data in the embedded metric format. Store the logs in Amazon CloudWatch Logs. Monitor the ContainerInstanceCount metric for changes.
C. Configure the EC2 instances to store logs in Amazon CloudWatch Logs. Create a CloudWatch Contributor Insights rule that uses the EC2 instance log data. Use the Contributor Insights rule to investigate stopped tasks.
D. Configure an EC2 Auto Scaling lifecycle hook for the EC2_INSTANCE_TERMINATING scale-in event. Write the SystemEventLog file to Amazon S3. Use Amazon Athena to query the log file for errors.
Answer: A
Explanation:
The best solution to log and review all stopped tasks for errors is to use Amazon
EventBridge and Amazon CloudWatch Logs. Amazon EventBridge allows the DevOps
engineer to create a rule that matches task state change events from Amazon ECS. The
rule can then send the event data to Amazon CloudWatch Logs as the target. Amazon
CloudWatch Logs can store and monitor the log data, and also provide CloudWatch Logs
Insights, a feature that enables the DevOps engineer to interactively search and analyze
the log data. Using CloudWatch Logs Insights, the DevOps engineer can filter and
aggregate the log data based on various fields, such as cluster, task, container, and
reason. This way, the DevOps engineer can easily identify and investigate the stopped
tasks and their errors.
The other options are not as effective or efficient as the solution in option A. Option B is not
suitable because the embedded metric format is designed for custom metrics, not for
logging task state changes. Option C is not feasible because the EC2 instances do not
store the task state change events in their logs. Option D is not relevant because the
EC2_INSTANCE_TERMINATING lifecycle hook is triggered when an EC2 instance is
terminated by the Auto Scaling group, not when a task is stopped by Amazon ECS.
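The EventBridge rule in option A matches on the documented "ECS Task State Change" event shape. A sketch of the event pattern, with a deliberately simplified matcher to show how it selects stopped tasks (real EventBridge matching is richer than this):

```python
# EventBridge event pattern, as a dict, that selects ECS tasks that stopped.
stopped_task_pattern = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {"lastStatus": ["STOPPED"]},
}

def matches(pattern: dict, event: dict) -> bool:
    """Toy matcher: each pattern field must contain the event's value."""
    if event.get("source") not in pattern["source"]:
        return False
    if event.get("detail-type") not in pattern["detail-type"]:
        return False
    return event.get("detail", {}).get("lastStatus") in pattern["detail"]["lastStatus"]

# A stopped-task event carries a stoppedReason that Logs Insights can query.
sample_event = {
    "source": "aws.ecs",
    "detail-type": "ECS Task State Change",
    "detail": {"lastStatus": "STOPPED", "stoppedReason": "Essential container exited"},
}
```

With CloudWatch Logs as the rule target, the stoppedReason field in each delivered event is what the engineer would filter on in Logs Insights.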
References:
Creating a CloudWatch Events Rule That Triggers on an Event - Amazon Elastic
Container Service
Sending and Receiving Events Between AWS Accounts - Amazon EventBridge
Working with Log Data - Amazon CloudWatch Logs
Analyzing Log Data with CloudWatch Logs Insights - Amazon CloudWatch Logs
Embedded Metric Format - Amazon CloudWatch
Amazon EC2 Auto Scaling Lifecycle Hooks - Amazon EC2 Auto Scaling
Question # 14
A company has deployed a critical application in two AWS Regions. The application uses an Application Load Balancer (ALB) in both Regions. The company has Amazon Route 53 alias DNS records for both ALBs. The company uses Amazon Route 53 Application Recovery Controller (Route 53 ARC) to ensure that the application can fail over between the two Regions. The Route 53 ARC configuration includes a routing control for both Regions. The company uses Route 53 ARC to perform quarterly disaster recovery (DR) tests. During the most recent DR test, a DevOps engineer accidentally turned off both routing controls. The company needs to ensure that at least one routing control is turned on at all times. Which solution will meet these requirements?
A. In Route 53 ARC, create a new assertion safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the ATLEAST type with a threshold of 1.
B. In Route 53 ARC, create a new gating safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the OR type with a threshold of 1.
C. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53::HealthCheck resource type. Specify the ARNs of the two routing controls as the target resource. Create a new readiness check for the resource set.
D. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53RecoveryReadiness::DNSTargetResource resource type. Add the domain names of the two Route 53 alias DNS records as the target resource. Create a new readiness check for the resource set.
Answer: A
Explanation:
The correct solution is to create a new assertion safety rule in Route 53 ARC and apply it
to the two routing controls. An assertion safety rule is a type of safety rule that ensures that
a minimum number of routing controls are always enabled. The ATLEAST type of assertion
safety rule specifies the minimum number of routing controls that must be enabled for the
rule to evaluate as healthy. By setting the threshold to 1, the rule ensures that at least one
routing control is always turned on. This prevents the scenario where both routing controls are accidentally turned off and the application becomes unavailable in both Regions.
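The ATLEAST assertion can be reduced to a small invariant check; the Region names here are placeholders, and this is only a sketch of the rule logic, not the Route 53 ARC API:

```python
# Sketch of an ATLEAST assertion safety rule: a proposed combination of
# routing control states is allowed only if at least `threshold` controls
# remain enabled ("On").
def change_allowed(proposed_states: dict, threshold: int = 1) -> bool:
    enabled = sum(1 for state in proposed_states.values() if state == "On")
    return enabled >= threshold

# Turning off one of two controls is fine; turning off both is rejected,
# which is exactly the accident the safety rule prevents.
one_off = change_allowed({"us-east-1": "On", "us-west-2": "Off"})
both_off = change_allowed({"us-east-1": "Off", "us-west-2": "Off"})
```

Route 53 ARC evaluates this invariant before committing a routing control state change, so an operator simply cannot turn off the last enabled control while the rule exists.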
The other solutions are incorrect because they do not use safety rules to prevent both
routing controls from being turned off. A gating safety rule is a type of safety rule that
prevents routing control state changes that violate the rule logic. The OR type of gating
safety rule specifies that one or more routing controls must be enabled for the rule to
evaluate as healthy. However, this rule does not prevent a user from turning off both
routing controls manually. A resource set is a collection of resources that are tested for
readiness by Route 53 ARC. A readiness check is a test that verifies that all the resources
in a resource set are operational. However, these concepts are not related to routing
control states or safety rules. Therefore, creating a new resource set and a new readiness
check will not ensure that at least one routing control is turned on at all times.
References:
Routing control in Amazon Route 53 Application Recovery Controller
Viewing and updating routing control states in Route 53 ARC
Creating a control panel in Route 53 ARC
Creating safety rules in Route 53 ARC
Question # 15
A company manages a multi-tenant environment in its VPC and has configured Amazon GuardDuty for the corresponding AWS account. The company sends all GuardDuty findings to AWS Security Hub. Traffic from suspicious sources is generating a large number of findings. A DevOps engineer needs to implement a solution to automatically deny traffic across the entire VPC when GuardDuty discovers a new suspicious source. Which solution will meet these requirements?
A. Create a GuardDuty threat list. Configure GuardDuty to reference the list. Create an AWS Lambda function that will update the threat list. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
B. Configure an AWS WAF web ACL that includes a custom rule group. Create an AWS Lambda function that will create a block rule in the custom rule group. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
C. Configure a firewall in AWS Network Firewall. Create an AWS Lambda function that will create a Drop action rule in the firewall policy. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
D. Create an AWS Lambda function that will create a GuardDuty suppression rule. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
Question # 16
A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand. The application's frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster. The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfill the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections. Which combination of steps will provide the application with the required scalability? (Select TWO.)
A. Configure a higher reserved concurrency for the Lambda functions.
B. Configure a higher provisioned concurrency for the Lambda functions.
C. Convert the DB cluster to an Aurora global database. Add additional Aurora Replicas in AWS Regions based on the locations of the company's customers.
D. Refactor the Lambda functions. Move the code blocks that initialize database connections into the function handlers.
E. Use Amazon RDS Proxy to create a proxy for the Aurora database. Update the Lambda functions to use the proxy endpoints for database connections.
Answer: B,E
Explanation:
The correct answer is B and E. Configuring a higher provisioned concurrency for the
Lambda functions will ensure that the functions are ready to respond to the initial burst of
requests without any cold start latency. Using Amazon RDS Proxy to create a proxy for the
Aurora database will enable the Lambda functions to reuse existing database connections
and reduce the overhead of establishing new ones. This will also improve the scalability
and availability of the database by managing the connection pool size and handling
failovers. Option A is incorrect because reserved concurrency only limits the number of
concurrent executions for a function, not pre-warms them. Option C is incorrect because
converting the DB cluster to an Aurora global database will not address the issue of
database connection latency, and may introduce additional costs and complexity. Option D
is incorrect because moving the code blocks that initialize database connections into the
function handlers will not improve the performance or scalability of the Lambda functions,
and may actually worsen the cold start latency.
References:
AWS Lambda Provisioned Concurrency
Using Amazon RDS Proxy with AWS Lambda
Certified DevOps Engineer - Professional (DOP-C02) Study Guide (page 173)
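The connection-reuse pattern that makes option D an anti-pattern can be shown in plain Python; `connect` here is a stand-in for a real database client, and the counter only exists to make the reuse observable:

```python
# Expensive setup (e.g. a database connection) belongs in global scope, where
# it runs once per Lambda execution environment, not inside the handler,
# where it would run on every invocation.
CONNECT_CALLS = 0

def connect():
    """Stand-in for opening a database connection; counts how often it runs."""
    global CONNECT_CALLS
    CONNECT_CALLS += 1
    return object()  # placeholder for a live connection

connection = connect()  # executed once, at environment initialization

def handler(event, context):
    # Every invocation reuses the module-level connection.
    return {"connection_reused": connection is not None}

# Two invocations against the same warm environment share one connection.
handler({}, None)
handler({}, None)
```

Provisioned concurrency pre-runs this initialization so the first burst of requests skips it, and RDS Proxy pools the resulting connections on the database side.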
Question # 17
A company's security policies require the use of security-hardened AMIs in production environments. A DevOps engineer has used EC2 Image Builder to create a pipeline that builds the AMIs on a recurring schedule. The DevOps engineer needs to update the launch templates of the company's Auto Scaling groups. The Auto Scaling groups must use the newest AMIs during the launch of Amazon EC2 instances. Which solution will meet these requirements with the MOST operational efficiency?
A. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Systems Manager Run Command document that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
B. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Lambda function that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
C. Configure the launch template to use a value from AWS Systems Manager Parameter Store for the AMI ID. Configure the Image Builder pipeline to update the Parameter Store value with the newest AMI ID.
D. Configure the Image Builder distribution settings to update the launch templates with the newest AMI ID. Configure the Auto Scaling groups to use the newest version of the launch template.
Answer: C
Explanation:
The most operationally efficient solution is to use AWS Systems Manager
Parameter Store1 to store the AMI ID and reference it in the launch template2.
This way, the launch template does not need to be updated every time a new AMI
is created by Image Builder. Instead, the Image Builder pipeline can update the
Parameter Store value with the newest AMI ID3, and the Auto Scaling group can
launch instances using the latest value from Parameter Store.
The other solutions require updating the launch template or creating a new version
of it every time a new AMI is created, which adds complexity and overhead.
Additionally, using EventBridge rules and Lambda functions or Run Command
documents introduces additional dependencies and potential points of failure.
References:
1: AWS Systems Manager Parameter Store
2: Using AWS Systems Manager parameters instead of AMI IDs in launch templates
3: Update an SSM parameter with Image Builder
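EC2 launch templates can reference a Systems Manager parameter directly with the resolve:ssm: syntax. A sketch of the launch template data, with a placeholder parameter name:

```python
# Launch template data that reads the AMI ID from a Systems Manager
# parameter at launch time. Because EC2 resolves the reference when an
# instance launches, the Image Builder pipeline only has to overwrite the
# parameter value; the launch template itself never needs a new version.
launch_template_data = {
    "ImageId": "resolve:ssm:/golden-ami/latest",  # placeholder parameter path
    "InstanceType": "t3.micro",
}
```

The Image Builder pipeline's final step would call put-parameter on /golden-ami/latest with the newly distributed AMI ID.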
Question # 18
A company requires its internal business teams to launch resources through pre-approved AWS CloudFormation templates only. The security team requires automated monitoring when resources drift from their expected state. Which strategy should be used to meet these requirements?
A. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use CloudFormation drift detection to detect when resources have drifted from their expected state.
B. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use AWS Config rules to detect when resources have drifted from their expected state.
C. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a launch constraint. Use AWS Config rules to detect when resources have drifted from their expected state.
D. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a template constraint. Use Amazon EventBridge notifications to detect when resources have drifted from their expected state.
Answer: C
Explanation:
The correct answer is C. Allowing users to deploy CloudFormation stacks using AWS
Service Catalog only and enforcing the use of a launch constraint is the best way to ensure
that the internal business teams launch resources through pre-approved CloudFormation
templates only. AWS Service Catalog is a service that enables organizations to create and
manage catalogs of IT services that are approved for use on AWS. A launch constraint is a
rule that specifies the role that AWS Service Catalog assumes when launching a product.
By using a launch constraint, the DevOps engineer can control the permissions that the
users have when launching a product. Using AWS Config rules to detect when resources
have drifted from their expected state is the best way to automate the monitoring of the
resources. AWS Config is a service that enables you to assess, audit, and evaluate the
configurations of your AWS resources. AWS Config rules are custom or managed rules
that AWS Config uses to evaluate whether your AWS resources comply with your desired
configurations. By using AWS Config rules, the DevOps engineer can track the changes in
the resources and identify any non-compliant resources.
Option A is incorrect because allowing users to deploy CloudFormation stacks using a
CloudFormation service role only is not the best way to ensure that the internal business
teams launch resources through pre-approved CloudFormation templates only. A
CloudFormation service role is an IAM role that CloudFormation assumes to create,
update, or delete the stack resources. By using a CloudFormation service role, the DevOps
engineer can control the permissions that CloudFormation has when acting on the
resources, but not the permissions that the users have when launching a stack. Therefore,
option A does not prevent the users from launching resources that are not approved by the
company. Using CloudFormation drift detection to detect when resources have drifted from
their expected state is a valid way to monitor the resources, but it is not as automated and
scalable as using AWS Config rules. CloudFormation drift detection is a feature that
enables you to detect whether a stack’s actual configuration differs, or has drifted, from its
expected configuration. To use this feature, the DevOps engineer would need to manually
initiate a drift detection operation on the stack or the stack resources, and then view the
drift status and details in the CloudFormation console or API.
Option B is incorrect because allowing users to deploy CloudFormation stacks using a
CloudFormation service role only is not the best way to ensure that the internal business teams launch resources through pre-approved CloudFormation templates only, as
explained in option A. Using AWS Config rules to detect when resources have drifted from
their expected state is a valid way to monitor the resources, as explained in option C.
Option D is incorrect because enforcing the use of a template constraint is not the best way
to ensure that the internal business teams launch resources through pre-approved
CloudFormation templates only. A template constraint is a rule that defines the values or
properties that users can specify when launching a product. By using a template constraint,
the DevOps engineer can control the parameters that the users can provide when
launching a product, but not the permissions that the users have when launching a product.
Therefore, option D does not prevent the users from launching resources that are not
approved by the company. Using Amazon EventBridge notifications to detect when
resources have drifted from their expected state is a less reliable and consistent solution
than using AWS Config rules. Amazon EventBridge is a service that enables you to
connect your applications with data from a variety of sources. Amazon EventBridge can
deliver a stream of real-time data from event sources, such as AWS services, and route
that data to targets, such as AWS Lambda functions. However, to use this solution, the
DevOps engineer would need to configure the event source, the event bus, the event rule,
and the event target for each resource type that needs to be monitored, which is more
complex and error-prone than using AWS Config rules.
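AWS Config ships a managed rule for exactly this drift check, so the monitoring half of option C needs no custom code. A sketch of the put-config-rule request body as a dict; the role ARN is a placeholder for a role that Config assumes to run drift detection:

```python
import json

# Request shape for enabling the managed CloudFormation drift-detection rule
# in AWS Config. The IAM role ARN is an illustrative placeholder.
drift_rule_request = {
    "ConfigRule": {
        "ConfigRuleName": "cloudformation-stack-drift-detection-check",
        "Source": {
            "Owner": "AWS",  # AWS-managed rule, no custom Lambda required
            "SourceIdentifier": "CLOUDFORMATION_STACK_DRIFT_DETECTION_CHECK",
        },
        # InputParameters is a JSON string, not a nested dict.
        "InputParameters": json.dumps(
            {"cloudformationRoleArn": "arn:aws:iam::123456789012:role/config-drift-role"}
        ),
    }
}
```

Noncompliant stacks then surface in the standard Config compliance view, which is the automated monitoring the security team asked for.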
Question # 19
A company is building a web and mobile application that uses a serverless architecture powered by AWS Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository. The deployment must have the following:
• Separate environment pipelines for testing and production
• Automatic deployment that occurs for test environments only
Which steps should be taken to meet these requirements?
A. Configure a new AWS CodePipeline service. Create a CodeCommit repository for each environment. Set up CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
B. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create a CodeCommit repository for each environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
C. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch in the repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
D. Create an AWS CodeBuild configuration for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment step to deploy the Lambda functions from the S3 bucket.
Answer: C
Explanation:
The correct approach to meet the requirements for separate environment pipelines and
automatic deployment for test environments is to create two AWS CodePipeline
configurations, one for each environment. The production pipeline should have a manual
approval step to ensure that changes are reviewed before being deployed to production. A
single AWS CodeCommit repository with separate branches for each environment allows
for organized and efficient code management. Each CodePipeline retrieves the source
code from the appropriate branch in the repository. The deployment step utilizes AWS
CloudFormation to deploy the Lambda functions, ensuring that the infrastructure as code is maintained and version-controlled.
References:
AWS Lambda with Amazon API Gateway: Using AWS Lambda with Amazon API
Gateway
Tutorial on using Lambda with API Gateway: Tutorial: Using Lambda with API
Gateway
AWS CodePipeline automatic deployment: Set Up a Continuous Deployment
Pipeline Using AWS CodePipeline
Building a pipeline for test and production stacks: Walkthrough: Building a pipeline
for test and production stacks
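The stage layout described above can be sketched as a small function; stage and branch names are illustrative, not CodePipeline API values:

```python
# Sketch of the two pipelines' stages: both pull from a branch of the same
# repository and deploy with CloudFormation, but only production inserts a
# manual approval gate, so test deployments remain fully automatic.
def pipeline_stages(environment: str, branch: str) -> list:
    stages = [
        {"name": "Source", "provider": "CodeCommit", "branch": branch},
        {"name": "Build", "provider": "CodeBuild"},
    ]
    if environment == "production":
        stages.append({"name": "Approval", "provider": "Manual"})
    stages.append({"name": "Deploy", "provider": "CloudFormation"})
    return stages
```

Pushing to the test branch therefore flows straight through to deployment, while a push to the production branch pauses at the approval stage until a reviewer releases it.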
Question # 20
A healthcare services company is concerned about the growing costs of software licensing for an application for monitoring patient wellness. The company wants to create an audit process to ensure that the application is running exclusively on Amazon EC2 Dedicated Hosts. A DevOps engineer must create a workflow to audit the application to ensure compliance. What steps should the engineer take to meet this requirement with the LEAST administrative overhead?
A. Use AWS Systems Manager Configuration Compliance. Use calls to the put-compliance-items API action to scan and build a database of noncompliant EC2 instances based on their host placement configuration. Use an Amazon DynamoDB table to store these instance IDs for fast access. Generate a report through Systems Manager by calling the list-compliance-summaries API action.
B. Use custom Java code running on an EC2 instance. Set up EC2 Auto Scaling for the instance depending on the number of instances to be checked. Send the list of noncompliant EC2 instance IDs to an Amazon SQS queue. Set up another worker instance to process instance IDs from the SQS queue and write them to Amazon DynamoDB. Use an AWS Lambda function to terminate noncompliant instance IDs obtained from the queue, and send them to an Amazon SNS email topic for distribution.
C. Use AWS Config. Identify all EC2 instances to be audited by enabling Config Recording on all Amazon EC2 resources for the region. Create a custom AWS Config rule that triggers an AWS Lambda function by using the "config-rule-change-triggered" blueprint. Modify the Lambda evaluateCompliance() function to verify host placement to return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host. Use the AWS Config report to address noncompliant instances.
D. Use AWS CloudTrail. Identify all EC2 instances to be audited by analyzing all calls to the EC2 RunCommand API action. Invoke an AWS Lambda function that analyzes the host placement of the instance. Store the EC2 instance ID of noncompliant resources in an Amazon RDS for MySQL DB instance. Generate a report by querying the RDS instance and exporting the query results to a CSV text file.
Answer: C
Explanation:
The correct answer is C. Using AWS Config to identify and audit all EC2 instances based
on their host placement configuration is the most efficient and scalable solution to ensure
compliance with the software licensing requirement. AWS Config is a service that enables
you to assess, audit, and evaluate the configurations of your AWS resources. By creating a
custom AWS Config rule that triggers a Lambda function to verify host placement, the
DevOps engineer can automate the process of checking whether the instances are running
on EC2 Dedicated Hosts or not. The Lambda function can return a NON_COMPLIANT
result if the instance is not running on an EC2 Dedicated Host, and the AWS Config report
can provide a summary of the compliance status of the instances. This solution requires
the least administrative overhead compared to the other options.
Option A is incorrect because using AWS Systems Manager Configuration Compliance to
scan and build a database of noncompliant EC2 instances based on their host placement
configuration is a more complex and costly solution than using AWS Config. AWS Systems
Manager Configuration Compliance is a feature of AWS Systems Manager that enables
you to scan your managed instances for patch compliance and configuration
inconsistencies. To use this feature, the DevOps engineer would need to install the
Systems Manager Agent on each EC2 instance, create a State Manager association to run
the put-compliance-items API action periodically, and use a DynamoDB table to store the
instance IDs of noncompliant resources. This solution would also require more API calls
and storage costs than using AWS Config.
Option B is incorrect because using custom Java code running on an EC2 instance to
check and terminate noncompliant EC2 instances is a more cumbersome and error-prone
solution than using AWS Config. This solution would require the DevOps engineer to write
and maintain the Java code, set up EC2 Auto Scaling for the instance, use an SQS queue
and another worker instance to process the instance IDs, use a Lambda function and an
SNS topic to terminate and notify the noncompliant instances, and handle any potential
failures or exceptions in the workflow. This solution would also incur more compute,
storage, and messaging costs than using AWS Config.
Option D is incorrect because using AWS CloudTrail to identify and audit EC2 instances by
analyzing the EC2 RunCommand API action is a less reliable and accurate solution than using AWS Config. AWS CloudTrail is a service that enables you to monitor and log the
API activity in your AWS account. The EC2 RunCommand API action is used to execute
commands on one or more EC2 instances. However, this API action does not necessarily
indicate the host placement of the instance, and it may not capture all the instances that
are running on EC2 Dedicated Hosts or not. Therefore, option D would not provide a
comprehensive and consistent audit of the EC2 instances.
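The evaluateCompliance() logic from option C can be sketched as a custom Config rule handler. The configuration-item fields are simplified here; a real configuration item carries many more attributes, but tenancy "host" is what indicates an EC2 Dedicated Host:

```python
import json

# Sketch of a custom AWS Config rule evaluation: an EC2 instance is
# compliant only if its placement tenancy is "host" (a Dedicated Host).
def evaluate_compliance(configuration_item: dict) -> str:
    placement = configuration_item.get("configuration", {}).get("placement", {})
    if placement.get("tenancy") == "host":
        return "COMPLIANT"
    return "NON_COMPLIANT"

def lambda_handler(event, context):
    # Config delivers the configuration item inside a JSON-encoded string.
    invoking_event = json.loads(event["invokingEvent"])
    return evaluate_compliance(invoking_event["configurationItem"])
```

In a full implementation the handler would report the result back through the put-evaluations API rather than returning it; the return value here just keeps the sketch testable.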
Question # 21
A company's application runs on Amazon EC2 instances. The application writes to a log file that records the username, date, time, and source IP address of the login. The log is published to a log group in Amazon CloudWatch Logs. The company is performing a root cause analysis for an event that occurred on the previous day. The company needs to know the number of logins for a specific user from the past 7 days. Which solution will provide this information?
A. Create a CloudWatch Logs metric filter on the log group. Use a filter pattern that matches the username. Publish a CloudWatch metric that sums the number of logins over the past 7 days.
B. Create a CloudWatch Logs subscription on the log group. Use a filter pattern that matches the username. Publish a CloudWatch metric that sums the number of logins over the past 7 days.
C. Create a CloudWatch Logs Insights query that uses an aggregation function to count the number of logins for the username over the past 7 days. Run the query against the log group.
D. Create a CloudWatch dashboard. Add a number widget that has a filter pattern that counts the number of logins for the username over the past 7 days directly from the log group.
Answer: C
Explanation: To analyze and find the number of logins for a specific user from the past 7
days, a CloudWatch Logs Insights query is the most suitable solution. CloudWatch Logs
Insights enables you to interactively search and analyze your log data in Amazon
CloudWatch Logs. You can use the query language to perform queries that contain multiple
commands, including aggregation functions, which can count the occurrences of logins for
a specific username over a specified time period. This approach is more direct and efficient
than creating a metric filter or subscription, which would require additional steps to publish
and sum a metric.
References:
AWS Certified DevOps Engineer - Professional
CloudWatch Logs Insights query syntax
Tutorial: Run a query with an aggregation function
Add or remove a number widget from a CloudWatch dashboard
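A Logs Insights query for this scenario might look like the following; the username field and value are hypothetical, since the log line format in the question is not fully specified:

```python
# CloudWatch Logs Insights query counting logins for one user. The query is
# run against the log group with a 7-day time range selected; `username` is
# an assumed extracted field name and "jdoe" a placeholder value.
query = """
fields @timestamp, @message
| filter username = "jdoe"
| stats count(*) as login_count
"""
```

The stats aggregation returns a single login_count row, which answers the question directly without any metric filter or subscription plumbing.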
Question # 22
AnyCompany is using AWS Organizations to create and manage multiple AWS accounts. AnyCompany recently acquired a smaller company, Example Corp. During the acquisition process, Example Corp's single AWS account joined AnyCompany's management account through an Organizations invitation. AnyCompany moved the new member account under an OU that is dedicated to Example Corp. AnyCompany's DevOps engineer has an IAM user that assumes a role that is named OrganizationAccountAccessRole to access member accounts. This role is configured with a full access policy. When the DevOps engineer tries to use the AWS Management Console to assume the role in Example Corp's new member account, the DevOps engineer receives the following error message: "Invalid information in one or more fields. Check your information or contact your administrator." Which solution will give the DevOps engineer access to the new member account?
A. In the management account, grant the DevOps engineer's IAM user permission to assume the OrganizationAccountAccessRole IAM role in the new member account.
B. In the management account, create a new SCP. In the SCP, grant the DevOps engineer's IAM user full access to all resources in the new member account. Attach the SCP to the OU that contains the new member account.
C. In the new member account, create a new IAM role that is named OrganizationAccountAccessRole. Attach the AdministratorAccess AWS managed policy to the role. In the role's trust policy, grant the management account permission to assume the role.
D. In the new member account, edit the trust policy for the OrganizationAccountAccessRole IAM role. Grant the management account permission to assume the role.
Answer: C
Explanation: The problem is that the DevOps engineer cannot assume the
OrganizationAccountAccessRole IAM role in the new member account that joined
AnyCompany’s management account through an Organizations invitation. The solution is
to create a new IAM role with the same name and trust policy in the new member account.
Option A is incorrect, as it does not address the root cause of the error. The
DevOps engineer’s IAM user already has permission to assume the
OrganizationAccountAccessRole IAM role in any member account, as this is the
default role name that AWS Organizations creates when a new account joins an
organization. The error occurs because the new member account does not have
this role, as it was not created by AWS Organizations.
Option B is incorrect, as it does not address the root cause of the error. An SCP is
a policy that defines the maximum permissions for account members of an
organization or organizational unit (OU). An SCP does not grant permissions to
IAM users or roles, but rather limits the permissions that identity-based policies or
resource-based policies grant to them. An SCP also does not affect how IAM roles
are assumed by other principals.
Option C is correct, as it addresses the root cause of the error. By creating a new
IAM role with the same name and trust policy as the
OrganizationAccountAccessRole IAM role in the new member account, the
DevOps engineer can assume this role and access the account. The new role
should have the AdministratorAccess AWS managed policy attached, which grants
full access to all AWS resources in the account. The trust policy should allow the
management account to assume the role, which can be done by specifying the
management account ID as a principal in the policy statement.
Option D is incorrect, as it assumes that the new member account already has the
OrganizationAccountAccessRole IAM role, which is not true. The new member
account does not have this role, as it was not created by AWS Organizations.
Editing the trust policy of a non-existent role will not solve the problem.
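For reference, the trust policy on the new role might look like the following sketch, where the account ID 111122223333 is a placeholder for the management account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this trust policy in place, any principal in the management account that has `sts:AssumeRole` permission on the role's ARN can assume it.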
Question # 23
A company has an application that includes AWS Lambda functions. The Lambda functions run Python code that is stored in an AWS CodeCommit repository. The company has recently experienced failures in the production environment because of an error in the Python code. An engineer has written unit tests for the Lambda functions to help avoid releasing any future defects into the production environment.
The company's DevOps team needs to implement a solution to integrate the unit tests into an existing AWS CodePipeline pipeline. The solution must produce reports about the unit tests for the company to view.
Which solution will meet these requirements?
A. Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Create a new AWS CodeBuild project. In the CodePipeline pipeline, configure a test stage that uses the new CodeBuild project. Create a buildspec.yml file in the CodeCommit repository. In the buildspec.yml file, define the actions to run a CodeGuru review.
B. Create a new AWS CodeBuild project. In the CodePipeline pipeline, configure a test stage that uses the new CodeBuild project. Create a CodeBuild report group. Create a buildspec.yml file in the CodeCommit repository. In the buildspec.yml file, define the actions to run the unit tests with an output of JUNITXML in the build phase section. Configure the test reports to be uploaded to the new CodeBuild report group.
C. Create a new AWS CodeArtifact repository. Create a new AWS CodeBuild project. In the CodePipeline pipeline, configure a test stage that uses the new CodeBuild project. Create an appspec.yml file in the original CodeCommit repository. In the appspec.yml file, define the actions to run the unit tests with an output of CUCUMBERJSON in the build phase section. Configure the test reports to be sent to the new CodeArtifact repository.
D. Create a new AWS CodeBuild project. In the CodePipeline pipeline, configure a test stage that uses the new CodeBuild project. Create a new Amazon S3 bucket. Create a buildspec.yml file in the CodeCommit repository. In the buildspec.yml file, define the actions to run the unit tests with an output of HTML in the phases section. In the reports section, upload the test reports to the S3 bucket.
Answer: B
Explanation: The correct answer is B. Creating a new AWS CodeBuild project and
configuring a test stage in the AWS CodePipeline pipeline that uses the new CodeBuild
project is the best way to integrate the unit tests into the existing pipeline. Creating a
CodeBuild report group and uploading the test reports to the new CodeBuild report group
will produce reports about the unit tests for the company to view. JUNITXML is a test
report format that CodeBuild supports, so the unit tests will generate a valid report.
Option A is incorrect because Amazon CodeGuru Reviewer is a service that provides
automated code reviews and recommendations for improving code quality and
performance. It is not a tool for running unit tests or producing test reports. Therefore,
option A will not meet the requirements.
Option C is incorrect because AWS CodeArtifact is a service that provides secure,
scalable, and cost-effective artifact management for software development. It is not a tool
for running unit tests or viewing test reports, so sending the reports to a CodeArtifact
repository does not satisfy the requirement. (CUCUMBERJSON is in fact a report format
that CodeBuild supports; the problem with option C is the report destination, along with its
use of an appspec.yml file, which belongs to AWS CodeDeploy rather than CodeBuild.)
Option D is incorrect because uploading the test reports to an Amazon S3 bucket is not the
best way to produce reports about the unit tests for the company to view. CodeBuild has a
built-in feature to create and manage test reports, which is more convenient and efficient
than using S3. Furthermore, option D uses HTML as the output format for the unit tests,
which is not supported by CodeBuild and will not generate a valid report.
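As an illustration of option B, a minimal buildspec.yml might look like the following sketch. The pytest command, report group name, and file paths are assumptions for illustration, not details taken from the question:

```yaml
version: 0.2
phases:
  install:
    commands:
      - pip install pytest               # hypothetical test dependency
  build:
    commands:
      # Run the unit tests and emit a JUnit-style XML report
      - python -m pytest tests/ --junitxml=reports/junit.xml
reports:
  unit-test-reports:                     # hypothetical report group name
    files:
      - junit.xml
    base-directory: reports
    file-format: JUNITXML
```

After the build runs, the report appears under the report group in the CodeBuild console, where the company can view pass/fail results for each test case.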
Question # 24
A company is using AWS Organizations to create separate AWS accounts for each of its departments. The company needs to automate the following tasks:
• Update the Linux AMIs with new patches periodically and generate a golden image
• Install a new version of the Chef agent in the golden image, if available
• Provide the newly generated AMIs to the department's accounts
Which solution meets these requirements with the LEAST management overhead?
A. Write a script to launch an Amazon EC2 instance from the previous golden image. Apply the patch updates. Install the new version of the Chef agent, generate a new golden image, and then modify the AMI permissions to share only the new image with the department's accounts.
B. Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Use AWS Resource Access Manager to share EC2 Image Builder images with the department's accounts.
C. Use an AWS Systems Manager Automation runbook to update the Linux AMI by using the previous image. Provide the URL for the script that will update the Chef agent. Use AWS Organizations to replace the previous golden image in the department's accounts.
D. Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Create a parameter in AWS Systems Manager Parameter Store to store the new AMI ID that can be referenced by the department's accounts.
Answer: B
Explanation:
Amazon EC2 Image Builder is a service that automates the creation, management, and
deployment of customized, secure, and up-to-date server images that are pre-installed with
software and configuration settings tailored to meet specific IT standards. EC2 Image
Builder simplifies the creation and maintenance of golden images, and makes it easy to
generate images for multiple platforms, such as Amazon EC2 and on-premises. EC2
Image Builder also integrates with AWS Resource Access Manager, which allows you to
share your images across accounts within your organization or with external AWS
accounts. This solution meets the requirements of automating the tasks of updating the
Linux AMIs, installing the Chef agent, and providing the images to the department’s
accounts with the least management overhead. References:
Amazon EC2 Image Builder
Sharing EC2 Image Builder images
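As a sketch, the sharing step could be declared in the same CloudFormation template as the Image Builder pipeline. The share name, account ID, and the `GoldenImage` resource name are placeholders assumed for illustration:

```yaml
ImageShare:
  Type: AWS::RAM::ResourceShare
  Properties:
    Name: golden-image-share            # hypothetical share name
    ResourceArns:
      - !GetAtt GoldenImage.Arn         # assumes an AWS::ImageBuilder::Image resource named GoldenImage
    Principals:
      - "222233334444"                  # placeholder department account ID
```

Because the accounts belong to the same organization, sharing within AWS Organizations avoids the need to accept resource share invitations in each department account.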
Question # 25
A DevOps engineer is setting up a container-based architecture. The engineer has decided to use AWS CloudFormation to automatically provision an Amazon ECS cluster and an Amazon EC2 Auto Scaling group to launch the EC2 container instances. After successfully creating the CloudFormation stack, the engineer noticed that, even though the ECS cluster and the EC2 instances were created successfully and the stack finished the creation, the EC2 instances were associating with a different cluster.
How should the DevOps engineer update the CloudFormation template to resolve this issue?
A. Reference the EC2 instances in the AWS::ECS::Cluster resource and reference the ECS cluster in the AWS::ECS::Service resource.
B. Reference the ECS cluster in the UserData property of the AWS::AutoScaling::LaunchConfiguration resource.
C. Reference the ECS cluster in the UserData property of the AWS::EC2::Instance resource.
D. Reference the ECS cluster in an AWS::CloudFormation::CustomResource resource to trigger an AWS Lambda function that registers the EC2 instances with the appropriate ECS cluster.
Answer: B
Explanation:
The UserData property of the AWS::AutoScaling::LaunchConfiguration resource can be
used to specify a script that runs when the EC2 instances are launched. This script can
write the ECS cluster name into the ECS agent's configuration file on the
EC2 instances. This way, the EC2 instances will register with the correct ECS cluster.
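A sketch of the relevant template fragment, assuming an AWS::ECS::Cluster resource named `EcsCluster` elsewhere in the stack (the AMI ID, instance type, and instance profile are placeholders):

```yaml
ContainerInstanceLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-0abcdef1234567890              # placeholder ECS-optimized AMI
    InstanceType: t3.medium
    IamInstanceProfile: !Ref EcsInstanceProfile # assumes an instance profile resource
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Point the ECS agent at the cluster created in this stack
        echo "ECS_CLUSTER=${EcsCluster}" >> /etc/ecs/ecs.config
```

Without the `ECS_CLUSTER` setting, the ECS agent registers the instance with the `default` cluster, which explains the behavior the engineer observed.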
Option A is incorrect because the AWS::ECS::Cluster resource does not have a property to
reference the EC2 instances. Option C is incorrect because the EC2 instances are
launched by the Auto Scaling group, not by the AWS::EC2::Instance resource. Option D is
incorrect because using a custom resource and a Lambda function is unnecessary and