Was: $90 — Today: $50
Was: $108 — Today: $60
Was: $126 — Today: $70
Why Should You Prepare For Your AWS Certified DevOps Engineer - Professional With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic Amazon DOP-C02 Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual AWS Certified DevOps Engineer - Professional test. Whether you’re targeting Amazon certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified DOP-C02 Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the DOP-C02 AWS Certified DevOps Engineer - Professional, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The DOP-C02
You can instantly access downloadable PDFs of DOP-C02 practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Amazon Exam with confidence.
Smart Learning With Exam Guides
Our structured DOP-C02 exam guide focuses on the AWS Certified DevOps Engineer - Professional's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass The DOP-C02 Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you don’t pass the AWS Certified DevOps Engineer - Professional exam after preparing with MyCertsHub’s exam dumps, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the DOP-C02 exam dumps.
MyCertsHub – Your Trusted Partner For Amazon Exams
Whether you’re preparing for AWS Certified DevOps Engineer - Professional or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your DOP-C02 exam has never been easier thanks to our tried-and-true resources.
Amazon DOP-C02 Sample Question Answers
Question # 1
A company has an AWS Control Tower landing zone. The company's DevOps team
creates a workload OU. A development OU and a production OU are nested under the
workload OU. The company grants users full access to the company's AWS accounts to
deploy applications.
The DevOps team needs to allow only a specific management IAM role to manage the IAM roles and policies of any AWS accounts in only the production OU.
Which combination of steps will meet these requirements? (Select TWO.)
A. Create an SCP that denies full access with a condition to exclude the management IAM role for the organization root.
B. Ensure that the FullAWSAccess SCP is applied at the organization root.
C. Create an SCP that allows IAM-related actions. Attach the SCP to the development OU.
D. Create an SCP that denies IAM-related actions with a condition to exclude the management IAM role. Attach the SCP to the workload OU.
E. Create an SCP that denies IAM-related actions with a condition to exclude the management IAM role. Attach the SCP to the production OU.
Answer: B,E
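The deny-with-exception pattern in options D and E can be sketched as a policy document. A minimal sketch in Python, assuming a hypothetical management role ARN (the question does not give one):

```python
import json

# Hypothetical ARN for the management role; the question does not specify one.
MANAGEMENT_ROLE_ARN = "arn:aws:iam::*:role/ManagementRole"

# SCP that denies all IAM actions unless the caller is the management role.
# Attached to the production OU, it affects only accounts in that OU.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIamExceptManagementRole",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "*",
            "Condition": {
                "ArnNotLike": {"aws:PrincipalArn": MANAGEMENT_ROLE_ARN}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs only filter permissions, the FullAWSAccess SCP (option B) must remain at the root so the deny above is the only restriction added.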
Question # 2
A company uses Amazon Redshift as its data warehouse solution. The company wants to
create a dashboard to view changes to the Redshift users and the queries the users
perform.
Which combination of steps will meet this requirement? (Select TWO.)
A. Create an Amazon CloudWatch log group. Create an AWS CloudTrail trail that writes to the CloudWatch log group.
B. Create a new Amazon S3 bucket. Configure default audit logging on the Redshift cluster. Configure the S3 bucket as the target.
C. Configure the Redshift cluster database audit logging to include user activity logs. Configure Amazon CloudWatch as the target.
D. Create an Amazon CloudWatch dashboard that has a log widget. Configure the widget to display user details from the Redshift logs.
E. Create an AWS Lambda function that uses Amazon Athena to query the Redshift logs. Create an Amazon CloudWatch dashboard that has a custom widget type that uses the Lambda function.
Answer: B,D
Question # 3
A DevOps engineer is implementing governance controls for a company that requires its
infrastructure to be housed within the United States. The engineer must restrict which AWS
Regions can be used, and ensure an alert is sent as soon as possible if any activity outside
the governance policy takes place. The controls should be automatically enabled on any
new Region outside the United States (US).
Which combination of actions will meet these requirements? (Select TWO.)
A. Create an AWS Organizations SCP that denies access to all non-global services in non-US Regions. Attach the policy to the root of the organization.
B. Configure AWS CloudTrail to send logs to Amazon CloudWatch Logs and enable it for all Regions. Use a CloudWatch Logs metric filter to send an alert on any service activity in non-US Regions.
C. Use an AWS Lambda function that checks for AWS service activity and deploy it to all Regions. Write an Amazon EventBridge rule that runs the Lambda function every hour, sending an alert if activity is found in a non-US Region.
D. Use an AWS Lambda function to query Amazon Inspector to look for service activity in non-US Regions and send alerts if any activity is found.
E. Write an SCP using the aws:RequestedRegion condition key limiting access to US Regions. Apply the policy to all users, groups, and roles.
Answer: A,B
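Option E names the aws:RequestedRegion condition key, and the same key is how the deny-by-Region SCP in option A is commonly written. A hedged sketch; the US Region list and the exempted global services are illustrative assumptions, not from the question:

```python
import json

# Illustrative list of US Regions; a real policy would enumerate all of them.
US_REGIONS = ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]

# SCP denying every action outside US Regions, attached at the organization
# root. Because the condition matches any Region, it also covers newly
# launched non-US Regions automatically.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonUsRegions",
            "Effect": "Deny",
            # Global services (IAM, Organizations, Support, ...) are exempted
            # via NotAction so the policy does not break them; this list is
            # a minimal assumption.
            "NotAction": ["iam:*", "organizations:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": US_REGIONS}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```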
Question # 4
A company is developing code and wants to use semantic versioning. The company's
DevOps team needs to create a pipeline for compiling the code. The team also needs to
manage versions of the compiled code. If the code uses any open source libraries, the libraries must also be cached in the build process.
Which solution will meet these requirements?
A. Create an AWS CodeArtifact repository and associate the upstream repositories. Create an AWS CodeBuild project that builds the semantic version of the code artifacts. Configure the project to authenticate and connect to the CodeArtifact repository and publish the artifact to the repository.
B. Use AWS CodeDeploy to upload the generated semantic version of the artifact to an Amazon Elastic File System (Amazon EFS) file system.
C. Use an AWS CodeBuild project to build the code and to publish the generated semantic version of the artifact to AWS Artifact. Configure build caching in the CodeBuild project.
D. Create a new AWS CodeArtifact repository. Create an AWS Lambda function that pulls open source packages from the internet and publishes the packages to the repository. Configure AWS CodeDeploy to build semantic versions of the code and publish the versions to the repository.
Answer: A
Question # 5
A company is running a custom-built application that processes records. All the
components run on Amazon EC2 instances that run in an Auto Scaling group. Each
record's processing is a multistep sequential action that is compute-intensive. Each step is
always completed in 5 minutes or less.
A limitation of the current system is that if any step fails, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application reprocesses only the failed steps.
What is the MOST operationally efficient solution that meets these requirements?
A. Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.
B. Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.
C. Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.
D. Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
Answer: D
Question # 6
A company uses Amazon API Gateway and AWS Lambda functions to implement an API.
The company uses a pipeline in AWS CodePipeline to build and deploy the API. The
pipeline contains a source stage, build stage, and deployment stage.
The company deploys the API without performing smoke tests. Soon after the deployment,
the company observes multiple issues with the API. A security audit finds security
vulnerabilities in the production code.
The company wants to prevent these issues from happening in the future.
Which combination of steps will meet this requirement? (Select TWO.)
A. Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage for automatic rollback.
B. Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage to fail if the smoke test script returns an error code.
C. Add an action in the build stage that uses Amazon Inspector to scan the Lambda function code after the code is built. Configure the build stage to fail if the scan returns any security findings.
D. Add an action in the build stage to run an Amazon CodeGuru code scan after the code is built. Configure the build stage to fail if the scan returns any security findings.
E. Add an action in the deployment stage to run an Amazon CodeGuru code scan after deployment. Configure the deployment stage to fail if the scan returns any security findings.
Answer: B,D
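A smoke test script like the one described in option B only needs to return a nonzero exit code on failure so the pipeline stage fails. A minimal sketch, assuming a plain HTTP health endpoint; the URL and the 200-only pass criterion are assumptions, not from the question:

```python
import urllib.request

def result_code(status: int) -> int:
    """Map an HTTP status to a process exit code: 0 = pass, 1 = fail."""
    return 0 if status == 200 else 1

def smoke_test(url: str) -> int:
    """Call the health endpoint; any exception or non-200 status fails."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return result_code(resp.status)
    except Exception:
        return 1

# In the pipeline, the action would run something like:
#   sys.exit(smoke_test("https://<api-id>.execute-api.<region>.amazonaws.com/prod/health"))
# (placeholder URL), and the stage fails on a nonzero exit code.
```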
Question # 7
A company wants to build a pipeline to update the standard AMI monthly. The AMI must be
updated to use the most recent patches to ensure that launched Amazon EC2 instances
are up to date. Each new AMI must be available to all AWS accounts in the company's
organization in AWS Organizations.
The company needs to configure an automated pipeline to build the AMI.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an AWS CodePipeline pipeline that uses AWS CodeBuild. Create an AWS Lambda function to run the pipeline every month. Create an AWS CloudFormation template. Share the template with all AWS accounts in the organization.
B. Create an AMI pipeline by using EC2 Image Builder. Configure the pipeline to distribute the AMI to the AWS accounts in the organization. Configure the pipeline to run monthly.
C. Create an AWS CodePipeline pipeline that runs an AWS Lambda function to build the AMI. Configure the pipeline to share the AMI with the AWS accounts in the organization. Configure Amazon EventBridge Scheduler to invoke the pipeline every month.
D. Create an AWS Systems Manager Automation runbook. Configure the automation to run in all AWS accounts in the organization. Create an AWS Lambda function to run the automation every month.
Answer: B
Question # 8
A company uses a pipeline in AWS CodePipeline to deploy an application. The company
created an AWS Fault Injection Service (AWS FIS) experiment template to test the
resiliency of the application. A DevOps engineer needs to integrate the experiment into the
pipeline.
Which solution will meet this requirement?
A. Configure a new stage in the pipeline that includes an AWS FIS action. Configure the action to reference the AWS FIS experiment template. Grant the pipeline access to start the experiment.
B. Create an Amazon EventBridge scheduler. Grant the scheduler permission to start the AWS FIS experiment. Configure a new stage in the pipeline that includes an action to invoke the EventBridge scheduler.
C. Create an AWS Lambda function to start the AWS FIS experiment. Grant the Lambda function permission to start the experiment. Create a new stage in the pipeline that has a Lambda action. Set the action to invoke the Lambda function.
D. Export the AWS FIS experiment template to an Amazon S3 bucket. Create an AWS CodeBuild unit test project that has a buildspec that starts the AWS FIS experiment. Grant the CodeBuild project access to start the experiment. Configure a new stage in the pipeline that includes an action to run the CodeBuild unit test project.
Answer: C
Question # 9
A company has a file-reading application that saves files to a database running on Amazon
EC2 instances. Regulations require daily file deletions from EC2 instances and deletion of
database records older than 60 days. Database record deletion must occur after file
deletion. The company needs email notifications for any deletion script failures.
Which solution will meet these requirements with the LEAST development effort?
A. Use AWS Systems Manager State Manager to automatically invoke an Automation document at the specified time daily. Configure the Automation document to run deletion scripts sequentially via run command. Create an EventBridge rule to send failure notifications to Amazon SNS.
B. Use AWS Systems Manager State Manager to automatically invoke an Automation document at the specified time daily. Configure the Automation document to run deletion scripts sequentially. Add a conditional check for errors as the last step and send failure notifications via Amazon SES.
C. Create an EventBridge rule to invoke a Lambda function at the specified time. Configure the Lambda function to run deletion scripts sequentially and send failure notifications via SNS.
D. Create an EventBridge rule to invoke a Lambda function at the specified time. Configure the Lambda function to run deletion scripts sequentially and send failure notifications via SES.
Answer: A
Question # 10
A DevOps administrator is responsible for managing the security of a company's Amazon
CloudWatch Logs log groups. The company’s security policy states that employee IDs
must not be visible in logs except by authorized personnel. Employee IDs follow the pattern
of Emp-XXXXXX, where each X is a digit.
An audit discovered that employee IDs are found in a single log file. The log file is available
to engineers, but the engineers are not authorized to view employee IDs. Engineers
currently have an AWS IAM Identity Center permission that allows logs:* on all resources in
the account.
The administrator must mask the employee ID so that new log entries that contain the
employee ID are not visible to unauthorized personnel.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a new data protection policy on the log group. Add an Emp-\d{6} custom data identifier configuration. Create an IAM policy that has a Deny action for the "Action":"logs:Unmask" permission on the resource. Attach the policy to the engineering accounts.
B. Create a new data protection policy on the log group. Add managed data identifiers for the personal data category. Create an IAM policy that has a Deny action for the "NotAction":"logs:Unmask" permission on the resource. Attach the policy to the engineering accounts.
C. Create an AWS Lambda function to parse a log file entry, remove the employee ID, and write the results to a new log file. Create a Lambda subscription filter on the log group and select the Lambda function. Grant the lambda:InvokeFunction permission to the log group.
D. Create an Amazon Data Firehose delivery stream that has an Amazon S3 bucket as the destination. Create a Firehose subscription filter on the log group that uses the Firehose delivery stream. Remove the "logs:*" permission on the engineering accounts. Create an Amazon Macie job on the S3 bucket that has an Emp-\d{6} custom identifier.
Answer: A
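The custom data identifier in option A is the regex Emp-\d{6}. The sketch below only demonstrates what that pattern matches and roughly how masking behaves; in practice the masking is performed by CloudWatch Logs data protection policies, not by application code:

```python
import re

# The custom data identifier pattern from option A: "Emp-" followed by
# exactly six digits.
EMP_ID = re.compile(r"Emp-\d{6}")

def mask(line: str) -> str:
    """Replace any employee ID in a log line with asterisks."""
    return EMP_ID.sub("*" * 10, line)

masked = mask("login ok for Emp-123456 from 10.0.0.1")
print(masked)  # → login ok for ********** from 10.0.0.1
```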
Question # 11
A DevOps engineer needs a resilient CI/CD pipeline that builds container images, stores
them in ECR, scans images for vulnerabilities, and is resilient to outages in upstream
source image repositories.
Which solution meets this?
A. Create a private ECR repo, scan images on push, replicate images from upstream repos with a replication rule.
B. Create a public ECR repo to cache images from upstream repos, create a private repo to store images, scan images on push.
C. Create a public ECR repo, configure a pull-through cache rule, create a private repo to store images, enable basic scanning.
D. Create a private ECR repo, enable basic scanning, create a pull-through cache rule.
Answer: D
Question # 12
A DevOps engineer has created an AWS CloudFormation template that deploys an application on Amazon EC2 instances. The EC2 instances run Amazon Linux. The application is deployed to the EC2 instances by using shell scripts that contain user data. The EC2 instances have an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore managed policy attached.
The DevOps engineer has modified the user data in the CloudFormation template to install
a new version of the application. The engineer has also applied the stack update. However,
the application was not updated on the running EC2 instances. The engineer needs to
ensure that the changes to the application are installed on the running EC2 instances.
Which combination of steps will meet these requirements? (Select TWO.)
A. Configure the user data content to use the Multipurpose Internet Mail Extensions (MIME) multipart format. Set the scripts-user parameter to always in the text/cloud-config section.
B. Refactor the user data commands to use the cfn-init helper script. Update the user data to install and configure the cfn-hup and cfn-init helper scripts to monitor and apply the metadata changes.
C. Configure an EC2 launch template for the EC2 instances. Create a new EC2 Auto Scaling group. Associate the Auto Scaling group with the EC2 launch template. Use the AutoScalingScheduledAction update policy for the Auto Scaling group.
D. Refactor the user data commands to use an AWS Systems Manager document (SSM document). Add an AWS CLI command in the user data to use Systems Manager Run Command to apply the SSM document to the EC2 instances.
E. Refactor the user data commands to use an AWS Systems Manager document (SSM document). Use Systems Manager State Manager to create an association between the SSM document and the EC2 instances.
Answer: B,E
Question # 13
A company has an AWS CloudFormation stack that is deployed in a single AWS account. The company has configured the stack to send event notifications to an Amazon Simple Notification Service (Amazon SNS) topic.
A DevOps engineer must implement an automated solution that applies a tag to the specific CloudFormation stack instance only after a successful stack update occurs. The DevOps engineer has created an AWS Lambda function that applies and updates this tag for the specific stack instance.
Which solution will meet these requirements?
A. Run the AWS-UpdateCloudFormationStack AWS Systems Manager Automation runbook when Systems Manager detects an UPDATE_COMPLETE event for the instance status of the CloudFormation stack. Configure the runbook to invoke the Lambda function.
B. Create a custom AWS Config rule that produces a compliance change event if the CloudFormation stack has an UPDATE_COMPLETE instance status. Configure AWS Config to directly invoke the Lambda function to automatically remediate the change event.
C. Create an Amazon EventBridge rule that matches the UPDATE_COMPLETE event pattern for the instance status of the CloudFormation stack. Configure the rule to invoke the Lambda function.
D. Adjust the configuration of the CloudFormation stack to send notifications for only an UPDATE_COMPLETE instance status event to the SNS topic. Subscribe the Lambda function to the SNS topic.
Answer: C
Question # 14
A company uses AWS Organizations to manage multiple AWS accounts. The company
needs a solution to improve the company's management of AWS resources in a production
account.
The company wants to use AWS CloudFormation to manage all manually created
infrastructure. The company must have the ability to strictly control who can make manual
changes to AWS infrastructure. The solution must ensure that users can deploy new
infrastructure only by making changes to a CloudFormation template that is stored in an
AWS CodeConnections compatible Git provider.
Which combination of steps will meet these requirements with the LEAST implementation
effort? (Select THREE).
A. Configure the CloudFormation infrastructure as code (IaC) generator to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.
B. Configure AWS Config to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.
C. Use CodeConnections to establish a connection between the Git provider and AWS CodePipeline. Push the CloudFormation template to the Git repository. Run a pipeline in CodePipeline that deploys the CloudFormation stack for every merge into the Git repository.
D. Use CodeConnections to establish a connection between the Git provider and CloudFormation. Push the CloudFormation template to the Git repository. Sync the Git repository with the CloudFormation stack.
E. Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that denies all actions to all principals except the IAM role. Link the SCP with the production OU.
F. Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that allows all actions to only the IAM role. Link the SCP with the production OU.
Answer: A,C,E
Question # 15
A company uses AWS Organizations to manage its AWS accounts. The company has a
root OU that has a child OU. The root OU has an SCP that allows all actions on all
resources. The child OU has an SCP that allows all actions for Amazon DynamoDB and
AWS Lambda, and denies all other actions.
The company has an AWS account that is named vendor-data in the child OU. A DevOps
engineer has an IAM user that is attached to the AdministratorAccess IAM policy in the
vendor-data account. The DevOps engineer attempts to launch an Amazon EC2 instance
in the vendor-data account but receives an access denied error.
Which change should the DevOps engineer make to launch the EC2 instance in the
vendor-data account?
A. Attach the AmazonEC2FullAccess IAM policy to the IAM user.
B. Create a new SCP that allows all actions for Amazon EC2. Attach the SCP to the vendor-data account.
C. Update the SCP in the child OU to allow all actions for Amazon EC2.
D. Create a new SCP that allows all actions for Amazon EC2. Attach the SCP to the root OU.
Answer: C
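Why option C works: an action must be allowed by the SCPs at every level of the hierarchy, so neither the IAM policy (option A) nor a new allow attached elsewhere can override the child OU's implicit deny. A toy evaluator to illustrate the all-levels rule; the service names are illustrative:

```python
def allowed(action: str, scp_allow_lists: list[set[str]]) -> bool:
    """An action passes only if every level's SCP allow-list permits it."""
    return all(action in allows or "*" in allows for allows in scp_allow_lists)

root_scp = {"*"}                      # root OU: allow all actions
child_scp = {"dynamodb", "lambda"}    # child OU: only DynamoDB and Lambda

# EC2 is denied: the child OU's SCP never allows it, regardless of IAM policy.
print(allowed("ec2", [root_scp, child_scp]))                 # → False
# Option C: adding EC2 to the child OU's SCP makes the launch succeed.
print(allowed("ec2", [root_scp, child_scp | {"ec2"}]))       # → True
```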
Question # 16
A company is migrating its container-based workloads to an AWS Organizations multi-account environment. The environment consists of application workload accounts that the company uses to deploy and run the containerized workloads. The company has also provisioned a shared services account for shared workloads in the organization.
The company must follow strict compliance regulations. All container images must receive
security scanning before they are deployed to any environment. Images can be consumed
by downstream deployment mechanisms after the images pass a scan with no critical
vulnerabilities. Pre-scan and post-scan images must be isolated from one another so that a
deployment can never use pre-scan images.
A DevOps engineer needs to create a strategy to centralize this process.
Which combination of steps will meet these requirements with the LEAST administrative
overhead? (Select TWO.)
A. Create Amazon Elastic Container Registry (Amazon ECR) repositories in the shared services account: one repository for each pre-scan image and one repository for each post-scan image. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization write access to the pre-scan repositories and read access to the post-scan repositories.
B. Create pre-scan Amazon Elastic Container Registry (Amazon ECR) repositories in each account that publishes container images. Create repositories for post-scan images in the shared services account. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization read access to the post-scan repositories.
C. Configure image replication for each image from the image's pre-scan repository to the image's post-scan repository.
D. Create a pipeline in AWS CodePipeline for each pre-scan repository. Create a source stage that runs when new images are pushed to the pre-scan repositories. Create a stage that uses AWS CodeBuild as the action provider. Write a buildspec.yaml definition that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.
E. Create an AWS Lambda function. Create an Amazon EventBridge rule that reacts to image scanning completed events and invokes the Lambda function. Write function code that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.
Answer: A,C
Question # 17
A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company needs an automated process across all AWS accounts to isolate any compromised Amazon EC2 instances when the instances receive a specific tag.
Which combination of steps will meet these requirements? (Select TWO.)
A. Use AWS CloudFormation StackSets to deploy the CloudFormation stacks in all AWS accounts.
B. Create an SCP that has a Deny statement for the ec2:* action with a condition of "aws:RequestTag/isolation": false.
C. Attach the SCP to the root of the organization.
D. Create an AWS CloudFormation template that creates an EC2 instance role that has no IAM policies attached. Configure the template to have a security group that has an explicit Deny rule on all traffic. Use the CloudFormation template to create an AWS Lambda function that attaches the IAM role to instances. Configure the Lambda function to add a network ACL. Set up an Amazon EventBridge rule to invoke the Lambda function when a specific tag is applied to a compromised EC2 instance.
E. Create an AWS CloudFormation template that creates an EC2 instance role that has no IAM policies attached. Configure the template to have a security group that has no inbound rules or outbound rules. Use the CloudFormation template to create an AWS Lambda function that attaches the IAM role to instances. Configure the Lambda function to replace any existing security groups with the new security group. Set up an Amazon EventBridge rule to invoke the Lambda function when a specific tag is applied to a compromised EC2 instance.
Answer: A,E
Question # 18
A DevOps engineer uses AWS CodeBuild to frequently produce software packages. The
CodeBuild project builds large Docker images that the DevOps engineer can use across
multiple builds. The DevOps engineer wants to improve build performance and minimize costs.
Which solution will meet these requirements?
A. Store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Implement a local Docker layer cache for CodeBuild.
B. Cache the Docker images in an Amazon S3 bucket that is available across multiple build hosts. Expire the cache by using an S3 Lifecycle policy.
C. Store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Modify the CodeBuild project runtime configuration to always use the most recent image version.
D. Create custom AMIs that contain the cached Docker images. In the CodeBuild build, launch Amazon EC2 instances from the custom AMIs.
Answer: A
Question # 19
A company runs an application on an Amazon Elastic Container Service (Amazon ECS)
service by using the AWS Fargate launch type. The application consumes messages from
an Amazon Simple Queue Service (Amazon SQS) queue. The application can take several
minutes to process each message from the queue. When the application processes a
message, the application reads a file from an Amazon S3 bucket and processes the data in
the file. The application writes the processed output to a second S3 bucket. The company
uses Amazon CloudWatch Logs to monitor processing errors and to ensure that the
application processes messages successfully.
The SQS queue typically receives a low volume of messages. However, occasionally the
queue receives higher volumes of messages. A DevOps engineer needs to implement a
solution to reduce the processing time of message bursts.
Which solution will meet this requirement in the MOST cost-effective way?
A. Register the ECS service as a scalable target in AWS Application Auto Scaling. Configure a target tracking scaling policy to scale the service in response to the queue size.
B. Increase the maximum number of messages that Amazon SQS requests to batch messages together. Use long polling to minimize the number of API calls to Amazon SQS during periods of low traffic.
C. Send messages to an Amazon EventBridge event bus instead of the SQS queue. Replace the ECS service with an EventBridge rule that launches ECS tasks in response to matching events.
D. Create an Auto Scaling group of EC2 instances. Create a capacity provider in the ECS cluster by using the Auto Scaling group. Change the ECS service to use the EC2 launch type.
Answer: A
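The target tracking policy in option A typically tracks a backlog-per-task metric: visible messages in the queue divided by running tasks. A small sketch of the arithmetic; the target value of 10 messages per task is an assumption for illustration:

```python
import math

def backlog_per_task(visible_messages: int, running_tasks: int) -> float:
    """Queue depth each running ECS task is responsible for."""
    return visible_messages / max(running_tasks, 1)

def desired_tasks(visible_messages: int, target_backlog: float) -> int:
    """Task count that keeps each task's share of the queue at or below target."""
    return max(1, math.ceil(visible_messages / target_backlog))

# Example: a burst of 100 messages with a target backlog of 10 per task
# implies scaling out to 10 tasks; a quiet queue scales back to 1.
print(desired_tasks(100, 10.0))  # → 10
print(desired_tasks(3, 10.0))    # → 1
```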
Question # 20
A company is developing a microservices-based application on AWS. The application
consists of AWS Lambda functions and Amazon Elastic Container Service (Amazon ECS)
services that need to be deployed frequently.
A DevOps engineer needs to implement a consistent deployment solution across all
components of the application. The solution must automate the deployments, minimize
downtime during updates, and manage configuration data for the application.
Which solution will meet these requirements with the LEAST deployment effort?
A. Use AWS CloudFormation to define and provision the Lambda functions and ECS services. Implement stack updates with resource replacement for all components. Use AWS Secrets Manager to manage the configuration data.
B. Use AWS CodeDeploy to manage deployments for the Lambda functions and ECS services. Implement canary deployments for the Lambda functions. Implement blue/green deployments for the ECS services. Use AWS Systems Manager Parameter Store to manage the configuration data.
C. Use AWS Step Functions to orchestrate deployments for the Lambda functions and ECS services. Use canary deployments for the Lambda functions and ECS services in a different AWS Region. Use AWS Systems Manager Parameter Store to manage the configuration data.
D. Use AWS Systems Manager to manage deployments for the Lambda functions and ECS services. Implement all-at-once deployments for the Lambda functions. Implement rolling updates for the ECS services. Use AWS Secrets Manager to manage the configuration data.
Answer: B
Question # 21
A company is running its ecommerce website on AWS. The website is currently hosted on
a single Amazon EC2 instance in one Availability Zone. A MySQL database runs on the
same EC2 instance. The company needs to eliminate single points of failure in the architecture to improve the website's availability and resilience.
Which solution will meet these requirements with the LEAST configuration changes to the website?
A. Deploy the application by using AWS Fargate containers. Migrate the database to Amazon DynamoDB. Use Amazon API Gateway to route requests.
B. Deploy the application on EC2 instances across multiple Availability Zones. Put the EC2 instances into an Auto Scaling group behind an Application Load Balancer. Migrate the database to Amazon Aurora Multi-AZ. Use Amazon CloudFront for content delivery.
C. Use AWS Elastic Beanstalk to deploy the application across multiple AWS Regions. Migrate the database to Amazon Redshift. Use Amazon ElastiCache for session management.
D. Migrate the application to AWS Lambda functions. Use Amazon S3 for static content hosting. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).
Answer: B
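A rough sketch of the Auto Scaling group from option B, expressed as boto3-style CreateAutoScalingGroup parameters (the subnet IDs and target group ARN are hypothetical). Spreading the subnets across Availability Zones is what removes the single-AZ point of failure:

```python
# Auto Scaling group spanning two Availability Zones behind an ALB target group.
asg_params = {
    "AutoScalingGroupName": "web-asg",                       # hypothetical name
    "MinSize": 2,                                            # at least one instance per AZ
    "MaxSize": 6,
    "VPCZoneIdentifier": "subnet-0aaa1111,subnet-0bbb2222",  # subnets in different AZs (hypothetical)
    "TargetGroupARNs": [
        # Application Load Balancer target group (hypothetical ARN)
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"
    ],
}
```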
Question # 22
A company uses an Amazon API Gateway regional REST API to host its application API.
The REST API has a custom domain. The REST API's default endpoint is deactivated. The company's internal teams consume the API. The company wants to use mutual TLS
between the API and the internal teams as an additional layer of authentication.
Which combination of steps will meet these requirements? (Select TWO.)
A. Use AWS Certificate Manager (ACM) to create a private certificate authority (CA). Provision a client certificate that is signed by the private CA.
B. Provision a client certificate that is signed by a public certificate authority (CA). Import the certificate into AWS Certificate Manager (ACM).
C. Upload the provisioned client certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the client certificate that is stored in the S3 bucket as the trust store.
D. Upload the provisioned client certificate private key to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private key that is stored in the S3 bucket as the trust store.
E. Upload the root private certificate authority (CA) certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private CA certificate that is stored in the S3 bucket as the trust store.
Answer: A,E
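To illustrate the truststore half of the answer (step E): the custom domain's mutual TLS configuration points at a PEM bundle in S3 that contains the private CA's root certificate; clients then present certificates signed by that CA (step A). A sketch of boto3-style domain-name parameters (the domain and bucket names are hypothetical):

```python
# Custom domain with mutual TLS; the domain name and S3 URI are hypothetical.
domain_params = {
    "domainName": "api.internal.example.com",
    "mutualTlsAuthentication": {
        # PEM bundle holding the private CA root certificate (the trust store)
        "truststoreUri": "s3://example-truststore-bucket/truststore.pem",
        "truststoreVersion": "1",  # used when S3 bucket versioning is enabled
    },
}
```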
Question # 23
A company uses Amazon Elastic Container Registry (Amazon ECR) for all images of the
company's containerized infrastructure. The company uses the pull through cache
functionality with the /external prefix to avoid throttling when the company retrieves images
from external image registries. The company uses AWS Organizations for its accounts.
Every image in the registry must be encrypted with a specific, pre-provisioned AWS Key
Management Service (AWS KMS) key. The company's internally created images already
comply with this policy. However, cached external images use server-side encryption with
Amazon S3 managed keys (SSE-S3).
The company must remove the noncompliant cache repositories. The company must also
implement a secure solution to ensure that all new pull through cache repositories are
automatically encrypted with the required KMS key.
Which solution will meet these requirements?
A. Configure AWS Config. Add a custom rule that uses Guard syntax. Write the rule to enable KMS encryption for new repositories.
B. Configure an ECR repository creation template for the prefix. Specify the KMS key. Wait for the repositories to repopulate.
C. Configure an SCP for all AWS accounts that requires all ECR repositories to be KMS encrypted.
D. Create a new Amazon EventBridge rule that triggers on all "ECR Pull Through Cache Action" events. Set AWS KMS as the rule target.
Answer: B
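Option B corresponds to the ECR repository creation template feature, which applies settings such as the encryption configuration to repositories that pull through cache creates under a prefix. A sketch of the template parameters (the KMS key ARN is hypothetical):

```python
# Repository creation template for pull through cache repositories.
template_params = {
    "prefix": "external",                  # matches the company's pull through cache prefix
    "appliedFor": ["PULL_THROUGH_CACHE"],  # apply the template to cache-created repositories
    "encryptionConfiguration": {
        "encryptionType": "KMS",
        # Pre-provisioned key (hypothetical ARN)
        "kmsKey": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    },
}
```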
Question # 24
A company has developed a web application that conducts seasonal sales on public
holidays. The web application is deployed on AWS and uses AWS services for storage,
database, computing, and encryption. During seasonal sales, the company expects high
network traffic from many users. The company must receive insights regarding any
unexpected behavior during the sale. A DevOps team wants to review the insights upon
detecting anomalous behaviors during the sale. The DevOps team wants to receive
recommended actions to resolve the anomalous behaviors. The recommendations must be
provided on the provisioned infrastructure to address issues that might occur in the future.
Which combination of steps will meet these requirements with the LEAST operational
overhead? (Select TWO.)
A. Enable Amazon DevOps Guru in the AWS account. Determine the coverage for DevOps Guru for all supported AWS resources in the account. Use the DevOps Guru dashboard to find the analysis, recommendations, and related metrics.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure Amazon DevOps Guru to send notifications about important events to the company when anomalies are identified.
C. Create an Amazon S3 bucket. Store Amazon CloudWatch logs, AWS CloudTrail data, and AWS Config data in the S3 bucket. Use Amazon Athena to generate insights on the data. Create a dashboard by using Amazon QuickSight.
D. Configure email message reports for an Amazon QuickSight dashboard. Schedule and send the email reports to the company.
E. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure Amazon Athena to send query results about important events to the company when anomalies are identified.
Answer: A,B
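Step B maps to DevOps Guru's notification channel configuration, which takes an SNS topic as its target. A minimal sketch of the channel parameters (the topic ARN is hypothetical):

```python
# Notification channel configuration for Amazon DevOps Guru; the topic ARN is hypothetical.
channel_params = {
    "Config": {
        "Sns": {
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:devops-guru-alerts"
        }
    }
}
```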
Question # 25
A company gives its employees limited rights to AWS. DevOps engineers have the ability to
assume an administrator role. For tracking purposes, the security team wants to receive a
near-real-time notification when the administrator role is assumed.
How should this be accomplished?
A. Configure AWS Config to publish logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and send a notification to the security team when the administrator role is assumed.
B. Configure Amazon GuardDuty to monitor when the administrator role is assumed and send a notification to the security team.
C. Create an Amazon EventBridge event rule that uses an AWS Management Console sign-in event pattern to publish a message to an Amazon SNS topic if the administrator role is assumed.
D. Create an Amazon EventBridge event rule that uses an AWS API Call via CloudTrail event pattern to invoke an AWS Lambda function that publishes a message to an Amazon SNS topic if the administrator role is assumed.
Answer: D
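The event pattern behind option D matches the CloudTrail record of the sts:AssumeRole call for the specific role. A sketch of the pattern (the account ID and role name are hypothetical):

```python
import json

# EventBridge event pattern for AssumeRole calls recorded by CloudTrail.
event_pattern = {
    "source": ["aws.sts"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["sts.amazonaws.com"],
        "eventName": ["AssumeRole"],
        "requestParameters": {
            # Only fire for the administrator role (hypothetical ARN)
            "roleArn": ["arn:aws:iam::123456789012:role/AdministratorRole"]
        },
    },
}

# EventBridge expects the pattern as a JSON string when the rule is created.
event_pattern_json = json.dumps(event_pattern)
```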
Feedback That Matters: Reviews of Our Amazon DOP-C02 Dumps
Milo HoustonMay 02, 2026
The DOP-C02 test engine on MyCertsHub was incredibly realistic—helped me ace the real exam!
Arianna KellyMay 01, 2026
Got 20% off using coupon code CERT20—best deal for AWS exam prep anywhere online!
Sophia ScottMay 01, 2026
Clean website, fast downloads, and up-to-date DOP-C02 materials—MyCertsHub delivers!
Christopher BaumannApr 30, 2026
The practice questions felt just like the actual DOP-C02—shoutout to their awesome test engine!
Ram Gopal BhatnagarApr 30, 2026
I used CERT20 at checkout and saved big—MyCertsHub is now my go-to for IT certs.