Google Professional-Cloud-DevOps-Engineer Exam Dumps
Google Cloud Certified - Professional Cloud DevOps Engineer Exam
923 Reviews
Exam Code
Professional-Cloud-DevOps-Engineer
Exam Name
Google Cloud Certified - Professional Cloud DevOps Engineer Exam
Questions
201 Questions Answers With Explanation
Update Date
04, 21, 2026
Price
Was :
$81
Today :
$45
Was :
$99
Today :
$55
Was :
$117
Today :
$65
Why Should You Prepare For Your Google Cloud Certified - Professional Cloud DevOps Engineer Exam With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic Google Professional-Cloud-DevOps-Engineer Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual Google Cloud Certified - Professional Cloud DevOps Engineer Exam. Whether you’re targeting Google certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the Professional-Cloud-DevOps-Engineer Google Cloud Certified - Professional Cloud DevOps Engineer Exam, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The Professional-Cloud-DevOps-Engineer
You can instantly access downloadable PDFs of Professional-Cloud-DevOps-Engineer practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Google Exam with confidence.
Smart Learning With Exam Guides
Our structured Professional-Cloud-DevOps-Engineer exam guide focuses on the Google Cloud Certified - Professional Cloud DevOps Engineer Exam's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass the Professional-Cloud-DevOps-Engineer Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you use MyCertsHub's exam dumps to prepare for the Google Cloud Certified - Professional Cloud DevOps Engineer Exam and do not pass, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the Professional-Cloud-DevOps-Engineer exam dumps.
MyCertsHub – Your Trusted Partner For Google Exams
Whether you’re preparing for Google Cloud Certified - Professional Cloud DevOps Engineer Exam or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your Professional-Cloud-DevOps-Engineer exam has never been easier thanks to our tried-and-true resources.
Google Professional-Cloud-DevOps-Engineer Sample Question Answers
Question # 1
You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do?
A. Use A/B testing with blue/green deployment.
B. Use shadow testing with continuous deployment.
C. Use canary testing with continuous deployment.
D. Use canary testing with rolling updates deployment.
Answer: B
Explanation:
The correct answer is B, Use shadow testing with continuous deployment.
Shadow testing is a deployment technique that involves routing a copy of the live traffic to a new
version of the application, without affecting the production environment. This way, you can gather
performance metrics and compare them with the current version, without exposing the new version
to the users. Shadow testing can help you test against the full production load and identify any issues
or bottlenecks before launching the new version. You can use continuous deployment to automate
the process of deploying the new version after it passes the shadow testing.
Reference:
Application deployment and testing strategies, Testing strategies, Shadow test pattern.
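The essence of the shadow pattern can be sketched without any cloud infrastructure: every request is answered by the stable version, while a copy of the same request is also sent to the candidate version purely so its behavior can be measured. The two version functions below are stand-ins for real services, not part of any Google Cloud API:

```python
# Minimal shadow-testing sketch. The user always receives the stable
# version's response; the candidate version sees the same traffic only
# so its output/performance can be recorded and compared offline.

shadow_metrics = []  # observations collected from the candidate version

def stable_version(request):
    return f"v1:{request}"       # stand-in for the current production service

def candidate_version(request):
    return f"v2:{request}"       # stand-in for the new version under test

def handle(request):
    # Mirror the request to the candidate; record, but never expose, its output.
    try:
        shadow_metrics.append(candidate_version(request))
    except Exception:
        pass  # shadow failures must never affect real users
    return stable_version(request)  # only this response reaches the user

for r in ["a", "b"]:
    print(handle(r))
```

In a real deployment the mirroring is done by the load balancer or service mesh rather than in application code, but the invariant is the same: the candidate receives full production load while users only ever see the stable version's responses.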
Question # 2
You are the Site Reliability Engineer responsible for managing your company's data services and products. You regularly navigate operational challenges, such as unpredictable data volume and high cost, with your company's data ingestion processes. You recently learned that a new data ingestion product will be developed in Google Cloud. You need to collaborate with the product development team to provide operational input on the new product. What should you do?
A. Deploy the prototype product in a test environment, run a load test, and share the results with the product development team.
B. When the initial product version passes the quality assurance phase and compliance assessments, deploy the product to a staging environment. Share error logs and performance metrics with the product development team.
C. When the new product is used by at least one internal customer in production, share error logs and monitoring metrics with the product development team.
D. Review the design of the product with the product development team to provide feedback early in the design phase.
Answer: D
Explanation:
The correct answer is D, Review the design of the product with the product development team to
provide feedback early in the design phase.
According to the Google Cloud DevOps best practices, a Site Reliability Engineer (SRE) should
collaborate with the product development team from the beginning of the product lifecycle, not just
after the product is deployed or tested. This way, the SRE can provide operational input on the
product design, such as scalability, reliability, security, and cost efficiency. The SRE can also help
define service level objectives (SLOs) and service level indicators (SLIs) for the product, as well as
monitoring and alerting strategies. By collaborating early and often, the SRE and the product
development team can ensure that the product meets the operational requirements and
expectations of the customers.
Reference:
Preparing for Google Cloud Certification: Cloud DevOps Engineer Professional Certificate, Course 1:
Site Reliability Engineering and DevOps, Week 1: Introduction to SRE and DevOps.
Question # 3
Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do?
A. Run the gcloud container clusters update --logging=SYSTEM command for the development cluster.
B. Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster.
C. Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment.
D. Add the severity>=DEBUG AND resource.type="k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.
Answer: A
Question # 4
Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology. Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?
A. Install and configure Config Connector in Google Kubernetes Engine (GKE).
B. Configure Cloud Build with a Terraform builder to execute plan and apply commands.
C. Create a Pod resource with a Terraform Docker image to execute terraform plan and terraform apply commands.
D. Create a Job resource with a Terraform Docker image to execute terraform plan and terraform apply commands.
Answer: A
Explanation:
The best option to give developers the ability to manage infrastructure as code, while ensuring that
you follow Google-recommended practices, is to install and configure Config Connector in Google
Kubernetes Engine (GKE).
Config Connector is a Kubernetes add-on that allows you to manage Google Cloud resources through
Kubernetes. You can use Config Connector to create, update, and delete Google Cloud resources
using Kubernetes manifests. Config Connector also reconciles the state of the Google Cloud
resources with the desired state defined in the manifests, ensuring that there is no configuration
drift1.
Config Connector follows the GitOps methodology, as it allows you to store your infrastructure
configuration in a Git repository, and use tools such as Anthos Config Management or Cloud Source
Repositories to sync the configuration to your GKE cluster. This way, you can use Git as the source of
truth for your infrastructure, and enable reviewable and version-controlled workflows2.
Config Connector can be installed and configured in GKE using either the Google Cloud Console or
the gcloud command-line tool. You need to enable the Config Connector add-on for your GKE cluster,
and create a Google Cloud service account with the necessary permissions to manage the Google
Cloud resources. You also need to create a Kubernetes namespace for each Google Cloud project that
you want to manage with Config Connector3.
By using Config Connector in GKE, you can give developers the ability to manage infrastructure as
code, while ensuring that you follow Google-recommended practices. You can also benefit from the
features and advantages of Kubernetes, such as declarative configuration, observability, and
portability4.
Reference:
1: Config Connector overview | Config Connector Documentation | Google Cloud
2: Deploy Anthos on GKE with Terraform part 1: GitOps with Config Sync | Google Cloud Blog
4: Why use Config Connector? | Config Connector Documentation | Google Cloud
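As a sketch of what this looks like in practice, a Config Connector resource is just a Kubernetes manifest: applying something like the following (the bucket name, namespace, and spec values below are hypothetical, not from the question) makes Config Connector create the resource and then continuously reconcile the real Google Cloud state back to the declared spec:

```yaml
# Hypothetical example: a Cloud Storage bucket declared as a
# Kubernetes resource. Config Connector reconciles the actual bucket
# back to this spec, which is what prevents configuration drift.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-app-assets   # becomes the bucket name
  namespace: my-gcp-project  # namespace mapped to a Google Cloud project
spec:
  location: US
  uniformBucketLevelAccess: true
```

Because this manifest lives in Git and is synced by Anthos Config Management, changes to infrastructure go through the same pull-request workflow as application code.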
Question # 5
You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season?
A. Migrate the application to Cloud Run, and use autoscaling.
B. Load test the application to profile its performance for scaling.
C. Create a Terraform configuration for the application's underlying infrastructure to quickly deploy to additional regions.
D. Pre-provision the additional compute power that was used last season, and expect growth.
Answer: B
Explanation:
The first thing you should do to prepare your ecommerce application for the upcoming peak traffic
season is to load test the application to profile its performance for scaling. Load testing is a process of
simulating high traffic or user demand on your application and measuring how it responds. Load
testing can help you identify any bottlenecks, errors, or performance issues that might affect your
application during the busy season1. Load testing can also help you determine the optimal scaling
strategy for your application, such as horizontal scaling (adding more instances) or vertical scaling
(adding more resources to each instance)2.
There are different tools and methods for load testing your ecommerce application on Google Cloud,
depending on the type and complexity of your application. For example, you can use Cloud Load
Balancing to distribute traffic across multiple instances of your application, and use Cloud Monitoring
to measure the latency, throughput, and error rate of your application3. You can also use Cloud
Functions or Cloud Run to create serverless load generators that can simulate user requests and send
them to your application4. Alternatively, you can use third-party tools such as Apache JMeter or
Locust to create and run load tests on your application.
By load testing your ecommerce application before the peak traffic season, you can ensure that your
application is ready to handle the expected load and provide a good user experience. You can also
use the results of your load tests to plan and implement other steps to prepare your application for
the busy season, such as migrating to a more scalable platform, creating a Terraform configuration
for deploying to additional regions, or pre-provisioning additional compute power.
Reference:
1: Load Testing 101: How To Test Website Performance | BlazeMeter
2: Scaling applications | Google Cloud
3: Load testing using Google Cloud | Solutions | Google Cloud
4: Serverless load testing using Cloud Functions | Solutions | Google Cloud
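The profiling step can be illustrated with a toy harness: issue many requests, record latencies, and read off percentiles to decide how the service should scale. The request function here is a stand-in for a real HTTP call (for example, one made with a load-testing tool), not a Google Cloud API:

```python
import time

def measure(request_fn, n=100):
    """Call request_fn n times and return (p50, p95) latency in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    p50 = latencies[int(0.50 * (n - 1))]
    p95 = latencies[int(0.95 * (n - 1))]
    return p50, p95

# Stand-in for a real request such as an HTTP GET against the application.
def fake_request():
    time.sleep(0.001)

p50, p95 = measure(fake_request)
print(f"p50={p50*1000:.1f}ms p95={p95*1000:.1f}ms")
```

Real load tests also ramp concurrency up gradually, which is how you find the knee in the latency curve that tells you when to add instances or resources.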
Question # 6
You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?
A. 1. Communicate your intent to the incident team. 2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. 3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.
B. 1. Communicate your intent to the incident team. 2. Add a new node to the pool, and wait for the new node to report as healthy. 3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.
C. 1. Drain traffic from the unhealthy node and remove the node from service. 2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately. 3. Scale the pool as necessary to handle the new load. 4. Communicate your actions to the incident team.
D. 1. Drain traffic from the unhealthy node and remove the old node from service. 2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node. 3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately. 4. Communicate your actions to the incident team.
Answer: A
Explanation:
The correct answer is A, Communicate your intent to the incident team. Perform a load analysis to
determine if the remaining nodes can handle the increase in traffic offloaded from the removed
node, and scale appropriately. When any new nodes report healthy, drain traffic from the unhealthy
node, and remove the unhealthy node from service.
This answer follows the Google-recommended practices for incident management, as described in Chapter 9 - Incident Response of the Google SRE Book1. According to this source, some of the best practices are:
Maintain a clear line of command. Designate clearly defined roles. Keep a working record of debugging and mitigation as you go. Declare incidents early and often.
Communicate your intent before taking any action that might affect the service or the incident
response. This helps to avoid confusion, duplication of work, or unintended consequences.
Perform a load analysis before removing a node from the load balancer pool, as this might affect the
capacity and performance of the service. Scale the pool as necessary to handle the expected load.
Drain traffic from the unhealthy node before removing it from service, as this helps to avoid dropping
requests or causing errors for users.
Answer A follows these best practices by communicating the intent to the incident team, performing
a load analysis and scaling the pool, and draining traffic from the unhealthy node before removing it.
Answer B does not follow the best practice of performing a load analysis before adding or removing
nodes, as this might cause overloading or underutilization of resources.
Answer C does not follow the best practice of communicating the intent before taking any action, as
this might cause confusion or conflict with other responders.
Answer D does not follow the best practice of draining traffic from the unhealthy node before
removing it, as this might cause errors for users.
Reference:
1: Chapter 9 - Incident Response, Google SRE Book
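The load analysis in this scenario is simple arithmetic worth making explicit: draining one node spreads its traffic over the remaining ones, so the check is whether the new per-node utilization stays below a safe threshold. The pool sizes below are illustrative (the question does not state how many nodes the pool has):

```python
def utilization_after_removal(nodes, current_util):
    """Per-node utilization after draining one node, assuming traffic
    is spread evenly over the remaining nodes."""
    return current_util * nodes / (nodes - 1)

# A 10-node pool at 70% climbs only to about 77.8% per node after
# removing one node, so draining is likely safe without scaling:
print(round(utilization_after_removal(10, 0.70), 3))

# A 4-node pool at 70% would jump to about 93.3% per node - too close
# to saturation, so scale the pool up before draining the bad node.
print(round(utilization_after_removal(4, 0.70), 3))
```

This is exactly why answer A performs the analysis and scales first: whether removal alone is safe depends entirely on how much headroom the remaining nodes have.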
Question # 7
Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?
A. Install a Fluent Bit sidecar container, and use a JSON parser.
B. Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.
C. Configure the log agent to convert log text payload to JSON payload.
D. Modify the application to use the Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
Answer: D
Explanation:
The correct answer is D, Modify the application to use Cloud Logging software development kit
(SDK), and send log entries with a jsonPayload field.
Cloud Logging SDKs are libraries that allow you to write structured logs from your Cloud Run
application. You can use the SDKs to create log entries with a jsonPayload field, which contains a
JSON object with the properties of your log entry. The jsonPayload field allows you to use advanced
features of Cloud Logging, such as filtering, querying, and exporting logs based on the properties of
your log entry1.
To use Cloud Logging SDKs, you need to install the SDK for your programming language, and then use the SDK methods to create and send log entries to Cloud Logging, setting the structured properties of each entry so that they appear in its jsonPayload field2.
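A minimal sketch of the pattern, shown here in Python using only the standard library: on Cloud Run, a single-line JSON object written to stdout is ingested as a structured entry with its fields mapped into jsonPayload (the SDK's structured-logging methods achieve the same end; the field names and values below are illustrative):

```python
import json

def log_structured(message, severity="INFO", **fields):
    """Emit one JSON line; Cloud Run's log ingestion parses it into a
    structured Cloud Logging entry (message/severity are special fields)."""
    entry = {"message": message, "severity": severity, **fields}
    print(json.dumps(entry), flush=True)

# Illustrative call; orderId and latencyMs become queryable
# jsonPayload properties rather than opaque text.
log_structured("checkout complete", severity="NOTICE",
               orderId="1234", latencyMs=87)
```

Once entries carry structured fields like these, you can filter and query on them directly, for example severity or a specific orderId, instead of grepping text payloads.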
Using Cloud Logging SDKs is the best way to convert unstructured logs to structured logs, as it
provides more flexibility and control over the format and content of your log entries.
Using a Fluent Bit sidecar container is not a good option, as it adds complexity and overhead to your
Cloud Run application. Fluent Bit is a lightweight log processor and forwarder that can be used to
collect and parse logs from various sources and send them to different destinations3. However, at the time this question was written, Cloud Run did not support sidecar containers (multi-container sidecar support has since been added), so you would have needed to run Fluent Bit as part of your main container image. This would require modifying your Dockerfile and configuring Fluent Bit to read logs from supported locations and parse them as JSON. This is more cumbersome and less reliable than using Cloud Logging SDKs.
Using the log agent in the Cloud Run container image is not possible, as the log agent is not
supported on Cloud Run. The log agent is a service that runs on Compute Engine or Google
Kubernetes Engine instances and collects logs from various applications and system components.
However, Cloud Run does not allow you to install or run any agents on its underlying infrastructure,
as it is a fully managed service that abstracts away the details of the underlying platform.
Configuring the log agent to convert the log text payload to a JSON payload is not viable for the same reason: there is no log agent running on Cloud Run to configure, so nothing outside the application itself could perform the conversion.
Reference:
1: Writing structured logs | Cloud Run Documentation | Google Cloud
2: Write structured logs | Cloud Run Documentation | Google Cloud
3: Fluent Bit - Fast and Lightweight Log Processor & Forwarder
: Logging Best Practices for Serverless Applications - Google Codelabs
: About the logging agent | Cloud Logging Documentation | Google Cloud
: Cloud Run FAQ | Google Cloud
Question # 8
You are designing a system with three different environments: development, quality assurance (QA), and production. Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?
A. Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.
B. Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments. Application (app source code) repositories are separated: different branches are different features.
C. Cloud Infrastructure (Terraform) repository is shared: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repository is shared: different directories are different features.
D. Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.
Answer: B
Explanation:
The correct answer is B, Cloud Infrastructure (Terraform) repository is shared: different directories
are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests)
repositories are separated: different branches are different environments. Application (app source
code) repositories are separated: different branches are different features.
This answer follows the best practices for using Terraform and Anthos Config Management with
GitOps, as described in the following sources:
For Terraform, it is recommended to use a single repository for all environments, and use directories
to separate them. This way, you can reuse the same Terraform modules and configurations across
environments, and avoid code duplication and drift. You can also use Terraform workspaces to isolate
the state files for each environment12.
For Anthos Config Management, it is recommended to use separate repositories for each
environment, and use branches to separate the clusters within each environment. This way, you can
enforce different policies and configurations for each environment, and use pull requests to promote
changes across environments. You can also use Kustomize to create overlays for each cluster that
apply specific patches or customizations34.
For application code, it is recommended to use separate repositories for each application, and use
branches to separate the features or bug fixes for each application. This way, you can isolate the
development and testing of each application, and use pull requests to merge changes into the main
branch. You can also use tags or labels to trigger deployments to different environments5.
Reference:
1: Best practices for using Terraform | Google Cloud
3: Deploy Anthos on GKE with Terraform part 1: GitOps with Config Sync | Google Cloud Blog
4: Using Kustomize with Anthos Config Management | Anthos Config Management Documentation |
Google Cloud
5: Deploy Anthos on GKE with Terraform part 3: Continuous Delivery with Cloud Build | Google Cloud
Blog
: GitOps-style continuous delivery with Cloud Build | Cloud Build Documentation | Google Cloud
Question # 9
You are deploying an application to Cloud Run. The application requires a password to start. Your organization requires that all passwords are rotated every 24 hours, and your application must have the latest password. You need to deploy the application with no downtime. What should you do?
A. Store the password in Secret Manager and send the secret to the application by using environment variables.
B. Store the password in Secret Manager and mount the secret as a volume within the application.
C. Use Cloud Build to add your password into the application container at build time. Ensure that Artifact Registry is secured from public access.
D. Store the password directly in the code. Use Cloud Build to rebuild and deploy the application each time the password changes.
Answer: B
Explanation:
The correct answer is B, Store the password in Secret Manager and mount the secret as a volume
within the application.
Secret Manager is a service that allows you to securely store and manage sensitive data such as
passwords, API keys, certificates, and tokens. You can use Secret Manager to rotate your secrets
automatically or manually, and access them from your Cloud Run applications1.
There are two ways to use secrets from Secret Manager in Cloud Run:
As environment variables: You can set environment variables that point to secrets in Secret Manager.
Cloud Run will resolve the secrets at runtime and inject them into the environment of your
application. However, this method has some limitations, such as:
The environment variables are cached for up to 10 minutes, so you may not get the latest version of
the secret immediately.
The environment variables are visible in plain text in the Cloud Console and the Cloud SDK, which
may expose sensitive information.
The environment variables are limited to 4 KB of data, which may not be enough for some secrets.2
As file system volumes: You can mount secrets from Secret Manager as files in a volume within your
application. Cloud Run will create a tmpfs volume and write the secrets as files in it. This method has
some advantages, such as:
The files are updated every 30 seconds, so you can get the latest version of the secret faster.
The files are not visible in the Cloud Console or the Cloud SDK, which provides better security.
The files can store up to 64 KB of data, which allows for larger secrets.3
Therefore, for your use case, it is better to use the second method and mount the secret as a file
system volume within your application. This way, you can ensure that your application has the latest
password, and you can deploy it with no downtime.
To mount a secret as a file system volume in Cloud Run, you can use the following command:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=/path/to/file=secretName:version
where:
SERVICE is the name of your Cloud Run service.
IMAGE_URL is the URL of your container image.
/path/to/file is the path where you want to mount the secret file in your application.
secretName is the name of your secret in Secret Manager.
version is the version of your secret. You can use latest to get the most recent version.3
You can also use the Cloud Console to mount secrets as file system volumes. For more details, see
Mounting secrets from Secret Manager.
Reference:
1: Overview | Secret Manager Documentation | Google Cloud
2: Using secrets as environment variables | Cloud Run Documentation | Google Cloud
3: Mounting secrets from Secret Manager | Cloud Run Documentation | Google Cloud
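Reading the mounted file at time of use, rather than caching its value at startup, is what lets the application pick up each 24-hour rotation with no redeploy and no downtime. A sketch of that read-on-use pattern, with the mount path below hypothetical:

```python
from pathlib import Path

SECRET_PATH = Path("/secrets/db-password")  # hypothetical mount point

def current_password(path=SECRET_PATH):
    """Re-read the mounted secret file on every use so that rotated
    values are picked up without restarting the container."""
    return path.read_text().strip()

# When (re)connecting to the database, call current_password() to use
# whatever value is current now, not one captured at container startup.
```

The contrast with the environment-variable approach is exactly this: an environment variable is fixed for the life of the instance, while a file read can return a newer value on the next call.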
Question # 10
You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do?
A. Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch.
B. Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged.
C. Ask the pull request reviewers to run the integration tests before approving the code.
D. Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.
Answer: D
Explanation:
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud
Build can import source code from Google Cloud Storage, Cloud Source Repositories, GitHub, or
Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or
Java archives1. Cloud Build can also run integration tests as part of your build steps2.
You can use Cloud Build to run tests in a specific folder by specifying the path to the folder in
the dir field of your build step3. For example, if you have a folder named tests that contains your
integration tests, you can use the following build step to run them:
steps:
- name: 'gcr.io/cloud-builders/go'
args: ['test', '-v']
dir: 'tests'
You can use Cloud Build to trigger builds for every GitHub pull request by using the Cloud Build
GitHub app. The app allows you to automatically build on Git pushes and pull requests and view your
build results on GitHub and Google Cloud console4. You can configure the app to run builds on
specific branches, tags, or paths5. For example, if you want to run builds on pull requests that target
the master branch, you can use the following trigger configuration:
name: 'pull-request-trigger'
github:
  name: 'my-repo'
  owner: 'my-org'
  pullRequest:
    branch: '^master$'
includedFiles:
- '**'
Using Cloud Build to run tests in a specific folder and trigger builds for every GitHub pull request is a
good way to continuously test your feature branch and ensure that all code is tested before changes
are accepted. This way, you can catch any errors or bugs early and prevent them from affecting the
main branch.
Using a Jenkins server for CI/CD pipelines is not a bad option, but it would require more setup and
maintenance than using Cloud Build, which is fully managed by Google Cloud. Periodically running
all tests in the feature branch is not as efficient as running tests for every pull request, as it may delay
the feedback loop and increase the risk of conflicts or failures.
Using Cloud Build to run the tests after a pull request is merged is not a good practice, as it may
introduce errors or bugs into the main branch that could have been prevented by testing before
merging.
Asking the pull request reviewers to run the integration tests before approving the code is not a
reliable way of ensuring code quality, as it depends on human intervention and may be prone to
errors or oversights.
Reference:
1: Overview | Cloud Build Documentation | Google Cloud
Question # 11
Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?
A. Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization.
B. Configure Container Registry as an OCI-based container registry for container images.
C. Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images.
D. Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration.
Answer: C
Question # 12
You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?
A. Add the Logs Writer role to the service account.
B. Enable Private Google Access on the subnet that the instance is in.
C. Update the instance to use the default Compute Engine service account.
D. Export the service account key and configure the agents to use the key.
Answer: A
Explanation:
The correct answer is
A. Add the Logs Writer role to the service account.
To use Cloud Logging, the service account attached to the Compute Engine instance must have the
necessary permissions to write log entries. The Logs Writer role (roles/logging.logWriter) provides
this permission. You can grant this role to the user-managed service account at the project, folder, or
organization level1.
Private Google Access is not required for Cloud Logging, as it allows instances without external IP
addresses to access Google APIs and services2. The default Compute Engine service account already
has the Logs Writer role, but it is not a recommended practice to use it for user applications3.
Exporting the service account key and configuring the agents to use the key is not a secure way of
authenticating the service account, as it exposes the key to potential compromise4.
Reference:
1: Access control with IAM | Cloud Logging | Google Cloud
2: Private Google Access overview | VPC | Google Cloud
3: Service accounts | Compute Engine Documentation | Google Cloud
4: Best practices for securing service accounts | IAM Documentation | Google Cloud
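As a sketch of the recommended fix (the project ID and service account email below are hypothetical placeholders), the Logs Writer role can be granted with a single gcloud command:

```shell
# Grant the Logs Writer role to the user-managed service account
# (PROJECT_ID and the service account email are placeholders).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:app-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
```

Once the binding propagates, the agents on the instance can write log entries without any key export or instance changes.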
Question # 13
You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?
A. Use the Recommender API and apply the suggested recommendations.
B. Create an Agent Policy to automatically install Ops Agent in all VMs.
C. Install the Ops Agent in a fleet of VMs by using the gcloud CLI.
D. Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization.
Answer: A
Explanation:
The best option for selecting the machine type for the application to use that optimizes CPU
utilization by using the fewest number of steps is to use the Recommender API and apply the
suggested recommendations. The Recommender API is a service that provides recommendations for
optimizing your Google Cloud resources, such as Compute Engine instances, disks, and firewalls. You
can use the Recommender API to get recommendations for changing the machine type of your
Compute Engine instances based on historical system metrics, such as CPU utilization. You can also
apply the suggested recommendations by using the Recommender API or Cloud Console. This way,
you can optimize CPU utilization by using the most suitable machine type for your application with
minimal effort.
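For illustration (the project ID and zone below are hypothetical placeholders), machine type recommendations based on historical utilization can be listed like this:

```shell
# List machine type recommendations derived from historical system metrics
# (PROJECT_ID and the zone are placeholders).
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --location=us-central1-a \
  --recommender=google.compute.instance.MachineTypeRecommender
```

Each recommendation can then be reviewed and applied from the CLI or the Cloud Console.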
Question # 14
You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do? (Choose 2 answers.)
A. Create a trigger to notify the required team to complete the next step when manual intervention is required.
B. Divide the automation steps into smaller tasks.
C. Use a script to automate the creation of the deployment pipeline in Google Cloud Deploy.
D. Add more engineers to finish the manual steps.
E. Automate promotion approvals from the development environment to the test environment.
Answer: A, E
Explanation:
The best options for reducing toil in the pipeline and minimizing the amount of time it takes to
complete an end-to-end deployment are to create a trigger to notify the required team to complete
the next step when manual intervention is required and to automate promotion approvals from the
development environment to the test environment. A trigger is a resource that initiates a
deployment when an event occurs, such as a code change, a schedule, or a manual request. You can
create a trigger to notify the required team to complete the next step when manual intervention is
required by using Cloud Build or Cloud Functions. This way, you can reduce the waiting time and
human errors in the pipeline. A promotion approval is a process that allows you to approve or reject
a deployment from one environment to another, such as from development to test. You can
automate promotion approvals from the development environment to the test environment by using
Google Cloud Deploy or Cloud Build. This way, you can speed up the deployment process and avoid
manual steps
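As a sketch of automating promotion (release, pipeline, and region names below are hypothetical placeholders), a release can be promoted to the next target from a script or trigger:

```shell
# Promote a release to the next target in the delivery pipeline
# without a manual console step (names and region are placeholders).
gcloud deploy releases promote \
  --release=my-release-001 \
  --delivery-pipeline=my-pipeline \
  --region=us-central1
```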
Question # 15
Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?
A. Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
B. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data and one that monitors Cloud Storage for new batches of data. Stop the processing service when there are no batches of data to process.
C. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
D. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data. Set the Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
Answer: C
Explanation:
The best option for ensuring that third parties are able to upload their data securely and minimizing
costs while ensuring that the data is processed as quickly as possible is to provide a Cloud Storage
bucket so that third parties can upload batches of data, and provide appropriate Identity and Access
Management (IAM) access to the bucket; create a Cloud Function with a
google.storage.object.finalize Cloud Storage trigger; write code so that the function can scale up a
Compute Engine autoscaling managed instance group; use an image pre-loaded with the data
processing software that terminates the instances when processing completes. A Cloud Storage
bucket is a resource that allows you to store and access data in Google Cloud. You can provide a
Cloud Storage bucket so that third parties can upload batches of data securely and conveniently. You
can also provide appropriate IAM access to the bucket by using roles and policies to control who can
read or write data to the bucket. A Cloud Function is a serverless function that executes code in
response to an event, such as a change in a Cloud Storage bucket. A google.storage.object.finalize
trigger is a type of trigger that fires when a new object is created or an existing object is overwritten
in a Cloud Storage bucket. You can create a Cloud Function with a google.storage.object.finalize
trigger so that the function runs whenever a new batch of data is uploaded to the bucket. You can
write code so that the function can scale up a Compute Engine autoscaling managed instance group,
which is a group of VM instances that automatically adjusts its size based on load or custom metrics.
You can use an image pre-loaded with the data processing software that terminates the instances
when processing completes, which means that the instances only run when there is data to process
and stop when they are done. This way, you can minimize costs while ensuring that the data is
processed as quickly as possible.
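As a sketch of wiring up the trigger (function name, entry point, bucket, runtime, and region below are hypothetical placeholders), a function can be deployed so that it fires on each new upload:

```shell
# Deploy a function triggered by google.storage.object.finalize events
# on the upload bucket (all names and the runtime are placeholders).
gcloud functions deploy process-batch \
  --runtime=python311 \
  --entry-point=scale_up_mig \
  --trigger-bucket=third-party-uploads \
  --region=us-central1
```

The function body (not shown) would resize the autoscaling managed instance group so processing capacity tracks the arriving batches.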
Question # 16
Your company's security team needs to have read-only access to Data Access audit logs in the _Required bucket. You want to provide your security team with the necessary permissions following the principle of least privilege and Google-recommended practices. What should you do?
A. Assign the roles/logging.viewer role to each member of the security team.
B. Assign the roles/logging.viewer role to a group with all the security team members.
C. Assign the roles/logging.privateLogViewer role to each member of the security team.
D. Assign the roles/logging.privateLogViewer role to a group with all the security team members.
Answer: D
Explanation:
The best option for providing your security team with the necessary permissions following the
principle of least privilege and Google-recommended practices is to assign the
roles/logging.privateLogViewer role to a group with all the security team members. The
roles/logging.privateLogViewer role is a predefined role that grants read-only access to Data Access
audit logs and other private logs in Cloud Logging. A group is a collection of users that can be
assigned roles and permissions as a single unit. You can assign the roles/logging.privateLogViewer
role to a group with all the security team members by using IAM policies. This way, you can provide
your security team with the minimum level of access they need to view Data Access audit logs in the
_Required bucket.
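As a sketch (the project ID and group address below are hypothetical placeholders), the role is bound once to the team's group rather than to each individual member:

```shell
# Bind the private log viewer role to the security team's group
# (PROJECT_ID and the group address are placeholders).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="group:security-team@example.com" \
  --role="roles/logging.privateLogViewer"
```

Managing access through the group keeps membership changes out of the IAM policy itself.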
Question # 17
You are building an application that runs on Cloud Run. The application needs to access a third-party API by using an API key. You need to determine a secure way to store and use the API key in your application by following Google-recommended practices. What should you do?
A. Save the API key in Secret Manager as a secret. Reference the secret as an environment variable in the Cloud Run application.
B. Save the API key in Secret Manager as a secret key. Mount the secret key under the /sys/api_key directory and decrypt the key in the Cloud Run application.
C. Save the API key in Cloud Key Management Service (Cloud KMS) as a key. Reference the key as an environment variable in the Cloud Run application.
D. Encrypt the API key by using Cloud Key Management Service (Cloud KMS) and pass the key to Cloud Run as an environment variable. Decrypt and use the key in Cloud Run.
Answer: A
Explanation:
The best option for storing and using the API key in your application by following Google-recommended practices is to save the API key in Secret Manager as a secret and reference the secret
as an environment variable in the Cloud Run application. Secret Manager is a service that allows you
to store and manage sensitive data, such as API keys, passwords, and certificates, in Google Cloud. A
secret is a resource that represents a logical secret, such as an API key. You can save the API key in
Secret Manager as a secret and use IAM policies to control who can access it. You can also reference the secret as an environment variable in the Cloud Run service by configuring a secret reference on the service at deployment time. This way, you can securely store and use the API key in your application without exposing it in your code or configuration files.
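As a sketch of the deployment step (service name, image path, and secret name below are hypothetical placeholders), the secret can be exposed as an environment variable when the service is deployed:

```shell
# Expose the secret's latest version as the API_KEY environment variable
# (service name, image, secret name, and region are placeholders).
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/PROJECT_ID/repo/app:latest \
  --set-secrets=API_KEY=my-api-key:latest \
  --region=us-central1
```

The application then reads API_KEY from its environment; the key value never appears in source or build configuration.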
Question # 18
You want to share a Cloud Monitoring custom dashboard with a partner team. What should you do?
A. Provide the partner team with the dashboard URL to enable the partner team to create a copy of the dashboard.
B. Export the metrics to BigQuery. Use Looker Studio to create a dashboard, and share the dashboard with the partner team.
C. Copy the Monitoring Query Language (MQL) query from the dashboard, and send the MQL query to the partner team.
D. Download the JSON definition of the dashboard, and send the JSON file to the partner team.
Answer: A
Explanation:
The best option for sharing a Cloud Monitoring custom dashboard with a partner team is to provide
the partner team with the dashboard URL to enable the partner team to create a copy of the
dashboard. A Cloud Monitoring custom dashboard is a dashboard that allows you to create and
customize charts and widgets to display metrics, logs, and traces from your Google Cloud resources
and applications. You can share a custom dashboard with a partner team by providing them with the
dashboard URL, which is a link that allows them to view the dashboard in their browser. The partner
team can then create a copy of the dashboard in their own project by using the Copy Dashboard
option. This way, they can access and modify the dashboard without affecting the original one.
Question # 19
You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?
A. Create a cron job to terminate any Pods that have been running for more than five hours.
B. Add an HTTP liveness probe to the microservice's Deployment.
C. Monitor the Pods and terminate any Pods that have been running for more than five hours.
D. Configure an alert to notify you whenever a Pod returns 403 errors.
Answer: B
Explanation:
The best option for ensuring continued operations until the microservice is fixed is to add an HTTP liveness probe to the microservice's Deployment. An HTTP liveness probe is a type of probe that checks whether a Pod is alive by sending an HTTP request and expecting a success response code. If the probe fails, Kubernetes restarts the Pod. You can add an HTTP liveness probe to your microservice's Deployment by using a livenessProbe field in your Pod spec. This way, any Pod that returns 403 errors after running for more than five hours will be restarted automatically and resume normal operations.
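As a sketch of adding the probe in place (deployment and container position, health path, port, and timings below are hypothetical placeholders), the existing Deployment can be patched without redeploying:

```shell
# Add an HTTP liveness probe to the first container of the Deployment
# (deployment name, path, port, and timings are placeholders).
kubectl patch deployment my-microservice --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/livenessProbe",
   "value": {"httpGet": {"path": "/healthz", "port": 8080},
             "initialDelaySeconds": 10, "periodSeconds": 30}}
]'
```

Kubernetes then restarts any Pod whose probe starts failing, which covers the 403 failure mode automatically.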
Question # 20
As part of your company's initiative to shift left on security, the InfoSec team is asking all teams to implement guard rails on all the Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team's goal of shifting left on security. What should you do?
A. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
B. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
C. Use Binary Authorization to attest images during your CI/CD pipeline.
D. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.
Answer: C
Explanation:
The best option for implementing guard rails on all GKE clusters to only allow the deployment of
trusted and approved images is to use Binary Authorization to attest images during your CI/CD
pipeline. Binary Authorization is a feature that allows you to enforce signature-based validation
when deploying container images. You can use Binary Authorization to create policies that specify
which images are allowed or denied in your GKE clusters. You can also use Binary Authorization to
attest images during your CI/CD pipeline by using tools such as Container Analysis or third-party
integrations. An attestation is a digital signature that certifies that an image meets certain criteria,
such as passing vulnerability scans or code reviews. By using Binary Authorization to attest images
during your CI/CD pipeline, you can ensure that only trusted and approved images are deployed to
your GKE clusters.
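As a sketch of enabling enforcement on a cluster (cluster name, zone, and evaluation mode below are assumptions against the current gcloud surface, not confirmed by the source), Binary Authorization can be turned on with an update command:

```shell
# Enable Binary Authorization policy enforcement on an existing cluster
# (cluster name and zone are placeholders; flag value is an assumption).
gcloud container clusters update my-cluster \
  --zone=us-central1-a \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```

The project's Binary Authorization policy then decides which attested images may be deployed.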
Question # 21
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
A. Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
B. Create a Pub/Sub topic subscription and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
C. Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
D. Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
Answer: D
Explanation:
The best option for storing all logs for one year and minimizing required code changes is to create a
logs bucket and logging sink, set the retention on the logs bucket to 365 days, configure the logging
sink to send logs to the bucket, and give your client access to the bucket to retrieve the logs. A logs
bucket is a Cloud Storage bucket that is used to store logs from Cloud Logging. A logging sink is a
resource that defines where log entries are sent, such as a logs bucket, BigQuery dataset, or Pub/Sub
topic. You can create a logs bucket and logging sink in Cloud Logging and set the retention on the logs
bucket to 365 days. This way, you can ensure that all logs are stored for one year and protected from
deletion. You can also configure the logging sink to send logs from Cloud Run and Cloud Functions to
the logs bucket without any code changes. You can then give your client access to the logs bucket by
using IAM policies or signed URLs.
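As a sketch of the setup (bucket, sink, and filter values below are hypothetical placeholders), the log bucket and routing sink can be created from the CLI:

```shell
# Create a dedicated log bucket with 365-day retention
# (bucket name and location are placeholders).
gcloud logging buckets create client-logs \
  --location=global \
  --retention-days=365

# Route Cloud Run and Cloud Functions logs into that bucket
# (sink name, project ID, and filter are placeholders).
gcloud logging sinks create client-sink \
  logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/client-logs \
  --log-filter='resource.type=("cloud_run_revision" OR "cloud_function")'
```

No application code changes are needed; logs already flowing to Cloud Logging are simply retained longer and routed.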
Question # 22
Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?
A. Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.
B. Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drifts from the source in the repository and Cloud Functions to correct the drifts.
C. Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.
D. Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up a Policy Controller to enforce the configurations for the three environments.
Answer: C
Explanation:
The best option for ensuring that the three environments are consistent and following Google-recommended practices is to use Cloud Build to render and deploy the network policies and the
DaemonSet, and set up Config Sync to sync the configurations for the three environments. Cloud
Build is a service that executes your builds on Google Cloud infrastructure. You can use Cloud Build to
render and deploy your network policies and DaemonSet as code using tools like Kustomize, Helm,
or kpt. Config Sync is a feature that enables you to manage the configurations of your GKE clusters
from a single source of truth, such as a Git repository. You can use Config Sync to sync the
configurations for your development, staging, and production environments and ensure that they are
consistent.
Question # 23
You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?
A. Run the kubectl rollout undo command.
B. Delete the new container image, and delete the running Pods.
C. Update the Kubernetes Service to point to the previous Kubernetes Deployment.
D. Scale the new Kubernetes Deployment to zero.
Answer: C
Explanation:
The best option for implementing the rollback actions is to update the Kubernetes Service to point to
the previous Kubernetes Deployment. A Kubernetes Service is a resource that defines how to access
a set of Pods. A Kubernetes Deployment is a resource that manages the creation and update of Pods.
By using the blue/green deployment methodology, you can create two Deployments, one for the
current version (blue) and one for the new version (green), and use a Service to switch traffic
between them. If you need to rollback, you can update the Service to point to the previous
Deployment (blue) and stop sending traffic to the new Deployment (green).
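As a sketch of the rollback step (service name and label values below are hypothetical placeholders), the Service's selector is repointed at the blue Deployment's Pods:

```shell
# Roll back by switching the Service's selector to the blue Deployment
# (service name and label values are placeholders).
kubectl patch service my-service \
  -p '{"spec": {"selector": {"app": "my-app", "version": "blue"}}}'
```

Traffic moves back to the previous version immediately, without rebuilding or deleting anything.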
Question # 24
Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?
A. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
B. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
C. Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.
D. Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
Answer: B
Explanation:
The best option for storing all organization logs for seven years and avoiding any future loss of log
capture or stored logs due to misconfiguration or human error is to use Cloud Logging to configure an
aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year
retention policy and Bucket Lock. Cloud Logging is a service that allows you to collect and manage
logs from your Google Cloud resources and applications. An aggregated sink is a sink that collects
logs from multiple sources, such as projects, folders, or organizations. You can use Cloud Logging to
configure an aggregated sink at the organization level to export all logs into Cloud Storage, which is a
service that allows you to store and access data in Google Cloud. A retention policy is a policy that
specifies how long objects in a bucket are retained before they are deleted. Bucket Lock is a feature
that allows you to lock a retention policy on a bucket and prevent it from being reduced or removed.
You can use Cloud Storage with a seven-year retention policy and Bucket Lock to ensure that your
logs are stored for seven years and protected from accidental or malicious deletion.
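As a sketch of the setup (organization ID, sink name, and bucket name below are hypothetical placeholders), the aggregated sink and locked retention policy can be configured like this:

```shell
# Organization-level aggregated sink into a Cloud Storage bucket
# (org ID, sink name, and bucket name are placeholders).
gcloud logging sinks create org-archive-sink \
  storage.googleapis.com/org-audit-logs \
  --organization=123456789012 \
  --include-children

# Apply a seven-year retention policy, then lock it permanently.
gsutil retention set 7y gs://org-audit-logs
gsutil retention lock gs://org-audit-logs
```

Note that locking a retention policy is irreversible, which is exactly what protects the logs from later misconfiguration.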
Question # 25
Answer: D
Explanation:
The best option for updating the instance template and minimizing disruption to the application and the number of pipeline runs is to set the create_before_destroy meta-argument to true in the lifecycle block on the instance template. The create_before_destroy meta-argument is a Terraform feature that specifies that a new resource should be created before destroying an existing one during an update. This way, you can avoid downtime and errors when updating a resource that is in use by another resource, such as an instance template that is used by a managed instance group. By setting the create_before_destroy meta-argument to true in the lifecycle block on the instance template, you can ensure that Terraform creates a new instance template with the updated machine type, updates the managed instance group to use the new instance template, and then deletes the old instance template.
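The explanation above corresponds to a Terraform lifecycle block along these lines (the resource name, name_prefix, and machine type are illustrative, not taken from the exam scenario):

```terraform
resource "google_compute_instance_template" "app" {
  name_prefix  = "app-template-"   # name_prefix lets Terraform generate a fresh name
  machine_type = "n2-standard-8"   # updated machine type (illustrative value)

  # ... disks, network interfaces, and other settings ...

  lifecycle {
    create_before_destroy = true   # build the replacement template before removing the old one
  }
}
```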
A. Verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs.
B. Because you deployed the service on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically regardless of growth rate.
C. Because you are only using 30% of deployed CPU capacity, there is significant headroom, and you do not need to add any additional capacity for this rate of growth.
D. Proactively add 80% more node capacity to account for six months of 10% growth rate, and then perform a load test to ensure that you have enough capacity.
Answer: A
Explanation:
The best option for preparing to handle the predicted growth is to verify the maximum node pool
size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected
resource needs. The maximum node pool size is a parameter that specifies the maximum number of
nodes that can be added to a node pool by the cluster autoscaler. You should verify that the
maximum node pool size is sufficient to accommodate your expected growth rate and avoid hitting
any quota limits. The Horizontal Pod Autoscaler is a feature that automatically adjusts the number of
Pods in a deployment or replica set based on observed CPU utilization or custom metrics. You should
enable a Horizontal Pod Autoscaler for your application to ensure that it runs enough Pods to handle
the load. A load test is a test that simulates high user traffic and measures the performance and
reliability of your application. You should perform a load test to verify your expected resource needs
and identify any bottlenecks or issues.
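As a sketch of the autoscaling step (deployment name and thresholds below are hypothetical placeholders), a Horizontal Pod Autoscaler can be created with one command before load testing:

```shell
# Enable a Horizontal Pod Autoscaler for the Deployment
# (deployment name, target CPU, and replica bounds are placeholders).
kubectl autoscale deployment my-app \
  --cpu-percent=70 \
  --min=3 \
  --max=30
```

A load test then confirms that the HPA and the node pool's maximum size together cover the projected growth.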
Feedback That Matters: Reviews of Our Google Professional-Cloud-DevOps-Engineer Dumps