Google Associate-Cloud-Engineer dumps

Google Associate-Cloud-Engineer Exam Dumps

Google Cloud Certified - Associate Cloud Engineer
855 Reviews

Exam Code: Associate-Cloud-Engineer
Exam Name: Google Cloud Certified - Associate Cloud Engineer
Questions: 332 Questions & Answers With Explanations
Update Date: January 15, 2026
Price: Was $81, Today $45 | Was $99, Today $55 | Was $117, Today $65

Why Should You Prepare For Your Google Cloud Certified - Associate Cloud Engineer With MyCertsHub?

At MyCertsHub, we go beyond standard study material. Our platform provides authentic Google Associate-Cloud-Engineer Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual Google Cloud Certified - Associate Cloud Engineer test. Whether you’re targeting Google certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.

Verified Associate-Cloud-Engineer Exam Dumps

Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the Associate-Cloud-Engineer (Google Cloud Certified - Associate Cloud Engineer) exam, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.

Realistic Test Prep For The Associate-Cloud-Engineer

You can instantly access downloadable PDFs of Associate-Cloud-Engineer practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Google Exam with confidence.

Smart Learning With Exam Guides

Our structured Associate-Cloud-Engineer exam guide focuses on the Google Cloud Certified - Associate Cloud Engineer's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.

Pass The Associate-Cloud-Engineer Exam – Guaranteed

We Offer A 100% Money-Back Guarantee On Our Products.

If you do not pass the Google Cloud Certified - Associate Cloud Engineer exam after preparing with MyCertsHub's exam dumps, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.

Try Before You Buy – Free Demo

Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the Associate-Cloud-Engineer exam dumps.

MyCertsHub – Your Trusted Partner For Google Exams

Whether you’re preparing for Google Cloud Certified - Associate Cloud Engineer or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your Associate-Cloud-Engineer exam has never been easier thanks to our tried-and-true resources.

Google Associate-Cloud-Engineer Sample Question Answers

Question # 1

You are planning to move your company's website and a specific asynchronous background job to Google Cloud. Your website contains only static HTML content. The background job is started through an HTTP endpoint and generates monthly invoices for your customers. Your website needs to be available in multiple geographic locations and requires autoscaling. You want to have no costs when your workloads are not in use and follow recommended practices. What should you do?

A. Move your website to Google Kubernetes Engine (GKE), and move your background job to Cloud Functions.
B. Move both your website and background job to Compute Engine.
C. Move both your website and background job to Cloud Run.
D. Move your website to Google Kubernetes Engine (GKE), and move your background job to Compute Engine.



Question # 2

Your company runs a variety of applications and workloads on Google Cloud, and you are responsible for managing cloud costs. You need to identify a solution that enables you to perform detailed cost analysis. You also must be able to visualize the cost data in multiple ways on the same dashboard. What should you do?

A. Use the cost breakdown report with the available filters from Cloud Billing to visualize the data.
B. Enable the Cloud Billing export to BigQuery, and use Looker Studio to visualize the data.
C. Run queries in Cloud Monitoring. Create dashboards to visualize the billing metrics.
D. Enable Cloud Monitoring metrics export to BigQuery, and use Looker to visualize the data.



Question # 3

You are the Google Cloud systems administrator for your organization. User A reports that they received an error when attempting to access the Cloud SQL database in their Google Cloud project, while User B can access the database. You need to troubleshoot the issue for User A, while following Google-recommended practices. What should you do first? 

A. Confirm that network firewall rules are not blocking traffic for User A. 
B. Review recent configuration changes that may have caused unintended modifications to permissions. 
C. Verify that User A has the Identity and Access Management (IAM) Project Owner role assigned. 
D. Review the error message that User A received. 



Question # 4

Your company stores data from multiple sources that have different data storage requirements. These data include:
1. Customer data that is structured and read with complex queries
2. Historical log data that is large in volume and accessed infrequently
3. Real-time sensor data with high-velocity writes, which needs to be available for analysis but can tolerate some data loss
You need to design the most cost-effective storage solution that fulfills all data storage requirements. What should you do?

A. Use Spanner for all data.
B. Use Cloud SQL for customer data, Cloud Storage (Coldline) for historical logs, and BigQuery for sensor data.
C. Use Cloud SQL for customer data, Cloud Storage (Archive) for historical logs, and Bigtable for sensor data. 
D. Use Firestore for customer data, Cloud Storage (Nearline) for historical logs, and Bigtable for sensor data. 



Question # 5

You are planning to migrate your containerized workloads to Google Kubernetes Engine (GKE). You need to determine which GKE option to use. Your solution must have high availability, minimal downtime, and the ability to promptly apply security updates to your nodes. You also want to pay only for the compute resources that your workloads use without managing nodes. You want to follow Google-recommended practices and minimize operational costs. What should you do? 

A. Configure a Standard multi-zonal GKE cluster. 
B. Configure an Autopilot GKE cluster. 
C. Configure a Standard zonal GKE cluster. 
D. Configure a Standard regional GKE cluster. 



Question # 6

You are planning to migrate a database and a backend application to a Standard Google Kubernetes Engine (GKE) cluster. You need to prevent data loss and make sure there are enough nodes available for your backend application based on the demands of your workloads. You want to follow Google-recommended practices and minimize the amount of manual work required. What should you do?

A. Run your database as a StatefulSet. Configure cluster autoscaling to handle changes in the demands of your workloads. 
B. Run your database as a single Pod. Run the resize command when you notice changes in the demands of your workloads.
C. Run your database as a Deployment. Configure cluster autoscaling to handle changes in the demands of your workloads. 
D. Run your database as a DaemonSet. Run the resize command when you notice changes in the demands of your workloads. 



Question # 7

Your company has many legacy third-party applications that rely on a shared NFS server for file sharing between these workloads. You want to modernize the NFS server by using a Google Cloud managed service. You need to select the solution that requires the least amount of change to the application. What should you do? 

A. Configure Firestore. Configure all applications to use Firestore instead of the NFS server. 
B. Deploy a Filestore instance. Replace all NFS mounts with a Filestore mount. 
C. Create a Cloud Storage bucket. Configure all applications to use Cloud Storage client libraries instead of the NFS server.
D. Create a Compute Engine instance and configure an NFS server on the instance. Point all NFS mounts to the Compute Engine instance.



Question # 8

You are deploying an application to Cloud Run. Your application requires the use of an API that runs on Google Kubernetes Engine (GKE). You need to ensure that your Cloud Run service can privately reach the API on GKE, and you want to follow Google-recommended practices. What should you do?

A. Deploy an ingress resource on the GKE cluster to expose the API to the internet. Use Cloud Armor to filter for IP addresses that can connect to the API. On the Cloud Run service, configure the application to fetch its public IP address and update the Cloud Armor policy on startup to allow this IP address to call the API on ports 80 and 443.
B. Create an egress firewall rule on the VPC to allow connections to 0.0.0.0/0 on ports 80 and 443. 
C. Create an ingress firewall rule on the VPC to allow connections from 0.0.0.0/0 on ports 80 and 443. 
D. Deploy an internal Application Load Balancer to expose the API on GKE to the VPC. Configure Cloud DNS with the IP address of the internal Application Load Balancer. Deploy a Serverless VPC Access connector to allow the Cloud Run service to call the API through the FQDN on Cloud DNS. 



Question # 9

You assist different engineering teams in deploying their infrastructure on Google Cloud. Your company has defined certain practices required for all workloads. You need to provide the engineering teams with a solution that enables teams to deploy their infrastructure independently without having to know all implementation details of the company's required practices. What should you do? 

A. Create a service account per team, and grant the service account the Project Editor role. Ask the teams to provision their infrastructure through the Google Cloud CLI (gcloud CLI), while impersonating their dedicated service account. 
B. Provide training for all engineering teams you work with to understand the company's required practices. Allow the engineering teams to provision the infrastructure to best meet their needs.
C. Configure organization policies to enforce your company's required practices. Ask the teams to provision their infrastructure by using the Google Cloud console.
D. Write Terraform modules for each component that are compliant with the company's required practices, and ask teams to implement their infrastructure through these modules. 



Question # 10

Your organization has decided to deploy all its compute workloads to Kubernetes on Google Cloud and two other cloud providers. You want to build an infrastructure-as-code solution to automate the provisioning process for all cloud resources. What should you do? 

A. Build the solution by using YAML manifests, and provision the resources. 
B. Build the solution by using Terraform, and provision the resources. 
C. Build the solution by using Python and the cloud SDKs from all providers to provision the resources. 
D. Build the solution by using Config Connector, and provision the resources. 



Question # 11

You are planning to migrate your on-premises VMs to Google Cloud. You need to set up a landing zone in Google Cloud before migrating the VMs. You must ensure that all VMs in your production environment can communicate with each other through private IP addresses. You need to allow all VMs in your Google Cloud organization to accept connections on specific TCP ports. You want to follow Google-recommended practices, and you need to minimize your operational costs. What should you do? 

A. Create individual VPCs per Google Cloud project. Peer all the VPCs together. Apply organization policies on the organization level.
B. Create individual VPCs for each Google Cloud project. Peer all the VPCs together. Apply hierarchical firewall policies on the organization level.
C. Create a host VPC project with each production project as its service project. Apply organization policies on the organization level. 
D. Create a host VPC project with each production project as its service project. Apply hierarchical firewall policies on the organization level. 



Question # 12

You are deploying an application to Google Kubernetes Engine (GKE) that needs to call an external third-party API. You need to provide the external API vendor with a list of IP addresses for their firewall to allow traffic from your application. You want to follow Google-recommended practices and avoid any risk of interrupting traffic to the API due to IP address changes. What should you do?

A. Configure your GKE cluster with one node, and set the node to have a static external IP address. Ensure that the GKE cluster autoscaler is off. Send the external IP address of the node to the vendor to be added to the allowlist. 
B. Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with static IP addresses. Provide these IP addresses to the vendor to be added to the allowlist.
C. Configure your GKE cluster with public nodes. Write a Cloud Function that pulls the public IP addresses of each node in the cluster. Trigger the function to run every day with Cloud Scheduler. Send the list to the vendor by email every day. 
D. Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with dynamic IP addresses. Provide these IP addresses to the vendor to be added to the allowlist. 



Question # 13

You have an application that is currently processing transactions by using a group of managed VM instances. You need to migrate the application so that it is serverless and scalable. You want to implement an asynchronous transaction processing system, while minimizing management overhead. What should you do? 

A. Install Kafka on VM instances to acknowledge incoming transactions. Use Cloud Run to process transactions.
B. Install Kafka on VM instances to acknowledge incoming transactions. Use VM instances to process transactions.
C. Use Pub/Sub to acknowledge incoming transactions. Use VM instances to process transactions. 
D. Use Pub/Sub to acknowledge incoming transactions. Use Cloud Run to process transactions. 
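Two of the options above center on Pub/Sub delivering transactions to a serverless processor. For context, when a Pub/Sub push subscription targets a Cloud Run service, each message arrives as a JSON envelope with a base64-encoded data field. A minimal decoding sketch in Python (the envelope shape follows the documented push format; the transaction id and processing logic are hypothetical):

```python
import base64
import json

def decode_pubsub_push(envelope: dict) -> bytes:
    """Decode the data field of a Pub/Sub push envelope.

    Push delivery wraps each message as
    {"message": {"data": "<base64>", ...}, "subscription": "..."}.
    """
    message = envelope["message"]
    return base64.b64decode(message.get("data", ""))

def handle_transaction(envelope: dict) -> str:
    """Placeholder transaction processor (hypothetical business logic)."""
    payload = json.loads(decode_pubsub_push(envelope))
    return f"processed transaction {payload['id']}"

# Example envelope such as Cloud Run would receive from a push subscription:
envelope = {
    "message": {"data": base64.b64encode(b'{"id": "tx-42"}').decode()},
    "subscription": "projects/demo/subscriptions/tx-sub",
}
print(handle_transaction(envelope))  # -> processed transaction tx-42
```

In a real service, this handler would sit behind the service's HTTP endpoint and acknowledge the message by returning a 2xx response.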



Question # 14

You host your website on Compute Engine. The number of global users visiting your website is rapidly expanding. You need to minimize latency and support user growth in multiple geographical regions. You also want to follow Google-recommended practices and minimize operational costs. Which two actions should you take? (Choose 2 answers)

A. Deploy all of your VMs in a single Google Cloud region with the largest available CIDR range. 
B. Deploy your VMs in multiple Google Cloud regions closest to your users' geographical locations.
C. Use an external Application Load Balancer in Regional mode. 
D. Use an external Application Load Balancer in Global mode. 
E. Use a Network Load Balancer. 



Question # 15

You are managing a stateful application deployed on Google Kubernetes Engine (GKE) that can only have one replica. You recently discovered that the application becomes unstable at peak times. You have identified that the application needs more CPU than what has been configured in the manifest at these peak times. You want Kubernetes to allocate the application sufficient CPU resources during these peak times, while ensuring cost efficiency during off-peak periods. What should you do?

A. Enable cluster autoscaling on the GKE cluster. 
B. Configure a Vertical Pod Autoscaler on the Deployment. 
C. Configure a Horizontal Pod Autoscaler on the Deployment. 
D. Enable node auto-provisioning on the GKE cluster. 



Question # 16

Your company's developers use an automation that you recently built to provision Linux VMs in Compute Engine within a Google Cloud project to perform various tasks. You need to manage the Linux account lifecycle and access for these users. You want to follow Google-recommended practices to simplify access management while minimizing operational costs. What should you do?

A. Enable OS Login for all VMs. Use IAM roles to grant user permissions.
B. Enable OS Login for all VMs. Write custom startup scripts to update user permissions. 
C. Require your developers to create public SSH keys. Make the owner of the public key the root user. 
D. Require your developers to create public SSH keys. Write custom startup scripts to update user permissions. 



Question # 17

You are migrating your on-premises workload to Google Cloud. Your company is implementing its Cloud Billing configuration and requires access to a granular breakdown of its Google Cloud costs. You need to ensure that the Cloud Billing datasets are available in BigQuery so you can conduct a detailed analysis of costs. What should you do?

A. Enable the BigQuery API and ensure that the BigQuery User IAM role is selected. Change the BigQuery dataset to select a data location.
B. Create a Cloud Billing account. Enable the BigQuery Data Transfer Service API to export pricing data. 
C. Enable Cloud Billing data export to BigQuery when you create a Cloud Billing account. 
D. Enable Cloud Billing on the project and link a Cloud Billing account. Then view the billing data table in the BigQuery dataset. 



Question # 18

You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?

A. Use a proxy Network Load Balancer for the MIG and an A record in your DNS private zone with the load balancer's IP address. 
B. Use a proxy Network Load Balancer for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address. 
C. Use an Application Load Balancer for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address. 
D. Use an Application Load Balancer for the MIG and an A record in your DNS public zone with the load balancer's IP address. 



Question # 19

Your company is modernizing its applications and refactoring them to containerized microservices. You need to deploy the infrastructure on Google Cloud so that teams can deploy their applications. The applications cannot be exposed publicly. You want to minimize management and operational overhead. What should you do?

A. Provision a Standard zonal Google Kubernetes Engine (GKE) cluster. 
B. Provision a fleet of Compute Engine instances and install Kubernetes. 
C. Provision a Google Kubernetes Engine (GKE) Autopilot cluster. 
D. Provision a Standard regional Google Kubernetes Engine (GKE) cluster. 



Question # 20

You are migrating your company's on-premises compute resources to Google Cloud. You need to deploy batch processing jobs that run every night. The jobs require significant CPU and memory for several hours but can tolerate interruptions. You must ensure that the deployment is cost-effective. What should you do?

A. Containerize the batch processing jobs and deploy them on Compute Engine. 
B. Use custom machine types on Compute Engine. 
C. Use the M1 machine series on Compute Engine. 
D. Use Spot VMs on Compute Engine. 



Question # 21

You have an application running inside a Compute Engine instance. You want to provide the application with secure access to a BigQuery dataset. You must ensure that credentials are only valid for a short period of time, and your application will only have access to the intended BigQuery dataset. You want to follow Google-recommended practices and minimize your operational costs. What should you do?

A. Attach a custom service account to the instance, and grant the service account the BigQuery Data Viewer IAM role on the project. 
B. Attach a new service account to the instance every hour, and grant the service account the BigQuery Data Viewer IAM role on the dataset. 
C. Attach a custom service account to the instance, and grant the service account the BigQuery Data Viewer IAM role on the dataset. 
D. Attach a new service account to the instance every hour, and grant the service account the BigQuery Data Viewer IAM role on the project. 
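All four options rely on an attached service account. For background on the short-lived-credential aspect of this question: a Compute Engine VM with an attached service account obtains expiring OAuth2 access tokens from the metadata server, so no long-lived key files are stored on the instance. A minimal Python sketch (the endpoint and Metadata-Flavor header are the documented metadata-server interface; fetch_access_token succeeds only when run on a Compute Engine VM):

```python
import json
import urllib.request

# Well-known GCE metadata server endpoint for the attached service account.
TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
             "instance/service-accounts/default/token")

def token_request() -> urllib.request.Request:
    """Build the request that fetches a short-lived OAuth2 access token
    for the VM's attached service account. The metadata server requires
    the Metadata-Flavor header on every call."""
    return urllib.request.Request(TOKEN_URL,
                                  headers={"Metadata-Flavor": "Google"})

def fetch_access_token() -> str:
    """Fetch and decode the token (only works on a Compute Engine VM)."""
    with urllib.request.urlopen(token_request()) as resp:
        return json.load(resp)["access_token"]
```

The token returned by this endpoint expires automatically (typically within an hour), which is what satisfies the "valid for a short period of time" requirement without rotating service accounts manually.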



Question # 22

Your company was recently impacted by a service disruption that caused multiple Dataflow jobs to get stuck, resulting in significant downtime in downstream applications and revenue loss. You were able to resolve the issue by identifying and fixing an error you found in the code. You need to design a solution with minimal management effort to identify when jobs are stuck in the future to ensure that this issue does not occur again. What should you do?

A. Set up Error Reporting to identify stack traces that indicate slowdowns in Dataflow jobs. Set up alerts based on these log entries.
B. Use the Personalized Service Health dashboard to identify issues with Dataflow jobs across regions. 
C. Update the Dataflow job configurations to send messages to a Pub/Sub topic when there are delays. Configure a backup Dataflow job to process jobs that are delayed. Use Cloud Tasks to trigger an alert when messages are pushed to the Pub/Sub topic. 
D. Set up Cloud Monitoring alerts on the data freshness metric for the Dataflow jobs to receive a notification when a certain threshold is reached. 



Question # 24

You manage a VPC network in Google Cloud with a subnet that is rapidly approaching its private IP address capacity. You expect the number of Compute Engine VM instances in the same region to double within a week. You need to implement a Google-recommended solution that minimizes operational costs and does not require downtime. What should you do?

A. Create a second VPC with the same subnet IP range, and connect this VPC to the existing VPC by using VPC Network Peering.
B. Delete the existing subnet, and create a new subnet with double the IP range available. 
C. Use the Google Cloud CLI tool to expand the primary IP range of your subnet. 
D. Permit additional traffic from the expected range of private IP addresses to reach your VMs by configuring firewall rules. 
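For background on option C: Google Cloud lets you expand a subnet's primary IP range in place (for example with `gcloud compute networks subnets expand-ip-range`), and each additional prefix bit roughly doubles capacity. A quick capacity check using Python's standard ipaddress module (the four reserved addresses per subnet primary range are Google Cloud's documented behavior):

```python
import ipaddress

# Google Cloud reserves 4 IPs in each subnet's primary range
# (network, default gateway, second-to-last, and broadcast addresses).
GCP_RESERVED = 4

def usable_ips(cidr: str) -> int:
    """Number of VM-assignable addresses in a subnet primary range."""
    return ipaddress.ip_network(cidr).num_addresses - GCP_RESERVED

# Expanding the prefix by one bit roughly doubles capacity:
print(usable_ips("10.0.0.0/24"))  # 252
print(usable_ips("10.0.0.0/23"))  # 508
```

Because the expansion keeps the existing range as a subset of the new one, already-assigned VM addresses remain valid, which is why no downtime is required.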



Question # 25

Your company uses a multi-cloud strategy that includes Google Cloud. You want to centralize application logs in a third-party software-as-a-service (SaaS) tool from all environments. You need to integrate logs originating from Cloud Logging, and you want to ensure the export occurs with the least amount of delay possible. What should you do?

A. Use a Cloud Scheduler cron job to trigger a Cloud Function that queries Cloud Logging and sends the logs to the SaaS tool. 
B. Create a Cloud Logging sink and configure Pub/Sub as the destination. Configure the SaaS tool to subscribe to the Pub/Sub topic to retrieve the logs.
C. Create a Cloud Logging sink and configure Cloud Storage as the destination. Configure the SaaS tool to read the Cloud Storage bucket to retrieve the logs. 
D. Create a Cloud Logging sink and configure BigQuery as the destination. Configure the SaaS tool to query BigQuery to retrieve the logs.



Feedback That Matters: Reviews of Our Google Associate-Cloud-Engineer Dumps

Leave Your Review