Why Should You Prepare For Your Google Cloud Certified - Associate Cloud Engineer Exam With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic Google Associate-Cloud-Engineer Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual Google Cloud Certified - Associate Cloud Engineer test. Whether you’re targeting Google certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified Associate-Cloud-Engineer Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the Associate-Cloud-Engineer (Google Cloud Certified - Associate Cloud Engineer) exam, you'll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The Associate-Cloud-Engineer
You can instantly access downloadable PDFs of Associate-Cloud-Engineer practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Google Exam with confidence.
Smart Learning With Exam Guides
Our structured Associate-Cloud-Engineer exam guide focuses on the Google Cloud Certified - Associate Cloud Engineer's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass The Associate-Cloud-Engineer Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you don't pass the Google Cloud Certified - Associate Cloud Engineer exam after preparing with MyCertsHub's exam dumps, we will issue a full refund. That's how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the Associate-Cloud-Engineer exam dumps.
MyCertsHub – Your Trusted Partner For Google Exams
Whether you’re preparing for Google Cloud Certified - Associate Cloud Engineer or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your Associate-Cloud-Engineer exam has never been easier thanks to our tried-and-true resources.
Google Associate-Cloud-Engineer Sample Question Answers
Question # 1
(You are developing an internet of things (IoT) application that captures sensor data from multiple devices that have already been set up. You need to identify the global data storage product your company should use to store this data. You must ensure that the storage solution you choose meets your requirements of sub-millisecond latency. What should you do?)
A. Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.
B. Store the IoT data in Bigtable.
C. Capture IoT data in BigQuery datasets.
D. Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.
Answer: B
Explanation:
Let's evaluate each option based on the requirement of sub-millisecond latency for globally stored IoT data:
A. Spanner with caching: While Spanner offers strong consistency and global scalability, its base latency might not consistently be sub-millisecond for all read/write operations globally. Introducing caching adds complexity and doesn't guarantee sub-millisecond latency for initial reads or cache misses.
B. Bigtable: Bigtable is a highly scalable NoSQL database service designed for low-latency, high-throughput workloads. It excels at storing and retrieving large volumes of time-series data, which is typical for IoT sensor data. Its architecture is optimized for single-key lookups and scans, providing consistent sub-millisecond latency, making it a strong candidate for this use case.
C. BigQuery: BigQuery is a fully managed, serverless data warehouse designed for analytical queries on large datasets. While it's excellent for analyzing IoT data in batch, it's not optimized for the low-latency, high-throughput ingestion and retrieval required for real-time IoT applications with sub-millisecond latency needs.
D. Cloud Storage with Cloud CDN: Cloud Storage is object storage and is not designed for low-latency transactional workloads. Cloud CDN is a content delivery network that caches content closer to users for faster delivery, but it's not suitable as the primary storage for rapidly incoming IoT sensor data, and object storage is not ideal for the sub-millisecond reads and writes required for real-time IoT data.
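Bigtable's sub-millisecond reads also depend on good row-key design. As a minimal, hedged sketch (the helper function and key format below are illustrative assumptions, not part of any Google client library), a common pattern for IoT time series is to prefix keys with the device ID so each device's readings stay contiguous:

```python
import datetime

def make_row_key(device_id: str, ts: datetime.datetime) -> str:
    """Build an illustrative Bigtable row key like 'sensor-42#20240101120000'.

    Prefixing with the device ID keeps one device's readings contiguous,
    which suits Bigtable's single-key lookups and range scans.
    """
    return f"{device_id}#{ts.strftime('%Y%m%d%H%M%S')}"

print(make_row_key("sensor-42", datetime.datetime(2024, 1, 1, 12, 0, 0)))
# sensor-42#20240101120000
```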
Question # 2
(Your digital media company stores a large number of video files on-premises. Each video file ranges from 100 MB to 100 GB. You are currently storing 150 TB of video data in your on-premises network, with no room for expansion. You need to migrate all infrequently accessed video files older than one year to Cloud Storage to ensure that on-premises storage remains available for new files. You must also minimize costs and control bandwidth usage. What should you do?)
A. Create a Cloud Storage bucket. Establish an Identity and Access Management (IAM) role with write permissions to the bucket. Use the gsutil tool to directly copy files over the network to Cloud Storage.
B. Set up a Cloud Interconnect connection between the on-premises network and Google Cloud. Establish a private endpoint for Filestore access. Transfer the data from the existing Network File System (NFS) to Filestore.
C. Use Transfer Appliance to request an appliance. Load the data locally, and ship the appliance back to Google for ingestion into Cloud Storage.
D. Use Storage Transfer Service to move the data from the selected on-premises file storage systems to a Cloud Storage bucket.
Answer: D
Explanation:
Let's analyze each option:
A. Using gsutil: While gsutil can transfer data to Cloud Storage, for 150 TB of infrequently accessed data, direct transfer over the network might be slow and consume significant bandwidth, potentially impacting other network operations. It also lacks built-in mechanisms for filtering files based on age.
B. Using Cloud Interconnect and Filestore: Cloud Interconnect provides a dedicated connection, but Filestore is a fully managed NFS service primarily designed for high-performance file sharing for applications running in Google Cloud. Migrating 150 TB of infrequently accessed data to Filestore would be cost-inefficient compared to Cloud Storage and doesn't directly address the requirement of moving files older than one year.
C. Using Transfer Appliance: Transfer Appliance is suitable for very large datasets (petabytes) or when network connectivity is poor or unreliable. While it addresses bandwidth concerns, it involves a physical appliance and might be overkill for 150 TB of data, especially if network connectivity is reasonable.
D. Using Storage Transfer Service: Storage Transfer Service is specifically designed for moving large amounts of data between online storage systems, including on-premises file systems and Cloud Storage. It offers features like filtering by file age, scheduling transfers, and bandwidth control, directly addressing all the requirements of the question: migrating infrequently accessed files older than one year to Cloud Storage, minimizing costs (by using appropriate Cloud Storage classes for infrequent access), and controlling bandwidth usage. Understanding the different storage classes (Standard, Nearline, Coldline, Archive) is crucial for cost optimization of infrequently accessed data; Storage Transfer Service can be configured to move data to a cost-effective class like Nearline or Coldline.
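For illustration, here is roughly what the age filter looks like in a Storage Transfer Service job body (the field names follow the Storage Transfer Service REST API as best understood; the source directory and bucket name are made-up placeholders):

```python
# Sketch of a transfer-job body for "files not modified in the last year".
# Only the structure matters here; values are placeholders.
ONE_YEAR_SECONDS = 365 * 24 * 60 * 60

transfer_job = {
    "description": "Move videos older than one year to Cloud Storage",
    "status": "ENABLED",
    "transferSpec": {
        "posixDataSource": {"rootDirectory": "/mnt/videos"},     # placeholder on-prem path
        "gcsDataSink": {"bucketName": "example-video-archive"},  # placeholder bucket
        "objectConditions": {
            # Only transfer files not modified in the last 365 days.
            "minTimeElapsedSinceLastModification": f"{ONE_YEAR_SECONDS}s",
        },
    },
}

print(transfer_job["transferSpec"]["objectConditions"]["minTimeElapsedSinceLastModification"])
# 31536000s
```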
Question # 3
(You are managing the security configuration of your company's Google Cloud organization. The Operations team needs specific permissions on both a Google Kubernetes Engine (GKE) cluster and a Cloud SQL instance. Two predefined Identity and Access Management (IAM) roles exist that contain a subset of the permissions needed by the team. You need to configure the necessary IAM permissions for this team while following Google-recommended practices. What should you do?)
A. Grant the team the two predefined IAM roles.
B. Create a custom IAM role that combines the permissions from the two relevant predefined roles.
C. Create a custom IAM role that includes only the required permissions from the predefined roles.
D. Grant the team the IAM roles of Kubernetes Engine Admin and Cloud SQL Admin.
Answer: C
Explanation:
Granting more permissions than necessary violates the principle of least privilege, a fundamental security best practice. While option A grants the necessary permissions (as subsets exist in two predefined roles), it might also grant more permissions than the Operations team strictly requires for their tasks on GKE and Cloud SQL. Option D is too broad; 'Admin' roles grant extensive permissions that likely exceed the specific needs.
Google Cloud's best practices strongly recommend adhering to the principle of least privilege. Creating a custom role allows you to precisely define the set of permissions the Operations team needs for their specific tasks on the GKE cluster and the Cloud SQL instance, without granting any unnecessary permissions. This minimizes the potential blast radius in case of accidental or malicious misuse. It is important to understand the concepts of predefined and custom roles and their use cases.
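As a hedged illustration, a custom role is usually defined in a YAML or JSON file and created with `gcloud iam roles create`. The sketch below just assembles such a definition in Python; the permission names are real IAM permissions, but which ones the team actually needs is an assumption for illustration:

```python
# Illustrative least-privilege custom role combining only the needed GKE and
# Cloud SQL permissions (the exact permission list is assumed, not from the exam).
custom_role = {
    "title": "Operations GKE and Cloud SQL",
    "description": "Least-privilege role for the Operations team",
    "stage": "GA",
    "includedPermissions": [
        "container.clusters.get",
        "container.pods.list",
        "cloudsql.instances.get",
        "cloudsql.instances.connect",
    ],
}

# Sanity check in the spirit of least privilege: no broad admin permissions.
assert not any(p.endswith(".admin") for p in custom_role["includedPermissions"])
print(len(custom_role["includedPermissions"]))  # 4
```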
Question # 4
You are deploying an application on Google Cloud that requires a relational database for storage. To satisfy your company's security policies, your application must connect to your database through an encrypted and authenticated connection that requires minimal management and integrates with Identity and Access Management (IAM). What should you do?
A. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure a database user and password.
B. Deploy a Cloud SQL database and configure IAM database authentication. Access the database through the Cloud SQL Auth Proxy.
C. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure IAM database authentication.
D. Deploy a Cloud SQL database and configure a database user and password. Access the database through the Cloud SQL Auth Proxy.
Answer: B
Explanation:
Cloud SQL Auth Proxy: This proxy ensures secure connections to your Cloud SQL database by automatically handling encryption (SSL/TLS) and IAM-based authentication. It simplifies the management of secure connections without needing to manage SSL/TLS certificates manually.
IAM Database Authentication: This allows you to use IAM credentials to authenticate to the database, providing a unified and secure authentication mechanism that integrates seamlessly with Google Cloud IAM.
Question # 5
You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to BigQuery datasets in the crm-databases project. You want to follow Google-recommended practices to grant access to the service account in the web-applications project. What should you do?
A. Grant "project owner" for web-applications appropriate roles to crm-databases.
B. Grant "project owner" role to crm-databases and the web-applications project.
C. Grant "project owner" role to crm-databases and roles/bigquery.dataViewer role to web-applications.
D. Grant roles/bigquery.dataViewer role to crm-databases and appropriate roles to web-applications.
Answer: D
Explanation:
Question # 6
You have several hundred microservice applications running in a Google Kubernetes Engine (GKE) cluster. Each microservice is a deployment with resource limits configured for each container in the deployment. You've observed that the resource limits for memory and CPU are not appropriately set for many of the microservices. You want to ensure that each microservice has right sized limits for memory and CPU. What should you do?
A. Modify the cluster's node pool machine type and choose a machine type with more memory and CPU.
B. Configure a Horizontal Pod Autoscaler for each microservice.
C. Configure GKE cluster autoscaling.
D. Configure a Vertical Pod Autoscaler for each microservice.
Answer: D
Explanation:
Question # 7
Your company is running a critical workload on a single Compute Engine VM instance. Your company's disaster recovery policies require you to backup the entire instance's disk data every day. The backups must be retained for 7 days. You must configure a backup solution that complies with your company's security policies and requires minimal setup and configuration. What should you do?
A. Configure the instance to use persistent disk asynchronous replication.
B. Configure daily scheduled persistent disk snapshots with a retention period of 7 days.
C. Configure Cloud Scheduler to trigger a Cloud Function each day that creates a new machine image and deletes machine images that are older than 7 days.
D. Configure a bash script using gsutil to run daily through a cron job. Copy the disk's files to a Cloud Storage bucket with archive storage class and an object lifecycle rule to delete the objects after 7 days.
Answer: B
Explanation:
Question # 8
You need to deploy a third-party software application onto a single Compute Engine VM instance. The application requires the highest speed read and write disk access for the internal database. You need to ensure the instance will recover on failure. What should you do?
A. Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateful managed instance group.
B. Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateless managed instance group.
C. Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateful managed instance group.
D. Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateless managed instance group.
Answer: A
Explanation:
Question # 9
You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?
A. Deploy the application on GKE Autopilot.
B. Deploy the application on GKE Standard.
C. Deploy the application on Cloud Functions.
D. Deploy the application on Cloud Run.
Answer: A
Explanation:
Question # 10
Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster in us-central1, has gained popularity. You are now ready to make the application generally available. You need to deploy the application to production while ensuring high availability and resilience. You also want to follow Google-recommended practices. What should you do?
A. Use the gcloud container clusters create command with the options --enable-multi-networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy the application to it.
B. Use the gcloud container clusters create-auto command to create an Autopilot cluster and deploy the application to it.
C. Use the gcloud container clusters update command with the option --region us-central1 to update the cluster and deploy the application to it.
D. Use the gcloud container clusters update command with the option --node-locations us-central1-a,us-central1-b to update the cluster and deploy the application to the nodes.
Answer: B
Explanation:
Question # 11
You use Cloud Logging to capture application logs. You now need to use SQL to analyze the application logs in Cloud Logging, and you want to follow Google-recommended practices. What should you do?
A. Develop SQL queries by using Gemini for Google Cloud.
B. Enable Log Analytics for the log bucket and create a linked dataset in BigQuery.
C. Create a schema for the storage bucket and run SQL queries for the data in the bucket.
D. Export logs to a storage bucket and create an external view in BigQuery.
Answer: B
Explanation:
Question # 12
Your company requires that Google Cloud products are created with a specific configuration to comply with your company's security policies. You need to implement a mechanism that will allow software engineers at your company to deploy and update Google Cloud products in a preconfigured and approved manner. What should you do?
A. Create Java packages that utilize the Google Cloud Client Libraries for Java to configure Google Cloud products. Store and share the packages in a source code repository.
B. Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud products. Store and share the bash scripts in a source code repository.
C. Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google Cloud products. Store and share the modules in a source code repository.
D. Use the Google Cloud APIs by using curl to configure Google Cloud products. Store and share the curl commands in a source code repository.
Answer: C
Explanation:
Question # 13
You are building a backend service for an ecommerce platform that will persist transaction data from mobile and web clients. After the platform is launched, you expect a large volume of global transactions. Your business team wants to run SQL queries to analyze the data. You need to build a highly available and scalable data store for the platform. What should you do?
A. Create a multi-region Cloud Spanner instance with an optimized schema.
B. Create a multi-region Firestore database with aggregation query enabled.
C. Create a multi-region Cloud SQL for PostgreSQL database with optimized indexes.
D. Create a multi-region BigQuery dataset with optimized tables.
Answer: A
Explanation:
Question # 14
You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0) and project-b with VPC vpc-b (10.8.0.0). Your frontend application resides in vpc-a and the backend API services are deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these Google Cloud projects. You also want to follow Google-recommended practices. What should you do?
A. Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.
B. Configure a Cloud Interconnect connection between vpc-a and vpc-b.
C. Create VPC Network Peering between vpc-a and vpc-b.
D. Create an OpenVPN connection between vpc-a and vpc-b.
Answer: C
Explanation:
Question # 15
You want to enable your development team to deploy new features to an existing Cloud Run service in production. To minimize the risk associated with a new revision, you want to reduce the number of customers who might be affected by an outage without introducing any development or operational costs to your customers. You want to follow Google-recommended practices for managing revisions to a service. What should you do?
A. Deploy your application to a second Cloud Run service, and ask your customers to use the second Cloud Run service.
B. Ask your customers to retry access to your service with exponential backoff to mitigate any potential problems after the new revision is deployed.
C. Gradually roll out the new revision and split customer traffic between the revisions to allow rollback in case a problem occurs.
D. Send all customer traffic to the new revision, and roll back to a previous revision if you witness any problems in production.
Answer: C
Explanation:
Question # 16
Your company is running a three-tier web application on virtual machines that use a MySQL database. You need to create an estimated total cost of cloud infrastructure to run this application on Google Cloud instances and Cloud SQL. What should you do?
A. Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource you expect to use. Use similar size instances for the web server, and use your current on-premises machines as a comparison for Cloud SQL.
B. Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller scale. Check the billing information, and calculate the estimated costs based on the real load your system usually handles.
C. Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your web application with as much detail as possible.
D. Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate sheet, import the current Google Cloud prices and use these prices for the calculations within formulas.
Answer: C
Explanation:
Question # 17
You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?
A. Change the subnet IP range from 10.0.0.0 to 10.0.0.0.
B. Change the subnet IP range from 10.0.0.0 to 10.0.0.0/18.
C. Add a secondary IP range 10.1.0.0 to the subnet.
D. Convert the subnet IP range from IPv4 to IPv6.
Answer: B
Explanation:
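Expanding a subnet's primary range keeps existing addresses valid while adding capacity, and the arithmetic can be checked with Python's ipaddress module. The /20 starting prefix below is an assumption; the original prefix lengths did not survive in this copy of the question:

```python
import ipaddress

# Assumed prefixes for illustration only: compare a /20 subnet with the
# /18 it could be expanded to (same network address, shorter prefix).
before = ipaddress.ip_network("10.0.0.0/20")
after = ipaddress.ip_network("10.0.0.0/18")

print(before.num_addresses)  # 4096
print(after.num_addresses)   # 16384
```

Expanding from /20 to /18 quadruples the address space without renumbering existing VMs.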
Question # 18
You are a Google Cloud organization administrator. You need to configure organization policies and log sinks on Google Cloud projects that cannot be removed by project users to comply with your company's security policies. The security policies are different for each company department. Each company department has a user with the Project Owner role assigned to their projects. What should you do?
A. Organize projects under folders for each department. Configure both organization policies and log sinks on the folders.
B. Organize projects under folders for each department. Configure organization policies on the organization and log sinks on the folders.
C. Use a standard naming convention for projects that includes the department name. Configure organization policies on the organization and log sinks on the projects.
D. Use a standard naming convention for projects that includes the department name. Configure both organization policies and log sinks on the projects.
Answer: A
Question # 19
Your application stores files on Cloud Storage by using the Standard Storage class. The application only requires access to files created in the last 30 days. You want to automatically save costs on files that are no longer accessed by the application. What should you do?
A. Create a retention policy on the storage bucket of 30 days, and lock the bucket by using a retention policy lock.
B. Enable object versioning on the storage bucket and add lifecycle rules to expire non-current versions after 30 days.
C. Create an object lifecycle on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.
D. Create a cron job in Cloud Scheduler to call a Cloud Functions instance every day to delete files older than 30 days.
Answer: C
Explanation:
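For reference, option C corresponds to a bucket lifecycle configuration along these lines (the JSON shape follows the Cloud Storage lifecycle API; how you apply it, for example with `gsutil lifecycle set`, is left out):

```python
# Sketch of a lifecycle rule that moves objects to Archive storage once they
# are older than 30 days (structure follows the Cloud Storage lifecycle API).
lifecycle_config = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
            "condition": {"age": 30},  # days since object creation
        }
    ]
}

print(lifecycle_config["rule"][0]["action"]["storageClass"])  # ARCHIVE
```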
Question # 20
You need to deploy a single stateless web application with a web interface and multiple endpoints. For security reasons, the web application must be reachable from an internal IP address from your company's private VPC and on-premises network. You also need to update the web application multiple times per day with minimal effort and want to manage a minimal amount of cloud infrastructure. What should you do?
A. Deploy the web application on Google Kubernetes Engine standard edition with an internal ingress.
B. Deploy the web application on Cloud Run with Private Google Access configured.
C. Deploy the web application to GKE Autopilot with Private Google Access configured.
D. Deploy the web application on Cloud Run with Private Service Connect configured.
Answer: D
Explanation:
Question # 21
You are planning to migrate the following on-premises data management solutions to Google Cloud: one MySQL cluster for your main database, Apache Kafka for your event streaming platform, and one PostgreSQL database for your analytical and reporting needs. You want to implement Google-recommended solutions for the migration. You need to ensure that the new solutions provide global scalability and require minimal operational and infrastructure management. What should you do?
A. Migrate from MySQL to Cloud SQL, from Kafka to Memorystore, and from PostgreSQL to Cloud SQL.
B. Migrate from MySQL to Cloud Spanner, from Kafka to Memorystore, and from PostgreSQL to Cloud SQL.
C. Migrate from MySQL to Cloud SQL, from Kafka to Pub/Sub, and from PostgreSQL to BigQuery.
D. Migrate from MySQL to Cloud Spanner, from Kafka to Pub/Sub, and from PostgreSQL to BigQuery.
Answer: D
Explanation:
Question # 22
Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?
A. Modify the minimum number of Cloud Run instances.
B. Set a minimum concurrent requests environment variable for the application.
C. Modify the maximum number of Cloud Run instances.
D. Use traffic splitting.
Answer: C
Explanation:
Question # 23
Your company uses BigQuery to store and analyze data. Upon submitting your query in BigQuery, the query fails with a quotaExceeded error. You need to diagnose the issue causing the error. What should you do? Choose 2 answers.
A. Search errors in Cloud Audit Logs to analyze the issue.
B. Configure Cloud Trace to analyze the issue.
C. View errors in Cloud Monitoring to analyze the issue.
D. Use the information schema views to analyze the underlying issue.
E. Use BigQuery BI Engine to analyze the issue.
Answer: AC
Explanation:
When encountering a quotaExceeded error in BigQuery, you should follow these steps to diagnose and mitigate the issue:
Understand the error: The error message indicates that a quota was exceeded (either a short-term rate limit or a longer-term limit), and the response payload contains information about which quota was reached. Quota errors fall into two categories:
rateLimitExceeded: Short-term limits. Retry the operation after a few seconds using exponential backoff.
quotaExceeded: Longer-term limits. Wait 10 minutes or longer before retrying the operation.
Search errors in Cloud Audit Logs (Option A): Cloud Audit Logs provide detailed information about API requests and responses. By searching the logs, you can identify the specific API call that triggered the quotaExceeded error. This helps you understand which resource or operation exceeded the quota.
View errors in Cloud Monitoring (Option C): Cloud Monitoring (formerly known as Stackdriver) provides insights into your Google Cloud resources. Check the monitoring dashboard for any alerts related to BigQuery quotas. You can set up custom monitoring rules to track specific quotas and receive notifications.
Other options:
B. Configure Cloud Trace: Cloud Trace is used for performance analysis and latency tracking. It's not directly related to quota issues.
D. Use information schema views: Information schema views provide metadata about your datasets and tables but won't help diagnose quota errors.
E. Use BigQuery BI Engine: BI Engine is an in-memory analysis service that accelerates BigQuery queries; it is not a diagnostic tool and won't help analyze quota errors.
Remember that some quotas replenish incrementally over a 24-hour period, so you don't always need to wait a full 24 hours after reaching the limit. If you consistently hit longer-term quotas, consider workload optimization or requesting a quota increase.
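The retry guidance above can be sketched as a small backoff loop. This is a generic illustration: the "rateLimitExceeded" string check stands in for real BigQuery client error handling, and `flaky` is a toy stand-in for a query:

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry `operation` with exponential backoff plus a little jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as exc:
            # Only short-term rate-limit errors are worth retrying quickly;
            # on the last attempt, re-raise instead of sleeping again.
            if "rateLimitExceeded" not in str(exc) or attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt * base_delay + random.uniform(0, base_delay))

# Toy usage: fail twice with a rate-limit error, then succeed.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rateLimitExceeded")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # ok
```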
Question # 24
You have a VM instance running in a VPC with single-stack subnets. You need to ensure that the VM instance has a fixed IP address so that other services hosted in the same VPC can communicate with the VM. You want to follow Google-recommended practices while minimizing cost. What should you do?
A. Reserve a new static external IP address and assign the new IP address to the VM.
B. Promote the existing IP address of the VM to become a static external IP address.
C. Reserve a new static external IPv6 address and assign the new IP address to the VM.
D. Promote the existing IP address of the VM to become a static internal IP address.
Answer: D
Explanation:
Question # 25
You need to migrate invoice documents stored on-premises to Cloud Storage. The documents have the following storage requirements: Documents must be kept for five years. Up to five revisions of the same invoice document must be stored, to allow for corrections. Documents older than 365 days should be moved to lower cost storage tiers. You want to follow Google-recommended practices to minimize your operational and development costs. What should you do?
A. Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function to move or delete your documents based on their metadata.
B. Enable retention policies on the bucket, use lifecycle rules to change the storage classes of the objects, set the number of versions, and delete old files.
C. Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions instance to move or delete your documents based on their metadata.
D. Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.