Was: $142.20 | Today: $79
Was: $160.20 | Today: $89
Was: $178.20 | Today: $99
Why Should You Prepare For Your AWS Generative AI Developer Professional With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic Amazon AIP-C01 Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual AWS Generative AI Developer Professional test. Whether you’re targeting Amazon certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified AIP-C01 Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the AIP-C01 AWS Generative AI Developer Professional exam, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The AIP-C01
With MyCertsHub, you can instantly access downloadable PDFs of AIP-C01 practice exams. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the Amazon exam with confidence.
Smart Learning With Exam Guides
Our structured AIP-C01 exam guide focuses on the AWS Generative AI Developer Professional's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass the AIP-C01 Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you prepare with MyCertsHub's exam dumps and still do not pass the AWS Generative AI Developer Professional exam, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? Download a free demo of the AIP-C01 exam dumps and see for yourself how MyCertsHub has helped thousands of candidates achieve success.
MyCertsHub – Your Trusted Partner For Amazon Exams
Whether you’re preparing for AWS Generative AI Developer Professional or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your AIP-C01 exam has never been easier thanks to our tried-and-true resources.
Amazon AIP-C01 Sample Questions and Answers
Question # 1
Question on LCNC LLM Fine-Tuning
Category: AIP – Operational Efficiency and Optimization for Generative AI Applications.
Scenario: A team needs to fine-tune an LLM for text summarization using a low-code/no-code (LCNC) solution to automate model training and minimize manual intervention.
Question: Which solution will best meet the team’s requirements?
A. Utilize SageMaker Studio for fine-tuning an LLM deployed on Amazon EC2 instances, simplifying the training process with an interactive and intuitive environment.
B. Leverage SageMaker Script Mode to fine-tune an LLM on Amazon EC2 instances, enabling custom training scripts to optimize model performance with flexibility.
D. Configure SageMaker Autopilot to fine-tune an LLM deployed via SageMaker JumpStart, streamlining model customization with automatic setup and minimal user intervention.
Answer: D
Explanation: SageMaker JumpStart provides access to pre-trained LLMs and solution templates. SageMaker Autopilot automates the end-to-end ML workflow, including fine-tuning and hyperparameter optimization. Combining these two services delivers a powerful LCNC solution for customizing LLMs, meeting the requirement for automated setup and minimal manual intervention.
Question # 2
Question on Cold-Start Forecasting
Category: AIP – Foundation Model Integration, Data Management, and Compliance.
Scenario: A manufacturer needs to forecast weekly sales for a brand-new product variant that has no sales history (cold-start problem). The model must learn shared patterns across existing SKUs.
Question: Which approach best satisfies these requirements?
A. Use SageMaker AI to train a Linear Learner regression model using historical sales data as features and forecast values as labels for all SKUs.
B. Use SageMaker AI to train a Random Cut Forest (RCF) model to detect anomalies in historical sales data and project future demand levels for the new variant.
C. Use SageMaker AI to train a K-means clustering model to group similar SKUs and infer demand patterns for the new variant based on the nearest cluster.
D. Use SageMaker AI to train the built-in DeepAR algorithm across all related SKUs and then generate a forecast for the new variant.
Answer: D
Explanation: The DeepAR forecasting algorithm is specifically designed to train a single model jointly over all related time series. By capturing shared consumption patterns and leveraging covariates, DeepAR can generate reliable forecasts for new time series (the cold-start product variant) that lack historical data, outperforming traditional single-series methods.
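To make the DeepAR option concrete, here is a minimal sketch (using the SageMaker Python SDK) of launching a built-in DeepAR training job; the role ARN, S3 bucket, and hyperparameter values are illustrative assumptions.

```python
# Hypothetical sketch: training the built-in DeepAR algorithm with the SageMaker
# Python SDK. The role, bucket, and hyperparameters below are assumptions.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed role

image = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/deepar/output",  # assumed bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    time_freq="W",          # weekly sales data
    context_length=26,
    prediction_length=8,
    epochs=100,
)
# One JSON-Lines record per SKU; DeepAR trains a single model over all series,
# which is why a new SKU with little or no history can still be forecast.
estimator.fit({"train": "s3://my-bucket/deepar/train/"})
```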
Question # 3
Question on Canvas Access to External Model (Select TWO)
Category: AIP – Implementation and Integration.
Scenario: An LLM was fine-tuned outside of SageMaker, and artifacts are in S3. A non-technical team (data specialists) needs access to this model via SageMaker Canvas.
Question: Which combination of steps must be taken for the AI developer to enable SageMaker Canvas access to the model? (Select TWO.)
A. The AI developer is required to set up a SageMaker endpoint for the model.
B. The data specialist team must create a shared workspace within SageMaker Canvas that allows both the AI developer and data specialists to access the model.
C. The AI developer must convert the model into a TensorFlow or PyTorch format for SageMaker Canvas compatibility.
D. The AI developer must register the model in the SageMaker Model Registry to enable the data specialist team’s access via SageMaker Canvas.
E. The data specialist team must be granted the necessary permissions to access the S3 bucket where the model artifacts are stored.
Answer: D, E
Explanation: To make an externally trained model available to Canvas, the model must be registered in the SageMaker Model Registry. Since the model artifacts remain in S3, the end users (the data specialist team) must also be granted the necessary IAM permissions to read the model artifacts from the S3 bucket. Canvas does not require endpoint deployment for access to registered models.
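As an illustration of the registration step, a minimal boto3 sketch follows; the model package group name, container image, and S3 artifact path are assumptions.

```python
# Hypothetical sketch: registering externally trained model artifacts (already in
# S3) in the SageMaker Model Registry via boto3. Names and the image are assumed.
import boto3

sm = boto3.client("sagemaker")

sm.create_model_package_group(ModelPackageGroupName="external-llm")  # one-time setup

sm.create_model_package(
    ModelPackageGroupName="external-llm",
    ModelApprovalStatus="Approved",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/llm-inference:latest",  # assumed image
            "ModelDataUrl": "s3://my-bucket/llm/model.tar.gz",  # artifacts remain in S3
        }],
        "SupportedContentTypes": ["application/json"],
        "SupportedResponseMIMETypes": ["application/json"],
    },
)
```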
Question # 4
Question on Multi-Dimensional Visualization
Category: AIP – Operational Efficiency and Optimization for Generative AI Applications.
Scenario: Visualize recommendation results across four dimensions in SageMaker Canvas: X-axis (interest score), Y-axis (conversion rate), Color (product category), and Size (number of impressions).
Question: Which approach best satisfies the given requirements?
A. Visualize the data using the SageMaker Data Wrangler scatter plot visualization and color data points by the third feature to represent all four dimensions.
B. Use the SageMaker Canvas Box Plot visualization to compare distributions and use a fill pattern for the third dimension.
C. Use the SageMaker Canvas Bar Chart visualization to group products by category and simultaneously apply bar color and height to represent interest score and conversion rate.
D. Apply the SageMaker Canvas scatter plot visualization and map the third dimension (product category) to scatter point color and the fourth dimension (number of impressions) to scatter point size.
Answer: D
Explanation: The SageMaker Canvas Scatter Plot visualization is the appropriate tool for multi-dimensional analysis. It allows mapping continuous features to the X and Y axes, while enabling the developer to map the third feature (product category) to the color of the data points and the fourth feature (number of impressions) to the size of the data points, thereby representing all four dimensions effectively.
Question # 5
Question on Toxic Language Detection
Category: AIP – AI Safety, Security, and Governance.
Scenario: A social media platform needs to enhance safety by detecting toxic or harmful language in real time (hate speech, harassment) within its SageMaker AI inference pipeline. The solution must be managed, handle high throughput, and provide confidence scores.
Question: Which of the solutions provides a managed solution for detecting toxicity in text to support this ML pipeline?
A. Use Amazon Bedrock to fine-tune a foundation model for general language understanding.
B. Utilize Amazon Comprehend sentiment analysis to detect negative comments and block content automatically.
C. Use Amazon Translate to convert text into another language before moderation to reduce offensive content.
D. Utilize Amazon Comprehend toxicity detection to identify abusive or harmful language in text.
Answer: D
Explanation: Amazon Comprehend toxicity detection is a specialized, fully managed feature that uses pre-trained ML models to automatically classify content as toxic, providing confidence scores for severity. This service is designed for real-time moderation, reduces operational overhead by eliminating custom model building, and integrates easily with existing pipelines. Sentiment analysis is insufficient as negative sentiment is not the same as toxic language.
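A minimal boto3 sketch of the toxicity-detection call follows; the sample text and the moderation threshold are assumptions.

```python
# Hypothetical sketch: Amazon Comprehend toxicity detection via boto3.
import boto3

comprehend = boto3.client("comprehend")

resp = comprehend.detect_toxic_content(
    LanguageCode="en",
    TextSegments=[{"Text": "example user comment to screen"}],  # assumed input
)

for result in resp["ResultList"]:
    print("overall toxicity:", result["Toxicity"])  # confidence score in [0, 1]
    for label in result["Labels"]:                  # e.g., HATE_SPEECH, HARASSMENT_OR_ABUSE
        if label["Score"] > 0.7:                    # assumed moderation threshold
            print("flagged:", label["Name"], label["Score"])
```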
Question # 6
Question on Multilingual Content Processing
Category: AIP – Implementation and Integration.
Scenario: A multinational company needs an efficient solution to process audio/video content, translate it from Spanish (and other languages) into English, and summarize it quickly using an LLM, minimizing deployment time and maximizing scalability.
Question: Which option will best fulfill these requirements in the shortest time possible?
A. Train a custom model in Amazon SageMaker AI to process the data into English, then deploy an LLM in SageMaker AI for summarizing the content.
B. Leverage Amazon Translate to translate the text into English, apply a pre-trained model in Amazon SageMaker AI for analysis, and summarize the content using the Anthropic Claude model in Amazon Bedrock.
C. Use AWS Glue to clean and prepare the data, then use Amazon Translate to translate the data into English, and summarize the content using Amazon Lex to create a conversational summary.
D. Utilize Amazon Transcribe for audio- and video-to-text conversion, Amazon Translate for translating the content into English, and Amazon Bedrock with the Jamba model for summarizing the text.
Answer: D
Explanation: This approach uses a sequence of specialized, fully managed AI services to minimize development time: Amazon Transcribe converts audio/video to text, Amazon Translate translates the text into English, and Amazon Bedrock provides powerful foundation models (like Jamba) optimized for efficient summarization.
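A hedged end-to-end sketch of this pipeline follows; the bucket, job name, and Jamba model ID are assumptions (Bedrock model IDs vary by region and version).

```python
# Hypothetical sketch of the Transcribe -> Translate -> Bedrock pipeline.
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
bedrock = boto3.client("bedrock-runtime")

# 1) Convert audio/video to text (runs asynchronously; poll or use EventBridge).
transcribe.start_transcription_job(
    TranscriptionJobName="spanish-webinar",                # assumed job name
    Media={"MediaFileUri": "s3://my-bucket/webinar.mp4"},  # assumed input file
    IdentifyLanguage=True,
    OutputBucketName="my-bucket",
)

# 2) Translate the resulting transcript into English.
transcript_text = "..."  # read from the Transcribe output JSON in S3
english = translate.translate_text(
    Text=transcript_text, SourceLanguageCode="auto", TargetLanguageCode="en"
)["TranslatedText"]

# 3) Summarize with a Bedrock foundation model via the Converse API.
resp = bedrock.converse(
    modelId="ai21.jamba-1-5-mini-v1:0",  # assumed Jamba model ID
    messages=[{"role": "user", "content": [{"text": f"Summarize:\n{english}"}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```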
Question # 7
Question on Predictive Maintenance (LCNC)
Category: AIP – AI Safety, Security, and Governance.
Scenario: A company analyzes maintenance reports (Comprehend extraction) and sensor readings (S3) to predict equipment failure. An analytics team must prepare the combined dataset and train a custom predictive model using an interface that simplifies data preparation and model training while maintaining integration with other SageMaker components.
Question: Which should be used to prepare the data and train a custom model for predicting equipment maintenance needs?
A. Utilize Amazon Bedrock to fine-tune a foundation model to predict equipment failures from sensor data and maintenance notes.
B. Use SageMaker Ground Truth to label maintenance events and automatically train a predictive model for maintenance scheduling.
C. Utilize Comprehend to directly build and deploy a predictive model for maintenance events without relying on SageMaker services.
D. Use SageMaker Canvas to prepare the combined dataset and train a custom model through a no-code interface that integrates with SageMaker AI.
Answer: D
Explanation: SageMaker Canvas is a visual, no-code machine learning service
that allows users (like the analytics team) to prepare data, explore features, build
custom predictive models (like classification or regression required for maintenance
predictions), and generate predictions without writing code. It integrates seamlessly
with the rest of the SageMaker AI ecosystem.
Question # 8
Question on Edge ML Deployment
Category: AIP – Foundation Model Integration, Data Management, and Compliance.
Scenario: A manufacturing company in remote locations (unreliable internet) needs an ML solution to detect package dimensions in real-time video footage. Model training is done in SageMaker AI. Goal: real-time decision-making without relying on constant cloud connectivity.
Question: Which of the following solutions would best meet the company’s needs?
A. Deploy a Convolutional Neural Network (CNN) in SageMaker AI using Amazon Kinesis Video Streams to analyze the video footage in real time. Use Amazon EventBridge to trigger downstream actions for routing packages based on the detected dimensions.
B. Train the model using SageMaker AI and deploy it to Amazon Elastic Kubernetes Service (Amazon EKS) clusters running in each factory. Use Amazon SQS to queue routing decisions and send them to the cloud for processing.
C. Use Rekognition Custom Labels to train the model and deploy it using Amazon EC2 instances at each factory. Use Amazon EventBridge to monitor inference results and trigger routing actions.
D. Use SageMaker’s built-in Object Detection algorithm to train the model. Deploy the trained model to an AWS IoT Greengrass core with AWS Lambda handling the decision logic at the factory.
Answer: D
Explanation: AWS IoT Greengrass Core is designed for edge deployments, enabling local execution of ML models (trained in SageMaker) and logic (AWS Lambda functions) directly on devices in environments with unstable connectivity. This setup ensures the model can make real-time decisions locally (offline inference) regarding package routing.
Question # 9
Question on AI Agent/Workflow Orchestration
Category: AIP – Implementation and Integration.
Scenario: A customer service assistant needs to handle complex order inquiries, maintain conversation context across sessions, and securely update order records (execute actions).
Question: Which solution best satisfies the company’s requirements?
A. Use Amazon Lex V2 to build a conversational chatbot for customer interactions and store conversation transcripts in Amazon S3 for historical analysis.
B. Use Amazon Titan Text G1 for conversation handling and maintain customer session states in a Python dictionary within the application memory for short-term interactions.
C. Use Amazon Kendra to search for answers in product manuals and FAQs, combined with AWS Lambda to manage refund requests and data updates.
D. Use Amazon Bedrock AgentCore to develop an AI agent capable of reasoning, planning, and executing workflows for order management. Integrate the agent with Amazon DynamoDB to store and retrieve customer session data, order history, and interaction context for each user conversation.
Answer: D
Explanation: Amazon Bedrock AgentCore is the specialized framework for building intelligent agents that can perform multi-step reasoning, plan, and execute actions against backend systems (like order management). Amazon DynamoDB is the high-performance, scalable NoSQL database recommended for storing and retrieving the agent's memory and persistent, structured session data and history, which is essential for multi-turn conversations.
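To illustrate the session-persistence half of this design, here is a minimal DynamoDB sketch; the table name and attribute layout are assumptions, not a documented AgentCore schema.

```python
# Hypothetical sketch: persisting conversation context in DynamoDB so it
# survives across sessions. Table name and attributes are assumptions.
import time
import boto3

table = boto3.resource("dynamodb").Table("agent-sessions")  # assumed table (PK: session_id)

def save_turn(session_id: str, role: str, text: str) -> None:
    """Append one conversation turn to the session's history."""
    table.update_item(
        Key={"session_id": session_id},
        UpdateExpression="SET turns = list_append(if_not_exists(turns, :empty), :t)",
        ExpressionAttributeValues={
            ":empty": [],
            ":t": [{"role": role, "text": text, "ts": int(time.time())}],
        },
    )

def load_context(session_id: str) -> list:
    """Fetch prior turns to ground the agent's next response."""
    return table.get_item(Key={"session_id": session_id}).get("Item", {}).get("turns", [])
```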
Question # 10
Question on BERT Fine-Tuning/Transfer Learning
Category: AIP – Foundation Model Integration, Data Management, and Compliance.
Scenario: An email filtering system needs to fine-tune a pre-trained BERT model for spam detection using a labeled email dataset (binary classification). Goal: correctly load the pretrained weights and use them as the initialization point for fine-tuning without full retraining.
Question: Which approach will correctly initialize the BERT model to achieve this requirement?
A. Load the pretrained model weights for every layer and place an external classifier on top of the primary model output vector. Train the newly added classifier with the labeled dataset.
B. Use the pretrained model weights for all transformer layers and attach a second classifier layer in parallel with the existing output layer. Train only this additional classifier using the labeled dataset.
C. Initialize the model with pretrained weights, convert the output layer into a multi-task classifier that predicts multiple text classes beyond spam detection, and train this classifier using the labeled dataset.
D. Apply pretrained model parameters across all layers, then discard the existing final layer. Introduce a custom classifier and train it using the labeled data for spam detection.
Answer: D
Explanation: In transfer learning with models like BERT, the goal is to preserve the
general linguistic knowledge captured in the pretrained encoder layers. The
correct methodology is to discard the existing final layer (which was trained for the
original pretraining task) and replace it with a new custom classifier layer
specifically designed for the new classification task (spam detection). Only this new
layer, along with possible gradual adjustments to the underlying encoder, needs to
be trained on the labeled dataset.
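A minimal sketch of this pattern using the Hugging Face transformers library follows; the checkpoint name and the freezing step are illustrative assumptions.

```python
# Hypothetical sketch: load pretrained BERT weights and replace the final head
# with a fresh binary (spam / not-spam) classification layer.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # pretrained encoder weights are loaded for all layers
    num_labels=2,         # a new, randomly initialized classifier head replaces the original output layer
)

# Optionally freeze the encoder so only the new classifier head trains at first.
for param in model.bert.parameters():
    param.requires_grad = False

inputs = tokenizer("Win a free prize now!!!", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2): class scores before fine-tuning
```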
Question # 11
Question on Churn Prediction (Automation/Explainability)
Category: AIP – Implementation and Integration.
Scenario: A retail team needs an automated way (minimal manual effort) to build a model to predict customer churn and identify the most relevant features contributing to the prediction (explainability).
Question: Which of the following solutions will best fulfill these requirements while minimizing manual effort?
A. Use SageMaker Data Wrangler to automatically train a churn prediction model and rely on its quick model visualization feature to generate accurate importance scores for deployment decisions.
B. Use SageMaker Ground Truth to label customer churn data, then build a custom TensorFlow model to predict churn and analyze feature weights post-training.
C. Use the k-means algorithm in SageMaker AI to cluster customers based on purchasing patterns. After clustering, use the resulting clusters to predict churn based on customer behavior.
D. Leverage SageMaker Autopilot to automatically train a classification model for forecasting customer churn. Then, utilize insights from SageMaker Clarify to determine which features most significantly influence the predictions.
Answer: D
Explanation: SageMaker Autopilot automates the entire ML pipeline
(preprocessing, model selection, tuning) for classification tasks like churn
prediction, minimizing manual effort. SageMaker Clarify is then used to compute
bias metrics and feature attribution scores (SHAP values), providing the required
explainability and insights into which features drive the predictions.
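For the explainability half, a hedged sketch of a Clarify SHAP job via the SageMaker Python SDK follows; the role, bucket, model name, and column names are assumptions.

```python
# Hypothetical sketch: a SageMaker Clarify explainability (SHAP) job against a
# trained churn model. All names, paths, and columns below are assumptions.
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed role

processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=session
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/churn/train.csv",  # assumed dataset
    s3_output_path="s3://my-bucket/churn/clarify-output",
    label="churn",
    headers=["tenure", "monthly_spend", "support_calls", "churn"],  # assumed columns
    dataset_type="text/csv",
)
model_config = clarify.ModelConfig(
    model_name="churn-autopilot-best",  # assumed model from the Autopilot job
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)
shap_config = clarify.SHAPConfig(
    baseline=[[24, 50.0, 1]],  # assumed baseline record (feature columns only)
    num_samples=100,
    agg_method="mean_abs",     # aggregate per-feature attributions across records
)

processor.run_explainability(
    data_config=data_config, model_config=model_config, explainability_config=shap_config
)
```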
Question # 12
Question on Endpoint Scaling Policy
Category: AIP – Operational Efficiency and Optimization for Generative AI Applications.
Scenario: A recommendation endpoint experiences significant delays during predictable high-traffic sales events, resulting in poor user experience. The goal is to adjust the target tracking scaling policy to proactively ensure sufficient capacity and prevent latency issues during these peak periods.
Question: Which solution will best meet the requirements?
A. Increase the instance size of the SageMaker endpoint to a larger instance type to accommodate higher traffic during sales events.
B. Implement a step scaling policy for the SageMaker inference endpoint that scales based on resource utilization metrics such as CPU and memory usage.
C. Use AWS Lambda to periodically restart the SageMaker endpoint during peak traffic to refresh instance performance.
D. Configure a scheduled scaling policy to increase the capacity of the SageMaker inference endpoint before the sales events begin.
Answer: D
Explanation: Scheduled scaling allows you to automatically adjust capacity based
on predictable load changes. By scheduling an action to increase endpoint capacity
before the sales event begins, the system proactively handles the traffic spike,
preventing latency and ensuring a smooth user experience, which is more effective
than reactive scaling policies.
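A minimal boto3 sketch of such a scheduled action follows; the endpoint/variant names, capacities, and cron expression are assumptions.

```python
# Hypothetical sketch: scheduled scaling for a SageMaker endpoint variant via
# Application Auto Scaling.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/recommender-prod/variant/AllTraffic"  # assumed endpoint/variant

# The variant must be registered as a scalable target first.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Raise minimum capacity shortly before the predictable sales event begins.
autoscaling.put_scheduled_action(
    ServiceNamespace="sagemaker",
    ScheduledActionName="pre-sale-scale-up",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    Schedule="cron(0 8 25 11 ? 2025)",  # assumed event start time (UTC)
    ScalableTargetAction={"MinCapacity": 10, "MaxCapacity": 20},
)
```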
Question # 13
Question on API Token Rotation
Category: AIP – AI Safety, Security, and Governance.
Scenario: A fraud detection system relies on external APIs, and the security policy requires rotating API tokens every 3 months. Goal: automate token rotation, ensure secure token storage, and maintain continuous operation without downtime.
Question: Which solution will best address these requirements?
A. Use AWS Key Management Service (AWS KMS) with customer-managed keys to store the tokens and rely on Amazon EventBridge to trigger rotation events.
B. Use AWS Systems Manager Parameter Store to store the tokens and rely on an AWS Lambda function for automatic rotation.
C. Use AWS Secrets Manager to store the tokens, monitor API usage with AWS CloudTrail, and rely on Amazon EventBridge to trigger token rotation.
D. Use AWS Secrets Manager to store the tokens and rely on an AWS Lambda function to perform the rotation process.
Answer: D
Explanation: AWS Secrets Manager is purpose-built for securely storing, managing, and automatically rotating sensitive credentials like API tokens. For non-native services, Secrets Manager supports custom rotation using an AWS Lambda function, which executes the logic to generate and apply new credentials at scheduled intervals, eliminating manual effort and ensuring continuous operation.
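A minimal boto3 sketch of enabling this rotation follows; the secret name and Lambda ARN are assumptions, and the Lambda must implement the standard createSecret/setSecret/testSecret/finishSecret rotation steps.

```python
# Hypothetical sketch: Secrets Manager rotation for an external API token on a
# roughly 3-month schedule.
import boto3

secrets = boto3.client("secretsmanager")

secrets.rotate_secret(
    SecretId="prod/fraud-api/token",  # assumed secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-fraud-token",  # assumed
    RotationRules={"AutomaticallyAfterDays": 90},  # rotate every ~3 months
)
```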
Question # 14
Question on Safe Deployment Strategy
Category: AIP – AI Safety, Security, and Governance.
Scenario: A new model version for credit default risk prediction needs to be deployed to a SageMaker real-time inference endpoint. Previous deployments experienced latency spikes and failures. Goal: minimize downtime and mitigate performance degradation risk by using a deployment strategy that offers safe rollout and automatic rollback capabilities.
Question: Which deployment configuration best meets these requirements?
A. Use SageMaker batch transform to validate the new model offline. Promote directly to full production using a single update event.
B. Use a shadow testing deployment to send duplicate inference requests to the new model. Log results for later comparison, without affecting live predictions.
C. Deploy both models using a multi-model endpoint configuration. Dynamically select the model version at runtime based on an API request parameter.
D. Configure a blue/green deployment with canary traffic shifting and a traffic size of 10%. Gradually route requests to the new model while maintaining the existing version as a fallback.
Answer: D
Explanation: Blue/green deployment with canary traffic shifting is the standard MLOps strategy for safe model transitions. By shifting a small percentage of traffic (e.g., 10%) to the new version (canary) while keeping the old version (blue) fully operational, performance can be monitored under live load. This approach minimizes user impact and allows for immediate, automatic rollback if performance metrics (like latency) degrade.
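A hedged boto3 sketch of such an update follows; the endpoint, config, and alarm names are assumptions.

```python
# Hypothetical sketch: blue/green endpoint update with 10% canary traffic
# shifting and CloudWatch-alarm-based automatic rollback.
import boto3

sm = boto3.client("sagemaker")

sm.update_endpoint(
    EndpointName="credit-risk-prod",             # assumed endpoint
    EndpointConfigName="credit-risk-config-v2",  # config pointing at the new model version
    DeploymentConfig={
        "BlueGreenUpdatePolicy": {
            "TrafficRoutingConfiguration": {
                "Type": "CANARY",
                "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
                "WaitIntervalInSeconds": 600,  # bake time before shifting the rest
            },
            "TerminationWaitInSeconds": 300,
        },
        "AutoRollbackConfiguration": {
            "Alarms": [{"AlarmName": "credit-risk-p99-latency"}]  # assumed alarm
        },
    },
)
```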
Question # 15
Question on Secure S3 Access (IAM Role)
Category: AIP – AI Safety, Security, and Governance.
Scenario: A SageMaker notebook instance needs appropriate permissions to read training data from one S3 bucket and write model artifacts, logs, and evaluation results to a different S3 bucket. Goal: grant secure and proper access control.
Question: Which approach should be used to securely enable this access?
A. Define a bucket policy on the S3 bucket that allows the SageMaker AI notebook instance by its ARN to perform s3:GetObject, s3:PutObject, and s3:ListBucket actions.
B. Use AWS IAM identity federation to provide temporary access to the S3 bucket by configuring the SageMaker notebook instance to assume a federated role for accessing the data.
C. Create an S3 access point for the SageMaker notebook instance, granting it access to the necessary data, and configure the access point to allow only the required actions (s3:GetObject, s3:PutObject, and s3:ListBucket).
D. Allow the SageMaker notebook instance to perform s3:GetObject, s3:PutObject, and s3:ListBucket operations by attaching a policy to its associated IAM role that grants access to the designated S3 buckets.
Answer: D
Explanation: The recommended and most secure method for granting an AWS service (like SageMaker) access to other AWS resources (like S3) is to attach an IAM policy, scoped to the specific buckets, to the IAM role associated with the SageMaker notebook instance. This method enforces the principle of least privilege by granting only the necessary actions (GetObject, PutObject, ListBucket) on the specific buckets required.
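An illustrative sketch of such a least-privilege inline policy follows (attached with boto3); the role and bucket names are assumptions.

```python
# Hypothetical sketch: attach a least-privilege inline policy to the notebook
# instance's execution role. Role and bucket names are assumptions.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # list/read training data from the input bucket
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::training-data-bucket",
                         "arn:aws:s3:::training-data-bucket/*"],
        },
        {   # write artifacts, logs, and evaluation results to the output bucket
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::model-artifacts-bucket",
                         "arn:aws:s3:::model-artifacts-bucket/*"],
        },
    ],
}

iam.put_role_policy(
    RoleName="SageMakerNotebookExecutionRole",  # assumed role
    PolicyName="notebook-s3-access",
    PolicyDocument=json.dumps(policy),
)
```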
Question # 16
Question on Secure On-Premises to S3 Transfer
Category: AIP – AI Safety, Security, and Governance.
Scenario: A healthcare organization must upload non-sensitive data from an on-premises Microsoft SQL Server database to Amazon S3 for model retraining. Sensitive patient records must never leave the data center. All transfers must be done securely over an IPsec connection.
Question: Which solution will satisfy the given requirements?
A. Configure AWS Database Migration Service (AWS DMS) to replicate non-sensitive data from the Microsoft SQL Server database into S3, and transfer it over an IPsec connection.
B. Utilize Amazon Data Firehose to stream non-sensitive transaction data into S3. Ensure that the data transfer happens over an IPsec-protected connection, and leverage AWS Lambda to filter out sensitive data before uploading.
C. Set up an AWS Glue job to connect to the Microsoft SQL Server database, extract only the non-sensitive data, and transfer it to S3 over an AWS Site-to-Site VPN connection for model retraining.
D. Use AWS Transfer Family to securely transfer the entire database backup file to S3, and then use AWS Glue to filter sensitive data from the S3 bucket.
Answer: C
Explanation: AWS Glue is a fully managed ETL service that supports connecting to on-premises JDBC data stores (like MS SQL Server). Glue allows for precise filtering and extraction of only non-sensitive data before the transfer begins, ensuring sensitive data never leaves the premises. The transfer itself is secured via AWS Site-to-Site VPN, which establishes the required IPsec tunnels between the data center and AWS.
Question # 17
Question on A/B Testing (Multi-Variant Endpoints)
Category: AIP – Testing, Validation, and Troubleshooting.
Scenario: Multiple recommendation models must be evaluated using A/B testing in production. The system must route live inference traffic, monitor real-time engagement metrics, and seamlessly direct 100% of traffic to the best-performing model with minimal operational overhead.
Question: Which solution will meet these requirements in the most operationally efficient way?
A. Deploy the models on Amazon EC2 instances behind an Application Load Balancer (ALB) to perform A/B testing, then manually adjust the ALB weights when a model shows higher engagement.
B. Create a separate Amazon SageMaker AI endpoint for each model and configure Amazon API Gateway to distribute traffic for A/B testing based on weighted routing rules.
C. Use AWS CodeDeploy with blue/green deployment strategies and an Application Load Balancer (ALB) to alternate traffic between model versions during A/B testing. Gradually route 100% of traffic to the model with the highest engagement metrics.
D. Use Amazon SageMaker AI multi-variant endpoints to deploy all model versions behind a single endpoint. Configure traffic weights for A/B testing and update routing to send all inference requests to the best-performing model once identified.
Answer: D
Explanation: SageMaker AI multi-variant endpoints allow multiple models
(variants) to be deployed behind a single, fully managed endpoint. You can allocate
traffic weights to these variants for A/B testing and dynamically update these
weights to seamlessly shift 100% of the traffic to the winner, which minimizes
operational overhead compared to custom API Gateway or EC2/ALB setups.
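A minimal boto3 sketch of the final traffic shift follows; the endpoint and variant names are assumptions.

```python
# Hypothetical sketch: shifting 100% of traffic to the winning variant of a
# multi-variant SageMaker endpoint, without redeploying the endpoint.
import boto3

sm = boto3.client("sagemaker")

# During the A/B test the endpoint config splits traffic across variants
# (e.g., 80/20). Once variant B wins on engagement metrics, shift in place:
sm.update_endpoint_weights_and_capacities(
    EndpointName="recommender-prod",  # assumed endpoint
    DesiredWeightsAndCapacities=[
        {"VariantName": "model-a", "DesiredWeight": 0.0},
        {"VariantName": "model-b", "DesiredWeight": 1.0},
    ],
)
```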
Question # 18
Question on Bias Detection/Explainability (Recommendations)
Category: AIP – AI Safety, Security, and Governance.
Scenario: A recommendation model shows specific product categories disproportionately to customers from certain regions. The organization must verify whether the imbalance is due to dataset bias or model prediction bias, and must generate automated reports explaining feature influence for compliance and transparency.
Question: Which solution will meet these requirements?
A. Use Amazon Personalize to automatically adjust recommendation weights in real time to reduce category bias without performing explicit fairness evaluation.
B. Use SageMaker Data Wrangler to manually rebalance the dataset by filtering and transforming data before retraining the recommendation model.
C. Use SageMaker Model Monitor to track endpoint metrics such as latency, drift, and accuracy without analyzing model bias or feature influence.
D. Use SageMaker Clarify to detect and measure data bias, evaluate model fairness, and generate feature attribution explainability reports for compliance and transparency.
Answer: D
Explanation: SageMaker Clarify is purpose-built to address this exact requirement. It can analyze pre-training datasets to identify bias sources and assess post-training model behavior to detect disparities in prediction outcomes using fairness metrics. Additionally, Clarify generates explainability reports (using SHAP values) that measure how much each feature contributes to a prediction, providing the required transparency and compliance documentation.
Question # 19
Question on Secure API Traffic (VPC Endpoint) (Select TWO)
Category: AIP – AI Safety, Security, and Governance.
Scenario: An ML pipeline uses a SageMaker Service API VPC interface endpoint in a public subnet. The team must ensure that only specific Amazon EC2 instances and IAM users can invoke SageMaker API operations through that endpoint.
Question: Which combination of actions should the team take to secure the traffic to the SageMaker Service API? (Select TWO.)
A. Enable private DNS for the VPC endpoint to ensure that traffic remains within the VPC.
B. Enable VPC Flow Logs to monitor traffic patterns. Use AWS Lambda to automatically block unauthorized access to the SageMaker API endpoint.
C. Deploy an additional VPC endpoint for SageMaker AI Runtime to isolate inference traffic.
D. Attach a custom VPC endpoint policy that explicitly grants access to selected IAM identities.
E. Configure the security group linked to the endpoint network interface to allow traffic only from approved instances.
Answer: D, E
Explanation: To restrict access based on user identity, a VPC endpoint policy is required, as it controls which AWS principals (IAM users/roles) can access the service through the endpoint. To restrict access based on the source EC2 instance, the security group attached to the endpoint's network interface must be configured to allow traffic only from approved instance security groups or IP ranges.
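A hedged boto3 sketch of both controls follows; the endpoint ID, role ARN, and security group IDs are assumptions.

```python
# Hypothetical sketch: restrict a SageMaker API VPC endpoint with an endpoint
# policy (specific IAM principals) and a security-group rule (specific instances).
import json
import boto3

ec2 = boto3.client("ec2")

endpoint_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::123456789012:role/MLPipelineRole"]},  # allowed identities
        "Action": "sagemaker:*",
        "Resource": "*",
    }]
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0abc123",  # assumed endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)

# Allow HTTPS to the endpoint ENIs only from the approved instances' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-endpoint123",  # SG attached to the endpoint network interface
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-approved-instances"}],  # assumed source SG
    }],
)
```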
Question # 20
Question on Multimodal RAG (BDA)
Category: AIP – Implementation and Integration.
Scenario: A publishing company needs a generative AI chatbot to answer queries based on a repository of documents, images, audio, and video content. Goal: process all media types, index them for semantic search, and use the retrieved context to ground a foundation model (LLM) hosted on SageMaker AI.
Question: Which approach best meets these requirements?
A. Employ Bedrock Data Automation (BDA) to process all media types and directly supply the structured output into a foundation model via SageMaker AI, bypassing the use of a knowledge base.
B. Use Bedrock Data Automation (BDA) to process the media files, store the raw content in Amazon S3, and deploy a custom AWS Lambda function to create your own vector database outside of Bedrock Knowledge Bases before passing it to SageMaker AI.
C. Utilize Bedrock Data Automation (BDA) to process media files, analyze the structured content using Amazon Comprehend for entity and sentiment extraction, and then forward the results to a foundation model hosted on Amazon EC2 for generating answers.
D. Leverage Bedrock Data Automation (BDA) to process documents, images, audio, and video, index the results in Bedrock Knowledge Bases for semantic search, and then input the retrieved context into a foundation model via SageMaker AI for response generation.
Answer: D
Explanation: Bedrock Data Automation (BDA) automates the transformation of
unstructured multimedia content (documents, images, audio, video) into structured
insights. Indexing these results in Bedrock Knowledge Bases is the fully managed
way to enable efficient semantic search and retrieval (RAG). The retrieved context is
then passed to the LLM (hosted on SageMaker AI) to generate grounded, accurate
responses.
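A minimal sketch of the retrieval-then-generation step follows; the knowledge base ID, SageMaker endpoint name, and request payload format are assumptions.

```python
# Hypothetical sketch: semantic retrieval from a Bedrock knowledge base, then
# grounding an LLM hosted on a SageMaker endpoint with the retrieved chunks.
import json
import boto3

kb = boto3.client("bedrock-agent-runtime")
smr = boto3.client("sagemaker-runtime")

query = "What does chapter 3 of the onboarding video cover?"

# 1) Retrieve semantically relevant chunks (indexed from BDA output).
results = kb.retrieve(
    knowledgeBaseId="KB123EXAMPLE",  # assumed knowledge base ID
    retrievalQuery={"text": query},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)
context = "\n".join(r["content"]["text"] for r in results["retrievalResults"])

# 2) Pass the retrieved context to the SageMaker-hosted LLM.
response = smr.invoke_endpoint(
    EndpointName="publishing-llm",  # assumed endpoint
    ContentType="application/json",
    Body=json.dumps({"inputs": f"Context:\n{context}\n\nQuestion: {query}"}),  # assumed payload format
)
print(response["Body"].read().decode())
```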
Question # 21
Question on Feature Attribution Drift Monitoring
Category: AIP – Testing, Validation, and Troubleshooting.
Scenario: A claims automation system uses SageMaker AI, predicting claim approval based on vehicle damage severity and other features (age, mileage). The model must be continuously monitored for feature attribution drift in production (i.e., if the model starts prioritizing less relevant features like vehicle age over damage severity).
Question: Which solution should be implemented?
A. Deploy SageMaker Clarify to perform bias and explainability analysis on the training dataset. Use Amazon CloudWatch to alert if Clarify reports significant changes in feature attribution or fairness metrics.
B. Implement a baseline for model quality using the ModelQualityMonitor class. The baseline will evaluate key performance metrics such as accuracy and recall, with periodic checks to identify any significant shifts in model performance. Set up an Amazon CloudWatch alarm if the model's quality metrics diverge from the baseline.
C. Enable SageMaker DataCapture to log inference inputs and outputs. Build a custom pipeline to analyze feature distributions and model responses over time. Use Amazon CloudWatch to alert when significant shifts in input patterns or predictions are detected.
D. Use the ModelExplainabilityMonitor class with a SHAP-based baseline to detect feature attribution drift in production. Regularly compare how the model assigns importance to input features against the baseline, and configure Amazon CloudWatch to alert stakeholders when attribution values drift beyond acceptable thresholds.
Answer: D
Explanation: SageMaker Model Monitor provides the ModelExplainabilityMonitor (powered by SageMaker Clarify) specifically to detect feature attribution drift. This monitor uses a SHAP-based baseline to quantify how much each input feature contributes to the prediction (attribution). By continuously comparing live production attributions against the baseline and integrating with CloudWatch alerts, the system can proactively detect when the model's decision logic shifts.
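A hedged sketch using the SageMaker Python SDK follows; the role, bucket, endpoint, and column names are assumptions.

```python
# Hypothetical sketch: a SHAP explainability baseline plus a monitoring schedule
# for feature attribution drift. All names, paths, and columns are assumptions.
from sagemaker import Session, clarify
from sagemaker.model_monitor import CronExpressionGenerator, ModelExplainabilityMonitor

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed role

monitor = ModelExplainabilityMonitor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=session
)

monitor.suggest_baseline(
    data_config=clarify.DataConfig(
        s3_data_input_path="s3://my-bucket/claims/train.csv",  # assumed training data
        s3_output_path="s3://my-bucket/claims/explainability-baseline",
        label="approved",
        headers=["damage_severity", "vehicle_age", "mileage", "approved"],  # assumed columns
        dataset_type="text/csv",
    ),
    explainability_config=clarify.SHAPConfig(
        baseline=[[2, 5, 60000]], num_samples=100, agg_method="mean_abs"
    ),
    model_config=clarify.ModelConfig(
        model_name="claims-model", instance_type="ml.m5.xlarge",
        instance_count=1, accept_type="text/csv",
    ),
)

# Compare live attributions against the baseline every hour; drift surfaces as
# CloudWatch metrics that can drive alarms for stakeholders.
monitor.create_monitoring_schedule(
    endpoint_input="claims-endpoint",  # assumed endpoint
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```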
Question # 22
Question on Real-Time Fraud Detection (Minimal Overhead)
Category: AIP – Implementation and Integration.
Scenario: A digital payments provider is experiencing fraudulent transactions, especially from new accounts performing high-value payments. The existing batch model in SageMaker cannot flag activity quickly enough. Goal: real-time fraud detection that can automatically assess and reject fraudulent transactions at the moment of occurrence, requiring minimal operational effort.
Question: Which option satisfies these requirements?
A. Use Amazon Lookout for Vision to detect anomalies in uploaded transaction receipt images and classify them as fraudulent or legitimate.
B. Use Comprehend to extract entities from transaction metadata and forward them to SageMaker AI to retrain a fraud detection model.
C. Use SageMaker AI to train a new supervised model for fraud detection and deploy it on Amazon EC2 using custom inference code.
D. Use the Amazon Fraud Detector prediction API to automatically approve or deny transactions that are identified as fraudulent.
Answer: D
Explanation: Amazon Fraud Detector is a fully managed service designed for real-time fraud detection tailored for transactional workflows. It provides a prediction API that allows immediate assessment and rejection of suspicious activity as it occurs, minimizing operational overhead compared to building and managing a custom SageMaker model or EC2 deployment.
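A minimal boto3 sketch of a real-time prediction call follows; the detector, event type, and variable names are assumptions that must match how the detector was configured.

```python
# Hypothetical sketch: real-time risk decision via the Amazon Fraud Detector
# prediction API. All detector/event/variable names below are assumptions.
from datetime import datetime, timezone
import boto3

fd = boto3.client("frauddetector")

resp = fd.get_event_prediction(
    detectorId="payments_fraud_detector",  # assumed detector
    eventId="txn-0001",
    eventTypeName="online_payment",        # assumed event type
    eventTimestamp=datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    entities=[{"entityType": "customer", "entityId": "cust-42"}],
    eventVariables={"amount": "950.00", "account_age_days": "2"},  # assumed variables
)

# Rule outcomes (e.g., approve / review / block) drive the transaction decision.
for rule in resp["ruleResults"]:
    print(rule["outcomes"])
```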
Question # 23
Question on Semantic Embeddings/RAG
Category: AIP – Foundation Model Integration, Data Management, and Compliance.
Scenario: A research team needs a mechanism to represent user queries and internal documents as semantic embeddings to capture contextual relationships. The solution must be fully managed, scalable, and integrate easily with Bedrock AI agents for downstream RAG workflows.
Question: Which approach best satisfies these requirements?
A. Implement Amazon Kendra to index research documents, support natural-language queries, and let AI agents retrieve relevant results using the managed semantic-search and ranking capabilities of the service.
B. Configure SageMaker Data Wrangler to preprocess textual data, extract engineered features through clustering, and allow AI agents to analyze document similarity within structured datasets and grouped content.
C. Deploy SageMaker JumpStart to fine-tune and host a pre-trained language model for summarization and text generation, integrating it with AI agents for enhanced content discovery workflows.
D. Leverage Amazon Titan Text Embeddings in Bedrock to convert text into semantic vectors and store them in Amazon OpenSearch Service for context-aware retrieval and reasoning by AI agents.
Answer: D
Explanation: Amazon Titan Text Embeddings specializes in converting text into
high-dimensional numerical vectors that capture semantic meaning. Amazon
OpenSearch Service provides vector search functionality, making it an ideal
complement to store, index, and query these vectors efficiently to find conceptually
similar information. This combination provides a fully managed, scalable
foundation for intelligent semantic retrieval required for RAG and reasoning by AI
agents.
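A minimal sketch of the embedding step follows; the Titan model ID/version is an assumption, and the OpenSearch indexing step is only outlined in a comment.

```python
# Hypothetical sketch: generate a semantic embedding with Amazon Titan Text
# Embeddings on Bedrock. Swap in the model version available in your region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",  # assumed model ID/version
    body=json.dumps({"inputText": "How do transformer models encode word order?"}),
)
embedding = json.loads(resp["body"].read())["embedding"]  # list of floats
print(len(embedding))  # vector dimension, e.g. 1024

# The vector would then be indexed into an OpenSearch k-NN index (a field of
# type "knn_vector") so agents can run similarity queries over it.
```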
Question # 24
Question on Text Preprocessing for Word2Vec (Select THREE)
Category: AIP – Foundation Model Integration, Data Management, and Compliance.
Scenario: Training a Word2Vec-style model on SageMaker AI requires preparing a dataset of over one million sentences with inconsistent casing, mixed encodings, and minor typographical errors (e.g., “An Apple a DAY Keeps the doctor Away”). Goal: ensure consistent sanitization, reproducibility, and embedding quality.
Question: Which of the following operations should be implemented in the preprocessing phase to correctly sanitize and prepare the dataset for embedding and downstream predictions? (Select THREE.)
A. Apply part-of-speech tagging to identify grammatical elements and retain only the verbs and nouns.
B. Replace every word with its corresponding synonym using a lexical database before tokenization.
C. Normalize the text by converting every word in the sentence to lowercase.
D. Exclude common non-informative words from the dataset using an English stop-word dictionary.
E. Segment the sentence into individual word units through tokenization.
F. Convert all tokens into fixed-length character n-grams before Word2Vec training to capture subword features.
Answer: C, D, E
Explanation: Normalization (converting to lowercase) prevents variations of the same word (e.g., Apple, APPLE, apple) from being treated as separate tokens, ensuring semantic consistency. Tokenization segments the text into individual words, which is the foundational step for Word2Vec to learn co-occurrence patterns. Stop-word removal filters out common, non-informative tokens (like the, and, is), reducing noise and computational cost and improving embedding clarity.
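A minimal sketch of these three steps in plain Python follows; the stop-word list is a small illustrative subset, not a full English dictionary.

```python
# Hypothetical sketch: lowercase normalization, tokenization, and stop-word
# removal applied to the example sentence from the scenario.
import re

STOP_WORDS = {"a", "an", "the", "and", "is"}  # illustrative subset only

def preprocess(sentence: str) -> list[str]:
    lowered = sentence.lower()                         # 1) normalize casing
    tokens = re.findall(r"[a-z']+", lowered)           # 2) tokenize into word units
    return [t for t in tokens if t not in STOP_WORDS]  # 3) drop stop words

print(preprocess("An Apple a DAY Keeps the doctor Away"))
# ['apple', 'day', 'keeps', 'doctor', 'away']
```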
Question # 25
Question on Asynchronous Inference (Long Processing Time)
Category: AIP – AI Safety, Security, and Governance.
Scenario: A fraud detection model must run within a private VPC (no public internet). The model handles payloads of 6 MB to 11 MB and requires a long inference execution time (18–22 minutes) per request. Cost minimization is also a requirement while supporting a request/response style of interaction.
Question: Which solution is the most suitable approach to satisfy this requirement?
A. Configure a SageMaker Batch Transform job to run within private subnets and attach the appropriate VPC configuration parameters during endpoint creation.
B. Use a SageMaker multi-model endpoint architecture inside private subnets with VPC configuration applied as part of endpoint deployment procedures.
C. Use SageMaker Neo compiled model packaging and deploy the compiled artifact to a SageMaker real-time inference endpoint inside private subnets using VPC configuration.
D. Deploy a SageMaker asynchronous endpoint inside private subnets and include VPC configuration parameters during endpoint creation.
Answer: D
Explanation: SageMaker Asynchronous Inference is designed for workloads with
large payload sizes (up to 1 GB) and long processing times (up to one hour). It
queues incoming requests and processes them asynchronously, and it can scale
instances down to zero when idle, minimizing costs. It supports VPC configurations
to keep traffic private, making it the only appropriate choice for the given
constraints, as real-time endpoints typically time out after 60 seconds.
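A hedged boto3 sketch of such an endpoint follows; the model/endpoint names, container image, subnets, and security groups are assumptions.

```python
# Hypothetical sketch: an async endpoint with a VPC-scoped model, suited to
# 6-11 MB payloads and ~20-minute inference. All names/IDs are assumptions.
import boto3

sm = boto3.client("sagemaker")

sm.create_model(
    ModelName="fraud-async-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/fraud-model:latest",  # assumed
        "ModelDataUrl": "s3://my-bucket/fraud/model.tar.gz",
    },
    VpcConfig={  # keeps inference traffic inside the private VPC
        "Subnets": ["subnet-0abc123"],
        "SecurityGroupIds": ["sg-0abc123"],
    },
)

sm.create_endpoint_config(
    EndpointConfigName="fraud-async-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "fraud-async-model",
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
    }],
    AsyncInferenceConfig={  # requests are queued; results land in S3
        "OutputConfig": {"S3OutputPath": "s3://my-bucket/fraud/async-output/"},
    },
)

sm.create_endpoint(EndpointName="fraud-async", EndpointConfigName="fraud-async-config")
```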
Feedback That Matters: Reviews of Our Amazon AIP-C01 Dumps