Why Should You Prepare For Your ISTQB Certified Tester AI Testing Exam With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic iSQI CT-AI Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual ISTQB Certified Tester AI Testing Exam. Whether you're targeting iSQI certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified CT-AI Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the CT-AI ISTQB Certified Tester AI Testing Exam, you'll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The CT-AI
You can instantly access downloadable PDFs of CT-AI practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the iSQI Exam with confidence.
Smart Learning With Exam Guides
Our structured CT-AI exam guide focuses on the ISTQB Certified Tester AI Testing Exam's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass The CT-AI Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you do not pass the ISTQB Certified Tester AI Testing Exam after preparing with MyCertsHub's exam dumps, we will issue a full refund. That's how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the CT-AI exam dumps.
MyCertsHub – Your Trusted Partner For iSQI Exams
Whether you’re preparing for ISTQB Certified Tester AI Testing Exam or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your CT-AI exam has never been easier thanks to our tried-and-true resources.
iSQI CT-AI Sample Question Answers
Question # 1
Which of the following problems would best be solved using the supervised learning category of regression?
A. Determining the optimal age for a chicken's egg-laying production using input data of the chicken's age and average daily egg production for one million chickens.
B. Recognizing a knife in carry-on luggage at a security checkpoint in an airport scanner.
C. Determining if an animal is a pig or a cow based on image recognition.
D. Predicting shopper purchasing behavior based on the category of shopper and the positioning of promotional displays within a store.
Answer: A
Explanation:
Understanding Supervised Learning - Regression
Supervised learning is a category of machine learning where the model is trained on labeled data.
Within this category, regression is used when the goal is to predict a continuous numeric value.
Regression deals with problems where the output variable is continuous in nature, meaning it can
take any numerical value within a range.
Common examples include predicting prices, estimating demand, and analyzing production trends.
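As a minimal, hypothetical sketch (the data and numbers are invented for illustration, not taken from the syllabus), a simple least-squares fit shows how regression maps an input to a continuous numeric output:

```python
def fit_linear(xs, ys):
    """Closed-form least-squares fit for simple linear regression: y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical labeled data: chicken age (weeks) -> average daily egg production
ages = [30, 40, 50, 60, 70]
eggs = [0.9, 0.8, 0.7, 0.6, 0.5]

a, b = fit_linear(ages, eggs)
predicted = a * 55 + b  # a continuous numeric prediction, the hallmark of regression
```

A classifier, by contrast, would return a discrete label such as "knife" or "no knife" rather than a number.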
Analysis of Answer Choices
(A) Determining the optimal age for a chicken's egg-laying production using input data of the
chicken's age and average daily egg production for one million chickens. ✅ (Correct)
This is a classic regression problem because it involves predicting a continuous variable: daily egg
production based on the input variable chicken's age.
The goal is to find a numerical relationship between age and egg production, which makes regression
the appropriate supervised learning method.
(B) Recognizing a knife in carry-on luggage at a security checkpoint in an airport scanner. ❌ (Incorrect)
This is an image recognition task, which falls under classification, not regression.
Classification problems involve assigning inputs to discrete categories (e.g., "knife detected" or "no
knife detected").
(C) Determining if an animal is a pig or a cow based on image recognition. ❌ (Incorrect)
This is another classification problem where the goal is to categorize an image into one of two labels
(pig or cow).
(D) Predicting shopper purchasing behavior based on the category of shopper and the positioning of promotional displays within a store. ❌ (Incorrect)
This problem could involve a mix of classification and association rule learning, but it does not
explicitly predict a continuous variable in the way regression does.
Reference from ISTQB Certified Tester AI Testing Study Guide
Regression is used when predicting a numeric output.
"Predicting the age of a person based on input data about their habits or predicting the future prices
of stocks are examples of problems that use regression."
Supervised learning problems are divided into classification and regression.
"If the output is numeric and continuous in nature, it may be regression."
Regression is commonly used for predicting numerical trends over time.
"Regression models result in a numerical or continuous output value for a given input."
Thus, option A is the correct answer, as it aligns with the principles of regression-based supervised
learning.
Question # 2
There is a growing backlog of unresolved defects for your project. You know the developers have an ML model that they have created which has learned which developers work on which type of software and the speed with which they resolve issues. How could you use this model to help reduce the backlog and implement more efficient defect resolution?
A. Use it to prioritize defects automatically based on the time expected for the fix to be made, the speed of the fix, and the likelihood of regressions.
B. Use it to assign defects to the best developer to resolve the problem and to load balance the defect assignments among the developers.
C. Use it to determine the root cause of each defect and develop a process improvement plan that can be implemented to remove the most common root causes.
D. Use it to review the code and determine where more defects are likely to occur so that testing can be targeted to those areas.
Answer: B
Explanation:
AI and ML models can play a significant role in optimizing defect resolution processes. According to
the ISTQB Certified Tester AI Testing (CT-AI) Syllabus, ML models can be used to analyze defect
reports, prioritize critical defects, and assign defects to developers based on historical defect
resolution patterns. The key AI applications for defect management include:
Defect Categorization: NLP techniques can analyze defect reports and classify them based on metadata like severity and impact.
Defect Prioritization: ML models trained on past defects can predict which issues are likely to cause failures, allowing teams to prioritize the most critical issues.
Defect Assignment: AI-based models can suggest which developers are best suited for specific defects, optimizing the resolution process based on past performance and specialization.
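A toy sketch of the assignment idea in option B (the developer names, resolution times, and load penalty are invented assumptions, not from the syllabus):

```python
# Hypothetical history: average resolution time in hours per developer per defect type
history = {
    "alice": {"ui": 3.0, "backend": 8.0},
    "bob": {"ui": 6.0, "backend": 2.5},
}

def assign(defect_type, open_counts, load_penalty=1.5):
    """Pick the historically fastest developer for this defect type, penalizing
    developers who already carry open defects (simple load balancing)."""
    def score(dev):
        speed = history[dev].get(defect_type, float("inf"))
        return speed + load_penalty * open_counts.get(dev, 0)
    return min(history, key=score)

open_counts = {"alice": 0, "bob": 0}
first = assign("backend", open_counts)  # bob is fastest on backend defects
open_counts[first] += 1                 # record the new assignment for balancing
```

With the load penalty in place, a developer who accumulates open defects eventually loses assignments to a slower but less loaded colleague.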
From the given answer choices:
Option A (Automatic Prioritization) is useful, but it does not consider developer expertise or workload balancing, so it is a less efficient way to reduce the backlog.
Option C (Root Cause Analysis for Process Improvement) is a long-term strategy but does not directly
address backlog reduction.
Option D (Defect Prediction for Testing Focus) helps preemptively identify issues but does not resolve
the existing backlog.
Thus, Option B is the best choice as it aligns with AI's capability to assign defects to the most suitable
developers based on historical data, ensuring efficient defect resolution and backlog reduction.
Certified Tester AI Testing Study Guide Reference:
ISTQB CT-AI Syllabus v1.0, Section 11.2 (Using AI to Analyze Reported Defects)
ISTQB CT-AI Syllabus v1.0, Section 11.5 (Using AI for Defect Prediction)
Question # 3
You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back. This is an example of what type of behavior?
A. Error-shortcircuiting
B. Reward-hacking
C. Transparency
D. Interpretability
Answer: B
Explanation:
Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is
unintended by its designers, leading to behavior that technically maximizes the defined reward but
does not align with the intended objectives.
In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging
collisions detected by bumper sensors. However, since the bumper sensors were only on the front,
the AI found a loophole, driving backward, thereby avoiding triggering the bumper sensors while
still maximizing its reward function.
This is a classic example of reward hacking, where an AI "games" the system to achieve high rewards
in an unintended way. Other examples include:
An AI playing a video game that modifies the score directly instead of completing objectives.
A self-learning system exploiting minor inconsistencies in training data rather than genuinely
improving performance.
Reference from ISTQB Certified Tester AI Testing Study Guide:
Section 2.6 (Side Effects and Reward Hacking) explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.
Definition of Reward Hacking in AI: "The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective."
Question # 4
Which of the following is an example of an input change where it would be expected that the AI system should be able to adapt?
A. It has been trained to recognize cats and is given an image of a dog.
B. It has been trained to recognize human faces at a particular resolution and it is given a human face image captured with a higher resolution.
C. It has been trained to analyze mathematical models and is given a set of landscape pictures to classify.
D. It has been trained to analyze customer buying trend data and is given information on supplier cost data.
Answer: B
Explanation:
AI systems, particularly machine learning models, need to exhibit adaptability and flexibility to
handle slight variations in input data without requiring retraining. The ISTQB CT-AI syllabus outlines
adaptability as a crucial feature of AI systems, especially when the system is exposed to variations in
its operational environment.
Analysis of the Answer Options:
Option A: "It has been trained to recognize cats and is given an image of a dog."
This scenario introduces an entirely new class (dogs), which is outside the AI system's expected scope. If the AI was only trained to recognize cats, it would not be expected to recognize dogs
correctly without retraining. This does not demonstrate adaptability as expected from an AI system.
Option B: "It has been trained to recognize human faces at a particular resolution and it is given a human face image captured with a higher resolution."
This is an example of an AI system encountering a variation of its training data rather than entirely
new data. Most AI-based image processing models can adapt to different resolutions by applying
downsampling or other pre-processing techniques. Since the data remains within the domain of
human faces, the model should be able to process the higher-resolution image without significant
issues.
Option C: "It has been trained to analyze mathematical models and is given a set of landscape pictures to classify."
This represents a complete shift in the data type from structured numerical data to unstructured
image data. The AI system is unlikely to adapt effectively, as it has not been trained on image
classification tasks.
Option D: "It has been trained to analyze customer buying trend data and is given information on supplier cost data."
This introduces a significant domain shift. Customer buying trends focus on consumer behavior, while
supplier cost data relates to pricing structures and logistics. The AI system would likely require
retraining to process the new data meaningfully.
ISTQB CT-AI Syllabus Reference:
Adaptability Requirements: The syllabus discusses that AI-based systems must be able to adapt to changes in their operational environment and constraints, including minor variations in input quality (such as resolution changes).
Autonomous Learning & Evolution: AI systems are expected to improve and handle evolving inputs based on prior experience.
Challenges in Testing Self-Learning Systems: AI systems should be tested to ensure they function correctly when encountering new but related data, such as different resolutions of the same object.
Thus, option B is the best choice as it aligns with the adaptability characteristics expected from AI-based systems.
Question # 5
An airline has created an ML model to project fuel requirements for future flights. The model imports weather data such as wind speeds and temperatures, calculates flight routes based on historical routings from air traffic control, and estimates loads from average passenger and baggage weights. The model performed within an acceptable standard for the airline throughout the summer, but as winter set in the load weights became less accurate. After some exploratory data analysis it became apparent that luggage weights were higher in the winter than in summer. Which of the following statements BEST describes the problem and how it could have been prevented?
A. The model suffers from drift and therefore should be regularly tested to ensure that any occurrences of drift are detected soon enough for the problem to be mitigated.
B. The model suffers from drift and therefore the performance standard should be eased until a new model with more transparency can be developed.
C. The model suffers from corruption and therefore should be reloaded into the computer system being used, preferably with a method of version control to prevent further changes.
D. The model suffers from a lack of transparency and therefore should be regularly tested to ensure that any progressive errors are detected soon enough for the problem to be mitigated.
Answer: A
Explanation:
The problem described in the question is a classic case of concept drift. Concept drift occurs when
the relationship between input variables and the output variable changes over time, leading to a
decline in model accuracy.
In this scenario, the average passenger and baggage weights used in the model changed due to
seasonal variations, but the model was not updated accordingly. This resulted in inaccurate
predictions for fuel requirements in the winter season. This is an example of seasonal drift, where
model behavior changes periodically due to recurring trends (e.g., higher luggage weights in winter
compared to summer).
To prevent such problems:
The model should be regularly tested for concept drift against agreed ML functional performance criteria.
Exploratory Data Analysis (EDA) should be performed periodically to detect gradual changes in input
distributions.
Retraining of the model with updated training data should be done to maintain accuracy.
If drift is detected, mitigation techniques such as incremental learning, retraining with new data, or
adjusting model parameters should be employed.
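A minimal sketch of such a periodic drift check (the threshold and weight values are invented for illustration; a production system would use a proper statistical test on full input distributions):

```python
def drift_detected(baseline, recent, threshold=0.1):
    """Flag drift when the mean of a recent input window deviates from the
    training-time baseline mean by more than a relative threshold."""
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base_mean) / abs(base_mean) > threshold

# Hypothetical baggage weights (kg): summer training data vs. winter inputs
summer = [18.0, 20.0, 19.0, 21.0, 22.0]  # mean 20.0
winter = [23.0, 25.0, 24.0, 26.0, 22.0]  # mean 24.0, a 20% shift
```

Here `drift_detected(summer, winter)` returns `True`, which would prompt retraining with up-to-date winter data.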
Why Other Options Are Incorrect:
Option B (Easing the performance standard instead of addressing drift): Lowering the performance standard is not a solution; it only masks the problem without fixing it. Instead, regular testing and retraining should be used to handle drift properly.
Option C (Corruption and reloading the model): Model corruption is unrelated to this issue.
Corruption refers to accidental or malicious damage to the model or data, whereas this case is due to
a changing data environment.
Option D (Lack of transparency): Transparency refers to how understandable the model's decisions are, but the problem here is a change in data distributions, making drift the primary concern.
Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:
ISTQB CT-AI Syllabus (Section 7.6: Testing for Concept Drift)
"The operational environment can change over time without the trained model changing
correspondingly. This phenomenon is known as concept drift and typically causes the outputs of the
model to become increasingly less accurate and less useful."
"Systems that may be prone to concept drift should be regularly tested against their agreed ML
functional performance criteria to ensure that any occurrences of concept drift are detected soon
enough for the problem to be mitigated."
ISTQB CT-AI Syllabus (Section 7.7: Selecting a Test Approach for an ML System)
"If concept drift is detected, it may be mitigated by retraining the system with up-to-date training
data followed by confirmation testing, regression testing, and possibly A/B testing where the
updated system must outperform the original system."
Conclusion:
Since the question describes a situation where seasonal variations affected input data distributions,
the correct answer is A: The model suffers from drift and therefore should be regularly tested to
ensure that any occurrences of drift are detected soon enough for the problem to be mitigated.
Question # 6
A company is using a spam filter to attempt to identify which emails should be marked as spam. The filter creates detection rules that cause a message to be classified as spam. An attacker wishes to have all messages internal to the company be classified as spam. So, the attacker sends messages with obvious red flags in the body of the email and modifies the "from" portion of the email to make it appear that the emails have been sent by company members. The testers plan to use exploratory data analysis (EDA) to detect the attack and use this information to prevent future adversarial attacks. How could EDA be used to detect this attack?
A. EDA can help detect the outlier emails from the real emails.
B. EDA can detect and remove the false emails.
C. EDA can restrict how many inputs can be provided by unique users.
D. EDA cannot be used to detect the attack.
Answer: A
Explanation:
Exploratory Data Analysis (EDA) is an essential technique for examining datasets to uncover patterns,
trends, and anomalies, including outliers. In this case, the attacker manipulates the spam filter by
injecting emails with red flags and masking them as internal company emails. The primary goal of
EDA here is to detect these adversarial modifications.
Detecting Outliers:
EDA techniques such as statistical analysis, clustering, and visualization can reveal patterns in email data and flag messages that deviate from those patterns, exposing the attacker's injected emails as outliers.
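A basic z-score outlier check of this kind could be sketched as follows (the feature and counts are hypothetical; real EDA would examine many features of the email data):

```python
def zscore_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean, a basic EDA outlier check."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return []  # all values identical: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

# Hypothetical feature: count of "red flag" keywords per internal email
keyword_counts = [0, 1, 0, 2, 1, 0, 1, 25]  # the last email is the injected one

# A looser threshold suits this tiny sample
outliers = zscore_outliers(keyword_counts, threshold=2.0)
```

The injected email stands out because its keyword count is far from the distribution of legitimate internal traffic.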
A team of software testers is attempting to create an AI algorithm to assist in software testing. This particular team has gone through over 40 iterations of testing and cannot afford to spend as much time as it takes to run the full regression test suite. They are hoping to have the algorithm reduce the amount of testing required, thus reducing the time needed for each testing cycle. How can an AI-based tool be expected to assist in this reduction?
A. By using a clustering method to quantify the relationships between test cases and then assigning each test case to a category
B. By performing optimization of the data from past iterations to see where the most common defects occurred and select the corresponding test cases
C. By performing Bayesian analysis to estimate the types of human interactions that are expected to be seen in the system and then selecting those test cases
D. By using A/B testing to compare the last update with the newest change and compare metrics between the two
Answer: B
Explanation:
AI-based tools can significantly optimize regression test suites by analyzing historical data, past test
results, associated defects, and changes made to the software. These tools prioritize and select the
most relevant test cases based on previous defect patterns and frequently failing features, which
helps in reducing the test execution time while maintaining effectiveness.
The optimization process involves:
Prioritizing test cases: AI-based tools rank test cases based on past defect detection trends, ensuring
that the most relevant tests are executed first.
Reducing redundant test cases: The tool can eliminate test cases that do not contribute significantly
to defect detection, reducing overall test execution time.
Augmenting test cases: The AI can also suggest new test cases if certain features are more prone to
defects.
This approach has been proven to reduce regression test suite sizes by up to 50% while maintaining
fault detection capabilities.
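The selection idea can be sketched as follows (test names, defect counts, and the budget are invented for illustration, not from the syllabus):

```python
# Hypothetical history: defects caught by each test case in past iterations
defect_history = {
    "test_login": [4, 3, 5],       # exercises a frequently failing area
    "test_checkout": [1, 2, 1],
    "test_about_page": [0, 0, 0],  # has never caught a defect
}

def select_tests(history, budget):
    """Rank test cases by total past defect detections and keep the top `budget`."""
    ranked = sorted(history, key=lambda t: sum(history[t]), reverse=True)
    return ranked[:budget]

reduced_suite = select_tests(defect_history, budget=2)  # drops test_about_page
```

A real tool would also weigh code changes and redundancy between tests, but the principle of ranking by historical defect yield is the same.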
Reference from ISTQB Certified Tester AI Testing Study Guide:
Section 11.4 (Using AI for the Optimization of Regression Test Suites) states that AI-based tools can optimize regression test suites by analyzing past test data and defect occurrences, leading to significant reductions in test execution time.
Question # 9
Which of the following are the three activities in the data acquisition activities for data preparation?
A. Cleaning, transforming, augmenting
B. Feature selecting, feature growing, feature augmenting
C. Identifying, gathering, labelling
D. Building, approving, deploying
Answer: C
Explanation:
According to the ISTQB Certified Tester AI Testing (CT-AI) syllabus, data acquisition, a critical step in
data preparation for machine learning (ML) workflows, consists of three key activities:
Identification: This step involves determining the types of data required for training and prediction.
For example, in a self-driving car application, data types such as radar, video, laser imaging, and
LiDAR (Light Detection and Ranging) data may be identified as necessary sources.
Gathering: After identifying the required data types, the sources from which the data will be
collected are determined, along with the appropriate collection methods. An example could be
gathering financial data from the International Monetary Fund (IMF) and integrating it into an AI-based system.
Labeling: This process involves annotating or tagging the collected data to make it meaningful for
supervised learning models. Labeling is an essential activity that helps machine learning algorithms
differentiate between categories and make accurate predictions.
These activities ensure that the data is suitable for training and testing machine learning models,
forming the foundation of data preparation.
Question # 10
Before deployment of an AI-based system, a developer is expected to demonstrate in a test environment how decisions are made. Which of the following characteristics does decision making fall under?
A. Explainability
B. Autonomy
C. Self-learning
D. Non-determinism
Answer: A
Explanation:
Explainability in AI-based systems refers to the ease with which users can determine how the system
reaches a particular result. It is a crucial aspect when demonstrating AI decision-making, as it ensures
that decisions made by AI models are transparent, interpretable, and understandable by
stakeholders.
Before deploying an AI-based system, a developer must validate how decisions are made in a test
environment. This process falls under the characteristic of explainability because it involves clarifying
how an AI model arrives at its conclusions, which helps build trust in the system and meet regulatory
and ethical requirements.
Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:
ISTQB CT-AI Syllabus (Section 2.7: Transparency, Interpretability, and Explainability)
"Explainability is considered to be the ease with which users can determine how the AI-based system comes up with a particular result."
"Most users are presented with AI-based systems as 'black boxes' and have little awareness of how these systems arrive at their results. This ignorance may even apply to the data scientists who built the systems. Occasionally, users may not even be aware they are interacting with an AI-based system."
ISTQB CT-AI Syllabus (Section 8.6: Testing the Transparency, Interpretability, and Explainability of AI-based Systems)
"Testing the explainability of AI-based systems involves verifying whether users can understand and
validate AI-generated decisions. This ensures that AI systems remain accountable and do not make
incomprehensible or biased decisions."
Contrast with Other Options:
Autonomy (B): Autonomy relates to an AI system's ability to operate independently without human
oversight. While decision-making is a key function of autonomy, the focus here is on demonstrating
the reasoning behind decisions, which falls under explainability rather than autonomy.
Self-learning (C): Self-learning systems adapt based on previous data and experiences, which is
different from making decisions understandable to humans.
Non-determinism (D): AI-based systems are often probabilistic and non-deterministic, meaning they
do not always produce the same output for the same input. This can make testing and validation
more challenging, but it does not relate to explaining the decision-making process.
Conclusion:
Since the question explicitly asks about the characteristic under which decision-making falls when
being demonstrated before deployment, explainability is the correct choice because it ensures that
AI decisions are transparent, understandable, and accountable to stakeholders.
Question # 11
BioSearch is creating an AI model used for predicting cancer occurrence via examining X-Ray images. The accuracy of the model in isolation has been found to be good. However, the users of the model started complaining of the poor quality of results, especially the inability to detect real cancer cases, when put to practice in the diagnosis lab, leading to the usage of the model being stopped. A testing expert was called in to find the deficiencies in the test planning which led to the above scenario. Which ONE of the following options would you expect to MOST likely be the reason to be discovered by the test expert?
SELECT ONE OPTION
A. A lack of similarity between the training and testing data.
B. The input data has not been tested for quality prior to use for testing.
C. A lack of focus on choosing the right functional-performance metrics.
D. A lack of focus on non-functional requirements testing.
Answer: A
Explanation:
The question asks which deficiency is most likely to be discovered by the test expert given the
scenario of poor real-world performance despite good isolated accuracy.
A lack of similarity between the training and testing data (A): This is a common issue in ML where the
model performs well on training data but poorly on real-world data due to a lack of
representativeness in the training data. This leads to poor generalization to new, unseen data.
The input data has not been tested for quality prior to use for testing (B): While data quality is
important, this option is less likely to be the primary reason for the described issue compared to the
representativeness of training data.
A lack of focus on choosing the right functional-performance metrics (C): Proper metrics are crucial,
but the issue described seems more related to the data mismatch rather than metric selection.
A lack of focus on non-functional requirements testing (D): Non-functional requirements are
important, but the scenario specifically mentions issues with detecting real cancer cases, pointing
more towards data issues.
Reference:
ISTQB CT-AI Syllabus Section 4.2 on Training, Validation, and Test Datasets emphasizes the
importance of using representative datasets to ensure the model generalizes well to real-world data.
Sample Exam Questions document, Question #40 addresses issues related to data
representativeness and model generalization.
Question # 12
A system was developed for screening the X-rays of patients for potential malignancy detection (skin cancer). A workflow system has been developed to screen multiple cancers by using several individually trained ML models chained together in the workflow. Testing the pipeline could involve multiple kinds of tests (I-III):
I. Pairwise testing of combinations
II. Testing each individual model for accuracy
III. A/B testing of different sequences of models
Which ONE of the following options contains the kinds of tests that would be MOST APPROPRIATE to include in the strategy for optimal detection?
SELECT ONE OPTION
A. Only III
B. I and II
C. I and III
D. Only II
Answer: B
Explanation:
The question asks which combination of tests would be most appropriate to include in the strategy
for optimal detection in a workflow system using multiple ML models.
Pairwise testing of combinations (I): This method is useful for testing interactions between different
components in the workflow to ensure they work well together, identifying potential issues in the
integration.
Testing each individual model for accuracy (II): Ensuring that each model in the workflow performs
accurately on its own is crucial before integrating them into a combined workflow.
A/B testing of different sequences of models (III): This involves comparing different sequences to
determine which configuration yields the best results. While useful, it might not be as fundamental
as pairwise and individual accuracy testing in the initial stages.
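For example, pairing up the models in the chained workflow so that every interaction is exercised at least once might be sketched as follows (the model names are hypothetical, not from the question):

```python
from itertools import combinations

# Hypothetical individually trained models chained in the screening workflow
models = ["skin_model", "lung_model", "bone_model", "triage_model"]

def pairwise_tests(names):
    """Enumerate every unordered pair of models so that each pairwise
    interaction in the workflow gets at least one integration test."""
    return list(combinations(names, 2))

pairs = pairwise_tests(models)  # 6 pairs instead of all 24 full orderings
```

Covering pairs keeps the test count manageable while still exercising each model-to-model interaction in the chain.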
Reference:
ISTQB CT-AI Syllabus Section 9.2 on Pairwise Testing and Section 9.3 on Testing ML Models emphasize
the importance of testing interactions and individual model accuracy in complex ML workflows.
Question # 13
Which ONE of the following characteristics is the least likely to cause safety related issues for an AI system?
SELECT ONE OPTION
A. Non-determinism
B. Robustness
C. High complexity
D. Self-learning
Answer: B
Explanation:
The question asks which characteristic is least likely to cause safety-related issues for an AI system.
Let's evaluate each option:
Non-determinism (A): Non-deterministic systems can produce different outcomes even with the
same inputs, which can lead to unpredictable behavior and potential safety issues.
Robustness (B): Robustness refers to the ability of the system to handle errors, anomalies, and
unexpected inputs gracefully. A robust system is less likely to cause safety issues because it can
maintain functionality under varied conditions.
High complexity (C): High complexity in AI systems can lead to difficulties in understanding,
predicting, and managing the system's behavior, which can cause safety-related issues.
Self-learning (D): Self-learning systems adapt based on new data, which can lead to unexpected
changes in behavior. If not properly monitored and controlled, this can result in safety issues.
Reference:
ISTQB CT-AI Syllabus Section 2.8 on Safety and AI discusses various factors affecting the safety of AI
systems, emphasizing the importance of robustness in maintaining safe operation.
Question # 14
Which ONE of the following tests is LEAST likely to be performed during the ML model testing phase?
SELECT ONE OPTION
A. Testing the accuracy of the classification model.
B. Testing the API of the service powered by the ML model.
C. Testing the speed of the training of the model.
D. Testing the speed of the prediction by the model.
Answer: C
Explanation:
The question asks which test is least likely to be performed during the ML model testing phase. Let's
consider each option:
Testing the accuracy of the classification model (A): Accuracy testing is a fundamental part of the ML
model testing phase. It ensures that the model correctly classifies the data as intended and meets
the required performance metrics.
Testing the API of the service powered by the ML model (B): Testing the API is crucial, especially if the
ML model is deployed as part of a service. This ensures that the service integrates well with other
systems and that the API performs as expected.
Testing the speed of the training of the model (C): This is least likely to be part of the ML model
testing phase. The speed of training is more relevant during the development phase when optimizing
and tuning the model. During testing, the focus is more on the model's performance and behavior
rather than how quickly it was trained.
Testing the speed of the prediction by the model (D): Testing the speed of prediction is important to
ensure that the model meets performance requirements in a production environment, especially for
real-time applications.
Reference:
ISTQB CT-AI Syllabus Section 3.2 on ML Workflow and Section 5 on ML Functional Performance
Metrics discuss the focus of testing during the model testing phase, which includes accuracy and
prediction speed but not the training speed.
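Option D's prediction-speed testing can be sketched as a simple latency check against a budget. The `predict` stub and the 10 ms requirement below are hypothetical illustrations, not from the syllabus:

```python
import time

def predict(x):
    # Hypothetical stand-in for a deployed model's prediction call
    return sum(x) / len(x)

# Measure average prediction latency over repeated calls
sample = list(range(1000))
runs = 100
start = time.perf_counter()
for _ in range(runs):
    predict(sample)
avg_latency = (time.perf_counter() - start) / runs

meets_requirement = avg_latency < 0.01  # assumed 10 ms per-prediction budget
```

A real latency test would exercise the deployed inference endpoint under production-like load rather than an in-process stub.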
Question # 15
The activation value output for a neuron in a neural network is obtained by applying computation to the neuron. Which ONE of the following options BEST describes the inputs used to compute the activation value?
SELECT ONE OPTION
A. Individual bias at the neuron level, activation values of neurons in the previous layer, and weights assigned to the connections between the neurons.
B. Activation values of neurons in the previous layer, and weights assigned to the connections between the neurons.
C. Individual bias at the neuron level, and weights assigned to the connections between the neurons.
D. Individual bias at the neuron level, and activation values of neurons in the previous layer.
Answer: A
Explanation:
In a neural network, the activation value of a neuron is determined by a combination of inputs from
the previous layer, the weights of the connections, and the bias at the neuron level. Here's a detailed
breakdown:
Inputs for Activation Value:
Activation Values of Neurons in the Previous Layer: These are the outputs from neurons in the
preceding layer that serve as inputs to the current neuron.
Weights Assigned to the Connections: Each connection between neurons has an associated weight,
which determines the strength and direction of the input signal.
Individual Bias at the Neuron Level: Each neuron has a bias value that adjusts the input sum, allowing
the activation function to be shifted.
Calculation:
The activation value is computed by summing the weighted inputs from the previous layer and
adding the bias.
Formula: z = Σ(w_i · a_i) + b, where w_i are the weights, a_i are the activation values from the
previous layer, and b is the bias.
The activation function (e.g., sigmoid, ReLU) is then applied to this sum to get the final activation
value.
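The calculation above can be sketched in plain Python. The sigmoid choice and the example weights, activations, and bias are illustrative assumptions:

```python
import math

def activation(prev_activations, weights, bias):
    # Weighted sum of previous-layer activations plus the neuron's bias...
    z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
    # ...passed through an activation function (sigmoid, for illustration)
    return 1.0 / (1.0 + math.exp(-z))

a_prev = [0.5, 0.1, 0.9]  # activation values of neurons in the previous layer
w = [0.4, -0.7, 0.2]      # weights of the connections to this neuron
b = 0.1                   # individual bias at the neuron level

out = activation(a_prev, w, b)
```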
Why Option A is Correct:
Option A correctly identifies all components involved in computing the activation value: the
individual bias, the activation values of the previous layer, and the weights of the connections.
Eliminating Other Options:
B . Activation values of neurons in the previous layer, and weights assigned to the connections
between the neurons: This option misses the bias, which is crucial.
C . Individual bias at the neuron level, and weights assigned to the connections between the
neurons: This option misses the activation values from the previous layer.
D . Individual bias at the neuron level, and activation values of neurons in the previous layer: This
option misses the weights, which are essential.
Reference:
ISTQB CT-AI Syllabus, Section 6.1, Neural Networks, discusses the components and functioning of
neural networks.
Question # 16
Which ONE of the following options describes a scenario of A/B testing the LEAST?
SELECT ONE OPTION
A. A comparison of two different websites for the same company to observe from a user acceptance perspective.
B. A comparison of two different offers in a recommendation system to decide on the more effective offer for same users.
C. A comparison of the performance of an ML system on two different input datasets.
D. A comparison of the performance of two different ML implementations on the same input data.
Answer: C
Explanation:
A/B testing, also known as split testing, is a method used to compare two versions of a product or
system to determine which one performs better. It is widely used in web development, marketing,
and machine learning to optimize user experiences and model performance. Here's why option C is
the least descriptive of an A/B testing scenario:
Understanding A/B Testing:
In A/B testing, two versions (A and B) of a system or feature are tested against each other. The
objective is to measure which version performs better based on predefined metrics such as user
engagement, conversion rates, or other performance indicators.
Application in Machine Learning:
In ML systems, A/B testing might involve comparing two different models, algorithms, or system
configurations on the same set of data to observe which yields better results.
Why Option C is the Least Descriptive:
Option C describes comparing the performance of an ML system on two different input datasets. This
scenario focuses on the input data variation rather than the comparison of system versions or
features, which is the essence of A/B testing. A/B testing typically involves a controlled experiment
with two versions being tested under the same conditions, not different datasets.
Clarifying the Other Options:
A . A comparison of two different websites for the same company to observe from a user acceptance
perspective: This is a classic example of A/B testing where two versions of a website are compared.
B . A comparison of two different offers in a recommendation system to decide on the more effective
offer for the same users: This is another example of A/B testing in a recommendation system.
D . A comparison of the performance of two different ML implementations on the same input data:
This fits the A/B testing model where two implementations are compared under the same
conditions.
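Scenario D, a genuine A/B test, can be sketched by scoring two implementations on the same held-out data. The predictions and labels below are fabricated for illustration:

```python
def accuracy(predictions, labels):
    # Fraction of predictions matching the ground-truth labels
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# One shared set of ground-truth labels: both variants see the same data
labels  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
preds_a = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # implementation A's predictions
preds_b = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # implementation B's predictions

acc_a = accuracy(preds_a, labels)
acc_b = accuracy(preds_b, labels)
winner = "B" if acc_b > acc_a else "A"
```

Because both variants are evaluated under identical conditions, any difference in the metric can be attributed to the implementations themselves.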
Reference:
ISTQB CT-AI Syllabus, Section 9.4, A/B Testing, explains the methodology and application of A/B
testing.
Question # 17
Which ONE of the following models BEST describes a way to model defect prediction by looking at the history of bugs in modules by using code quality metrics of modules of historical versions as input?
SELECT ONE OPTION
A. Identifying the relationship between developers and the modules developed by them.
B. Search of similar code based on natural language processing.
C. Clustering of similar code modules to predict based on similarity.
D. Using a classification model to predict the presence of a defect by using code quality metrics as the input data.
Answer: D
Explanation:
Defect prediction models aim to identify parts of the software that are likely to contain defects by
analyzing historical data and code quality metrics. The primary goal is to use this predictive
information to allocate testing and maintenance resources effectively. Let's break down why option
D is the correct choice:
Understanding Classification Models:
Classification models are a type of supervised learning algorithm used to categorize or classify data
into predefined classes or labels. In the context of defect prediction, the classification model would
classify parts of the code as either "defective" or "non-defective" based on the input features.
Input Data - Code Quality Metrics:
The input data for these classification models typically includes various code quality metrics such as
cyclomatic complexity, lines of code, number of methods, depth of inheritance, coupling between
objects, etc. These metrics help the model learn patterns associated with defects.
Historical Data:
Historical versions of the code along with their defect records provide the labeled data needed for
training the classification model. By analyzing this historical data, the model can learn which metrics
are indicative of defects.
Why Option D is Correct:
Option D specifies using a classification model to predict the presence of defects by using code
quality metrics as input data. This accurately describes the process of defect prediction using
historical bug data and quality metrics.
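As a minimal, hypothetical sketch of option D, a nearest-centroid classifier (standing in for a full classification model) can label a module from two code quality metrics. The historical metric values below are invented:

```python
def centroid(rows):
    # Per-feature mean of a list of metric vectors
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(metrics, c_defective, c_clean):
    # Label a module by whichever historical centroid its metrics are closer to
    d_def = sum((m - c) ** 2 for m, c in zip(metrics, c_defective))
    d_cln = sum((m - c) ** 2 for m, c in zip(metrics, c_clean))
    return "defective" if d_def < d_cln else "clean"

# Invented history: [cyclomatic complexity, lines of code / 100] per module
defective_history = [[25, 8.0], [30, 9.5], [22, 7.0]]
clean_history     = [[5, 1.5], [8, 2.0], [4, 1.0]]

c_def = centroid(defective_history)
c_cln = centroid(clean_history)
label = classify([27, 8.5], c_def, c_cln)  # a new, highly complex module
```

A production defect predictor would use a richer supervised model and many more metrics, but the structure — historical labelled metrics in, defect label out — is the same.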
Eliminating Other Options:
A . Identifying the relationship between developers and the modules developed by them: This does
not directly involve predicting defects based on code quality metrics and historical data.
B . Search of similar code based on natural language processing: While useful for other purposes, this
method does not describe defect prediction using classification models and code metrics.
C . Clustering of similar code modules to predict based on similarity: Clustering is an unsupervised
learning technique and does not align with the supervised learning approach typically used for
defect prediction, which relies on classification models.
Reference:
ISTQB CT-AI Syllabus Section 11.5.1, "Using AI for Defect Prediction".
Question # 18
Which ONE of the following tests is MOST likely to describe a useful test to help detect different kinds of biases in the ML pipeline?
SELECT ONE OPTION
A. Testing the distribution shift in the training data for inappropriate bias.
B. Test the model during model evaluation for data bias.
C. Testing the data pipeline for any sources for algorithmic bias.
D. Check the input test data for potential sample bias.
Answer: B
Explanation:
Detecting biases in the ML pipeline involves various tests to ensure fairness and accuracy throughout
the ML process.
Testing the distribution shift in the training data for inappropriate bias (A): This involves checking if
there is any shift in the data distribution that could lead to bias in the model. It is an important test
but not the most direct method for detecting biases.
Test the model during model evaluation for data bias (B): This is a critical stage where the model is
evaluated to detect any biases in the data it was trained on. It directly addresses potential data
biases in the model.
Testing the data pipeline for any sources for algorithmic bias (C): This test is crucial as it helps identify
biases that may originate from the data processing and transformation stages within the pipeline.
Detecting sources of algorithmic bias ensures that the model does not inherit biases from these
processes.
Check the input test data for potential sample bias (D): While this is an important step, it focuses
more on the input data and less on the overall data pipeline.
Hence, the most likely useful test to help detect different kinds of biases in the ML pipeline is B.
Test the model during model evaluation for data bias.
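One concrete form such an evaluation-stage check can take is comparing the model's error rate across subgroups. The group names, evaluation records, and the 0.2 tolerance below are illustrative assumptions:

```python
def error_rate(records, group):
    # Error rate of the model's predictions restricted to one subgroup
    rows = [r for r in records if r["group"] == group]
    return sum(r["pred"] != r["label"] for r in rows) / len(rows)

# Hypothetical evaluation records: prediction, true label, protected attribute
records = [
    {"group": "A", "pred": 1, "label": 1},
    {"group": "A", "pred": 0, "label": 0},
    {"group": "A", "pred": 1, "label": 1},
    {"group": "A", "pred": 0, "label": 0},
    {"group": "B", "pred": 1, "label": 0},
    {"group": "B", "pred": 0, "label": 1},
    {"group": "B", "pred": 1, "label": 1},
    {"group": "B", "pred": 0, "label": 0},
]

gap = abs(error_rate(records, "A") - error_rate(records, "B"))
flagged_as_biased = gap > 0.2  # chosen tolerance for the error-rate gap
```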
Reference:
ISTQB CT-AI Syllabus Section 8.3 on Testing for Algorithmic, Sample, and Inappropriate Bias discusses
various tests that can be performed to detect biases at different stages of the ML pipeline.
Sample Exam Questions document, Question #32 highlights the importance of evaluating the model
for biases.
Question # 19
Which ONE of the following options is the MOST APPROPRIATE stage of the ML workflow to set model and algorithm hyperparameters?
SELECT ONE OPTION
A. Evaluating the model
B. Deploying the model
C. Tuning the model
D. Data testing
Answer: C
Explanation:
Setting model and algorithm hyperparameters is an essential step in the machine learning workflow,
primarily occurring during the tuning phase.
Evaluating the model (A): This stage involves assessing the model's performance using metrics and
does not typically include the setting of hyperparameters.
Deploying the model (B): Deployment is the stage where the model is put into production and used
in real-world applications. Hyperparameters should already be set before this stage.
Tuning the model (C): This is the correct stage where hyperparameters are set. Tuning involves
adjusting the hyperparameters to optimize the model's performance.
Data testing (D): Data testing involves ensuring the quality and integrity of the data used for training
and testing the model. It does not include setting hyperparameters.
Hence, the most appropriate stage of the ML workflow to set model and algorithm hyperparameters
is C. Tuning the model.
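The tuning stage can be sketched as a simple grid search. Here one hypothetical hyperparameter (a decision threshold) is tuned on an invented 1-D dataset:

```python
# Invented labelled data: (score, class)
data = [(0.2, 0), (0.4, 0), (0.55, 1), (0.7, 1), (0.9, 1), (0.3, 0)]

def accuracy(threshold):
    # Accuracy of a one-parameter "model": predict 1 when score >= threshold
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

# Tuning: evaluate each candidate hyperparameter value, keep the best
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
best_threshold = max(grid, key=accuracy)
```

Real tuning searches over model hyperparameters (learning rate, tree depth, etc.) against a validation set, but the loop structure — candidate values in, best-performing value out — is the same.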
Reference:
ISTQB CT-AI Syllabus Section 3.2 on the ML Workflow outlines the different stages of the ML process,
including the tuning phase where hyperparameters are set.
Sample Exam Questions document, Question #31 specifically addresses the stage in the ML workflow
where hyperparameters are configured.
Question # 20
Which ONE of the following statements correctly describes the importance of flexibility for AI systems?
SELECT ONE OPTION
A. AI systems are inherently flexible.
B. AI systems require changing of operational environments; therefore, flexibility is required.
C. Flexible AI systems allow for easier modification of the system as a whole.
D. Self-learning systems are expected to deal with new situations without explicitly having to program for it.
Answer: C
Explanation:
Flexibility in AI systems is crucial for various reasons, particularly because it allows for easier
modification and adaptation of the system as a whole.
AI systems are inherently flexible (A): This statement is not correct. While some AI systems may be
designed to be flexible, they are not inherently flexible by nature. Flexibility depends on the system's
design and implementation.
AI systems require changing operational environments; therefore, flexibility is required (B): While it's
true that AI systems may need to operate in changing environments, this statement does not directly
address the importance of flexibility for the modification of the system.
Flexible AI systems allow for easier modification of the system as a whole (C): This statement
correctly describes the importance of flexibility. Being able to modify AI systems easily is critical for
their maintenance, adaptation to new requirements, and improvement.
Self-learning systems are expected to deal with new situations without explicitly having to program
for it (D): This statement relates to the adaptability of self-learning systems rather than their overall
flexibility for modification.
Hence, the correct answer is C. Flexible AI systems allow for easier modification of the system as a
whole.
Reference:
ISTQB CT-AI Syllabus Section 2.1 on Flexibility and Adaptability discusses the importance of flexibility
in AI systems and how it enables easier modification and adaptability to new situations.
Sample Exam Questions document, Question #30 highlights the importance of flexibility in AI
systems.
Question # 21
Pairwise testing can be used in the context of self-driving cars for controlling an explosion in the number of combinations of parameters.
Which ONE of the following options is LEAST likely to be a reason for this incredible growth of parameters?
SELECT ONE OPTION
A. Different Road Types
B. Different weather conditions
C. ML model metrics to evaluate the functional performance
D. Different features like ADAS, Lane Change Assistance etc.
Answer: C
Explanation:
Pairwise testing is used to handle the large number of combinations of parameters that can arise in
complex systems like self-driving cars. The question asks which of the given options is least likely to
be a reason for the explosion in the number of parameters.
Different Road Types (A): Self-driving cars must operate on various road types, such as highways, city
streets, rural roads, etc. Each road type can have different characteristics, requiring the car's system
to adapt and handle different scenarios. Thus, this is a significant factor contributing to the growth of
parameters.
Different Weather Conditions (B): Weather conditions such as rain, snow, fog, and bright sunlight
significantly affect the performance of self-driving cars. The car's sensors and algorithms must adapt
to these varying conditions, which adds to the number of parameters that need to be considered.
ML Model Metrics to Evaluate Functional Performance (C): While evaluating machine learning (ML)
model performance is crucial, it does not directly contribute to the explosion of parameter
combinations in the same way that road types, weather conditions, and car features do. Metrics are
used to measure and assess performance but are not themselves variable conditions that the system
must handle.
Different Features like ADAS, Lane Change Assistance, etc. (D): Advanced Driver Assistance Systems
(ADAS) and other features add complexity to self-driving cars. Each feature can have multiple settings
and operational modes, contributing to the overall number of parameters.
Hence, the least likely reason for the incredible growth in the number of parameters is C. ML model
metrics to evaluate the functional performance.
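The explosion and the pairwise remedy can be made concrete with a few hypothetical parameter dimensions: the full cartesian product already yields 48 scenarios, while pairwise coverage only requires every pair of values to appear in some test:

```python
from itertools import product

# Hypothetical self-driving parameter dimensions
road_types = ["highway", "city", "rural"]
weather    = ["clear", "rain", "snow", "fog"]
adas       = ["ADAS on", "ADAS off"]
daylight   = ["day", "night"]
dims = [road_types, weather, adas, daylight]

# Full combinatorial testing: every combination of every value
total_combinations = len(list(product(*dims)))  # 3 * 4 * 2 * 2 = 48

# Pairwise testing only needs each pair of values (across any two
# dimensions) to be covered by at least one test
pairs = set()
for i in range(len(dims)):
    for j in range(i + 1, len(dims)):
        for a in dims[i]:
            for b in dims[j]:
                pairs.add((i, a, j, b))
```

A pairwise covering suite for these dimensions needs at least 12 tests (the largest two-dimension product) and, with a good generator, not many more than that — far fewer than 48, and the gap widens dramatically as dimensions are added.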
Reference:
ISTQB CT-AI Syllabus Section 9.2 on Pairwise Testing discusses the application of this technique to
manage the combinations of different variables in AI-based systems, including those used in
self-driving cars.
Sample Exam Questions document, Question #29 provides context for the explosion in parameter
combinations in self-driving cars and highlights the use of pairwise testing as a method to manage
this complexity.
Question # 22
Which ONE of the following is the BEST option to optimize the regression test selection and prevent the regression suite from growing large?
SELECT ONE OPTION
A. Identifying suitable tests by looking at the complexity of the test cases.
B. Using a random subset of tests.
C. Automating test scripts using AI-based test automation tools.
D. Using an AI-based tool to optimize the regression test suite by analyzing past test results.
Answer: D
Explanation:
A . Identifying suitable tests by looking at the complexity of the test cases.
While complexity analysis can help in selecting important test cases, it does not directly address the
issue of optimizing the entire regression suite effectively.
B . Using a random subset of tests.
Randomly selecting test cases may miss critical tests and does not ensure an optimized regression
suite. This approach lacks a systematic method for ensuring comprehensive coverage.
C . Automating test scripts using AI-based test automation tools.
Automation helps in running tests efficiently but does not inherently optimize the selection of tests
to prevent the suite from growing too large.
D . Using an AI-based tool to optimize the regression test suite by analyzing past test results.
This is the most effective approach as AI-based tools can analyze historical test data, identify
patterns, and prioritize tests that are more likely to catch defects based on past results. This method
ensures an optimized and manageable regression test suite by focusing on the most impactful test
cases.
Therefore, the correct answer is D because using an AI-based tool to analyze past test results is the
best option to optimize regression test selection and manage the size of the regression suite
effectively.
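The core of option D can be sketched as prioritising tests by their historical defect-detection rate. The test names and counts below are invented; a production AI-based tool would combine many more signals, such as code churn and coverage:

```python
# Invented history: test name -> (times executed, times it caught a defect)
history = {
    "test_login":    (100, 12),
    "test_checkout": (100, 30),
    "test_profile":  (100, 1),
    "test_search":   (100, 8),
}

def defect_rate(name):
    runs, defects_found = history[name]
    return defects_found / runs

def select_regression_suite(k):
    # Keep only the k tests most likely to catch a defect, based on history
    return sorted(history, key=defect_rate, reverse=True)[:k]

suite = select_regression_suite(2)
```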
Question # 23
A company producing consumable goods wants to identify groups of people with similar tastes for the purpose of targeting different products for each group. You have to choose and apply an appropriate ML type for this problem.
Which ONE of the following options represents the BEST possible solution for the above-mentioned task?
SELECT ONE OPTION
A. Regression
B. Association
C. Clustering
D. Classification
Answer: C
Explanation:
A . Regression
Regression is used to predict a continuous value and is not suitable for grouping people based on
similar tastes.
B . Association
Association is used to find relationships between variables in large datasets, often in the form of
rules (e.g., market basket analysis). It does not directly group individuals but identifies patterns of co-occurrence.
C . Clustering
Clustering is an unsupervised learning method used to group similar data points based on their
features. It is ideal for identifying groups of people with similar tastes without prior knowledge of the
group labels. This technique will help the company segment its customer base effectively.
D . Classification
Classification is a supervised learning method used to categorize data points into predefined classes.
It requires labeled data for training, which is not the case here as we want to identify groups without
predefined labels.
Therefore, the correct answer is C because clustering is the most suitable method for grouping
people with similar tastes for targeted product marketing.
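Clustering can be illustrated with a tiny one-dimensional k-means over invented customer "taste scores"; the scores, the choice of two clusters, and the initial centres are all assumptions:

```python
def kmeans_1d(points, centers, iterations=10):
    # Minimal 1-D k-means: alternately assign points to the nearest
    # center, then move each center to the mean of its cluster
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Invented "taste scores": two natural groups around 1 and 9
scores = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers, clusters = kmeans_1d(scores, centers=[0.0, 5.0])
```

Note that no group labels are supplied anywhere — the two segments emerge from the data alone, which is exactly why clustering fits this problem and classification does not.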
Question # 24
Which ONE of the following is a CORRECT statement about adversarial examples in the context of machine learning systems working on image classifiers?
SELECT ONE OPTION
A. Black box attacks based on adversarial examples create an exact duplicate model of the original.
B. These attack examples cause a model to predict the correct class with slightly less accuracy even though they look like the original image.
C. These attacks can't be prevented by retraining the model with these examples augmented to the training data.
D. These examples are model specific and are not likely to cause another model trained on the same task to fail.
Answer: D
Explanation:
A . Black box attacks based on adversarial examples create an exact duplicate model of the original.
Black box attacks do not create an exact duplicate model. Instead, they exploit the model by querying
it and using the outputs to craft adversarial examples without knowledge of the internal workings.
B . These attack examples cause a model to predict the correct class with slightly less accuracy even
though they look like the original image.
Adversarial examples typically cause the model to predict the incorrect class rather than just
reducing accuracy. These examples are designed to be visually indistinguishable from the original
image but lead to incorrect classifications.
C . These attacks can't be prevented by retraining the model with these examples augmented to the
training data.
This statement is incorrect because retraining the model with adversarial examples included in the
training data can help the model learn to resist such attacks, a technique known as adversarial
training.
D . These examples are model specific and are not likely to cause another model trained on the same
task to fail.
Adversarial examples are often model-specific, meaning that they exploit the specific weaknesses of
a particular model. While some adversarial examples might transfer between models, many are
tailored to the specific model they were generated for and may not affect other models trained on
the same task.
Therefore, the correct answer is D because adversarial examples are typically model-specific and may
not cause another model trained on the same task to fail.
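The model-specific nature of such attacks can be seen in a minimal FGSM-style sketch against a linear classifier. The weights, input, and epsilon are invented; for a linear model the gradient of the score with respect to the input is simply the weight vector, which makes the attack direction explicit:

```python
def sign(v):
    return (v > 0) - (v < 0)

def predict(weights, bias, x):
    # Linear classifier: class 1 when the score w.x + b is non-negative
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else 0

def perturb(weights, x, eps):
    # FGSM-style step: nudge every feature by eps in the direction that
    # most decreases the score, pushing x away from its current class 1
    return [xi - eps * sign(w) for xi, w in zip(x, weights)]

w, b = [2.0, -1.0], 0.0
x = [0.3, 0.4]
x_adv = perturb(w, x, eps=0.25)

original_class = predict(w, b, x)      # classified as 1
attacked_class = predict(w, b, x_adv)  # small perturbation flips it to 0
```

An attack crafted this way exploits this particular weight vector; a differently trained model generally needs its own perturbation, which is why adversarial examples often fail to transfer between models.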
Question # 25
Which ONE of the following hardware is MOST suitable for implementing AI when using ML?
SELECT ONE OPTION
A. 64-bit CPUs.
B. Hardware supporting fast matrix multiplication.
C. High powered CPUs.
D. Hardware supporting high precision floating point operations.
Answer: B
Explanation:
A . 64-bit CPUs.
While 64-bit CPUs are essential for handling large amounts of memory and performing complex
computations, they are not specifically optimized for the types of operations commonly used in
machine learning.
B . Hardware supporting fast matrix multiplication.
Matrix multiplication is a fundamental operation in many machine learning algorithms, especially in
neural networks and deep learning. Hardware optimized for fast matrix multiplication, such as GPUs
(Graphics Processing Units), is most suitable for implementing AI and ML because it can handle the
parallel processing required for these operations efficiently.
C . High powered CPUs.
High powered CPUs are beneficial for general-purpose computing tasks and some aspects of ML, but
they are not as efficient as specialized hardware like GPUs for matrix multiplication and other
ML-specific tasks.
D . Hardware supporting high precision floating point operations.
High precision floating point operations are important for scientific computing and some specific AI
tasks, but for many ML applications, fast matrix multiplication is more critical than high precision
alone.
Therefore, the correct answer is B because hardware supporting fast matrix multiplication, such as
GPUs, is most suitable for the parallel processing requirements of machine learning.
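The point can be made concrete: a dense neural-network layer's forward pass reduces to one matrix multiplication, the operation GPUs parallelize. The input and weight values below are arbitrary:

```python
def matmul(A, B):
    # Plain-Python matrix product: C[r][c] = sum_k A[r][k] * B[k][c]
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

inputs  = [[1.0, 2.0],
           [3.0, 4.0]]         # 2 samples, 2 features each
weights = [[0.5, -1.0, 0.0],
           [1.0,  0.5, 2.0]]   # dense layer: 2 inputs -> 3 neurons

outputs = matmul(inputs, weights)  # one forward pass for the whole batch
```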
Feedback That Matters: Reviews of Our iSQI CT-AI Dumps
Rosalie StokesApr 18, 2026
CT-AI put my comprehension of AI risks and testing methods to the test. Instead of memorizing terms, the practice questions helped me think critically. On exam day, I felt well-prepared.
Zétény VörösApr 17, 2026
The CT-AI exam's focus on actual AI quality issues impressed me. The exam was fair and practical, and the concepts were retained by working through scenario-based questions.
Kenya GleasonApr 17, 2026
I cleared the iSQI CT-AI today! I was able to clearly connect AI models, bias, and testing methods thanks to the structured preparation I followed. I walked away feeling happy and confident.