Pricing: $45 (was $81) · $55 (was $99) · $65 (was $117)
Why Should You Prepare For Your NVIDIA-Certified Professional AI Networking With MyCertsHub?
At MyCertsHub, we go beyond standard study material. Our platform provides authentic NVIDIA NCP-AIN Exam Dumps, detailed exam guides, and reliable practice exams that mirror the actual NVIDIA-Certified Professional AI Networking test. Whether you’re targeting NVIDIA certifications or expanding your professional portfolio, MyCertsHub gives you the tools to succeed on your first attempt.
Verified NCP-AIN Exam Dumps
Every set of exam dumps is carefully reviewed by certified experts to ensure accuracy. For the NCP-AIN NVIDIA-Certified Professional AI Networking exam, you’ll receive updated practice questions designed to reflect real-world exam conditions. This approach saves time, builds confidence, and focuses your preparation on the most important exam areas.
Realistic Test Prep For The NCP-AIN
You can instantly access downloadable PDFs of NCP-AIN practice exams with MyCertsHub. These include authentic practice questions paired with explanations, making our exam guide a complete preparation tool. By testing yourself before exam day, you’ll walk into the NVIDIA Exam with confidence.
Smart Learning With Exam Guides
Our structured NCP-AIN exam guide focuses on the NVIDIA-Certified Professional AI Networking's core topics and question patterns. You will be able to concentrate on what really matters for passing the test rather than wasting time on irrelevant content.
Pass The NCP-AIN Exam – Guaranteed
We Offer A 100% Money-Back Guarantee On Our Products.
If you use MyCertsHub's exam dumps to prepare for the NVIDIA-Certified Professional AI Networking exam and still don't pass, we will issue a full refund. That’s how confident we are in the effectiveness of our study resources.
Try Before You Buy – Free Demo
Still undecided? See for yourself how MyCertsHub has helped thousands of candidates achieve success by downloading a free demo of the NCP-AIN exam dumps.
MyCertsHub – Your Trusted Partner For NVIDIA Exams
Whether you’re preparing for NVIDIA-Certified Professional AI Networking or any other professional credential, MyCertsHub provides everything you need: exam dumps, practice exams, practice questions, and exam guides. Passing your NCP-AIN exam has never been easier thanks to our tried-and-true resources.
NVIDIA NCP-AIN Sample Question Answers
Question # 1
[AI Network Architecture] You're designing a multi-GPU system for AI training using NVIDIA GPUs with NVLink connections. You need to maximize inter-GPU communication bandwidth. Which feature included in NCCL allows for improved communication between GPUs and NICs?
A. Adaptive Routing
B. PXN
C. Graph Search Optimization
D. SHARP v2
Answer: B
Explanation:
The correct answer is PXN (PCI × NVLink).
From the NVIDIA NCCL Documentation:
"PXN enables communication between GPUs connected via NVLink and NICs by treating the GPUs as
a distributed switch. This architecture improves bandwidth utilization by enabling any GPU to
communicate with the NIC via the shortest path available, even if it's not directly connected to the
NIC."
This enhances GPU-to-NIC and NIC-to-GPU transfers, leveraging the NVLink topology. It significantly
boosts performance in multi-GPU setups where not every GPU is directly connected to the NIC.
Other options:
Adaptive Routing is a fabric-level feature for dynamic path rerouting.
Graph Search Optimization is used internally for topology modeling in NCCL.
SHARP v2 is a switch-based collective acceleration method, unrelated to PXN.
Reference: NVIDIA NCCL User Guide – PXN Feature Section
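As a practical note, PXN behavior can be steered from the job environment. The sketch below assumes NCCL 2.12 or later; `NCCL_P2P_PXN_LEVEL` and the debug variables are documented NCCL knobs, but the values shown are illustrative rather than a recommended production setting.

```shell
# Sketch: job environment for inspecting PXN usage (assumes NCCL >= 2.12).
# 2 = use PXN whenever possible; verify semantics against your NCCL docs.
export NCCL_P2P_PXN_LEVEL=2
# Log transport decisions so "via PXN" lines show up in the job output.
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=NET
echo "PXN level: ${NCCL_P2P_PXN_LEVEL}"
```

The training launcher (e.g. `mpirun` or `torchrun`) inherits these variables; the NCCL INFO output then shows which transport path each pair of ranks actually uses.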
Question # 2
[AI Network Architecture – DPU Modes] In which mode of the BlueField DPU does the ARM system on the DPU control the NIC data path, but allow access to the DPU OS from the host?
A. Separated Host mode
B. NIC mode
C. DPU mode
D. Restricted mode
Answer: C
Explanation:
In DPU Mode, the ARM cores on BlueField own the NIC data path, while still allowing the host system
to access the DPU OS (via OOB or virtio).
From NVIDIA BlueField Documentation:
"In DPU Mode, the data path is offloaded to the BlueField Arm cores, enabling advanced security and
networking functions, while still allowing host access to the BlueField OS."
This is different from:
NIC Mode: Data path controlled by host, ARM cores inactive.
Question # 3
[InfiniBand Troubleshooting] Which of the following tools in Cumulus Linux is specifically useful for detecting and differentiating microbursts from regular network congestion? Pick the 2 correct responses below.
A. Monthly network utilization reports
B. ASIC monitoring with millisecond-level granularity
C. SNMP polling at 5-minute intervals
D. What Just Happened (WJH) feature for packet drop analysis
Answer: B, D
Explanation:
In Cumulus Linux, microbursts are short-lived, high-volume traffic bursts that often go undetected by
coarse-grained monitoring like SNMP.
The two tools specifically used for this purpose are:
What Just Happened (WJH)
"WJH provides real-time packet drop visibility and classifies drops by reason (e.g., congestion, ACLs,
etc.), enabling microburst detection."
ASIC monitoring at millisecond granularity
"Deep telemetry is enabled via the switch ASIC, which provides sub-second counters that capture
microburst patterns otherwise missed by SNMP."
Incorrect Options:
A and C provide low-frequency sampling, which is insufficient for microbursts that last only milliseconds.
Reference: NVIDIA NetQ & Cumulus Linux Documentation – What Just Happened (WJH)
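To illustrate the kind of analysis WJH enables, the sketch below filters a hypothetical drop-event export. The record format and file path are invented for the example; real WJH output (typically viewed through NetQ) differs by platform.

```shell
# Hypothetical WJH-style export: one drop event per line as
# "timestamp_ms port drop_reason". Format and path are invented here.
cat > /tmp/wjh_events.txt <<'EOF'
1000 swp1 tail-drop-congestion
1003 swp1 tail-drop-congestion
1005 swp2 acl-deny
1007 swp1 tail-drop-congestion
EOF

# Several congestion drops on one port within a few milliseconds is the
# classic microburst signature that coarse SNMP polling would miss.
congestion_drops=$(grep -c 'congestion' /tmp/wjh_events.txt)
echo "congestion drop events: ${congestion_drops}"   # 3
```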
Question # 4
[InfiniBand Optimization] Which of the following NCCL environment variables enable SHARP aggregation with NCCL when using the NCCL-SHARP plugin? Pick the 2 correct responses below.
A. NCCL_COLLNET_ENABLE=1
B. NCCL_ALGO=CollNet
C. NCCLSPECTRUM_ENABLE=1
D. NCCL_SHARP_AUTOINIT
Answer: A, D
Explanation:
To enable SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) aggregation using the
NCCL-SHARP plugin, the following two environment variables are required:
NCCL_COLLNET_ENABLE=1
Enables NCCL's support for CollNet (Collective Network) operations, including SHARP.
NCCL_SHARP_AUTOINIT=1
Automatically initializes the SHARP plugin when available, activating SHARP-based collectives.
From the NVIDIA NCCL User Guide – SHARP Plugin Section:
"NCCL_COLLNET_ENABLE must be set to enable collective network acceleration features."
"NCCL_SHARP_AUTOINIT enables automatic SHARP plugin integration at NCCL runtime."
Incorrect Options:
B. NCCL_ALGO=CollNet – This variable controls the algorithm used for collectives but does not enable SHARP.
C. NCCLSPECTRUM_ENABLE – This is not a documented NCCL variable.
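A minimal environment sketch tying the two variables together; the `mpirun` line is illustrative and assumes an MPI launcher that forwards environment variables.

```shell
# Sketch: the two variables named above, exported for a training job.
export NCCL_COLLNET_ENABLE=1   # enable CollNet (collective network) support
export NCCL_SHARP_AUTOINIT=1   # auto-initialize the SHARP plugin when present

# Illustrative launch; assumes an MPI launcher that forwards the environment:
#   mpirun -np 16 -x NCCL_COLLNET_ENABLE -x NCCL_SHARP_AUTOINIT ./train
echo "CollNet=${NCCL_COLLNET_ENABLE} SHARP autoinit=${NCCL_SHARP_AUTOINIT}"
```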
Question # 5
[Spectrum-X Optimization] Which tool would you use to gather telemetry data in a Spectrum-X network?
A. NVIEW
B. UFM
C. NetQ
D. BCM
Answer: C
Explanation:
The NVIDIA Spectrum-X networking platform is an Ethernet-based solution optimized for AI
workloads, combining Spectrum-4 switches, BlueField-3 SuperNICs, and advanced software to
deliver high performance and low latency. Gathering telemetry data is critical for optimizing
Spectrum-X networks, as it provides visibility into network performance, congestion, and potential
issues. The question asks for the tool used to collect telemetry data in a Spectrum-X network.
According to NVIDIA's official documentation, NVIDIA NetQ is the primary tool for gathering
telemetry data in Ethernet-based networks, including those running on Spectrum-X platforms with
Cumulus Linux or SONiC. NetQ is a network operations toolset that provides real-time monitoring,
telemetry collection, and analytics for network health, enabling administrators to optimize
performance, troubleshoot issues, and validate configurations. It collects detailed telemetry data
such as link status, packet drops, latency, and congestion metrics, which are essential for Spectrum-X
optimization.
Exact Extract from NVIDIA Documentation:
"NVIDIA NetQ is a highly scalable network operations tool that provides telemetry-based monitoring
and analytics for Ethernet networks, including NVIDIA Spectrum-X platforms. NetQ collects real-time
telemetry data from switches and hosts, offering insights into network performance, congestion, and
connectivity. It supports Cumulus Linux and SONiC environments, making it ideal for optimizing
Spectrum-X networks by providing visibility into key metrics like latency, throughput, and packet
loss."
– NVIDIA NetQ User Guide
This extract confirms that option C, NetQ, is the correct tool for gathering telemetry data in a
Spectrum-X network. NetQs integration with Spectrum-X switches and its ability to collect and
analyze telemetry data make it the go-to solution for network optimization tasks.
Question # 6
[Spectrum-X Configuration] You are troubleshooting a Spectrum-X network and need to validate the fabric configuration. Which feature of Spectrum-X allows for automated fabric validation?
A. NVIDIA NetQ
B. RoCE Adaptive Routing
C. NVIDIA DOCA
D. RoCE Performance Isolation
Answer: A
Explanation:
NVIDIA NetQ is a network operations tool that provides real-time visibility and automated validation
of the network fabric. It helps in identifying misconfigurations, monitoring network health, and
ensuring that the fabric meets the required specifications for AI workloads.
Question # 7
[InfiniBand Troubleshooting] You are troubleshooting InfiniBand connectivity issues in a cluster managed by the NVIDIA Network Operator. You need to verify the status of the InfiniBand interfaces. Which command should you use to check the state and link layer of InfiniBand interfaces on a node?
A. rdma show devices
B. ibstat -d mlx5_X
C. ifconfig ib0
D. ip link show dev ib0
Answer: B
Explanation:
To check the status and link layer of InfiniBand interfaces, the ibstat command is used. For example:
ibstat -d mlx5_0
This command provides detailed information about the InfiniBand device, including its state (e.g.,
Active), physical state (e.g., LinkUp), and link layer (e.g., InfiniBand).
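When checking many nodes, the relevant fields can be extracted from `ibstat` output with standard text tools. The sample output below is a stand-in for a healthy `mlx5_0` port; exact field layout can vary by driver version.

```shell
# Stand-in for `ibstat -d mlx5_0` output on a healthy port; the real
# command must be run on the node, and field layout may vary by driver.
ibstat_out='State: Active
Physical state: LinkUp
Link layer: InfiniBand'

# Extract the port state; anything other than "Active" needs investigation.
state=$(printf '%s\n' "$ibstat_out" | awk -F': ' '/^State:/ {print $2}')
link_layer=$(printf '%s\n' "$ibstat_out" | awk -F': ' '/^Link layer:/ {print $2}')
echo "state=${state} link_layer=${link_layer}"
```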
Question # 8
[InfiniBand Configuration] In order to configure RoCE on a Cumulus switch, which command should be used?
A. nv set qos roce enable on
B. nv set roce qos enable on
C. nv roce qos enable on
D. nv qos roce enable on
Answer: A
Explanation:
To enable RDMA over Converged Ethernet (RoCE) on a Cumulus switch, the correct command is:
nv set qos roce enable on
This command configures the Quality of Service (QoS) settings to support RoCE, ensuring that the
necessary parameters for lossless Ethernet are applied.
Reference: NVIDIA Cumulus Linux Documentation – RDMA over Converged Ethernet (RoCE)
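In an NVUE session the set command is typically paired with an apply step. A sketch, assuming a recent Cumulus Linux release (the `mode lossless` and `nv show` lines are optional and should be verified against your version):

```shell
# On the switch (NVUE CLI): enable QoS settings for lossless RoCE.
nv set qos roce enable on
# Optional: pick the RoCE mode explicitly (verify against your release).
nv set qos roce mode lossless
# Commit the pending change, then confirm the resulting buffer/PFC state.
nv config apply
nv show qos roce
```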
Question # 9
[Spectrum-X Configuration] You are automating the deployment of a Spectrum-X network using Ansible. You need to ensure that the playbooks can handle different switch models and configurations efficiently. Which feature of the NVIDIA NVUE Collection helps simplify the automation by providing pre-built roles for common network configurations?
A. Collection libraries
B. Collection modules
C. Collection roles
D. Collection plugins
Answer: C
Explanation:
The NVIDIA NVUE Collection for Ansible includes pre-built roles designed to streamline automation
tasks across various switch models and configurations. These roles encapsulate common network
configurations, allowing for efficient and consistent deployment.
By utilizing these roles, network administrators can:
Apply standardized configurations across different devices.
Reduce the complexity of playbooks by reusing modular components.
Ensure consistency and compliance with organizational policies.
This approach aligns with Ansible best practices, promoting maintainability and scalability in network automation.
Question # 10
[AI Network Architecture] What are the prerequisites for performing Flow Analysis with NetQ?
A. Cumulus 4.x and later / Spectrum-2 and later / LCM enabled
B. Cumulus 5.x and later / Spectrum-3 and later / On-premises deployment
C. Cumulus 5.x and later / Spectrum-2 and later / On-premises deployment
D. Cumulus 5.x and later / Spectrum-2 and later / LCM enabled
Answer: D
Explanation:
To perform Flow Analysis with NetQ, the following prerequisites must be met:
Cumulus Linux Version: NetQ Flow Analysis requires Cumulus Linux 5.x or later.
Switch Hardware: The feature is supported on Spectrum-2 and later switch models.
Lifecycle Management (LCM): LCM must be enabled to utilize Flow Analysis capabilities.
These requirements ensure compatibility and proper functioning of the Flow Analysis feature within the NetQ environment.
Question # 11
[InfiniBand Optimization] You are optimizing a multi-node AI training cluster using InfiniBand networking and NVIDIA GPUs. You need to implement efficient collective communication operations across the nodes. Which feature of NVIDIA Collective Communications Library (NCCL) allows for optimized performance in multi-subnet InfiniBand environments?
A. Lazy connection establishment
B. GPU Direct RDMA
C. Static plugin linking
D. Support for IB Router
Answer: D
Explanation:
In multi-subnet InfiniBand environments, AI training clusters are segmented across network zones
(or subnets). Direct GPU-to-GPU communication (especially for collective ops like AllReduce,
Broadcast, etc.) requires inter-subnet reachability. NCCL supports this via the InfiniBand Router (IB
Router) feature.
From the NCCL User Guide – Environment Variables Section:
"NCCL_IB_USE_IB_ROUTER: Enables NCCL support for IB routers which are used in multi-subnet
InfiniBand fabrics. When enabled, NCCL can traverse IB subnets using a properly configured IB
router."
This is critical because without IB Router support:
NCCL would be restricted to intra-subnet GPU collectives.
Multi-node training across subnets would fail or fall back to slower TCP fallback mechanisms.
Technical Explanation:
IB Routers use subnet managers (like OpenSM with routing tables) to bridge communication across
different InfiniBand partitions.
NCCL queries the subnet topology, discovers routing paths, and uses RDMA CM (Connection
Manager) to establish GPU transport over routers.
This capability is especially important in data center-scale AI clusters spanning multiple racks or
zones, connected via IB routers like Mellanox SB7800 or QM8700 series.
When NCCL_IB_USE_IB_ROUTER=1 is set:
NCCL includes router-aware route resolution in its path selection logic.
Enables efficient zero-copy communication across GPUs in different IB domains, maintaining low
latency.
Other Options Explained:
A. Lazy connection establishment – controls when peer connections are made but does not enable cross-subnet reach.
B. GPU Direct RDMA – enables direct data movement between GPU memory and the NIC, but does not provide routing across subnets.
C. Static plugin linking – affects how NCCL links plugins, not related to IB topology.
Exact Extract Reference:
Source: NVIDIA NCCL User Guide – Environment Variables Section
Extract: "NCCL_IB_USE_IB_ROUTER: Enables NCCL support for IB routers, required for multi-subnet
InfiniBand configurations. Ensures proper routing of collectives over fabric-wide topologies."
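A minimal sketch of enabling the variable quoted above in a job environment; whether routed paths are actually used still depends on the subnet manager and IB router configuration.

```shell
# Sketch: let NCCL traverse IB routers in a multi-subnet fabric.
# Variable name is quoted from the NCCL User Guide extract above; routed
# paths also require a properly configured subnet manager and IB router.
export NCCL_IB_USE_IB_ROUTER=1
# Optional: surface NCCL's transport/routing decisions in the job logs.
export NCCL_DEBUG=INFO
echo "IB router support: ${NCCL_IB_USE_IB_ROUTER}"
```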
Question # 12
[Spectrum-X Troubleshooting] You're troubleshooting a Spectrum-X network and notice that the System Status LED on a switch is blinking for more than 5 minutes. What is the most likely cause of this issue?
A. The power supply unit is failing
B. The switch is overheating
C. The Onyx software did not boot properly
Answer: C
Explanation:
According to the NVIDIA Spectrum-X Switch Operating System (SX_OS) Troubleshooting Guide, the
System Status LED behavior is a critical indicator of the switch's internal operational state.
From the document:
"The System Status LED will blink green during system initialization. If the LED continues blinking for more than 5 minutes, it indicates that the Onyx OS has failed to load properly. The system may be stuck in the boot process, or the file system may be corrupted."
This blinking LED beyond normal initialization time indicates that the system has either encountered
a failure during software boot or is unable to transition from bootloader to the OS runtime
environment (i.e., Onyx).
Key causes include:
Corrupted or missing system files.
Failed firmware or OS upgrade attempts.
Boot device (e.g., eMMC or SSD) issues or corrupted partitions.
Technically, during power-on:
The switch performs POST (Power-On Self Test).
Then the Onyx OS attempts to load from the boot partition.
If the Onyx OS kernel or root filesystem is invalid, the system halts boot, and the LED remains in a
blinking state, as no successful OS load confirmation is triggered.
Remediation Steps (as per NVIDIA guide):
Access the switch through console and monitor boot logs.
Use ONIE recovery or re-flash a stable Onyx OS version.
Check system storage integrity using built-in diagnostics.
Exact Extract Reference:
Source: NVIDIA SX_OS 3.9.3000 Documentation
Topic: Troubleshooting System Status LED
Extract: "If the LED blinks for more than 5 minutes and the switch is not accessible via CLI, the Onyx
software failed to load properly and recovery procedures must be initiated."
Question # 13
[InfiniBand Optimization] You are optimizing an InfiniBand network for AI workloads that require low-latency and high-throughput data transfers. Which feature of InfiniBand networks minimizes CPU overhead during data transfers?
A. TCP/IP Offloading
B. SHARP
C. Direct Memory Access (DMA)
D. PKey
Answer: C
Explanation:
Direct Memory Access (DMA) in InfiniBand networks allows data to be transferred directly between
the memory of two devices without involving the CPU. This capability significantly reduces CPU
overhead, lowers latency, and increases throughput, making it ideal for AI workloads that demand
efficient data transfers.
Question # 14
[Spectrum-X Configuration] What is the purpose of configuring NVUE to ignore Linux files?
A. Enable pushing of configuration through Ansible template files.
B. Enable the persistent manipulation of specific settings using both NVUE and flat-file approaches.
C. Reduce NVUE memory utilization to optimize performance.
D. Improve Cumulus security by reducing the attack surface.
Answer: B
Explanation:
Configuring NVUE to ignore certain underlying Linux files allows administrators to manage specific
settings manually or through automation tools like Ansible without NVUE overwriting these
configurations. This approach enables the persistent manipulation of settings using both NVUE and
flat-file methods, providing flexibility in network management.
Question # 15
[Spectrum-X Optimization] How is congestion evaluated in an NVIDIA Spectrum-X system?
A. By assessing the physical distance between network devices.
B. By monitoring the CPU and power usage of network devices.
C. By measuring the number of connected devices in the network.
D. By analyzing the egress queue loads, ensuring all ports are well-balanced.
Answer: D
Explanation:
In NVIDIA Spectrum-X, congestion is evaluated based on egress queue loads. Spectrum-4 switches
assess the load on each egress queue and select the port with the minimal load for packet
transmission. This approach ensures that all ports are well-balanced, optimizing network
performance and minimizing congestion.
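The selection rule can be illustrated with a toy calculation over invented per-port queue loads; the switch performs this evaluation in hardware for every packet, not with text tools.

```shell
# Invented per-port egress queue loads ("port queued-cells"); the switch
# evaluates real queue depths in hardware for every packet.
cat > /tmp/queue_loads.txt <<'EOF'
swp1 72
swp2 15
swp3 48
EOF

# Choose the port whose egress queue is least loaded, as described above.
best_port=$(sort -k2 -n /tmp/queue_loads.txt | head -n 1 | awk '{print $1}')
echo "least-loaded egress port: ${best_port}"   # swp2
```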
Question # 16
[Spectrum-X Optimization] Your organization is planning to utilize Ethernet for an upcoming AI project. Spectrum-X is the selected platform for this deployment, and Adaptive Routing is a key feature. What are the requirements included in the Spectrum-X RA for adaptive routing?
A. SN4700, BlueField-3 SuperNIC, DDR, RoCE traffic
B. SN5600, BlueField-3 SuperNIC, DDR, RoCE traffic
C. SN5600, BlueField-3 SuperNIC, DDR, TCP traffic
Answer: B
Explanation:
The NVIDIA Spectrum-X Reference Architecture (RA) 1.0.1 is designed for Ethernet AI cloud
deployments and includes the SN5600 Spectrum-4 switches and BlueField-3 SuperNICs. This
architecture supports adaptive routing and DOCA programmable congestion control (PCC) for
lossless RoCE traffic, optimizing performance for AI workloads.
The SN5600 switch offers 64 ports of 800GbE in a dense 2U form factor, providing high throughput
and low latency essential for AI applications.
Question # 17
[InfiniBand Troubleshooting] Which of the following scenarios would the Network Traffic Map in UFM be least useful for troubleshooting?
A. When investigating reports of network congestion or latency problems.
B. After making changes to network configuration.
C. When troubleshooting a single node's hardware failure.
D. When optimizing job placement and workload distribution across the cluster.
Answer: C
Explanation:
The Network Traffic Map in NVIDIA's Unified Fabric Manager (UFM) provides a visual representation
of the network topology and traffic flows, which is particularly useful for identifying congestion
points, verifying network configurations, and optimizing workload distribution.
However, when troubleshooting a single node's hardware failure, the Network Traffic Map is less
effective, as it focuses on network-level issues rather than individual hardware components.
Question # 18
[Spectrum-X Optimization] What is the purpose of WJH (What Just Happened)?
A. Provide contextual information regarding dropped packets in order to aid debugging.
B. Send notifications of failed login attempts to a pre-defined Slack channel.
C. Identify potential cyberattacks or unusual traffic patterns across the cluster.
D. Collate operating system logs and diagnose system crashes.
Answer: A
Explanation:
NVIDIA's What Just Happened (WJH) is a feature that provides real-time visibility into network
problems by analyzing all packets passing through the switch and alerting on performance issues
caused by packet drops, congestion, high latency, or misconfigurations.
WJH retains the last packets that were dropped from the switch with complete packet headers and
the actual drop reason. This enhances the ability to debug network problems, identify affected flows,
and decrease time-to-repair.
Question # 19
[Spectrum-X Configuration] You are using NVIDIA Air to simulate a Spectrum-X network for AI workloads. You want to ensure that your network configurations are optimal before deployment. Which NVIDIA tool can be integrated with Air to validate network configurations in the digital twin environment?
A. Spectrum-X Manager
B. NetQ
C. DOCA
D. GPU Cloud
Answer: B
Explanation:
NVIDIA NetQ is a highly scalable network operations toolset that provides visibility, troubleshooting,
and validation of networks in real-time. It delivers actionable insights and operational intelligence
about the health of data center networks – from the container or host all the way to the switch and port – enabling a NetDevOps approach.
NetQ can be used as the functional test platform for the network CI/CD in conjunction with NVIDIA
Air. Customers benefit from testing the new configuration with NetQ in the NVIDIA Air environment
("digital twin") and fixing errors before deploying to production.
Question # 20
[Spectrum-X Optimization] You have recently implemented NVIDIA Spectrum-X in your data center to optimize AI workloads. You need to verify the performance improvements and create a baseline for future comparisons. Which tool would be most appropriate for creating performance baseline results in this Spectrum-X environment?
A. NetQ
B. CloudAI Benchmark
C. MLNX-OS
D. Ansible
Answer: B
Explanation:
The CloudAI Benchmark is designed to evaluate and establish performance baselines in AI-optimized
networking environments like NVIDIA Spectrum-X. It assesses various performance metrics,
including throughput and latency, ensuring that the network meets the demands of AI workloads.
This benchmarking is essential for validating the benefits of Spectrum-X and for ongoing performance monitoring.
Question # 21
[InfiniBand Troubleshooting] A user has requested confirmation that the InfiniBand network is performing optimally and is not limiting the speed of a training run. To verify this, you would like to measure the RDMA throughput rate between two endpoints. Which tool should be used?
A. ibdiagnet
B. ib_write_bw
C. ping
D. iperf
Answer: B
Explanation:
The ib_write_bw tool is part of the Perftest package and is specifically designed to measure the
bandwidth of RDMA write operations between two InfiniBand endpoints. It provides accurate
assessments of RDMA throughput, which is crucial for verifying the performance of InfiniBand
networks in high-performance computing and AI training environments.
Reference: ib_write_bw - NVIDIA Enterprise Support Portal
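A typical two-endpoint run looks like the sketch below; `-d` selects the RDMA device and `--report_gbits` reports bandwidth in Gb/s. The device name and server host are placeholders for your environment.

```shell
# Server side: start the bandwidth test and wait for a client.
ib_write_bw -d mlx5_0 --report_gbits

# Client side: connect to the server by hostname/IP (placeholder below).
ib_write_bw -d mlx5_0 --report_gbits server-host

# Compare the reported average bandwidth against the link line rate
# (e.g. ~400 Gb/s for NDR) to rule the fabric in or out as the bottleneck.
```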
Question # 22
[AI Network Architecture] Which of the following statements are true about AI workloads and adaptive routing? Pick the 2 correct responses below.
A. AI workloads are made of a small number of volumetric flows called elephant flows.
B. AI workloads have very high entropy that helps spread traffic evenly without congestion.
C. Flow-based load balancing mechanisms increase congestion risk.
D. ECMP-based load balancing works best for AI workloads.
Answer: A, C
Explanation:
AI workloads, particularly in large-scale training scenarios, are characterized by a small number of
high-bandwidth, long-lived flows known as "elephant flows." These flows can dominate network
traffic and are prone to causing congestion if not managed effectively.
Traditional flow-based load balancing mechanisms, such as Equal-Cost Multipath (ECMP), distribute
traffic based on flow hashes. However, in AI workloads with low entropy (i.e., limited variability in
flow characteristics), ECMP can lead to uneven traffic distribution and congestion on certain paths.
Adaptive routing techniques, which dynamically adjust paths based on real-time network conditions,
are more effective in managing AI traffic patterns and mitigating congestion risks.
Reference: Powering Next-Generation AI Networking with NVIDIA SuperNICs
Question # 23
[InfiniBand Troubleshooting] You are tasked with troubleshooting a link flapping issue in an InfiniBand AI fabric. You would like to start troubleshooting from the physical layer. What is the right NVIDIA tool to be used for this task?
A. nvidia-smi utility
B. mlxlink utility
C. tcpdump tool
Answer: B
Explanation:
The mlxlink tool is used to check and debug link status and issues related to them. The tool can be
used on different links and cables (passive, active, transceiver, and backplane). It is intended for
advanced users with appropriate technical background.
Reference: mlxlink Utility - NVIDIA Docs
Question # 24
[Spectrum-X Configuration] What is the total throughput of the SN5600 Spectrum-X switch?
A. 12.8 petabits per second
B. 25.6 terabits per second
C. 102.4 gigabits per second
D. 51.2 terabits per second
Answer: D
Explanation:
The SN5600 smart-leaf/spine/super-spine switch offers 64 ports of 800GbE in a dense 2U form factor.
The SN5600 offers diverse connectivity in combinations of 1 to 800GbE and boasts an industry-leading total throughput of 51.2Tb/s.
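The 51.2 Tb/s figure follows directly from the port configuration, which can be checked with quick arithmetic:

```shell
# 64 ports x 800 Gb/s per port = 51,200 Gb/s = 51.2 Tb/s total throughput.
ports=64
gbps_per_port=800
total_gbps=$((ports * gbps_per_port))
echo "total: ${total_gbps} Gb/s"   # prints: total: 51200 Gb/s
```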
Question # 25
[Spectrum-X Configuration] When upgrading Cumulus Linux to a new version, which configuration files should be migrated from the old installation? Pick the 2 correct responses below.
A. All files in /etc/cumulus/acl
B. All files in /etc/network
C. All files in /etc
D. All files in /etc/mix
Answer: A, B
Explanation:
Before upgrading Cumulus Linux, it's essential to back up configuration files to a different server. The
/etc directory is the primary location for all configuration data in Cumulus Linux. Specifically, the
following files and directories should be backed up:
/etc/frr/ - Routing application (responsible for BGP and OSPF)
/etc/hostname - Configuration file for the hostname of the switch
/etc/network/ - Network configuration files, most notably /etc/network/interfaces and
/etc/network/interfaces.d/
/etc/cumulus/acl - Access control list configurations
Cumulus Linux is a network operating system used on NVIDIA Spectrum switches, including those in
the Spectrum-X platform, to provide a Linux-based environment for Ethernet networking in AI and
HPC data centers. When upgrading Cumulus Linux to a new version, its critical to migrate specific
configuration files to preserve network settings and ensure continuity. The question asks for the two
configuration file locations that should be migrated from the old installation during an upgrade.
According to NVIDIA's official Cumulus Linux documentation, the key directories containing
configuration files that should be migrated during an upgrade are /etc/cumulus/acl (for access
control list configurations) and /etc/network (for network interface configurations). These directories
store critical network settings that define the switch's behavior, such as ACL rules and interface
settings, which must be preserved to maintain network functionality after the upgrade.
Exact Extract from NVIDIA Documentation:
"When upgrading Cumulus Linux, you must back up and migrate specific configuration files to ensure
continuity of network settings. The following directories should be included in the backup:
/etc/cumulus/acl: Contains access control list (ACL) configuration files that define packet filtering and
security policies.
/etc/network: Contains network interface configuration files, such as interfaces and ifupdown2
settings, which define the network interfaces and their properties.
Back up these directories before upgrading and restore them after the new version is installed to
maintain consistent network behavior."
– NVIDIA Cumulus Linux Upgrade Guide
This extract confirms that options A and B are the correct answers, as /etc/cumulus/acl and
/etc/network contain essential configuration files that must be migrated during a Cumulus Linux
upgrade. These files ensure that ACL policies and network interface settings are preserved, which are
critical for Spectrum-X configurations in AI networking environments.
Reference: Upgrading Cumulus Linux - NVIDIA Docs
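A backup sketch based on the directories listed above; the archive path and `backup-host` are placeholders, and `/etc/frr` plus `/etc/hostname` are included per the checklist earlier in this section.

```shell
# Before the upgrade, archive the configuration directories listed above.
tar -czf /tmp/cl-config-backup.tar.gz \
    /etc/cumulus/acl /etc/network /etc/frr /etc/hostname
# Copy the archive off the switch ("backup-host" is a placeholder):
scp /tmp/cl-config-backup.tar.gz admin@backup-host:/backups/
# After installing the new image, restore the files and re-apply config.
```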
Feedback That Matters: Reviews of Our NVIDIA NCP-AIN Dumps