GPU Thermal Density and Coolant Flow Specs: B200, GB200, and MI300
What you will learn
How B200, GB200 NVL72, and MI300 thermal density translates into specific coolant flow rates, ΔT budgets, and inhibitor stress, with specification tables and the chemistry implications for AI cooling loops.
When NVIDIA shipped the V100 in 2017, a 300 W TDP per accelerator was a landmark figure—one that challenged thermal designers but remained solvable with high-static-pressure fans, dense fin arrays, and well-managed airflow corridors. Seven years and four major architecture generations later, the B200 has crossed the 1,000 W threshold, and a single GB200 NVL72 rack draws 120–130 kW of electrical power while demanding more than 700 liters per minute of liquid coolant to maintain junction temperatures within specification. This article provides the engineering-level depth that data center facility teams need to move from GPU vendor cold-plate specifications to correct fluid chemistry, flow infrastructure sizing, and coolant procurement. It covers the NVIDIA B200, the GB200 NVL72 full-rack system, and AMD MI300X/MI300A across the seven dimensions that determine whether a liquid-cooling deployment runs cleanly for five years or begins corroding cold-plate microchannels within eighteen months.
The Thermal Density Step-Change: V100 to B200

GPU thermal design power has roughly tripled in seven years—but TDP alone understates what happened at the architecture level. The more consequential metric is heat flux at the chip interface: how many watts must be extracted from each square centimeter of die surface. As node geometries compressed and transistor counts multiplied, heat flux densities reached levels where air cooling—whose convective heat transfer coefficients run one to two orders of magnitude below single-phase liquid—can no longer carry thermal energy away fast enough to maintain safe operating temperatures at the die.
| GPU Generation | Release Year | TDP (W) | Form Factor | Cooling Requirement at Scale |
|---|---|---|---|---|
| V100 SXM2 | 2017 | 300 | SXM2 | Air or liquid; air fully viable |
| A100 SXM4 | 2020 | 400 | SXM4 | Air at ≤40 kW/rack; DLC preferred for density |
| H100 SXM5 | 2022 | 700 | SXM5 | DLC required above 8 GPUs per rack |
| B200 SXM6 | 2024 | 1,000 | SXM6 | DLC required; air not viable at scale |
| GB200 NVL72 (rack) | 2024–2025 | 120,000–130,000 | Full rack | Direct liquid cooling mandatory |
The V100-to-H100 jump—300 W to 700 W, a 133% TDP increase—pushed the air-cooling envelope hard. High-performance air-cooled H100 servers running eight GPUs in a 4U chassis at 5,600 W of GPU power alone required rack PDU upgrades, hot-aisle containment, and facility chilled-air supply temperatures in the 18–22°C range just to keep throttling events below workload SLA thresholds. Many hyperscalers treating H100 as an air-cooled deployment discovered mid-deployment that sustained transformer training runs at near-TDP draws produced throttling that degraded job completion times by 15–25%. The community converged on liquid cooling for H100 as the production-grade choice. B200 eliminates the debate entirely.
At 1,000 W per GPU and 120–130 kW per GB200 NVL72 rack, the math on airside thermal extraction becomes physically inadmissible. Removing 120 kW from a single rack enclosure via forced air at a 20°C supply-to-exhaust temperature rise requires roughly 10,500 CFM of airflow through that chassis volume, several times what even the most aggressive fan walls move through a production rack. That airflow produces unacceptable noise, structural loading on the chassis, and CRAC unit sizing that exceeds the total cost of a liquid cooling CDU and manifold system. The GB200 NVL72 reference design has no air-cooling variant. It ships with a liquid-cooling manifold pre-plumbed into the chassis.
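That airflow figure comes straight from the sensible-heat balance Q = ṁ·cp·ΔT. A minimal sketch, assuming standard air properties (density ≈ 1.2 kg/m³, cp ≈ 1005 J/kg·K; assumed textbook values, not vendor data):

```python
# Airflow needed to remove a heat load at a given air temperature rise.
# Assumed air properties at ~25 C: rho = 1.2 kg/m^3, cp = 1005 J/(kg*K).
RHO_AIR = 1.2         # kg/m^3
CP_AIR = 1005.0       # J/(kg*K)
M3S_TO_CFM = 2118.88  # cubic meters/second -> cubic feet/minute

def airflow_cfm(heat_w: float, delta_t_c: float) -> float:
    """CFM of air required to carry heat_w watts at a delta_t_c rise."""
    mass_flow = heat_w / (CP_AIR * delta_t_c)   # kg/s
    return (mass_flow / RHO_AIR) * M3S_TO_CFM

print(f"{airflow_cfm(120_000, 20):,.0f} CFM")   # ~10,500 CFM for 120 kW at 20 C rise
```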
The facility infrastructure implication scales with the step-change. Traditional raised-floor data centers designed for 10–20 kW per rack face a 6× to 13× power density multiplier when deploying a single GB200 NVL72 row. Floor load ratings, power distribution architecture, UPS sizing, and the chilled-water plant must be re-engineered from scratch for this density class. New greenfield AI data center designs being commissioned in 2025–2026 are specifying 250–400 kW per cabinet row as baseline assumptions, with facility chilled-water capacity derived from liquid-cooling manifold flow requirements rather than from CRAC unit coverage area. The design methodology has inverted: compute thermal output drives facility infrastructure, not the reverse.
The inflection point is not merely quantitative. Air-cooled facilities can be incrementally upgraded to handle higher TDP by increasing airflow, tightening containment, and reducing supply air temperature. Liquid-cooled facilities require a fundamentally different commissioning discipline: fluid chemistry, leak detection, manifold pressure management, and coolant maintenance scheduling that have no equivalent in traditional air-cooled operations. Understanding the GPU-level specifications that define these requirements is the starting point for every liquid-cooled AI deployment.
NVIDIA B200 and GB200 NVL72 Cold-Plate Specifications

NVIDIA's thermal design documentation for the B200 SXM6 module establishes the cold-plate interface requirements that facilities teams must meet to maintain warranty coverage and achieve rated TDP handling. At the component level, the B200 cold plate is a copper microchannel heat exchanger brazed to the SXM6 substrate. At 1,000 W TDP with an active die area of approximately 1.5–2 cm², the resulting heat flux at the cold-plate interface reaches 500–600 W/cm²—among the highest of any production compute package commercially deployed at volume scale.
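The heat-flux figure follows directly from dividing TDP by the cited die area; taking the 1.5–2 cm² range at face value:

$$ q'' = \frac{P}{A_{\mathrm{die}}} = \frac{1000\ \mathrm{W}}{1.5\text{–}2\ \mathrm{cm^2}} \approx 500\text{–}670\ \mathrm{W/cm^2} $$

which brackets the 500–600 W/cm² range quoted in the table below.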
| Parameter | B200 Per Cold Plate | GB200 NVL72 Rack Level | Unit |
|---|---|---|---|
| TDP | 1,000 | 120,000–130,000 | W |
| Heat flux at cold-plate interface | 500–600 | — | W/cm² |
| Coolant flow rate | ~20 | >700 | LPM |
| Coolant supply temperature range | 18–32 | 18–32 | °C |
| Cold-plate ΔT (inlet to outlet) | <10 | <10 per cold plate | °C |
| Electrical conductivity (max) | 25 | 25 | µS/cm |
| Particulate size maximum | 50 | 50 | µm |
| Coolant pH range | 7.5–9.0 | 7.5–9.0 | — |
| GPU count per chassis | 1 | 72 | — |
| GB200 superchips (1 Grace CPU + 2 GPUs each) | — | 36 | — |
The 25 µS/cm conductivity ceiling is non-negotiable from a materials standpoint. The GB200 cold-plate assembly uses copper microchannels in direct contact with aluminum structural elements and stainless manifold components. Above 25 µS/cm, galvanic currents between dissimilar metals accelerate corrosion rates by an order of magnitude. Copper pitting produces particulate that re-enters the coolant loop and compounds fouling risk at the very microchannel geometries where thermal resistance is most sensitive to deposit buildup. Commissioning with tap water or improperly diluted glycol—where conductivity commonly reads 150–400 µS/cm—can produce measurable cold-plate corrosion within 90 days of first flow.
The sub-50 µm particulate specification exists to protect the microchannel geometry itself. GB200 cold-plate microchannels are machined to approximately 200–300 µm channel width. A 50 µm particle represents 17–25% of channel width—large enough to lodge at channel bends or bifurcations, create local flow restriction, and generate hotspots directly above the highest-power die regions. In a 1,000 W GPU where 10°C ΔT is the full thermal budget, a partial blockage degrading local heat transfer coefficient by 20% can push junction temperatures above throttle thresholds within the first sustained training run.
At the GB200 NVL72 rack level, the 72-GPU chassis manifold flow requirement exceeds 700 LPM. This demands CDU (Coolant Distribution Unit) sizing of at least 750–800 LPM at design pressure with 15–20% capacity headroom for flow balancing across manifold branch circuits. CDU pump curves must account for the full manifold pressure drop, which across a 72-cold-plate parallel circuit with supply and return headers and quick-disconnect fittings typically ranges from 1.5 to 2.5 bar depending on manifold design and CDU-to-rack run length. The 36 GB200 superchips—each pairing a Grace CPU with two GPUs, and each carrying its own thermal footprint—are plumbed in parallel branches from the rack manifold, requiring precise flow balancing to maintain the <10°C ΔT specification uniformly across all 72 cold plates simultaneously.
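A useful commissioning sanity check is the bulk temperature rise implied by these rack-level numbers, from the same energy balance Q = ṁ·cp·ΔT. A minimal sketch, assuming 25% propylene glycol fluid properties (density ≈ 1,020 kg/m³, cp ≈ 3,900 J/kg·K; assumed values, not vendor data):

```python
# Bulk coolant temperature rise for a given heat load and volumetric flow.
# Assumed 25% PG/water properties at ~30 C: rho = 1020 kg/m^3, cp = 3900 J/(kg*K).
RHO_COOLANT = 1020.0  # kg/m^3
CP_COOLANT = 3900.0   # J/(kg*K)

def bulk_delta_t(heat_w: float, flow_lpm: float) -> float:
    """Bulk inlet-to-outlet temperature rise in C."""
    mass_flow = (flow_lpm / 60.0 / 1000.0) * RHO_COOLANT  # LPM -> kg/s
    return heat_w / (mass_flow * CP_COOLANT)

print(f"{bulk_delta_t(125_000, 700):.1f} C")  # ~2.7 C rack-level bulk rise
```

At 700 LPM the rack-level bulk rise lands well inside the <10°C ceiling; the remaining margin is what flow balancing across the 72 parallel branches is meant to preserve at each individual cold plate.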
AMD MI300 Cold-Plate Specifications

AMD's MI300 platform—encompassing the MI300X (GPU-only with 8 HBM3 stacks) and MI300A (APU integrating CPU and GPU dies)—entered production at approximately 750 W TDP. This positions MI300 as more thermally tractable than the B200, but significantly more demanding than the H100 generation it is designed to compete with and displace in AI training workloads where FP8 throughput and memory bandwidth favor the alternative architecture. The framing matters for facilities planning: MI300 is not an air-cooled platform at density. It is a liquid-cooled platform with gentler flow demands than B200, and the same chemistry discipline applies.
| Parameter | AMD MI300X | NVIDIA B200 SXM6 | NVIDIA H100 SXM5 (reference) | Unit |
|---|---|---|---|---|
| TDP | 750 | 1,000 | 700 | W |
| Approx. heat flux at cold-plate | 350–420 | 500–600 | 320–400 | W/cm² |
| Per cold-plate flow rate | 12–15 | ~20 | 8–12 | LPM |
| Cold-plate ΔT target | <10 | <10 | <10 | °C |
| Conductivity ceiling | 25 | 25 | 25 | µS/cm |
| Particulate maximum | 50 | 50 | 50 | µm |
| Coolant pH range | 7.5–9.0 | 7.5–9.0 | 7.5–9.0 | — |
The conductivity, particulate, and pH specifications are identical across MI300X, B200, and H100, because the corrosion risk is defined by cold-plate metallurgy—copper microchannels, aluminum structures, stainless manifold components—not by TDP alone. Every platform in this class shares the same material architecture; the chemistry constraints are structural, not workload-driven. This is a critical point for procurement teams managing mixed-GPU fleets: you cannot maintain separate, looser chemistry standards for your MI300 loop just because it runs at lower heat flux than your B200 nodes. The galvanic corrosion potential between copper and aluminum at >25 µS/cm is the same at 350 W/cm² as it is at 550 W/cm².
The MI300A presents a subtly different thermal challenge: its APU die integrates Zen 4 CPU cores alongside the GPU complex. The spatial heat distribution across a larger die area changes the heat flux profile, though aggregate TDP remains at approximately 750 W. In practice, the cold-plate specification for MI300A is identical to MI300X from a fluid chemistry standpoint; the distinction matters for cold-plate mechanical design and die-to-spreader interface thermal resistance, not for coolant selection or maintenance protocol.
For facilities running mixed fleets—B200 nodes alongside MI300 nodes sharing a facility chilled-water plant or CDU—the chemistry specification defaults to the most restrictive platform. A single shared coolant loop serving both GPU types must maintain ≤25 µS/cm conductivity, sub-50 µm particulate, pH 7.5–9.0, and OAT inhibitor chemistry throughout. Testing cadence and recharge decisions are driven by the B200 loop segments, since those operate at higher heat flux with proportionally faster inhibitor depletion. Do not establish separate, independent maintenance schedules for MI300 and B200 nodes on a shared loop.
Why Thermal Density Compresses Chemistry Maintenance Margin
The relationship between heat flux and OAT inhibitor depletion rate is governed by Arrhenius kinetics: for every 10°C increase in reaction temperature, the rate constant approximately doubles. In a liquid-cooled GPU loop, "reaction temperature" is not the bulk fluid temperature measured at the CDU inlet—it is the boundary-layer temperature at the cold-plate microchannel wall, where coolant is in direct thermal contact with the hottest surface in the system.
At 500–600 W/cm² heat flux and 20 LPM flow, the thermal boundary layer at the B200 cold-plate wall surface runs 15–25°C above the bulk fluid temperature in the microchannel. If bulk coolant is supplied at 28°C and exits at 38°C (10°C ΔT), the wall temperature in the highest-flux regions of the cold plate is reaching 53–63°C. OAT carboxylate inhibitors designed and qualified at nominal 85°C glycol service temperatures do not fail catastrophically at this exposure—but the carboxylate oxidation rate at 60°C is approximately 2–3× the rate at 40°C, which is the boundary-layer regime for a well-managed H100 or MI300 loop at 400 W/cm² heat flux. This acceleration factor directly shortens effective inhibitor service life.
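The acceleration factors in the table below follow from the Arrhenius relation. A minimal sketch, using an assumed activation energy of ~40 kJ/mol chosen so the 40°C-to-60°C ratio reproduces the ~2–3× figure cited above (the constant is an illustrative assumption, not a measured property of any specific inhibitor package):

```python
import math

# Arrhenius rate ratio between two wall temperatures:
#   k2/k1 = exp((Ea/R) * (1/T1 - 1/T2))
R_GAS = 8.314   # J/(mol*K)
EA = 40_000.0   # J/mol -- assumed, tuned to the ~2-3x figure in the text

def rate_factor(t1_c: float, t2_c: float) -> float:
    """Oxidation-rate multiplier going from wall temperature t1_c to t2_c."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp((EA / R_GAS) * (1.0 / t1 - 1.0 / t2))

print(f"{rate_factor(40, 60):.1f}x rate at 60 C vs 40 C")  # ~2.5x
print(f"{7 / rate_factor(40, 60):.1f} yr effective life")  # 7-year fluid -> ~2.8 yr
```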
| Platform / Condition | Approx. Cold-Plate Wall Temp (°C) | OAT Oxidation Rate Factor | Nominal OAT Service Life | Effective Life at Density |
|---|---|---|---|---|
| H100 / MI300 (moderate density, 22°C supply) | 45–55 | 1.0× (baseline) | 5–7 years | 4–6 years |
| B200 (single-server deployment, 25°C supply) | 53–60 | 1.5–2.0× | 5–7 years | 3.0–4.5 years |
| GB200 NVL72 full-rack (28°C supply) | 55–65 | 2.0–3.0× | 5–7 years | 2.5–4.0 years |
The consequence for testing cadence is acute. A facility that inherited an annual inhibitor testing SLA from its H100 deployment and carries that protocol into a GB200 NVL72 deployment will discover—at the 12-month mark—that reserve alkalinity is at or near the recharge threshold. At that point, corrosion has been occurring during the prior four to six months at an accelerated rate, because OAT inhibitor efficacy degrades non-linearly as reserve alkalinity falls: the final 20% of reserve alkalinity delivers less than 20% of active corrosion protection. The copper has already been exposed.
Field Note — Andre Taki, Lead Product Specialist · Practice Leader, Cooling Chemistry:
"The reserve alkalinity number is not just a quality metric—it's the early-warning system for cold-plate corrosion. In a GB200 loop running at 500 W/cm² heat flux, I've seen reserve alkalinity drop from specification baseline to near-depletion threshold in under eighteen months. If you're running annual tests, you'll find out the system needs recharge after the damage is already happening. The inhibitor is there to sacrifice itself to protect the copper—when the sacrifice is nearly complete, the copper starts going instead. Build quarterly spot-checks into your SLA, and treat reserve alkalinity at 60% of baseline as a mandatory recharge trigger, not a watch-and-wait condition."
— Andre Taki, Lead Product Specialist · Practice Leader, Cooling Chemistry, Alliance Chemical
Conductivity trending is the secondary early-warning indicator. A loop with functional inhibitor chemistry and proper DI water top-off should maintain conductivity below 15 µS/cm comfortably within the 25 µS/cm ceiling. When conductivity begins trending upward toward 20 µS/cm during normal operation—with no known top-off events or leaks to explain the rise—this typically reflects accumulation of oxidized carboxylate fragments and metal chelation products from early-stage corrosion. Rising conductivity combined with declining reserve alkalinity is a compound warning signal warranting immediate recharge scheduling, not a watch-and-wait response.
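Both signals are cheap to automate. A minimal monitoring sketch using the thresholds from this section (20 µS/cm investigate, 25 µS/cm hard limit), assuming readings are already temperature-compensated to 25°C:

```python
# Conductivity trend watch: flags the compound warning described above.
def classify(readings_us_cm: list[float]) -> str:
    """Classify the latest temperature-compensated conductivity reading."""
    latest = readings_us_cm[-1]
    if latest >= 25.0:
        return "HARD LIMIT: mandatory immediate recharge or fluid replacement"
    # Three consecutive rising readings count as an upward trend.
    rising = len(readings_us_cm) >= 3 and all(
        later > earlier
        for earlier, later in zip(readings_us_cm[-3:], readings_us_cm[-2:])
    )
    if latest >= 20.0 or rising:
        return "ALARM: investigate source and schedule corrective action"
    return "OK: within normal operating band"

print(classify([11.2, 13.5, 16.8, 19.4]))  # rising with no known top-off -> ALARM
```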
| Platform | Supply Temp | Recommended Testing Interval | Recharge Trigger |
|---|---|---|---|
| H100 / A100 | 18–25°C | Annual | Reserve alkalinity <50% of delivery baseline |
| MI300X/A | 25–32°C | Every 9 months | Reserve alkalinity <60% of delivery baseline |
| B200 (single-server) | 25–32°C | Every 6 months | Reserve alkalinity <65% of delivery baseline |
| GB200 NVL72 (full rack) | 25–32°C | Every 6 months (quarterly preferred) | Reserve alkalinity <65% of delivery baseline |
ASHRAE Thermal Classes and Supply Temperature Optimization
ASHRAE's thermal guidelines for data processing environments define several operating condition classes: Class H1 was added for high-density air-cooled equipment, and the W-series classes (W17 through W45) define facility coolant supply temperature bands for liquid-cooled equipment. Early liquid-cooled deployments targeted facility supply temperatures in the 18–27°C range. In practice, as GPU vendors characterized actual thermal performance across a wider supply temperature range and as facility operators pushed for higher chiller plant efficiency, production deployments have converged around 25–32°C—with NVIDIA and AMD both providing platform-specific guidance accepting warmer coolant in exchange for an acknowledged reduction in thermal throttle margin and, in some cases, reduced maximum sustainable TDP.
The energy efficiency case for warmer coolant supply is quantifiable and significant:
| Chilled Water Supply Temp (°C) | Typical Chiller COP | Chiller Power (1 MW Cooling Load) | Annual Chiller Energy Cost (at $0.08/kWh) |
|---|---|---|---|
| 18 | 3.2 | 313 kW | ~$219,000 |
| 22 | 3.7 | 270 kW | ~$189,000 |
| 25 | 4.1 | 244 kW | ~$171,000 |
| 28 | 4.5 | 222 kW | ~$155,000 |
| 32 | 5.1 | 196 kW | ~$137,000 |
For a 100 MW AI data center cooling plant, the difference between 18°C and 28°C supply temperature setpoints represents approximately $6.4M in annual chiller energy cost savings—before accounting for the additional benefit that warmer supply temperatures extend free-cooling (adiabatic or air-side economization) hours per year in moderate climates. In the Pacific Northwest or Northern Europe, free cooling at 28°C supply is achievable for 60–70% of annual hours in a well-designed economizer configuration, effectively eliminating compressor operation for the majority of the year and dramatically reducing facility PUE for those periods.
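The table arithmetic is simply chiller power = load / COP, costed over 8,760 hours at the $0.08/kWh rate above. A minimal sketch reproducing it:

```python
# Annual chiller energy cost at a given COP and electricity rate.
RATE_USD_PER_KWH = 0.08
HOURS_PER_YEAR = 8760

def annual_chiller_cost(load_kw: float, cop: float) -> tuple[float, float]:
    """Return (chiller electrical kW, annual energy cost in USD)."""
    chiller_kw = load_kw / cop
    return chiller_kw, chiller_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

for supply_c, cop in [(18, 3.2), (28, 4.5)]:
    kw, cost = annual_chiller_cost(1000, cop)   # per 1 MW of cooling load
    print(f"{supply_c} C supply: {kw:.0f} kW, ${cost:,.0f}/yr")

# Scaling the 18 C vs 28 C delta to a 100 MW plant:
# (312.5 - 222.2) kW/MW * 100 MW * 8760 h * $0.08 ~= $6.3M/yr
```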
The thermal margin case for more conservative setpoints is equally valid for sustained AI training workloads (the junction-temperature arithmetic in these bullets is sketched in code after the list):
- At 18°C supply with 10°C cold-plate ΔT, coolant exits the cold plate at 28°C. With a typical 15°C chip-to-coolant thermal resistance contribution from the cold plate and substrate, die junction temperature approaches ~43°C—well below the 85°C junction limit with 42°C of headroom.
- At 28°C supply under identical conditions, coolant exits at 38°C and junction temperature approaches ~53°C—still comfortably within spec, but with 32°C of headroom instead of 42°C.
- At 32°C supply under continuous near-TDP training workloads, thermal margin compresses further. In facilities where coolant flow is not precisely balanced across all cold plates in a rack—which requires careful manifold commissioning—uneven flow distribution can push individual GPU junction temperatures above throttle thresholds even when average loop temperatures are within spec.
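A minimal sketch of that stack-up, using the 10°C cold-plate ΔT, the 15°C chip-to-coolant contribution, and the 85°C junction limit cited in the bullets above:

```python
# Junction-temperature headroom: T_j ~= supply + coolant rise + chip-to-coolant rise.
JUNCTION_LIMIT_C = 85.0    # junction limit cited above
CHIP_TO_COOLANT_C = 15.0   # typical chip-to-coolant contribution cited above

def junction_headroom(supply_c: float, coolant_dt_c: float = 10.0) -> float:
    """Headroom in C between estimated junction temperature and the limit."""
    t_junction = supply_c + coolant_dt_c + CHIP_TO_COOLANT_C
    return JUNCTION_LIMIT_C - t_junction

for supply in (18.0, 28.0, 32.0):
    print(f"{supply:.0f} C supply -> {junction_headroom(supply):.0f} C headroom")
# 18 C -> 42 C, 28 C -> 32 C, 32 C -> 28 C of headroom
```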
Production hyperscaler deployments have largely converged on 28–30°C as the operating sweet spot: meaningfully better chiller COP than 18–22°C designs, sufficient thermal margin for sustained training workloads, and consistent with both NVIDIA and AMD thermal characterization at rated TDP. Conservative enterprise deployments—where workload profiles are more variable and throttling-induced latency SLA violations carry higher business cost—are operating at 25°C. Deployments relying primarily on air-side economization in warm climates must accept 32°C setpoints or invest in supplemental compressor-based chiller capacity for summer peak periods.
Real-World Flow Path Architecture Tradeoffs
There are two primary liquid-cooling flow path architectures for GPU cold-plate deployments, with meaningfully different implications for fluid chemistry requirements, commissioning complexity, ongoing maintenance cost, and contamination risk isolation. Selecting the correct architecture for a given facility requires evaluating these tradeoffs against infrastructure constraints, operational model, and total cost of ownership over a five-to-seven-year operating cycle.
Single-Loop (Direct) Architecture
In a single-loop design, the facility's chilled-water plant connects directly to the CDU, which drives coolant flow through the server manifolds and cold plates without an intermediate heat exchanger. The coolant in the cold-plate circuit is the same fluid maintained by the facility chilled-water loop. This architecture minimizes capital cost and commissioning complexity—there is no heat exchanger to size, commission, or maintain, and there is no secondary pump circuit to balance.
The fundamental constraint is that the conductivity ceiling, particulate specification, and chemistry class requirements apply to the entire facility loop volume. A facility with legacy steel piping, cooling tower water with corrosion inhibitor chemistries incompatible with OAT, or even a single corroded cast-iron valve in the chilled-water distribution will contaminate the GPU cold-plate circuit at a rate determined by facility loop volume and flow velocity. This architecture is well-suited to greenfield facilities built specifically for liquid-cooled AI infrastructure using all-copper or stainless distribution piping—where the entire facility loop can realistically be maintained to GPU cold-plate chemistry standards.
Dual-Loop Architecture with Plate-Frame Heat Exchanger
In a dual-loop design, a plate-frame or brazed-plate heat exchanger decouples the GPU cold-plate circuit from the facility chilled-water loop. The primary loop—serving cold plates directly—is a closed, small-volume, high-purity glycol circuit maintained to GPU vendor specification. The secondary loop—the facility chilled-water plant—operates independently with its own water treatment chemistry, which may include scale inhibitors, biocides, or corrosion inhibitor packages appropriate for open-tower systems that would be incompatible with the GPU cold-plate conductivity spec.
The thermal penalty is real and must be designed in from the start: a well-designed plate-frame heat exchanger imposes 2–5°C of approach temperature loss between facility supply and primary loop supply. If the facility supplies chilled water at 25°C, the GPU primary loop receives coolant at 27–30°C after the HX. This approach temperature loss reduces available thermal margin at the cold plate by an equivalent amount, constraining practical supply temperature setpoints and, in facilities already at the warm end of the supply range, potentially increasing throttle event frequency under peak training workloads.
| Architecture Feature | Single-Loop (Direct) | Dual-Loop (HX Separated) |
|---|---|---|
| Initial cooling infrastructure CapEx | Lower (no HX, no secondary pump) | Higher (+15–25%) |
| Commissioning complexity | Lower | Higher (secondary loop balance, HX sizing) |
| Chemistry scope for GPU spec compliance | Full facility loop | Primary loop only; facility loop independent |
| Contamination isolation | None — facility event directly affects GPU loop | Full — HX provides hard isolation boundary |
| HX approach temperature penalty | None | 2–5°C reduction in GPU supply temp |
| Coolant fluid volume requiring spec compliance | Full facility loop (large) | Primary loop only (typically 10–15% of facility volume) |
| Leak isolation capability | Limited — leak anywhere in facility loop is a GPU-loop event | Full — primary/secondary circuits are hard-separated |
| Best suited for | Greenfield all-copper/SS facilities | Brownfield, multi-tenant, legacy steel piping |
A practical consideration often overlooked in architecture selection is the fluid inventory cost for recharge and ongoing top-off. A single-loop facility serving 100 GB200 NVL72 racks with a total facility chilled-water loop volume of 80,000–120,000 liters requires dramatically more high-purity glycol inventory to maintain specification than a dual-loop facility where the GPU-facing primary loop for those same 100 racks has a total volume of 8,000–12,000 liters. Over a five-year operating cycle including recharge events, the fluid cost differential—particularly for semiconductor-grade glycol and high-purity deionized water, which carry a meaningful per-liter premium over technical-grade equivalents—can substantially offset the heat exchanger capital cost premium of the dual-loop design. This lifecycle cost comparison should be completed before architecture selection is finalized.
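The inventory arithmetic behind this conclusion is straightforward to model. A minimal sketch: the loop volumes are midpoints of the ranges above, while the per-liter price and the five-year fill count are hypothetical placeholders for illustration only:

```python
# Five-year fluid-inventory cost, single-loop vs dual-loop architectures.
PRICE_USD_PER_L = 6.00   # HYPOTHETICAL high-purity glycol mix price
FILLS_OVER_5YR = 2       # HYPOTHETICAL: initial fill plus one full recharge

def five_year_fluid_cost(loop_volume_l: float) -> float:
    """Total spec-compliant fluid cost over the operating cycle."""
    return loop_volume_l * PRICE_USD_PER_L * FILLS_OVER_5YR

single_loop = five_year_fluid_cost(100_000)  # whole facility loop at GPU spec
dual_loop = five_year_fluid_cost(10_000)     # only the primary loop at GPU spec
print(f"single-loop: ${single_loop:,.0f}")
print(f"dual-loop:   ${dual_loop:,.0f}")
# The ~$1M difference here is the budget line that offsets the HX capex premium.
```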
Procurement Spec Checklist: From Cold-Plate Data Sheet to Purchase Order
Translating a GPU vendor's cold-plate specification document into a well-defined glycol purchase order requires working through six specification parameters systematically. The following checklist addresses each parameter and maps it to a specific line item or test requirement that should appear on your procurement document and in your quality agreement with your chemical supplier.
1. Chemistry Class: OAT
Specify Organic Acid Technology (OAT) inhibitor chemistry explicitly. Do not accept conventional silicate-based (IAT) coolants: silicate inhibitors precipitate from solution at the low ionic concentrations required to maintain the 25 µS/cm conductivity ceiling, forming silicate gel deposits that foul microchannels even as they deplete from the bulk fluid. HOAT (Hybrid OAT, containing both organic acids and low-level silicate or borate) may be acceptable on certain platforms where the vendor specifically approves it in writing—but do not assume approval for B200/GB200 or MI300 deployments without explicit written confirmation from the GPU vendor's thermal engineering team.
- Purchase order language: "OAT inhibited propylene glycol [or ethylene glycol] meeting ASTM D3306 or ASTM D6210, as applicable to the glycol base and service class, latest revision. Inhibitor package must be carboxylate-based OAT chemistry. Silicate-containing formulations not acceptable."
2. Specific Gravity and Concentration Range
Specify glycol concentration by its corresponding density range at delivery temperature, not by nominal percentage alone. Concentration drifts due to evaporative losses from open-loop CDU components or from topping off with non-glycol makeup water; verifying density at delivery and at testing intervals confirms dilution state without requiring chromatographic analysis. (A conversion sketch follows the reference points below.)
- For 25–32°C supply deployments in climate-controlled facilities: 20–25% PG provides freeze protection to approximately −8°C—adequate for indoor CDU applications.
- For facilities with potential for transient low-temperature exposure (power outage in cold climates, outdoor CDU components): 30–40% PG provides freeze protection to −15°C to −22°C.
- Corresponding specific gravity at 20°C: 20% PG ≈ 1.018 g/cm³; 25% PG ≈ 1.022 g/cm³; 35% PG ≈ 1.031 g/cm³.
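A minimal conversion sketch that interpolates concentration from a measured specific gravity using the three reference points above; real PG density curves are nonlinear, so treat this as a field estimate and use the supplier's lookup table for production QA:

```python
# Estimate % PG from specific gravity at 20 C by linear interpolation
# through the reference points cited above. Field estimate only.
REF_POINTS = [(20.0, 1.018), (25.0, 1.022), (35.0, 1.031)]  # (% PG, SG @ 20 C)

def pg_percent_from_sg(sg: float) -> float:
    """Interpolate PG concentration (%) from measured specific gravity."""
    for (c1, s1), (c2, s2) in zip(REF_POINTS, REF_POINTS[1:]):
        if s1 <= sg <= s2:
            return c1 + (c2 - c1) * (sg - s1) / (s2 - s1)
    raise ValueError("SG outside the calibrated 20-35% PG range")

print(f"{pg_percent_from_sg(1.020):.1f}% PG")  # ~22.5% -- mid-dilution check
```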
3. Conductivity Ceiling
Specify ≤25 µS/cm at delivery as the maximum acceptable value, with a preferred delivery target of ≤15 µS/cm. This provides a 10 µS/cm operating buffer before the GPU vendor's absolute limit, accommodating in-service conductivity rise from makeup water additions and minor corrosion products between testing intervals. Specify measurement method explicitly to avoid ambiguity between temperature-referenced values.
- Measurement method: ASTM D1125, reported at 25°C
- In-service monitoring alarm: 20 µS/cm — investigate source and initiate corrective action plan
- In-service hard limit: 25 µS/cm — mandatory immediate recharge or fluid replacement
4. Particulate Filtration Certificate
For pre-mixed (diluted) glycol delivered ready-to-use, require certification that the supplier has filtered the fluid through an absolute-rated filter at or below 50 µm prior to packaging. Request a Certificate of Filtration (COF) noting the absolute micron rating used in the packaging process. This is a non-standard request for many industrial distributors. Alliance Chemical's application engineering team provides pre-mix services with documented micron filtration for data center accounts requiring this specification—contact the applications team to discuss delivery format and documentation requirements for commissioning-scale volumes.
5. Inhibitor Reserve Alkalinity Baseline
Reserve alkalinity is the most actionable proxy measurement for OAT inhibitor health and the primary trending metric across the coolant service life. Specifying a minimum reserve alkalinity at delivery establishes the baseline against which all subsequent in-service measurements are compared to calculate depletion percentage. A supplier that cannot report reserve alkalinity on a COA is not appropriate for this application. (A trending sketch follows the requirements below.)
- Minimum at delivery: ≥40 mg KOH/100 mL (ASTM D1121 potentiometric titration or equivalent)
- Recharge trigger threshold: ≤60–65% of delivery baseline value
- Purchase order requirement: Reserve alkalinity must be measured and reported on the Certificate of Analysis for every delivered lot, with lot number traceable to delivery documentation
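A minimal trending sketch that turns a measured value into a recharge decision, using the per-platform trigger fractions from the testing-cadence table earlier in this article:

```python
# Reserve-alkalinity depletion check against per-platform recharge triggers.
RECHARGE_TRIGGERS = {"H100": 0.50, "MI300": 0.60, "B200": 0.65, "GB200": 0.65}

def ra_status(platform: str, baseline: float, measured: float) -> str:
    """Compare measured reserve alkalinity to the delivery baseline."""
    fraction = measured / baseline
    trigger = RECHARGE_TRIGGERS[platform]
    verdict = "RECHARGE REQUIRED" if fraction <= trigger else "OK"
    return f"{platform}: {fraction:.0%} of baseline -> {verdict}"

print(ra_status("GB200", baseline=42.0, measured=26.0))  # 62% -> RECHARGE REQUIRED
```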
6. Certificate of Analysis Requirements Per Lot
Require a lot-level COA accompanying every delivery shipment, with the following parameters reported with test method references and specific numeric results (not "pass/fail" checkboxes):
| COA Parameter | Test Method | Acceptance Criterion |
|---|---|---|
| pH at 25°C | ASTM E70 | 7.5–9.0 |
| Specific gravity at 20°C | ASTM D891 | Per concentration specification (see §2) |
| Electrical conductivity at 25°C | ASTM D1125 | ≤25 µS/cm; ≤15 µS/cm preferred |
| Reserve alkalinity | ASTM D1121 | ≥40 mg KOH/100 mL |
| Inhibitor type confirmation | FTIR or titration | Carboxylate OAT confirmed; no silicate detected |
| Chloride content | ASTM D512 | ≤25 mg/L (chloride is a primary driver of copper pitting above threshold) |
| Appearance | Visual inspection | Clear to slightly colored; no turbidity, haze, or visible particulate |
Alliance Chemical Catalog Products for Data Center Cooling
Alliance Chemical's catalog includes three products directly applicable to B200/GB200 and MI300 liquid-cooling deployments, each addressable through the Alliance Chemical applications team for specification review, COA provision, and volume delivery coordination:
- 100% Propylene Glycol — Inhibited (PG Inhibited): Pre-charged with a carboxylate OAT inhibitor package at the concentrate level, available in drum and tote quantities for on-site dilution. Preferred for environmentally sensitive facilities, deployments where incidental personnel contact is a concern, and sites where regulatory reporting for glycol releases is a compliance consideration. Dilute with High-Purity Deionized Water to target concentration and verify final conductivity prior to system fill.
- Semiconductor Grade Ethylene Glycol (EG): For deployments where EG is specified by the GPU vendor or where lower viscosity at low operating temperatures provides CDU pump efficiency benefits. Ionic impurity levels are controlled to a semiconductor-grade specification, with typical conductivity at delivery well below 5 µS/cm—ensuring that dilution to 20–25% working concentration produces a mixed fluid well within the 25 µS/cm GPU vendor ceiling without supplemental ion exchange treatment.
- High-Purity Deionized Water: For on-site dilution of glycol concentrate to working concentration. Resistivity ≥1 MΩ·cm (conductivity <1 µS/cm) at delivery, ensuring that dilution to 20–25% glycol concentration produces a mixed fluid well within specification and that top-off water additions between testing intervals do not contribute measurable conductivity load to the loop.
The thermal density inflection at B200 and GB200 NVL72 scale is not an incremental challenge layered onto the liquid-cooling infrastructure decisions made for H100—it is a category change that demands new facility design parameters, new coolant chemistry maintenance protocols, and procurement specifications written to a precision standard most industrial cooling programs have not previously required. Facilities that carry forward H100-era annual testing intervals, relaxed conductivity monitoring, and commodity glycol specifications into GB200 deployments will encounter corrosion, microchannel fouling, and thermal throttling within the first two years of operation. The engineering basis for every specification in this article—heat flux calculations, Arrhenius kinetics, metallurgical corrosion thresholds—is well-established; what changes at B200/GB200 scale is that the margins are thin enough that specification compliance is no longer optional for long-term infrastructure health.

For companion coverage on OAT versus HOAT coolant chemistry selection, conductivity and galvanic corrosion risk management in closed liquid-cooled loops, and glycol concentration and freeze-point calculations for AI infrastructure deployments, see the related sub-pillars in the Alliance Chemical data center cooling series. Full product specifications, technical data sheets, COA samples, and application engineering support are available at Alliance Chemical's AI & Data Center Cooling Chemicals resource center.