
AC Research
American Compute
The AC Research team publishes analysis on GPU economics, AI infrastructure financing, and hardware depreciation. Our work is informed by direct experience underwriting GPU residual value insurance.
Articles
Every GPU Infrastructure Term You Need to Know
Every term you'll encounter when buying, building, or operating a GPU cluster, defined in plain English. From GPUs and NVLink to colocation and TCO.
SXM vs PCIe for GPU Servers
SXM and PCIe GPUs use the same silicon. The difference is the connector, and it determines bandwidth, power, cost, and flexibility. Here is how to choose.
Where to Buy GPU Servers
OEMs, brokers, used vs refurbished, warranties, and what to check before you write a $200K+ check for GPU hardware.
When a GPU Dies in Production
How GPU failures are detected, what causes them, what they cost in training and inference, and the full replacement workflow from RMA to validation.
AI Data Center Stakeholders
Every stakeholder in an AI data center project: power providers, lenders, colos, OEMs, VARs, brokers, consultants, ITADs, and more. Three project lifecycles show how they assemble differently for hyperscalers, neoclouds, and enterprises.
Data Center Tiers Explained
What data center tiers actually measure, how certification works, the history of the Uptime Institute standard, notable fraud cases, and what tiers miss about AI workloads.
Neocloud Business Model and Unit Economics
How neoclouds make money selling GPU-hours: contract vs on-demand pricing, cost structure with CoreWeave FY 2025 financials, debt financing mechanics, and what threatens the model.
Private Credit and Asset-Backed Securities for GPU Financing
How private credit, ABS, and SPVs became the primary funding mechanism for AI infrastructure. History from Ginnie Mae to GPU-backed bonds, with aircraft and taxi medallion precedents.
GPUs as Loan Collateral
What makes good collateral, how GPUs compare to aircraft, railcars, and other established asset classes, and what lenders should evaluate when underwriting GPU-backed loans.
How to Underwrite AI Infrastructure Investments and Why GPU Financing Fails
As of 2026, demand for AI infrastructure is easy to secure. The real risk is deployment: power, permitting, construction, and hardware delivery. Here is how to evaluate schedule risk for data center builds and GPU cluster rollouts.
NICs and DPUs for GPU Servers
A NIC connects your server to the network. A DPU is a NIC with its own CPU. Which one you need depends on what your cluster is doing besides training.
Starting a Neocloud in 2026
What it takes to launch a GPU cloud business: hardware costs at three scales, GPU-backed debt structures, colocation constraints, pricing models, CoreWeave unit economics, and the five risks that kill neoclouds.
NVIDIA AI GPU Differences from Volta to Blackwell
NVIDIA’s six flagship data center GPUs compared: V100, A100, H100, H200, B200, and B300. Specs, architecture changes, and which generation to buy in 2026.
Bare Metal for AI Compute
Bare metal means renting a physical server with no virtualization layer. For GPU compute, bare metal is becoming the default: the hardware is the product, cloud premiums aren't justified at full utilization, and AI coding tools let any team build its own stack.
GPU Cluster Networking 101
How GPU clusters are networked: NVLink within servers, InfiniBand or Ethernet between them, switches, topology, optics, and real costs from 16-GPU to 24,576-GPU scale.
AI Cluster Cost Breakdown: CapEx
What goes into the Bill of Materials for an AI cluster: GPU servers, InfiniBand networking, storage, infrastructure, and real BOMs at 16-GPU, 576-GPU, and 24,576-GPU scale.
HGX, DGX, MGX: NVIDIA's Server Platforms
HGX is the GPU baseboard, DGX is the turnkey server, MGX is the modular reference architecture. How they relate, what OEMs change, and which platform fits your deployment.
NVLink and NVSwitch
NVLink is NVIDIA’s high-bandwidth GPU-to-GPU interconnect. NVSwitch is the routing chip that turns those links into a full mesh. How they work, six generations of specs, and when each one matters for training and inference.