American Compute
Illia Kasian

CTO, American Compute

Illia leads engineering at American Compute. He was previously a founding engineer at a YC-backed insurance carrier and an ML engineer building fraud-detection systems at scale. His background spans machine learning, full-stack development, and infrastructure engineering across insurance and defense technology.

LinkedIn →

Articles

  • Every GPU Infrastructure Term You Need to Know

    Every term you'll encounter when buying, building, or operating a GPU cluster, defined in plain English. From GPUs and NVLink to colocation and TCO.

  • SXM vs PCIe for GPU Servers

    SXM and PCIe GPUs use the same silicon. The difference is the connector, and it determines bandwidth, power, cost, and flexibility. Here is how to choose.

  • NICs and DPUs for GPU Servers

    A NIC connects your server to the network. A DPU is a NIC with its own CPU. Which one you need depends on what your cluster is doing besides training.

  • NVIDIA AI GPU Differences from Volta to Blackwell

    NVIDIA’s six flagship data center GPUs compared: V100, A100, H100, H200, B200, and B300. Specs, architecture changes, and which generation to buy in 2026.

  • GPU Cluster Networking 101

    How GPU clusters are networked: NVLink within servers, InfiniBand or Ethernet between them, switches, topology, optics, and real costs from 16-GPU to 24,576-GPU scale.



© 2026 American Compute