Austin AI Infrastructure: 10MW GPU Cluster Deployment

Design through commissioning in 16 weeks—zero safety incidents, zero callbacks.

10MW Total Capacity

400 Racks Deployed

16 Weeks Project Execution

Zero Safety Incidents

The Challenge

A Fortune 500 gaming technology company needed to rapidly deploy AI
infrastructure to support next-generation game development and real-time
rendering capabilities. The project required unprecedented power density—40kW
per rack across approximately 400 cabinets—on a timeline that traditional
approaches couldn’t meet.

Compounding the technical complexity, the client operated under gaming
commission oversight requiring enhanced physical security measures.
Standard colocation security wouldn’t satisfy audit requirements. The infrastructure needed
to be isolated, monitored, and access-controlled to standards more typically
associated with financial services or classified government facilities.

The stakes were significant: delays would impact product development timelines
with substantial downstream business consequences. The deployment needed to
be fast, but it also needed to be right—because in mission-critical AI infrastructure,
rework isn’t just expensive, it’s operationally disruptive.

Our Approach

BNS Networks served as prime contractor, self-performing across all trades to
maintain quality control and eliminate the coordination gaps that plague multi-
vendor deployments. When a single organization owns design, installation, and
commissioning, accountability is clear and handoff problems disappear.

Our team began with detailed power and cooling assessments, modeling thermal
loads for GPU infrastructure that generates heat densities far beyond traditional IT
equipment. The liquid cooling design needed to integrate seamlessly with the
facility’s existing air handling infrastructure—augmenting rather than replacing
conventional cooling systems.
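As a flavor of the first-pass sizing such an assessment involves, the sketch below estimates the coolant flow a direct-to-chip loop needs to carry a 40kW rack's heat. It uses the standard heat-transfer relation Q = ṁ·cp·ΔT; the 10°C coolant temperature rise and water properties are illustrative assumptions, not the project's actual design values.

```python
# Illustrative first-pass thermal sizing for a liquid-cooled rack.
# All parameters are assumptions for illustration, not project design values.

def coolant_flow_lpm(heat_kw: float, delta_t_c: float = 10.0) -> float:
    """Water flow (L/min) needed to absorb `heat_kw` of rack heat
    at a coolant temperature rise of `delta_t_c` (deg C), via Q = m*cp*dT."""
    cp = 4186.0   # specific heat of water, J/(kg*K)
    rho = 997.0   # density of water, kg/m^3
    mass_flow_kg_s = heat_kw * 1000.0 / (cp * delta_t_c)
    return mass_flow_kg_s / rho * 1000.0 * 60.0  # kg/s -> L/min

# A 40 kW rack at a 10 C rise works out to roughly 57-58 L/min of water.
print(round(coolant_flow_lpm(40.0), 1))
```

Real designs layer on coolant mixtures, approach temperatures, and pump and CDU losses; the point is only that per-rack density drives flow and pipe sizing facility-wide.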

The deployment strategy prioritized parallel workstreams without sacrificing the
methodical verification that mission-critical infrastructure demands. Cage
construction and physical security installation proceeded simultaneously with
power distribution and cooling infrastructure.

Work Undertaken:

  • Power and cooling assessment for 10MW load with 40kW per-rack density
  • Direct-to-chip liquid cooling system design and installation
  • Cold aisle containment deployment
  • Installation and configuration of ~400 racks of NVIDIA H100/H200 GPUs
  • High-density power distribution with RPP diversity
  • Opaque cage construction, biometric access, surveillance systems
  • Full commissioning and documentation package
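Note that ~400 racks at a 40kW design density implies a nameplate load well above the 10MW facility capacity, which is exactly why the assessment and RPP diversity planning matter. A back-of-the-envelope check, using the figures above (the diversity factor is derived here, not a stated project number):

```python
# Relates per-rack design density to total facility capacity.
# Rack count, density, and capacity are from the case study;
# the diversity factor is derived for illustration.

racks = 400
rack_design_kw = 40.0
facility_mw = 10.0

# Load if every rack drew its full design power simultaneously.
nameplate_mw = racks * rack_design_kw / 1000.0

# Fraction of nameplate the facility can serve at once.
diversity = facility_mw / nameplate_mw

print(nameplate_mw, round(diversity, 3))
```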

Technologies & Standards:

  • NVIDIA H100 and H200 GPUs
  • Direct-to-chip liquid cooling
  • 40kW per-rack power distribution
  • Remote Power Panels with diversity
  • Cold aisle containment
  • Biometric access control
  • BICSI-compliant structured cabling

Outcomes

  • 10MW of AI compute infrastructure fully operational
  • ~400 racks of NVIDIA H100/H200 GPUs deployed and configured
  • Direct-to-chip liquid cooling supporting 40kW per-rack density
  • Zero safety incidents throughout construction and commissioning
  • Zero callbacks or punch list items at handover
  • Gaming commission security audit passed on first review
  • Ongoing smart hands support contract awarded

Planning an AI infrastructure deployment? 

Let’s discuss your requirements.