In vast swathes of farmland and industrial parks, giant buildings filled with racks of computers are springing up to fuel the race for AI. These engineering marvels constitute a new breed of infrastructure: supercomputers designed to train and run large language models on a mind-blowing scale, complete with specialized chips, cooling systems, and even their own power reserves.
Hyperscale AI data centers group hundreds of thousands of specialized computer chips called graphics processing units (GPUs), such as Nvidia’s H100s, into synchronized clusters that operate like a giant supercomputer. These chips excel at processing huge amounts of data in parallel. Hundreds of thousands of miles of fiber optic cables connect the chips like a nervous system, allowing them to communicate at lightning speed. Huge storage systems constantly feed data into the chips while the facilities hum and spin 24 hours a day.
