Did you know that scaling current AI algorithms is so energy-intensive that the infrastructure required to meet demand could cost anywhere from tens to thousands of trillions of dollars? Energy is quickly becoming the main constraint on scaling computing resources globally.
Extropic is working on a fundamentally new approach to bypass these economic and physical limitations, moving beyond the traditional GPU/deep learning paradigm.
Here are my takeaways from this deep dive into energy-efficient AI:
New Hardware Architecture: Extropic is building a new kind of chip called the Thermodynamic Sampling Unit (TSU). Unlike a GPU, which evaluates deterministic functions (chiefly matrix multiplications), a TSU is a giant array of sampling cells built from probabilistic circuits; a toy simulation of such a cell follows below.
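The post doesn't describe the TSU's cell design, so here is a minimal software sketch, assuming each sampling cell behaves like a probabilistic bit ("p-bit"): a circuit that emits a 1 with a programmable probability. The sigmoid bias model and all names are illustrative assumptions, not Extropic's actual design.

```python
import numpy as np

def sample_cells(biases, rng):
    """Simulate an array of probabilistic sampling cells ('p-bits').

    Each cell outputs 1 with probability sigmoid(bias): its behavior is a
    programmed probability distribution, not a fixed function of its inputs.
    """
    p = 1.0 / (1.0 + np.exp(-np.asarray(biases)))
    return (rng.random(p.shape) < p).astype(np.int8)

rng = np.random.default_rng(0)
biases = np.array([-2.0, 0.0, 2.0])        # per-cell programmed biases
samples = np.stack([sample_cells(biases, rng) for _ in range(10_000)])
print(samples.mean(axis=0))                # ~ [0.12, 0.50, 0.88] = sigmoid(biases)
```

The contrast with a GPU kernel is that `sample_cells` returns a different answer on every call; only its statistics are fixed, which is exactly the behavior a generative model needs.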
Sampling, Not Just Computation: Probabilistic circuits are designed to sample from mathematically defined probability distributions rather than compute fixed deterministic functions. This aligns directly with the core of machine learning, which is about fitting probability distributions and sampling from them (see the sketch after this item).
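To make "sampling from a mathematically defined distribution" concrete, here is a toy Gibbs sampler over a small Ising-style energy-based model. The couplings and sizes are made up for illustration, and this is ordinary software, not TSU code.

```python
import numpy as np

def gibbs_sweep(s, J, h, rng):
    """One Gibbs sweep over a tiny Ising-style energy-based model.

    Target distribution: p(s) proportional to exp(-E(s)), where
    E(s) = -0.5 * s^T J s - h^T s and each s_i is in {-1, +1}.
    Each spin is resampled from its exact conditional, so repeated sweeps
    draw samples from the defined distribution rather than computing a
    deterministic function.
    """
    for i in range(len(s)):
        field = J[i] @ s + h[i]                     # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(s_i = +1 | rest)
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

rng = np.random.default_rng(1)
J = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)           # illustrative couplings
h = np.zeros(4)
s = rng.choice([-1.0, 1.0], size=4)
for _ in range(1_000):
    s = gibbs_sweep(s, J, h, rng)
print(s)    # one sample from p(s) ~ exp(-E(s))
```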
Massive Efficiency Gains Predicted: By developing new generative AI algorithms (like Denoising Thermodynamic Models) optimized for TSUs, Extropic's simulations point to large efficiency gains: early tests suggest a TSU could be around 10,000 times more energy-efficient than a GPU running a VAE on simple generative AI benchmarks. A conceptual sketch of the denoising idea follows below.
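Extropic's Denoising Thermodynamic Models are specified in their own materials; purely as a conceptual sketch, the code below mimics the diffusion-style structure such a model could have over binary data: a forward process that adds bit-flip noise, and a reverse process where each step samples from a conditional distribution. The `denoisers` functions here are hypothetical stand-ins for learned energy-based conditionals, not Extropic's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 8                                   # number of denoising steps (illustrative)
betas = np.linspace(0.05, 0.4, T)       # per-step bit-flip probabilities

def forward_noise(x, betas, rng):
    """Forward process: progressively flip bits until data becomes noise."""
    xs = [x.copy()]
    for b in betas:
        flips = rng.random(x.shape) < b
        x = np.where(flips, 1 - x, x)
        xs.append(x.copy())
    return xs

def reverse_denoise(x_T, denoisers, rng):
    """Reverse process: each step samples x_{t-1} from p(x_{t-1} | x_t).

    In a DTM-style model each conditional would itself be a distribution
    realized by sampling hardware; here each entry of `denoisers` is a
    stand-in function returning per-bit probabilities.
    """
    x = x_T
    for f in reversed(denoisers):
        p = f(x)                                       # per-bit probability of a 1
        x = (rng.random(x.shape) < p).astype(np.int8)  # sample, don't compute
    return x

x0 = (rng.random(16) < 0.5).astype(np.int8)            # toy binary "data"
xs = forward_noise(x0, betas, rng)
denoisers = [lambda x: np.clip(0.8 * x + 0.1, 0.0, 1.0)] * T
x_rec = reverse_denoise(xs[-1], denoisers, rng)
print(x0, x_rec, sep="\n")
```

The point of the sketch is structural: every reverse step is a sampling operation, so hardware whose native primitive is sampling (rather than matrix multiplication) can, in principle, run the whole chain far more cheaply.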