Nvidia’s H100 GPUs are poised to become a significant player in the AI processing landscape, with millions expected to sell in the coming year. Each H100 draws up to 700 watts at peak, and at realistic utilization its annual energy use is comparable to that of an average American household. Taken together, the surging deployment of H100 GPUs for AI workloads is projected to consume power on the scale of a major American city, and by some comparisons a small European country, exemplifying the substantial impact of these GPUs on the energy landscape.
Current estimates put the total power consumption of data centers dedicated to AI applications on par with the energy consumption of a nation like Cyprus, according to Schneider Electric’s assessment in October. Factoring in deployments of Nvidia’s H100 processors, however, Paul Churnock, Principal Electrical Engineer of Datacenter Technical Governance and Strategy at Microsoft, anticipates that by the end of 2024 the H100 GPUs alone will surpass the combined power consumption of all households in Phoenix, Arizona, while remaining below that of larger cities such as Houston, Texas.
Churnock highlights that, at a 61% annual utilization rate, an H100 GPU consumes roughly 3,740 kilowatt-hours per year, about the same as the average American household. With Nvidia forecasting sales of 1.5 to 2 million H100 GPUs in 2024, their cumulative power consumption would rank fifth among US cities, outpacing Phoenix and trailing Houston.
Considering the potential deployment of 3.5 million H100 GPUs by late 2024, the total electricity consumption is projected to reach a staggering 13,091.82 gigawatt-hours (GWh) annually. To contextualize this figure, it mirrors the annual power consumption of entire countries such as Georgia, Lithuania, or Guatemala.
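The figures above follow from straightforward arithmetic. The sketch below reproduces them under the article’s stated assumptions: 700 W peak draw, 61% annual utilization, and 3.5 million GPUs deployed.

```python
# Back-of-the-envelope check of the article's figures.
# Assumptions (all from the article): 700 W peak draw,
# 61% annual utilization, 3.5 million GPUs deployed by late 2024.
PEAK_WATTS = 700
UTILIZATION = 0.61
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Annual energy per GPU, in kilowatt-hours
annual_kwh_per_gpu = PEAK_WATTS / 1000 * UTILIZATION * HOURS_PER_YEAR
print(f"Per GPU: {annual_kwh_per_gpu:,.2f} kWh/year")  # ≈ 3,740.52 kWh

# Fleet-wide total, converted to gigawatt-hours
GPU_COUNT = 3_500_000
total_gwh = annual_kwh_per_gpu * GPU_COUNT / 1_000_000
print(f"Fleet:   {total_gwh:,.2f} GWh/year")  # ≈ 13,091.82 GWh
```

Both results line up with the article’s numbers, which suggests the ~13,091.82 GWh estimate is simply peak wattage scaled by utilization, hours in a year, and unit count.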
However, it is worth noting that advancements in AI and HPC GPU efficiency are on the horizon. While the upcoming Nvidia Blackwell-based B100 may draw more power than the H100, it promises greater efficiency, delivering more computational output per unit of power consumed.