To support the intensive workloads anticipated for cloud computing, cloud providers and heavy users such as research institutions are investing heavily in specialised hardware. This goes beyond standard CPUs (where AMD has recently been outselling Intel in the data centre segment) to include powerful accelerators such as Graphics Processing Units (GPUs), the workhorses of AI training, a market in which Nvidia holds 98% of data centre GPU share. Hyperscalers are also developing their own custom silicon, such as dedicated AI chips, to deliver high-performance, cost-efficient cloud services. This hardware evolution in turn demands further innovation in infrastructure to manage the immense power density it brings.