More AI-driven hardware in data centres to manage workloads in the cloud

To support the intensive workloads anticipated for cloud computing, cloud providers and heavy users such as research institutions are investing heavily in specialised hardware. This goes beyond standard CPUs (where AMD has recently been outselling Intel in the data centre segment) to include powerful accelerators such as Graphics Processing Units (GPUs), the workhorses of AI training, a segment in which Nvidia holds roughly 98% of the data centre GPU market. Hyperscalers are also developing their own hardware, such as dedicated AI chips, to deliver high-performance, cost-efficient cloud services. This hardware evolution requires further innovation in infrastructure to manage the resulting power density.
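As a minimal illustration only (assuming a Python environment with PyTorch installed, which is not specified in the source), the sketch below shows how an AI training workload typically targets a GPU accelerator when the cloud environment exposes one, and falls back to the CPU otherwise.

    import torch

    # Select a GPU if the environment exposes one; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A toy "training-style" computation: a matrix multiplication placed on the chosen device.
    x = torch.randn(1024, 1024, device=device)
    w = torch.randn(1024, 1024, device=device)
    y = x @ w
    print(f"Ran on {device}: result shape {tuple(y.shape)}")

The same pattern applies to the custom AI chips mentioned above, which are usually reached through vendor- or cloud-specific device backends rather than CUDA.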

Impact

Education

  • Students need to gain exposure to advanced computing environments and AI-capable hardware. This can be achieved through dedicated compulsory and elective modules/courses, as well as project work in which the use of new computing paradigms is required.
Research

  • Through the integration of advanced AI model training with cloud computing services, researchers gain faster and more practical access to computing resources that can be applied across many disciplines.
Operations

  • At research institutes and universities, shared infrastructure helps reduce costs. In addition, deploying liquid cooling for computing infrastructure improves energy efficiency and supports the sustainability of these computational resources.
More info about Cloud Computing?
Visit surf.nl