Densification & modularity: Nvidia, Vertiv leaders on the next steps for data centres in the AI era

The relentless pace of AI shows no signs of slowing down going into 2025. But the growing demand for compute-intensive workloads has created new challenges, such as rising power consumption in high-density data centres.

In an interview with Capacity following a press conference in Bologna, Italy, executives from Vertiv and Nvidia discussed the pace of innovation in the AI infrastructure space, and the imperative to develop more efficient and sustainable solutions.

Densification: Optimising Space and Efficiency

One of the primary ways to increase energy efficiency in data centres, the business leaders said, is densification: optimising equipment to be more efficient while occupying a smaller footprint.

As demand for data processing grows, particularly with AI, densification helps make better use of available space and resources, while maintaining both energy efficiency and cost-effectiveness.

The shift from CPUs to powerful GPUs, such as Nvidia’s Blackwell, underscores this challenge. More advanced computing power intensifies the load on existing power and cooling systems, placing significant strain on operators.

Vertiv’s experts predict that in 2025 enterprise data centres will need enhanced uninterruptible power supply (UPS) systems, batteries, power distribution equipment, and switchgear with higher power densities to handle the increased demands of AI.

Karsten Winther, president for Europe, Middle East and Africa at Vertiv, told Capacity that rack densities in data centres are increasing “dramatically.” He stressed that densification is crucial for improving equipment efficiency while reducing its footprint.

By focusing on smaller, more efficient power and cooling units, densification can minimise space usage without sacrificing performance.

Karsten Winther, president for Europe, Middle East and Africa at Vertiv, speaking during Vertiv’s 2024 EMEA press conference | Credit: Vertiv

Nvidia’s upcoming Blackwell GPUs, due in 2025, represent a significant leap in performance, offering 1000x the AI computing power of the company’s 2016 Pascal units.

Carlo Ruiz, VP of enterprise solutions and operations for EMEA at Nvidia, explained that enabling breakthroughs in AI depends on creating an “AI Factory” that functions as a unified compute unit.

“If we spread the technology out across a data centre, we lose the ability to use it as a single compute resource,” he said. “This was already anticipated on our technology roadmap, which is why we’ve partnered to create more efficient systems.”

Modularity: Rapid Deployment and Flexibility

Alongside densification, modularity was another key concept the technology leaders highlighted as driving the evolution of data centres.

Modular designs enable the rapid deployment of additional computing resources or infrastructure components, which can be prefabricated and installed as needed.

In the past week, Vertiv released two new modular liquid cooling solutions in the form of coolant distribution units (CDUs) capable of providing liquid-to-liquid and liquid-to-air cooling.

Beyond modular CDUs, Vertiv also offers prefabricated modular data centres that allow additional computing capacity to be deployed rapidly.

Winther outlined to Capacity that modularity “delivers not only on pace to clients but also costs.”

“The good thing about modular is that it is highly scalable. There are certain parts of the world where they are under-stimulated in terms of access to a megawatt or gigawatt of data centre capacity.

“Then all of a sudden, there is a demand in the world, since everyone starts to recognise AI because it's becoming so publicly available, that they realise they are light years behind and they need a gigawatt tomorrow.

Winther continued: “With modular, we can develop most of it in-factory and not on site. As soon as you move to on-site, things slow down for a lot of reasons. But if you can keep much of development in the factory, we can keep the pace which is a gain for the client, and then it's all about scaling at the end of the day.”

Ruiz spoke of how innovation in AI is “relentless.” Nvidia has moved from launching new GPUs every few years to an annual cadence, or “one-year rhythm” as CEO Jensen Huang describes it, as it tries to push next-generation computing far beyond the scale of Moore’s Law.

“As we go very fast, it's very important that we have partners that share that pace of innovation, because [it] also becomes an obstacle in our go-to-market,” Ruiz said.

Nvidia's Carlo Ruiz during Vertiv's 2024 EMEA press conference | Credit: Vertiv

DPUs and Software: Maximising Performance and Lowering Costs

It’s not simply a case of a bigger GPU delivering better workload performance: in an earlier keynote, Ruiz outlined the integral role played by other hardware solutions.

One hardware unit that’s often overlooked is the data processing unit, or DPU, a specialised processor that handles data-intensive workloads more efficiently than traditional CPUs or GPUs.

Nvidia has its own line of DPUs, dubbed BlueField, with the latest, the BlueField-3, capable of 400 gigabits per second (Gb/s) of network throughput.

DPUs are now gaining traction, however. In the past week, Microsoft unveiled its own custom-built DPU for workloads on Azure, while in early October, Nvidia’s arch-rival AMD launched a series of new DPU solutions.

Ruiz highlighted that DPUs offer significant energy efficiency gains, especially in networking and workload balancing between CPUs and GPUs.

“Think about the power balancing that we can do between the CPU and the GPUs. You can have spiky energy usage but the same thing is in the balancing of the workload between the CPU and the GPUs.

“Once you have that, you can control that in one piece of software, and suddenly you can also be much more energy efficient in that domain.”

Beyond hardware, Ruiz suggested that optimising software can also offer data centre operators a huge boost in energy efficiency.

For Nvidia’s Hopper generation, for example, he said the company was able to increase performance 6x purely by optimising the underlying software.

“We don't do it just for the sake of efficiency,” Ruiz explained. “It's a combination. If you want to do performance, you need to do it efficiently. The big win as well is that the economics work well also, because efficiency is not just about power, it's about cost as well. And as these AI jobs scale, the fact that you can do it at a lower cost is also a massive driver to adopt all of these new technologies.”

Energy Impact and Sovereign AI

The European energy landscape is expected to evolve significantly by 2025, contrasting with the energy trajectory in the US.

Europe will maintain its focus on expanding cleaner, greener energy sources, with experts at Vertiv predicting an acceleration in the adoption of energy alternatives and microgrid deployments by operators in 2025.

However, there are concerns that the growing demand for data centres to support AI-driven workloads could undermine energy sustainability efforts. Both Winther and Ruiz disagreed with this notion, highlighting the industry's contributions to improving efficiency and integrating cleaner energy solutions.

Winther said the industry makes a strong contribution to improving efficiency and leveraging cleaner energy sources, even contributing to scientific endeavours in the domain.

Ruiz, meanwhile, argued that sovereign AI could be just as important: “The debate should not cloud the necessity to move forward. You see a big difference in this region between some of the nations that are moving very fast and some that are forgetting that there's more needed than just debate.

“I think everyone should take sovereign AI seriously because enterprises and nations are sitting on their own IP, their own data, their own cultural values, and if you don't do it within your organisation, then you’re shipping off your knowledge to someone else who will sell it back to you. Sustainability is important. Doing everything in the right way is important, but action is also quite relevant to building those data centres to have the national capabilities to have your own projects.”

RELATED STORIES

Vertiv launches new coolant units for high-density AI data centres

Vertiv expands U.S. manufacturing with new South Carolina facility

Nvidia reports record Q3 earnings amid AI boom, but shares slip on investor caution

Blackwell GPU flaw '100% Nvidia's fault,' says CEO Jensen Huang
