
We tend to think of AI as abstract—software, clouds, algorithms. But as Mike Doolan makes clear in his RED Talk, the reality behind our digital world is anything but weightless.
“I think it's very true that we are all on our phones and our laptops and our devices … and nobody really thinks what's going on somewhere worldwide to make all that happen,” Doolan says. “And it's all happening in data centers, vast huge buildings full of computers humming away doing all the applications and all the processing and running all the large language models that are now powering the growth of AI.”
Doolan, who has spent 30 years working in critical infrastructure, walks viewers through the physical pressures driving today’s data centers—from surging electrical loads to immersion cooling systems. “Some of these data centers are 100 megawatts plus now in size. Some are even up into the gigawatts. They're using a lot of power and a lot of water and they need trained technicians to build and operate them.”
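For a sense of scale (a quick calculation, not a figure from the talk): a 100 MW facility running continuously draws on the order of 876 GWh a year, in the neighborhood of what tens of thousands of homes consume.

```python
# Quick scale check (illustrative, not from the talk): annual energy drawn
# by a data center at a constant 100 MW, and a rough homes-equivalent.
power_mw = 100.0
hours_per_year = 8760
annual_mwh = power_mw * hours_per_year  # 876,000 MWh = 876 GWh
homes = annual_mwh / 10.7               # ~10.7 MWh/yr per average US home (assumed)
print(f"{annual_mwh / 1000:,.0f} GWh/year, roughly {homes:,.0f} homes")
```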
Hitting the Cooling Ceiling
With AI workloads ramping up, conventional infrastructure is struggling to keep pace. “I think some of the physical parameters that we just talked about around the power and the water and the cooling of those chips are probably some of the things that are going to max out and mean that we can't go much bigger.”
To manage rising heat density, new technologies are being adopted fast. “We're starting to see direct liquid cooling, which is getting the water much closer to the chip... and we're even starting to see more immersion cooling where you're actually putting the servers in baths of oily liquid to make them even more efficient.”
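The physics makes the appeal clear. As a back-of-envelope sketch (the numbers are illustrative assumptions, not figures from the talk), the water flow needed to carry away a given heat load follows from Q = ṁ·c·ΔT:

```python
# Back-of-envelope: water flow needed to absorb a given heat load,
# from Q = m_dot * c_p * delta_T. The 1 MW load and the 10 K coolant
# temperature rise are illustrative assumptions, not from the talk.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def required_flow_kg_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Mass flow of water needed to absorb heat_load_w with a delta_t_k rise."""
    return heat_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)

flow = required_flow_kg_per_s(1_000_000.0, 10.0)  # 1 MW, 10 K rise
print(f"~{flow:.1f} kg/s of water per megawatt")  # ~23.9 kg/s
```

Getting the coolant closer to the silicon cuts the thermal resistance in the path from chip to water, which is what lets direct liquid and immersion systems handle rack densities that air cooling cannot.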

Unpredictable Loads, Uncertain Grids
“I think the emerging threat is really power,” Doolan warns. “We're starting to see that these AI loads and the GPUs that are… very different characteristics to some of the computers that we've seen before.”
He describes how demand can spike unpredictably: “They’re experiencing very large load swings literally from 10 megawatts to 50 megawatts.” These surges ripple beyond facility walls. “That can have a significant effect on some of the grids that the data centers and all the other consumers in those areas rely on as well.”
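To make that concrete, here is a minimal sketch (not something Doolan presents) of how an operator might flag facility-level swings against an assumed grid ramp tolerance. The sample trace echoes the 10-to-50-megawatt figure above; the 5 MW/s limit and one-second sampling are illustrative placeholders.

```python
# Minimal sketch: flag load swings that exceed an assumed grid ramp
# tolerance. The trace, sampling interval, and limit are illustrative.

RAMP_LIMIT_MW_PER_S = 5.0  # assumed tolerance of the grid connection

def flag_ramps(load_mw: list[float], interval_s: float) -> list[tuple[int, float]]:
    """Return (sample index, MW/s) for each step whose ramp exceeds the limit."""
    violations = []
    for i in range(1, len(load_mw)):
        ramp = abs(load_mw[i] - load_mw[i - 1]) / interval_s
        if ramp > RAMP_LIMIT_MW_PER_S:
            violations.append((i, ramp))
    return violations

# A synchronized GPU job pausing and resuming can look like a square wave:
trace_mw = [10, 10, 50, 50, 48, 10, 10, 50]  # sampled once per second
for idx, ramp in flag_ramps(trace_mw, interval_s=1.0):
    print(f"t={idx}s: {ramp:.0f} MW/s exceeds the assumed limit")
```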
Meanwhile, the internal systems that support cooling are evolving too. “The CDUs, the cooling distribution units that are providing the water to cool all this equipment, they're all new now. So, we're having to learn a new asset class and how to maintain and operate that and look for failure modes and how we can do condition monitoring and predictive maintenance on a whole new asset class.”
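Condition monitoring on an unfamiliar asset class often starts with simple statistical baselines. Below is a minimal sketch, assuming a hypothetical stream of CDU supply-temperature readings; the window size, threshold, and sample data are all illustrative.

```python
# Minimal condition-monitoring sketch for a hypothetical CDU temperature
# feed: flag readings outside a rolling-baseline band. The window size,
# 3-sigma threshold, and sample values are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def monitor(readings, window=30, n_sigmas=3.0):
    """Yield (index, value) for readings beyond mean +/- n_sigmas * stdev
    of the trailing window, a crude early-warning signal."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > n_sigmas * sigma:
                yield i, value
        history.append(value)

# Stable supply temperature around 30 degC, then an upward drift.
feed = [30.0 + 0.1 * (i % 3) for i in range(60)] + [30.5, 31.5, 33.0, 35.0]
for i, v in monitor(feed):
    print(f"sample {i}: {v:.1f} degC outside the expected band")
```

A real deployment would fold in more signals (flow rate, pressure drop, pump power), but the workflow Doolan describes, learning failure modes and building predictive maintenance around them, often starts from baselines like this.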
The Human Factor
Asked about outages, Doolan doesn’t point to hardware first. “So there's really, I think, two things that can cause data centers to go down and cause the big headlines of various applications going off… sometimes it's the equipment that fails. But it often is the processes and the people that fail as well.”
That’s why he pushes his teams toward relentless preparation. “I always say plan, plan, plan, plan twice, switch once.”
The Workforce Gap
Staffing, Doolan notes, is its own bottleneck. “The industry takes hundreds if not thousands of people to build the data centers and then it requires a lot of people to operate the data centers meaning that it takes a long time to bring them up to speed.”
And that learning curve matters. “So, if we do suffer with attrition then obviously that increases the risk as well.”
Want the full picture?
Watch Mike Doolan’s RED Talk on data center reliability, AI demand, and operational resilience at RED Talks.