Heat Wave

While today's equipment racks may look mechanically identical to those of a generation ago, our digital componentry crams far more functionality into the same volume. But we've yet to adequately deal with the facility impacts of putting 10 pounds in the proverbial five-pound bag. Most existing facilities are hopelessly incapable of supporting our new levels of densely packed technology.
Watching a computer company webinar recently, even the presenters were surprised that the number one issue raised by their consultant base was how to extract heat from the room quickly enough. While the hosts were patting themselves on the back over their new blade form-factor kit, the consultants were panicking over how to cool this new monstrosity. It was clear the presenters had never tried to install a fully laden rack in an actual facility themselves.
How bad is the situation? We've all seen the ads with a pretty rack full of quad-Xeon blade servers sitting all by itself in a corner, quietly humming away. Nothing could be further from the truth. Using one company's online Data Center Planner, a 42 RU rack with six levels of moderately loaded blade servers specs out at:
  • Weight: 2,257 pounds
  • Heat load: 13,116 Watts
  • Power: 63 amps @ 208 volts, requiring 24 C19 plugs
  • HVAC: 3.7 tons of "sensible cooling" on a 24-hour duty cycle
For comparative purposes, a generation ago a non-amplifier rack would draw roughly 1,500 Watts, so we now need more than eight times the power and cooling in the same volume.
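These figures hang together arithmetically. As a quick sanity check, here is a minimal back-of-envelope sketch of the unit conversions involved, assuming a power factor near 1.0 and the standard 12,000 BTU/hr per ton of cooling; the heat load, voltage, and legacy figure come straight from the numbers above:

```python
# Back-of-envelope conversions for the rack specs above.
# Assumes a power factor of ~1.0 and the standard 12,000 BTU/hr
# per ton of cooling; real equipment and code-mandated derating
# will differ.

BTU_PER_WATT_HOUR = 3.412   # 1 W dissipated continuously = 3.412 BTU/hr
BTU_PER_TON = 12_000        # one ton of cooling = 12,000 BTU/hr

heat_load_w = 13_116        # planner's heat load for the loaded 42 RU rack
legacy_rack_w = 1_500       # typical non-amplifier rack a generation ago
line_voltage = 208          # volts

btu_per_hr = heat_load_w * BTU_PER_WATT_HOUR
cooling_tons = btu_per_hr / BTU_PER_TON
amps = heat_load_w / line_voltage
ratio = heat_load_w / legacy_rack_w

print(f"Heat load: {btu_per_hr:,.0f} BTU/hr -> {cooling_tons:.1f} tons of cooling")
print(f"Current draw: {amps:.0f} A @ {line_voltage} V")
print(f"Density increase: {ratio:.1f}x the legacy rack's load")
```

Running this reproduces the planner's figures almost exactly: about 44,750 BTU/hr, 3.7 tons of cooling, 63 A at 208 V, and roughly 8.7 times the legacy rack's load.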
So this new rack density configuration requires the same air conditioning as a house in summer, weighs about as much as a Mini Cooper, and draws the same power as two electric cars charging in your garage. The trend is only likely to get worse in the near future. More to the point, what of a room with multiple racks? Architects and MEP engineers will think we're nuts when we start handing them the new power and heat loads for these racks.
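To make the multiple-rack question concrete, here is a rough sketch of room-level totals. The eight-rack count and the 0.9 diversity factor are hypothetical assumptions for illustration; on a real project those numbers come from the MEP engineer's load study:

```python
# Rough room-level totals for a machine room full of these racks.
# The rack count and diversity factor are hypothetical; on a real
# project these numbers come from the MEP engineer's load study.

RACK_COUNT = 8              # hypothetical room
DIVERSITY = 0.9             # assume not every rack peaks at once

rack_heat_w = 13_116        # per-rack heat load from the planner
rack_weight_lb = 2_257      # per-rack weight from the planner

room_heat_w = RACK_COUNT * rack_heat_w * DIVERSITY
room_cooling_tons = room_heat_w * 3.412 / 12_000
room_weight_lb = RACK_COUNT * rack_weight_lb

print(f"{RACK_COUNT} racks: {room_heat_w / 1000:.0f} kW of heat, "
      f"{room_cooling_tons:.0f} tons of cooling, "
      f"{room_weight_lb:,} lb of floor load")
```

Even this modest hypothetical room lands near 27 tons of cooling and nine tons of floor load, which is exactly the kind of number that sends an architect reaching for the structural tables.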
So what is the best way to coordinate these impacts with the architects and MEP engineers assigned to these efforts? I suspect there will be a period of training in each other's disciplines, along with some trial-and-error tribulations, over the next few years. Our industry will need to both better understand their pathways to success and sell them on why this is an overall enhancement to their efforts. The most important part of the pitch is that we now require a smaller architectural footprint. The second part is that while we've scrunched our equipment into a smaller space, we have not lessened the power, heat loads or cabling the equipment requires. If we put more of these compacted systems into the previous footprint, the impression will be that our facility impacts have gone up tremendously to no obvious benefit.
So once again, technological enhancements to our equipment have created additional responsibilities for our industry. We systems integrators will need to work hand in hand with corporate IT departments, architects and facility design engineers to make sure our facilities are well coordinated, ensuring sufficient power, cooling and monitoring capabilities are available as needed.
It's only after we get all of this organized that we will be able to sit down in our comfy chairs once again.