
Sustainable Strategies: Designing Data Centers with Carbon in Mind

Data centers are making the rapid growth of cloud computing and artificial intelligence (AI) possible, making them a coveted asset class in commercial real estate. But these facilities come with a downside: a large environmental impact. They require enormous amounts of water and electricity to function, on par with the consumption of small cities.

Fortunately, with greater attention to clean energy sourcing, advances in chip cooling techniques and thoughtful engineering design practices, it is possible to shrink the energy and water footprint of these critical facilities. To take advantage of such opportunities, property owners and design teams should consider the following approaches:

Consider factors beyond financial incentives in site selection. Although geography and grid capacity should be fundamental factors in site selection, the lure of tax incentives sometimes overrides their importance. But development teams that neglect to engage in initial conversations with local power and water utilities may face serious complications. Some local grids and water utilities simply do not have the requisite capacity to serve both a data center and the community.

We’ve all heard the unfortunate stories about new hyperscale facilities gobbling up the energy and water resources of a nearby small town. Early coordination with utilities on present and forthcoming resource capacity is a critical first step to avoid putting local communities at risk.

The local climatic context of the site can make a big difference in energy, water and carbon performance, and it should guide engineering decisions about mechanical cooling. The two biggest decisions center on how heat gets removed from the server racks (forced-air or liquid-cooled) and how that heat gets rejected to the atmosphere (air-cooled, water-cooled or through evaporative equipment).

While evaporative cooling and water-cooled chiller systems are more energy-efficient, they can consume millions, if not hundreds of millions, of gallons of water for new hyperscale facilities. On the flip side, air-cooled systems save water but operate at lower energy efficiencies.
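To put rough numbers on that trade-off, consider the following back-of-the-envelope comparison. The PUE and WUE figures are hypothetical assumptions chosen for illustration, not measurements from any actual facility:

```python
# Illustrative comparison of annual energy and water use for two heat
# rejection approaches. PUE = total facility energy / IT energy;
# WUE = liters of water per kWh of IT load. All figures are assumed.

IT_LOAD_MW = 50            # assumed hyperscale IT load
HOURS_PER_YEAR = 8760

options = {
    # name: (assumed PUE, assumed WUE in L/kWh)
    "evaporative (water-cooled)": (1.2, 1.8),
    "air-cooled (dry)":           (1.4, 0.0),
}

it_kwh = IT_LOAD_MW * 1000 * HOURS_PER_YEAR
for name, (pue, wue) in options.items():
    total_kwh = it_kwh * pue
    water_liters = it_kwh * wue
    print(f"{name}: {total_kwh / 1e6:.0f} GWh/yr, "
          f"{water_liters / 3.785 / 1e6:.0f} million gallons/yr")
```

With these assumptions, the evaporative design uses roughly 15% less electricity but consumes water on the order of hundreds of millions of gallons per year, while the dry design uses none, which is exactly the tension described above.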

There’s no one-size-fits-all. Take the case of the California desert. There, air-cooled data centers would be a sustainable approach because they use no water and because California’s power generation consists of 60% carbon-free sources, on average, with a target of 100% carbon-free by 2045.

The reality is that fewer data centers are being built in California due to construction costs and regulations, and those that currently exist are typically cooled with evaporative coolers, which require large volumes of water. In places like West Virginia, where rainfall is sufficient and the largest source of grid electricity is coal, water-cooled systems may be a better choice. That said, as we've seen in the news, water-cooled facilities are having negative environmental and community impacts even in historically wet regions.

A heat rejection solution that design and engineering teams should consider at the start of any project is the notion of recovering and reusing “waste” heat to limit or even eliminate rejection to the environment.

For data centers, this solution potentially means colocating near facilities that require a great deal of heat, such as hospitals, wastewater processing facilities and certain types of industrial and manufacturing plants. Proximity allows the waste heat from data centers to be recycled and used in the neighboring facility. The heat could even be used for surrounding residential communities that are served by district heating systems.
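A quick estimate shows why this heat is worth capturing: nearly all of the electricity consumed by servers ultimately leaves the building as heat. The sketch below uses assumed figures (facility size, recovery fraction, per-home heating demand) purely for illustration:

```python
# Back-of-the-envelope estimate of reusable waste heat from a data center.
# Nearly all electricity consumed by IT equipment is rejected as heat, so
# recoverable thermal output roughly tracks the IT load. All inputs below
# are assumptions for illustration only.

IT_LOAD_KW = 10_000        # assumed 10 MW facility
RECOVERY_FRACTION = 0.7    # assumed share capturable in a water loop
HOME_HEAT_DEMAND_KW = 10   # assumed average home heating demand

recovered_kw = IT_LOAD_KW * RECOVERY_FRACTION
homes_served = recovered_kw / HOME_HEAT_DEMAND_KW
print(f"~{recovered_kw / 1000:.0f} MW recoverable, "
      f"roughly {homes_served:.0f} homes via district heating")
```

Even with a conservative recovery fraction, a mid-sized facility could in principle heat hundreds of homes, which is why the colocation strategy is so attractive where district heating exists.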

That type of colocation, where data centers within urban areas power district heating systems, has met with success in Europe. But due to urban sprawl and limited district heating infrastructure, as well as the desire to build data centers far from potential disturbances, this strategy has yet to be implemented in a meaningful way in the U.S.

Apply new strategies and technologies. Advances in new cooling strategies and technologies are emerging nearly as quickly as AI adoption. The traditional method of simply and blindly forcing cold air into a room is waning in popularity, and it is impractical for the power density of hyperscale facilities.

A more effective and efficient approach is to bring chilled water closer to the server cabinets and to organize airflow with hot-aisle and cold-aisle containment. This setup prevents mixing, ensuring that only cold air reaches the intake side of servers and that hot exhaust air is directed to the room's return ductwork, maximizing cooling effectiveness and limiting short-circuiting of heat within the space.

For hyperscale applications specifically, an even more effective technique is direct-to-chip cooling, in which the majority of heat from the chip is captured directly via chilled water (a much better heat-transfer fluid than air).

When coupled with high-temperature cooling (using 85°F water in lieu of the typical 45°F supply), this strategy can significantly reduce the amount of compressor energy used by the data center. Liquid cooling isn't replacing air entirely; it replaces air at the chip, while air and water still handle heat rejection at the facility level.
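The physics behind that compressor savings can be seen in an idealized (Carnot) efficiency calculation. Real chillers achieve only a fraction of the Carnot coefficient of performance, and the 95°F condensing temperature below is an assumption, but the trend holds:

```python
# Idealized (Carnot) chiller efficiency at two chilled-water setpoints,
# illustrating why raising the supply temperature cuts compressor energy.
# Assumes a fixed 95F condensing temperature; actual chillers reach only
# a fraction of the Carnot COP, but the relative trend is the same.

def f_to_k(temp_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32) * 5 / 9 + 273.15

def carnot_cop(chilled_f, condensing_f=95.0):
    """Ideal COP = T_cold / (T_hot - T_cold), temperatures in kelvin."""
    t_cold = f_to_k(chilled_f)
    t_hot = f_to_k(condensing_f)
    return t_cold / (t_hot - t_cold)

for setpoint in (45, 85):
    print(f"{setpoint}F chilled water: ideal COP ~ {carnot_cop(setpoint):.1f}")
```

The ideal COP at an 85°F setpoint is several times higher than at 45°F, and in mild climates 85°F water can often be produced with no compressors at all (so-called free cooling).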

Another new technique that design teams are exploring is direct submersion. Here, racks are placed within a mineral oil bath to capture heat directly. But because this process is complicated and expensive, it is less popular than direct-to-chip cooling.

In addition to the impact of good engineering on data center energy use, there is also a blossoming, and slightly ironic, notion of leveraging AI to optimize AI. In theory, this can be employed in two layers: real-time and predictive optimization of chillers, pumps, fans and heat rejection equipment; and ongoing optimization and scheduling of the computing work performed by the chips themselves (though the latter is outside the influence of mechanical, electrical and plumbing engineers). Google, for instance, recently made headlines by spearheading an AI-driven facility operator in one of the company's Midwest data centers.

Leverage alternative sources of power. Although this article focuses on how to best utilize energy once it flows onto the site, it's important to note that some owners are exploring the possibilities of on-site generation through small modular reactors (SMRs), hydropower and geothermal energy. It's hard to find examples of these approaches in the U.S. today, partly because of strict regulations, but they represent great potential for grid-energy savings.

Retrofit existing facilities. Few owners pursue retrofits of existing facilities, but doing so may be worthwhile. For example, replacing typical servers with units that utilize direct-to-chip technology and high-temperature cooling can allow those systems to use the condenser water (rejected heat) from the existing cooling system. This method effectively expands capacity without the need to purchase new chillers.

Take advantage of energy modeling. Once a site is selected, energy modeling can help owners and design teams choose the most energy-efficient cooling methods and save money on eventual operations. Models can project how a data center will use energy in the future, help right-size generators and battery storage systems, and calculate power usage effectiveness (PUE).

This information can help owners decide between air-cooled and water-cooled systems and understand the trade-offs between the two. The models take location and utility costs into account while illuminating ways to mitigate waste in the design.
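PUE, the metric these models report, is simply total facility energy divided by the energy delivered to the IT equipment. The sketch below shows the calculation with hypothetical sample figures for the two system types discussed:

```python
# Power usage effectiveness (PUE): total facility energy divided by the
# energy delivered to IT equipment. A PUE of 1.0 is the theoretical ideal.
# The sample figures below are hypothetical, for illustration only.

def pue(it_kwh, cooling_kwh, power_dist_kwh, other_kwh=0.0):
    """PUE = total facility energy / IT equipment energy."""
    total = it_kwh + cooling_kwh + power_dist_kwh + other_kwh
    return total / it_kwh

# Example: an air-cooled vs. a water-cooled design serving the same IT load
air_cooled = pue(it_kwh=100_000, cooling_kwh=35_000, power_dist_kwh=8_000)
water_cooled = pue(it_kwh=100_000, cooling_kwh=18_000, power_dist_kwh=8_000)
print(f"air-cooled PUE:   {air_cooled:.2f}")
print(f"water-cooled PUE: {water_cooled:.2f}")
```

An energy model refines these cooling-energy inputs hour by hour for the chosen site and climate, which is what lets owners weigh a lower PUE against the added water consumption.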

Learn. Generally, the most important thing that owners can do is to learn how data centers operate and how operation affects sustainability. It’s valuable, therefore, to bring in mechanical engineers and high-performance design consultants early on — even as early as the site selection process. These consultants can develop the energy models that educate the owners and inform decision-making.

Furthermore, the consultants can be available to speak to communities about how a new data center will affect the local grid and utilities and what methods an owner plans to employ to reduce the data center’s carbon footprint. In many cases the fears of community members can be allayed with communication and education.

Conclusion
Some communities don't like the idea of nearby data centers because residents believe that data centers will limit access to electricity and water in their region. This has been a legitimate concern. But today, thanks to many new and emerging technologies, these concerns no longer have to limit development. Instead, they present opportunities for talented designers to craft designs that account for geographic location, access to utilities and efficiency.

Owners can also utilize carbon-minded rating systems like LEED (Leadership in Energy and Environmental Design) specifically for data centers to demonstrate to future tenants and community stakeholders that sustainability is paramount.

Furthermore, companies like Microsoft have pledged to achieve high levels of sustainability and minimize any negative impacts on local communities while supporting initiatives that benefit residents. You can read more about Microsoft’s Data Center Community Pledge on the company website.

In summary, data centers provide a plethora of opportunities for innovative design despite being significant users of energy. In the coming years, we will see more and more critical facilities that offer the high reliability tenants expect, but with a low-to-no carbon footprint.

Owners and developers who adopt some of the strategies identified above can help us get to this future more quickly.