Resistance is Futile: Comply with DCOI or Be Assimilated

Schneider Electric offers U.S. Federal government agencies guidance on how to comply with new federal data center mandates

My team, the Data Center Science Center, has just written White Paper 250, “DCOI Compliance: A Guide to Improving PUE in U.S. Federal Data Centers”, to help agencies achieve or surpass the PUE targets mandated by the Federal Government’s Data Center Optimization Initiative. The paper describes the PUE metric, its drivers, and best practices for reducing physical infrastructure power losses. This blog summarizes the practices related to the mechanical plant and highlights other relevant white papers and free tools that will assist you in your efforts to improve energy efficiency.

The U.S. Federal Data Center Optimization Initiative (DCOI) lays out a host of requirements aimed at consolidating government agency data centers and making the remaining ones more energy efficient and accountable for performance.  One of the key requirements for existing data centers is to “achieve and maintain” a PUE of 1.5 or less; new, proposed data centers must be designed and operated at 1.4 or less, with 1.2 being “encouraged”.  Another key requirement is the deployment of data center infrastructure management (DCIM) tools in all Federal data centers, since manual collection of PUE data will no longer be acceptable.  If Agency CIOs fail to achieve these scores and implement DCIM by September 30, 2018, “Agency CIOs shall evaluate options for consolidation or closure…”[1]  In other words, comply or be assimilated.

Fortunately for these CIOs, legacy data centers often have plenty of room to improve infrastructure efficiency by reducing power and cooling energy losses, bringing PUE within these limits.  Keep in mind that DCOI targets are expected to result in the closure of approximately 52% of the overall Federal data center inventory[1], so it’s important to make as many improvements as is feasible even if you’re already meeting the required 1.5 (or 1.4 for new builds).  In other words, increase your odds of survival by being as good as you can be.
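The white paper covers the metric in detail; as a minimal illustration (the function names and energy figures below are hypothetical, not drawn from the paper or the DCOI text), PUE is simply total facility energy divided by IT equipment energy, which makes the compliance check straightforward:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def dcoi_compliant(pue_score, new_build=False):
    """DCOI thresholds: 1.5 or less for existing sites, 1.4 or less for new ones."""
    limit = 1.4 if new_build else 1.5
    return pue_score <= limit

# Hypothetical legacy site: 3.0 GWh total facility energy, 1.875 GWh of IT load.
score = pue(total_facility_kwh=3_000_000, it_equipment_kwh=1_875_000)
print(round(score, 2), dcoi_compliant(score))  # 1.6 False -- over the 1.5 limit
```

A site at PUE 1.6, as in this example, would need to shave its non-IT overhead before the deadline; the cooling-side practices below are where most of that headroom usually is.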

Agencies should start with an efficiency assessment of the site in question.  Find out where you stand today and identify areas for improvement.  White Paper 154, “Electrical Efficiency Measurement for Data Centers”, explains how efficiency can be measured, evaluated, and modeled, including a comparison of periodic assessments vs. continuous monitoring.  For those lacking the resources to do an assessment now, third-party vendors like Schneider Electric can provide efficiency assessments that:

  • Identify problem areas
  • Provide recommendations for quick, inexpensive fixes
  • Offer help developing a long term energy efficiency strategy

Schneider Electric also offers a very handy (and free) TradeOff Tool that enables you to quickly and easily compare the impact of physical infrastructure improvements and changes on PUE.

Most of the power losses in the physical infrastructure (i.e., power and cooling) come from the mechanical plant, so it’s important to focus much of your improvement effort on the cooling system.

The following is a list of best practices related to cooling:

  • Hot aisle / cold aisle arrangement – Rows of racks should be oriented so that the fronts of the servers face each other and, likewise, the backs of the rows face each other. This orientation creates what is known as the “hot aisle / cold aisle” row layout, which helps separate cold supply air from hot return air.  Such a layout, if properly organized, can greatly reduce energy losses and also prolong the life of the servers.
  • Air containment systems – Adding containment to further isolate the hot and cold air streams can further improve PUE. Containing the hot aisle allows chilled water temperatures to be raised, which lets the cooling plant operate in an energy-saving economizer mode for more hours per year than a cold aisle containment system would, yielding significant electrical cost savings.  Cooling set points can be raised while still maintaining a comfortable work environment for people.  For existing data centers, however, containing the cold aisle is usually the easier retrofit.  Retrofitting an existing perimeter-cooled, raised floor data center with a hot aisle containment system can save 40% in annual cooling system energy cost, corresponding to a 13% reduction in annualized PUE.  See White Paper 153, “Implementing Hot and Cold Aisle Containment in Existing Data Centers”, for more information.  For new data centers being planned or designed, see White Paper 135, “Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency”.
  • Economizer mode operation – Operate cooling plants in economizer mode as much as possible. Economizer modes save energy by using outside air during favorable climate conditions, allowing refrigerant-based components like chillers and compressors to be shut off or operated at reduced capacity. In certain climates, some cooling systems can save over 70% in annual cooling energy costs by operating in economizer mode, corresponding to a 15% reduction in PUE.  TradeOff Tool 11, “Cooling Economizer Mode PUE Calculator”, makes it easy to quickly compare the impact of specific geographies and cooling architectures on PUE, energy cost, and carbon emissions.  White Paper 132, “Economizer Modes of Data Center Cooling Systems”, describes the different economizer modes and compares their performance against key data center attributes.
  • Higher IT inlet temperatures – With the latest revisions to ASHRAE standard TC9.9, which increased the recommended temperature ranges, there has been industry pressure on operators to raise IT inlet air temperature set points. This reduces chiller energy use by increasing the efficiency of the chiller and by allowing it to operate in economizer mode for a longer period of the year. These efficiency gains, however, can be offset by increased energy use from dry coolers, CRAHs, and IT server fans spinning faster as a result of the higher inlet temperatures.  White Paper 221, “The Unexpected Impact of Raising Data Center Temperatures”, provides a CAPEX and energy cost analysis of a typical data center to demonstrate the importance of looking at the data center holistically, inclusive of IT equipment energy.  The impact of raising temperatures on reliability is also considered.
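To see how cooling-side savings like those cited above translate into PUE, note that PUE = (IT energy + cooling energy + other losses) / IT energy, so cutting cooling energy by some fraction lowers PUE by that fraction times the cooling-to-IT energy ratio. Here is a back-of-the-envelope sketch with hypothetical numbers (the 1.6 starting PUE and 0.5 cooling-to-IT ratio are illustrative assumptions, not figures from the white papers):

```python
def pue_after_cooling_savings(pue_before, cooling_to_it_ratio, savings_fraction):
    """
    Estimate the new PUE when cooling energy drops by `savings_fraction`.
    Since PUE = (IT + cooling + other losses) / IT, cutting cooling energy
    by a fraction s reduces PUE by s * (cooling energy / IT energy).
    """
    return pue_before - savings_fraction * cooling_to_it_ratio

# Hypothetical legacy site: PUE 1.6, cooling plant drawing ~0.5 W per W of IT.
# A 40% cooling energy saving (e.g., from hot aisle containment):
before = 1.6
after = pue_after_cooling_savings(before, cooling_to_it_ratio=0.5, savings_fraction=0.4)
print(round(after, 2))                       # 1.4
print(round((before - after) / before, 3))   # 0.125, i.e. a ~12.5% PUE reduction
```

With these assumed inputs the result lands near the 13% annualized PUE reduction cited for hot aisle containment retrofits above; your own site's cooling-to-IT ratio is what an efficiency assessment would pin down.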

See White Paper 250, “DCOI Compliance: A Guide to Improving PUE in U.S. Federal Data Centers” for a more complete list of best practices to help you achieve or surpass DCOI’s required PUE target for existing and new data centers.




  • Patrick Corbin

    7 years ago

    We are looking for a sales consult to arrive at a solution and price for implementation of your DCIM product in compliance with DCOI. Can you please identify the appropriate avenue to arrange this meeting?

    • Jae Wiss

      7 years ago

      Hi Patrick – thank you so much for your comment. I believe you have been in contact with someone on the team. Please let us know if there is anything else I can do to assist.
