Energy Efficiency Programs and Evaluation in Europe (And more importantly, an excuse to talk about the World Cup)

June 29, 2010

I am finally returning after a long hiatus from blogging, and from two weeks in Europe after speaking at the International Energy Program Evaluation Conference (IEPEC).  My talk was on EMI’s work evaluating data center efficiency programs for US utilities. EMI was also a silver sponsor for the event, which was IEPEC’s first conference outside the U.S. The conference included numerous interesting sessions on methods and challenges in evaluating energy efficiency programs in the U.S., Europe, Australia and China, making it a truly international conference.

The U.S. vs. the Rest of the World

There was an interesting undercurrent at the conference stemming from differences in the way energy efficiency programs are run internationally. In the U.S., evaluation tends to focus on utility-run programs, which are typically held accountable for goals set by state utility commissions. At the federal level, energy efficiency programs in the U.S. do not have the same level of accountability – there are efforts to determine savings accurately, but not the same focus on independent evaluation to confirm them.

In the rest of the world, especially in Europe, it seems that programs and evaluation mostly sit at the national government level and do not have the same level of accountability that U.S. utilities are subject to from regulators. One of the most interesting sessions at IEPEC was a panel discussion with Paolo Bertoldi of the European Commission and Dian Grueneich, the Lead Commissioner for Energy Efficiency of the California Public Utilities Commission.  The debate revolved around the need for energy savings goals and accountability through evaluation.  Paolo of the EC contended that program goals are unnecessary: you can create programs that help people save energy, and then drive people to save energy through other efforts like carbon taxation.  With this approach, and with no goals to meet, there is less need for independent evaluation to determine a program's impacts.  Commissioner Grueneich, on the other hand, argued for goals and accountability for the money spent on energy efficiency.

I can see Paolo’s argument, but the scientist and engineer in me wants to see real results.  You set up a program on the theory that it will provide energy savings, but until you study it closely you do not know what savings are actually achieved.  Furthermore, if you do not have good metrics to measure the impacts of the program, how do you know when a program needs improvement, or how much it has improved when you make changes?  In practice, there is often a large gap between program theory, implementation and results; evaluators help define and reduce these differences for program designers.  In addition, solid process evaluation helps programs find ways to improve their processes, which in turn improves program impacts.

I have a feeling this debate will continue, but IEPEC deserves credit for bringing this conversation to a truly international stage.  There seemed to be a lot of interest from European participants in learning more about the established evaluation techniques used for U.S. utilities and the development of international evaluation standards, and the success of energy efficiency programs cannot help but improve through the sharing of this information.

So, What Does This Have to Do with the World Cup?

The week after the conference I spent traveling in northeastern France with a short jump into Germany.  As an avid soccer fan, I spent much of this time sitting in cafes and restaurants watching the World Cup.  During the France v. Uruguay game the streets of Strasbourg were virtually barren, and most everything was closed on a Friday night at 7:30 pm.  It seemed as if the French were busy watching quietly at home.  Across the border, in Freiburg, Germany, every restaurant had what looked like a brand new TV outside the door so that patrons could sit outside and watch the game, or casual passers-by could catch it (sales of flat screens must be through the roof in Germany).  They also had huge screens in some of the public parks for viewing.  After Germany beat Australia 4-0 in their first game, the bars and streets were absolutely mobbed with celebration.  Back in the U.S., things are less intense, but interest in the Cup, and in the U.S. National Team, seems to be at an all-time high after the U.S.’s great run through the group stage and emotional win on Wednesday.

So What’s the Point?

These experiences, layered on top of each other, have driven home the fact that the world is increasingly becoming an international community.  There are large differences in mentalities, traditions and practices around the world, and engagement in the international community, whether through friendly competition or collaboration, helps us understand and learn from our international peers and see things from alternate viewpoints.  I like to think that international engagement and sharing of information helps everyone involved, so thanks to IEPEC for a great conference, and to FIFA and ESPN for such a great event.

As a side note, IT technology and data centers are two of the tools helping the world participate in these global experiences.  The New York Times reports that traffic is up 70% over its traditional annual peak during the final four – another reminder of the explosive growth of internet usage and data center power consumption.  Ok, enough on that.  Spain / Portugal kicks off in an hour – should be a scorcher! Check it out.

Data Center Temperatures and the News

May 17, 2010

I use a variety of ways to track the news these days, and few of them involve traditional paper publications.  I mostly use web-based and, yes, cloud computing applications to manage the inflow of news. These primarily consist of Google Reader, Google alerts, Twitter, and email newsletters. I bring this up for two reasons:

#1. Because, unlike physical magazines or journals, I don’t actually own any of the content I read.  It’s all stored somewhere else, in various data centers around the world, waiting for me to access it.

#2. It’s a great example of how web-based tools can increase the efficiency of my life, and even decrease my energy footprint.

This latter point is important, because it’s what a lot of us interested in data center efficiency stress all the time: we don’t want people to stop using data centers and the tools they provide, we just want data centers to follow best practices and minimize their energy footprint while providing the same level of service.

As an example of my news review practices, I just skimmed the titles of probably over a hundred articles in a pretty short time (~30 minutes). Of the few articles I actually skimmed or read, I found this interesting blog post on the data center temperature debate. This is not a blog I follow, or an issue I follow particularly closely, but after scouring my many news sources this is the article that caught my eye enough to post about.  It does a good job of explaining the issues around data center temperature setpoints and the arguments of whether they result in energy savings.  The bottom line, as usual, is that energy savings will likely depend on your specific situation – if it allows you to cool more with outside air it’ll likely help, if it just causes your server fans to kick into higher gear it might not.

I’m not going to take one side or the other on this issue, but what I think is interesting is this: for years, engineers have overdesigned everything. Structures are built with a factor of safety of many times the needed strength; capacities are built out well over requirements just in case more is needed later or something goes wrong; and, yes, data centers are kept much colder than they need to be just to be safe. In some cases these decisions are based more on superstition than sound engineering. One of the things computers and servers have given us is the ability to dial in and optimize some of these design points through computer aided design, advanced monitoring, automated controls, etc.

Here’s where it all comes together:  these new tools should allow a savvy data center operator to dial in the correct cooling level across the data center and not have to significantly overdesign the cooling system or setpoints.  Good monitoring should identify problems quickly or even before they occur.  Good controls should alter settings to compensate for potential problems.  The two together should be tuned to optimize efficiency and decrease costs.  I’d be willing to bet that for the majority of data centers there can be significant energy savings by increasing temperatures, but you need to use these tools to address potential hotspots and to determine if that increase should be 1 degree, 2 degrees or 10 degrees.  Too much or too little and there may not be savings, or worse, there could be an increase in energy use.
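
The tradeoff reduces to simple arithmetic. Here is a minimal sketch with entirely hypothetical per-degree numbers (real values depend on the facility, and the `net_savings_kw` helper is mine, not an industry tool):

```python
# Hypothetical sketch of the setpoint tradeoff. The per-degree chiller
# savings and fan penalty are invented numbers; real values are
# facility-specific, and fan power actually rises nonlinearly with speed.

def net_savings_kw(delta_t_f, chiller_kw_per_degf=5.0,
                   fan_kw_increase_per_degf=0.0):
    """Chiller savings minus added server-fan power for a setpoint
    increase of delta_t_f degrees F (linear simplification)."""
    chiller_savings = chiller_kw_per_degf * delta_t_f
    fan_penalty = fan_kw_increase_per_degf * delta_t_f
    return chiller_savings - fan_penalty

# Economizer-friendly site: fans barely react, the raise pays off.
print(net_savings_kw(5, fan_kw_increase_per_degf=0.5))   # 22.5 kW saved
# Hotspot-limited site: server fans spin up and erase the savings.
print(net_savings_kw(5, fan_kw_increase_per_degf=6.0))   # -5.0 kW (worse!)
```

The monitoring and controls do the real work here: they are what tell you which of these two regimes your facility is actually in.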

So, in summary, we should keep using computers and data centers to increase efficiencies across all aspects of the economy, and, of course, keep applying these techniques to data centers themselves to slow the energy growth of these tools.

Now, if I could just get the Economist to forward my magazine subscription to my new address.  How could this possibly take 3 weeks in this day and age? Maybe that will be the subject of another post. No matter, I’ll just read it online…

Key Take-Aways of Colocation Research for Silicon Valley Power

April 22, 2010

Last fall EMI, under subcontract to Summit Blue Consulting (now a part of Navigant Consulting), performed a process evaluation for Silicon Valley Power (SVP) in Santa Clara, California.  The process evaluation focused on identifying barriers to colocation data centers (or “colos”) participating in SVP’s energy efficiency programs.  Colos provide data center infrastructure to support other companies’ IT equipment – basically data center space for rent.  This space can be leased in units as small as half a rack and as large as thousands of square feet.  The latter situation, where whole rooms or even whole data centers are leased to one tenant, is known as “wholesale” colocation.

Santa Clara has a dense concentration of data centers and colocation facilities.  This is largely due to the city's location in the heart of Silicon Valley, but also to SVP's relatively low cost of power and highly reliable power delivery. As a result of this high concentration, data centers are a main focus of SVP's energy efficiency programs, and in 2007-2008, roughly 60% of SVP’s energy savings came from data center related projects.

EMI conducted this research through online research, a review of relevant reports and documents, and in-depth interviews with colo managers from inside and outside SVP’s service territory as well as other industry experts.  Here are some of the main take-aways from EMI’s research into the participation of colocation facilities in SVP’s programs.

  • Aggravated Barriers – Since providing reliable data center space is a colo’s main business, some of the barriers to energy efficiency for typical data centers are aggravated in the colocation facilities’ case.  These include an extreme focus on reliability, with little interest in energy efficiency.  It also creates an extreme case of the split incentive, because the people paying the power bill and the IT purchasers work for completely different companies, so there is little motivation to invest in more efficient equipment.
  • Pricing Models of Colos Affect Investment in Efficiency – Different colocation companies have different methods for splitting up charges for power, cooling and space.  As data centers become more constrained by power and cooling (and less constrained by space), some colocation facilities are moving away from space-based charges and toward charging directly for power and cooling, which helps create more of an incentive for their customers to save energy.
  • Difficulty in Reaching the Colocation Facilities’ Customers – SVP, like other utilities, is restricted to offering incentives only to its customers of record.  This is necessary because it allows SVP to recoup the incentive amounts if the measure does not stay in place for the contracted five-year period and therefore does not deliver the full five years of savings SVP claims for it.  As a result, SVP cannot give incentives to the colocation facilities’ customers, as these customers do not pay SVP for their power; instead the cost of power is bundled with their colo charges.  This is a major barrier for many utilities to getting colocation customers to participate in virtualization incentives, for example.
  • Lack of Expertise for Completing Calculations – Some facilities indicated that they did not have the engineering expertise on staff to complete the necessary calculations to receive incentives, as operating colos are basically “a couple IT guys with a sales department.” SVP offers support to fill out the applications, but some potential participants were not aware of this, so this was an area where better communication of the program offerings could help increase participation from companies that need this support.
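
The pricing-model and split-incentive points above are easy to illustrate. This is a hypothetical sketch with invented numbers; the only point is that a tenant who never sees a metered power bill has no payback on an efficiency investment:

```python
# Illustrative sketch of the colo split incentive. All numbers invented.

def tenant_payback_years(project_cost, kw_saved, metered_power,
                         rate_per_kwh=0.10):
    """Simple payback on a tenant's efficiency project. Under space-based
    billing the tenant sees none of the power savings."""
    if not metered_power:
        return float("inf")    # savings accrue to the colo, not the tenant
    annual_savings = kw_saved * 8760 * rate_per_kwh   # kWh/yr x $/kWh
    return project_cost / annual_savings

# A $50k project that cuts 20 kW of continuous load:
print(tenant_payback_years(50_000, 20, metered_power=True))   # ~2.85 years
print(tenant_payback_years(50_000, 20, metered_power=False))  # inf
```

This is why the move toward power-based charges matters: it converts an infinite payback into a bankable one.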

All in all, the evaluation found that SVP’s focus on data centers has been very successful and that SVP is undertaking many efforts to help overcome these barriers, such as emphasizing new construction, where the barriers and inertia to energy efficiency are not as great, and offering technical support where it is needed.  Other opportunities lie in collaborating with other utilities to identify new approaches (e.g., prescriptive measures that simplify the application process) and investigating new ways to reach colo customers.  Although there are many barriers in place for colocation facilities, this is a large, rapidly growing data center market, and it is worth having progressive utilities like SVP continue to push to develop programs and processes to overcome these barriers.

The full report (available here) offers more detail on the colocation market and barriers to their participation in energy efficiency programs.

How big can data centers be? How about 19 football fields?

April 13, 2010

Today I realized that my understanding of how large data centers can get significantly understated reality.  This realization came as I reviewed Data Center Knowledge’s special report on the world’s largest data centers.  I have previously used #5, Microsoft’s Chicago data center, as an example of one of the largest, but was shocked to realize that the largest is almost 60% bigger.  Another interesting result is that seven of the top ten are colocation facilities.  This is significant because it is often difficult to get colocation facilities to engage in energy efficiency programs, especially after they’re operational. The other three – the only corporate data centers in the top 10 – are all Microsoft facilities. A number of the facilities (including the largest) are buildings converted to data centers from other uses.  Since these are not purpose-built data centers, my guess would be that they are probably not ideally designed in terms of efficiency.

I’m also disappointed to see that relatively few have energy use or even power capacity listed.  In an environment where power is starting to dominate as the primary constraint on data center growth, wouldn’t it make sense to track a list of the largest data centers in terms of energy use?

Here are some other highlights from the report:

#10. SuperNAP (407,000 SF) – Number ten is notable mostly for its power consumption.  At 250 MW of capacity it boasts densities of up to 1,500 W/SF, made possible through advanced cooling using “a high-density T-SCIF (Thermal Separate Compartment in Facility) containment system to fully separate the hot and cold aisles.”

#7. i/o Data Centers Phoenix ONE (538,000 SF) – This one just seems to keep popping up, with “enormous rooftop array of solar panels that will eventually generate as much as 4.5 megawatts of power for the data center, and a thermal storage system that will allow i/o Data Centers to run chillers for its cooling systems at night when power rates are lower.”

#6. Microsoft’s Dublin Data Center (550,000 SF) – This one operates 100% of the time on outside air through the use of economizers and “Microsoft says it can run its server rooms at temperatures of up to 95 degrees F (35 degrees Celsius),” which should give it an efficiency advantage.

#5. Microsoft Chicago Data Center (700,000 SF) – A large portion of this data center consists of double-stacked 40-foot shipping containers that are each filled with up to 2,000 servers.  Containers make the system highly scalable and efficient.

#1. Digital Realty Trust Lakeside Technology Center (1.1 M SF, 100+ MW of power) – Located in Chicago, this data center used to house the printing presses for the Yellow Book and the Sears Catalog. It was converted to telecom use in 1999 and is now the second-largest power customer of Commonwealth Edison.

Some people might wonder why a behemoth such as Google doesn’t show up on this list. Well, it seems that Google likes to group many data centers together on a campus, while Microsoft tends to go big, and the report only looks at individual buildings, not campuses.

So how big are these?  Let’s put it in perspective:

1.1 million square feet is equivalent to just over 19 football fields

250 MW is equivalent to the average power use of about 200,000 American homes
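
For the skeptical, a quick back-of-the-envelope check of those two comparisons (the household demand figure is an assumption on my part):

```python
# Back-of-the-envelope check of the two comparisons above.
# The household figure is an assumed average demand, not measured data.

FOOTBALL_FIELD_SF = 360 * 160      # 120 yd x 53.3 yd, end zones included
AVG_HOME_KW = 1.25                 # assumed average US household demand (kW)

fields = 1_100_000 / FOOTBALL_FIELD_SF   # Lakeside's 1.1M SF, in fields
homes = 250_000 / AVG_HOME_KW            # SuperNAP's 250 MW, expressed in kW

print(round(fields, 1))   # ~19.1 football fields
print(round(homes))       # 200,000 homes
```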

These numbers really speak to the massive amount of computing needed in modern society, but this is actually not where the majority of data center energy use comes from.  According to the US EPA’s 2007 report to Congress, only 38% of data center energy use in the US comes from “enterprise-class” data centers of greater than 5,000 SF.  The remaining 62% is used in smaller data centers, which means that these smaller facilities offer the largest overall opportunity for energy savings in this industry.

Should Utilities Look at Data Centers to Achieve Increasing Efficiency Goals?

April 1, 2010

The market for energy efficiency is increasing, and more states and Public Utility Commissions (PUCs) are jumping on the bandwagon every day.  But where are these new electricity savings going to come from?

I recently stumbled on this ACEEE report (released in March 2009 and available here) covering the increased energy efficiency goals of many states.  As the paper summary says, “In just the last few years, energy efficiency has evolved from being largely a token gesture or a ‘public benefits’ set-aside, to being a top-priority utility system resource.”  As a result, many states (including Minnesota, Illinois, Ohio, New York, Maryland and Vermont) were increasing their yearly efficiency goals to 1.5% – 2.0% a year, when “the very few top performing states in the nation were only achieving savings in the area of 0.8% per year.”  This fact makes these new savings goals look very aggressive.  In my mind this is a great thing because efficiency is the cleanest and cheapest form of capacity.

There are a few other interesting findings from this report, which reaffirm a couple persistent issues and trends in this industry:

  1. “Energy efficiency spending was relatively balanced between the residential and non-residential sectors (median across the states of 44% and 56% respectively), but that savings were relatively skewed toward the non-residential sector (63% non-residential).”
  2. “Also striking was the extent to which the lighting end use dominated the savings accomplishments, accounting for nearly two-thirds of all savings in the states which had disaggregated data available.  In the residential sector alone, lighting accounted for between 63% and 92% of reported savings.”

So let us summarize and distill what we have learned so far:

  1. States are looking very aggressively to energy efficiency as part of their resource planning, with rapidly increasing goals.
  2. The bulk of this savings is coming from non-residential (e.g., Agricultural, Commercial, and Industrial) measures.
  3. Lighting makes up the vast majority of achieved savings.

So this raises my initial question again – where are these new savings going to come from?

As the price of efficient lighting comes down, and customers are increasingly happy with the quality of new efficient lamp designs, how are utilities going to continue to squeeze savings out of an increasingly saturated market?  Like squeezing juice from an orange, at some point the effort needed increases faster than the juice keeps coming.  To make matters worse for utility programs, as the federal government gets in the mix, new legislation could eventually phase out much of the inefficient lighting.  This increases the baseline lighting efficiency and makes it more difficult to claim large savings for lighting projects.

Since the majority of savings typically comes from the non-residential sector, it seems logical to focus on these industries for more savings. And where better to look than some of the most energy-dense facilities there are – data centers!  Studies indicate that data centers are up to 40 times more energy intensive than typical office buildings and that savings potential can run from 25% – 50% per facility.  This all adds up to a very concentrated opportunity for energy savings.  Sure, there are a number of challenges to utility incentives in this space, but what other industries and facility types are utilities going to look to in order to achieve these kinds of increased savings goals? To me, this speaks to a large need to investigate and overcome the challenges utilities face in creating incentives for energy efficiency in data centers.
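
To see how concentrated that opportunity is, here is a rough sketch built on the figures cited above; the office intensity and savings fractions are assumptions for illustration only:

```python
# Rough sketch of how concentrated the data center opportunity is,
# using the figures cited above (40x intensity, 25-50% savings potential).
# The office intensity and office savings fraction are assumptions.

OFFICE_KWH_PER_SF = 17                    # assumed annual office intensity
DC_KWH_PER_SF = OFFICE_KWH_PER_SF * 40    # "up to 40 times more intensive"

def annual_savings_kwh(sf, kwh_per_sf, savings_fraction):
    return sf * kwh_per_sf * savings_fraction

office = annual_savings_kwh(10_000, OFFICE_KWH_PER_SF, 0.15)
dc_low = annual_savings_kwh(10_000, DC_KWH_PER_SF, 0.25)
dc_high = annual_savings_kwh(10_000, DC_KWH_PER_SF, 0.50)
print(office, dc_low, dc_high)
# The same 10,000 SF yields on the order of 65-130 times the savings.
```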

Utility Sponsored Incentives for Data Center Efficiency

March 18, 2010

One of the big barriers we see to energy efficiency in the data center market is a knowledge gap between utilities that are new to the data center market and data center operators who are not necessarily fluent in energy efficiency or in the language of utilities.  Last week I presented at AFCOM’s 30th Data Center World Conference in Nashville, TN.  I was there to share the research we’ve done at EMI into which utilities are offering data center efficiency incentives, and to try to help close this gap.

Part of the presentation was focused on trying to get the data center operators to understand the utility mindset – what motivates utilities and why it makes sense for them to offer money for energy efficiency. This is one of my favorite slides, because it attempts to answer one of the questions I get the most from non-utility/EE folks:

I love this question because it really gets to the heart of the economics of energy efficiency.  In the end, it often comes down to this singular point made here by Bruce Folsom, the director of energy efficiency programming at the utility Avista in Eastern Washington State, “Our energy future is about using the resources we have wisely, and energy efficiency remains our lowest-cost resource.”

To further this goal of reducing the knowledge gap between utilities and data center operators, I attempt to explain incentives as trying to influence you to implement a project, or to help motivate a transition of an idea into an action. This transition is illustrated here:

The presentation includes explanations of how utility incentives can reduce payback times for energy efficiency projects and increase their ROI.  In addition, I do a rundown of the incentives offered for data centers and examples of utilities offering them.  One breakdown I explore is where different incentives are applied within the data center, as illustrated in this slide:
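
Slide aside, the payback arithmetic itself is simple. A hypothetical sketch (all dollar figures invented):

```python
# Hypothetical payback/ROI arithmetic; the project cost, savings, and
# incentive amount are invented for illustration.

def payback_years(cost, incentive, annual_savings):
    return (cost - incentive) / annual_savings

def simple_roi(cost, incentive, annual_savings, years=5):
    """Net return over `years`, relative to the net project cost."""
    net_cost = cost - incentive
    return (annual_savings * years - net_cost) / net_cost

# A $100k project saving $25k/yr, with and without a $30k incentive:
print(payback_years(100_000, 0, 25_000))        # 4.0 years
print(payback_years(100_000, 30_000, 25_000))   # 2.8 years
print(simple_roi(100_000, 30_000, 25_000))      # ~0.79 over 5 years
```

Shaving a year or more off the payback is often exactly what moves a project past an internal hurdle rate.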

I finish with a list of steps for data center operators to engage with their utility to pursue these incentives:

1. Become familiar with the utility’s programs

  • Check your utility’s web site for information on available programs and contact information
  • Contact your utility or your account manager to discuss available programs/incentives

2. Identify projects

  • Schedule an energy audit or technical assistance from utility (where available)
  • Find projects relevant for incentives

3. Confirm Projects

  • Fill out any applicable pre-application paperwork to confirm relevance and incentive amounts

4. Perform pre-inspection with utility (where applicable)

5. Install measure

6. Perform post-inspection (where applicable)

  • Calculate savings and incentive amount

7. Apply for incentive or rebate

So that’s my attempt to distill my hour-long presentation into a blog post.  I was really pleased by the reaction at Data Center World, which speaks to the need for people to help plug these gaps in communication and knowledge.  I had a number of utilities in the room, a few consultants and some data center managers, and the question and answer period turned into more of a discussion between utility folks and managers.  That’s what I like to see.

I would definitely be interested in any feedback on what I’ve included here, or in any information readers have on available programs.  We’re attempting to fill out a matrix of available programs by utility so any information would be greatly appreciated.  Also if anyone is interested in the full presentation let me know.  You can always reach me at ajhoward (at)

Based on the reaction, I will be updating the presentation and resubmitting my abstract in hopes of speaking at the next Data Center World Conference in Las Vegas in October.  Thanks!

PUE and Demand Reduction Using Solar and Ice at AFCOM Data Center World

March 10, 2010

I’m at the AFCOM Data Center World conference in Nashville, TN.  I’m actually presenting tomorrow on utility incentives for data centers, and am looking forward to that.

There’s been a good facilities greening track that covers a lot of issues related to data center efficiency.  The most popular topic I’ve seen is power usage effectiveness (PUE). If you haven’t heard about PUE, it’s time to study up, because I think it’s here to stay.  Popularized by the Green Grid, PUE is a measure of the overhead of the infrastructure of a data center (technically, it’s the power of the whole facility divided by the IT equipment power).  So a high PUE means you spend more power than you need providing power and cooling to your IT equipment, or that your infrastructure is less efficient.  The average PUE according to the EPA’s latest data collection is about 1.9, meaning facilities are spending almost as much energy powering and cooling the IT equipment as the IT equipment uses itself.

As an aside, PUE’s prime competitor was DCiE, which is the inverse and so is measured as a percentage that reads more like an efficiency metric.  However, PUE won the day because it was believed that a metric expressing overhead would be more digestible by the C-suite.
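
Both metrics are one-line calculations; a minimal sketch using the definitions above:

```python
# Minimal PUE / DCiE sketch, following the definitions above.

def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

def dcie(total_facility_kw, it_kw):
    return it_kw / total_facility_kw    # the inverse, read as a percentage

# A facility drawing 1900 kW in total to run 1000 kW of IT load --
# roughly the ~1.9 average the EPA reports:
print(pue(1900, 1000))    # 1.9
print(dcie(1900, 1000))   # ~0.526, i.e. 52.6% "efficient"
```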

Here’s a great slide describing what PUE is from a presentation from Steven Carlini of APC on PUE “hype”.

It seems that PUE is everywhere and people are giving advice about where and how to measure it, what it means, and how different decisions in the data center affect the PUE.  It’s great that PUE is taking hold because it will lead to greater instrumentation in the data center and is a starting point to talk about the facility infrastructure efficiency. The EPA’s uptake of PUE for their data center building rating (taking effect in June) will also help standardize the way people measure and report this metric. That said, PUE is not without its problems. The APC presentation did a great job of explaining a lot of the drawbacks of PUE, including how not all measurements of PUE are created equal so you need to make sure you’re comparing apples to apples (the Green Grid is working on making the metric more comparable across facilities).  In addition, certain improvements to your IT load can actually increase your PUE and make the infrastructure look less efficient if you don’t appropriately scale your power and cooling subsystems.

All in all, PUE is great for the industry, and it will get even more useful as it becomes more standardized in how it’s measured and reported.  For now it’s a useful tool for incremental improvements to a facility, but make sure you know what’s behind the numbers.  PUE is a useful metric, but the most important one is the overall energy saved (or demand reduced) by the facility.

I also saw a great presentation from i/o Data Centers, which I mentioned before in a post about demand reduction; their peak-shaving system creates ice at night to cool the data center during the day.  The speaker kept saying “it’s all about power” and that space isn’t the prime issue anymore.  They’re also now planning an 11-acre solar array on top of the same data center, partially to lower their utility feed usage during peak times.  Here’s an article from Data Center Knowledge on how the ice system and solar work hand in hand.  Pretty cool and innovative stuff.


ENERGY STAR Framework for UPS Efficiency

February 21, 2010

Speaking of ENERGY STAR, the EPA released a framework document last week for the newly announced Uninterruptible Power Supply (UPS) specification.  UPS, like computer power supplies before them, lack industry-standard measurement procedures to specify their efficiency.  As the market for energy efficient data center equipment grows, UPS makers seem to be increasingly marketing the efficiency of their devices, but manufacturers usually specify efficiency at 100% load – a condition a UPS will rarely actually operate in, because many UPS are chronically underloaded.  Also, similar to server power supplies, many UPS are operated in redundant configurations where the load is split between two UPS in case one fails.  This means that a UPS in this configuration can only ever reach 50% load.  The efficiency of power conversion equipment tends to fall off below 50% load, so it’s important to measure and specify efficiency at loads below 50%, because this is where a lot of this equipment is actually running.
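
The shape of the problem can be sketched with an invented efficiency curve; the fixed and proportional loss figures below are illustrative assumptions, not data for any real UPS:

```python
# Hypothetical sketch: why partial-load efficiency matters for a UPS.
# The loss figures below are invented for illustration only.

def ups_efficiency(load_fraction):
    """Assumed double-conversion UPS model: a fixed loss (burned at any
    load) plus a small proportional conversion loss. Fixed losses
    dominate at light load, so efficiency falls off below ~50% load."""
    fixed_loss = 0.03              # 3% of rating, consumed regardless of load
    prop_loss = 0.02               # 2% proportional conversion loss
    output = load_fraction * (1 - prop_loss)
    return output / (output + fixed_loss)

# 2N redundancy caps each UPS at 50% load; many run far lower.
for load in (1.0, 0.5, 0.25, 0.10):
    print(f"{load:.0%} load -> {ups_efficiency(load):.1%} efficient")
# Efficiency slides from ~97% at full load to under 80% at 10% load.
```

A full-load-only rating would hide exactly the region where this curve falls apart, which is the gap the specification should close.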

To illustrate the point, here’s a chart of power supply efficiency curves from when I was working on the server specification, which I stole from the ENERGY STAR website:

For servers, EPA specified efficiency all the way down to the 10% load condition because available data indicated that that’s where a lot of the redundant power supplies were being operated.  My guess is that ENERGY STAR will be doing a similar thing with UPS, and then the industry will have a way to compare the efficiency of different UPS solutions across much of their operating range. This should be a great help to utilities looking to get verifiable savings through offering incentives or rebates for more efficient UPS.

EPA is also continuing the trend of pushing for standardized reporting requirements (through a power and performance data sheet) and for real-time power and temperature reporting over a standard network.  This is also similar to the V1.0 server specification and what is being proposed for data center storage equipment. EPA is looking to add similar requirements for all data center equipment so that data centers can be operated more efficiently when the managers have better access to data on what’s actually happening in their data center. The power and performance data sheet will also be helpful for proving the specifications of equipment when applying for rebates and incentives.

Interested stakeholders can download the new documents here, and offer comments by April 2, 2010.

Utility and ENERGY STAR Collaboration for Improved Specifications and Programs

February 4, 2010

I spent Tuesday reacquainting myself with my old friends over at the ENERGY STAR program by attending the ENERGY STAR information sessions for Servers and Storage that preceded the Green Grid Technical Forum. It was interesting seeing things from the “other side of the podium” by being a stakeholder at these meetings instead of being in my old role of assisting the EPA on the development of the specifications.

Status of the Specifications

In terms of status, there is still significant work to be done on both specifications, but as usual EPA is asking the right questions. For both specifications the central question is how to quantify the generalized “efficiency” of the product: the amount of useful work and performance you get from a system for a given energy consumption. That is the ideal outcome of this process, and what everyone wants. As Andrew Fanara (the lead representative of the EPA) put it, “I’d also like to ride a unicorn to work.” His point was that a perfect metric is impossible, so for now we need a method to rank IT equipment by its efficiency without expecting that method to be perfect. There’s hope that we’ll get there eventually, but it will be a long process, as there are a lot of details to be worked out.  The server specification feels like it’s getting closer (they’re currently working on version 2.0, so they’ve been asking these questions for longer), but there’s still a lot of work to be done.  One good thing is that the EPA is showing a willingness to think a little differently about these products.  I think this is necessary because the complexity of these products and the subtleties of this market make these specification development efforts very different from those for many other products the EPA is used to dealing with.

Utilities and ENERGY STAR

I’ve been feeling that there is a gap in thinking between the EPA and the utility industry, and the funny thing is that I think they really need each other. The utilities are constantly looking for new savings opportunities and it’s a lot easier for them to develop effective programs if they are built on the back of good efficiency specifications.  What the EPA needs are stakeholders with a voice to help drive these specifications towards increased levels of rigor for energy savings.

In addition, there needs to be a closing of the gap between the needs of utilities and the output of the ENERGY STAR program. ENERGY STAR should be producing specifications that can easily be adopted for utility programs.  This should be a high priority for ENERGY STAR, but it feels like the current process is to produce the specification without utilities in mind, and then try to adapt the result to a program.  If utilities want to play in this space, they need to be at the table, learning about this industry and helping drive the agenda.

Right now vendors dominate the ENERGY STAR meetings. The vendors are extremely knowledgeable, but obviously biased towards their own products and agendas. The meetings often result in vendors standing up and talking about what isn’t possible or what EPA shouldn’t do. What the efficiency community needs are stakeholders at the table telling EPA what they need to make these specifications useful tools to leverage for energy savings. The way to speed things up and keep ENERGY STAR specifications relevant is to have efficiency advocates help drive the process.  This may involve helping generate data and providing some technical resources. This will be expensive, but if the utilities (and other EE advocates) pool their efforts it should be cost effective and will help ensure a useful product for adoption.  The more utilities bring to the table, the more influence they will have.

The thing is that EVERYONE should benefit from useful ENERGY STAR specifications and effective utility programs that leverage these specifications:  ENERGY STAR can further increase their growing relevance in this emerging market; utilities can run influential and cost effective programs to meet their goals; and vendors can market more efficient product offerings.  It’s a win-win-win.  We can no longer let the voices of manufacturers, who seem afraid of being left out of the party because of inefficient product offerings, dominate this conversation. It’s time for utilities and other advocates to team up and help influence this process to get a leg up in this market.

Demand Response (DR) for Reduced Peak Power in Data Centers

January 31, 2010

One interesting approach to demand reduction is the idea of demand response, or “DR,” programs.  The New York Times recently had this article on Idaho Power’s approach to DR.  The article includes this explanation of what DR is:

This concept, called demand response, has gained traction in utility circles. In essence, it involves paying users to make small sacrifices when there is an urgent need for extra power (the “peak”). The utility can then rely on cutting some demand on its system at crucial times — and, in theory, avoid the cost of building a new plant just to meet those peak needs.

There are many opportunities for demand response in data centers. EMI did a process evaluation for the California Emerging Technologies Program (ETP).  During this project, EMI prepared a number of case studies on different technologies assessed by the ETP, including one on an “Auto-DR” technology.  My colleague who worked on this passed on this report on a joint effort between PG&E and LBNL’s Demand Response Research Center (DRRC) on a similar Auto-DR pilot program in the summer of 2006. During the pilot, participating local businesses were set up with automated controls that lower their energy consumption in response to demand response signals from PG&E. Of the 24 facilities that participated in the pilot, an office/data center achieved the highest demand reduction for a single event, at 363 kW, as well as the highest average reduction, at 294 kW. The DR strategies used at the data center site included: duct static pressure increase, supply air temperature (SAT) increase, fan VFD limit, chilled water (CHW) temperature increase, and cooling valve limit. The chart below from the report shows how high the demand savings were for the office/data center (all the way on the left) compared to other sites.

The office/data center also had the lowest payback period for implementing Auto-DR, at 0.4 years.
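For context, a figure like that comes from simple-payback arithmetic: upfront cost divided by annual savings. Here is a quick sketch with hypothetical dollar amounts (the real figures are in the report):

```python
# Sketch of the simple-payback arithmetic behind a figure like "0.4 years".
# All dollar amounts below are hypothetical, for illustration only.

def simple_payback_years(implementation_cost, annual_savings):
    """Years to recover an upfront cost from annual incentives and bill savings."""
    return implementation_cost / annual_savings

# Hypothetical: $8,000 of controls work recovered by $20,000/yr in
# DR incentive payments and avoided peak charges.
print(simple_payback_years(8_000, 20_000))  # 0.4
```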

Following the project, the DRRC published this data sheet with information on the DR potential of data centers.  The sheet makes some interesting points including that “savings can be higher than those in other industries because reducing server loads simultaneously reduces cooling and other equipment loads.”

Here are some of the other methods the DRRC recommends in their fact sheet:

–      Dynamically shift load onto fewer servers using virtualization.

–      Migrate load to another location (e.g. another data center).

–      Temporarily raise set-point temperatures.

–      Use backup reserves such as ice storage or chilled-water storage for cooling.
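As a rough illustration of how an Auto-DR system might stack strategies like these against a shed target, here is a sketch. The strategy names and per-strategy kW estimates are hypothetical, not taken from the DRRC fact sheet.

```python
# Sketch: choosing DR shed strategies until a requested kW reduction is met.
# Strategy names and kW estimates are illustrative only.

SHED_STRATEGIES = [
    # (strategy, estimated kW shed) -- hypothetical numbers
    ("raise_supply_air_temp", 120.0),
    ("raise_chilled_water_temp", 90.0),
    ("limit_fan_vfd", 60.0),
    ("consolidate_servers_via_virtualization", 80.0),
]

def plan_shed(target_kw):
    """Apply strategies in priority order until the requested shed is covered."""
    plan, total = [], 0.0
    for name, kw in SHED_STRATEGIES:
        if total >= target_kw:
            break
        plan.append(name)
        total += kw
    return plan, total

plan, total = plan_shed(250.0)
print(plan, total)
```

In a real Auto-DR deployment the utility’s DR signal would trigger this kind of logic automatically through the site’s building controls, rather than an operator picking strategies by hand.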

PG&E is still running the Auto-DR program, and the other large California IOUs have similar programs.
