Using Mathematics To Reduce Data Center Energy Consumption

Researchers at the University of Sydney have developed a mathematical model, verified by a software implementation, that can more than halve the energy consumption of processors in data centres at little or no operational cost.

The Energy Conscious Scheduling (ECS) algorithm has been patented by Dr Young Choon Lee and Professor Albert Zomaya at the university's Centre for Distributed and High Performance Computing. Dr Lee and Professor Zomaya are now developing an ECS prototype, with a view to commercialising their research by 2011.

Power consumption by servers has more than doubled since 2000; estimates suggest electricity use for servers worldwide cost about $US7.2 billion in 2005. Companies are increasingly looking to curb these energy costs and the associated greenhouse gas emissions as demand for computing and data storage capacities continues to grow exponentially.

"The growing popularity of cloud computing will help IT managers reduce data centre energy costs, but more research needs to be done before large organisations can fully entrust their precious data in cloud technology," says Professor Zomaya.

"ECS will work well on cloud systems as they mature but will give a more immediate means for organisations to reduce their data centre's energy consumption. With demand for computing and data storage capacities growing exponentially, this world first will make a valuable contribution to both the environment to and organisations' bottom line."

ECS uses processors" so-called Dynamic Voltage Scaling capability to map computational tasks such as those arising in scientific, engineering and business applications (eg astronomy, chemistry, life sciences and financial forecasting) in order to minimise completion time and energy use.

"Computations are typically comprised of interdependent tasks, so the need to wait for a parent task to complete can create slack and therefore wastage," Professor Zomaya says. "When ECS is employed with the help of DVS capability, mapping decisions between processors, supply voltages, and tasks are streamlined to significantly lower the amount of energy required at any given time.

"This reduces slack and creates energy savings of between 10-60 per cent."
