Collaboration Program to Explore Energy-Efficient High-Performance Scientific Computer Systems

Tensilica®, Inc. and the U.S. Department of Energy's Lawrence Berkeley National Laboratory today announced a collaboration program to explore new design concepts for energy-efficient high-performance scientific computer systems.

The joint effort is focused on novel processor and system architectures using large numbers of small processor cores, connected together with optimized links and tuned to the requirements of highly parallel applications such as climate modeling. These demanding scientific problems require 100 to 1000 times the computational throughput of today's high-end computing installations, but conventional systems consume so much electricity, generate so much heat, and demand such complex physical installations that the costs would be prohibitive. This collaboration in application-directed supercomputing aims at making "exascale systems" (up to one quintillion floating point operations per second) feasible and cost-effective.

The two organizations are well-suited for such a collaboration. Tensilica is the recognized leader in configurable processor technology and has become a leading provider of energy efficient processors for mobile audio and video applications. The Berkeley Lab Computing Sciences organization manages one of the world’s leading supercomputing centers and has extensive experience in deploying leading-edge computer architectures to accelerate scientific discovery.

"Our studies show that energy costs make current approaches for supercomputing unsustainable," stated Horst Simon, Associate Laboratory Director for Computing Sciences at Berkeley Lab. "Hardware-software co-design using tiny processor cores, such as those made by Tensilica, holds great promise for systems that reduce power costs and increase practical system scale. Such processors, by their nature, must deliver maximum performance while consuming minimal power – exactly the challenge facing the high performance computing community. One of the most compute-intensive applications is modeling global climate change, a critical research application and the perfect pilot application for energy-efficient computing optimization."

“Berkeley Lab is a world leader in providing supercomputing resources to support research across a wide range of disciplines, but their experience in climate modeling is especially well-suited for this project,” stated Chris Rowen, Tensilica’s president and CEO. “If we can better understand the factors influencing climate change – and do so in a dramatically more energy-efficient way – then we open the door for other breakthroughs. We are delighted to be able to contribute to this effort, applying Tensilica Xtensa processors and software to help solve a problem of global significance. The same ultra-efficient processor technology that powers cellular phones can now contribute to a breakthrough in energy-efficient scientific computing.”

The team will use Tensilica's Xtensa LX extensible processor cores as the basic building blocks in a massively parallel system design. Each processor will dissipate a few hundred milliwatts of power, yet deliver billions of floating point operations per second and be programmable using standard programming languages and tools. This equates to an order-of-magnitude improvement in floating point operations per watt, compared to conventional desktop and server processor chips. The small size and low power of these processors allow tight integration at the chip, board and rack level, and scaling to millions of processors within a power budget of a few megawatts.
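The scaling argument above can be checked with a back-of-envelope calculation. The figures below are illustrative assumptions chosen only to match the ranges quoted in the release ("a few hundred milliwatts", "billions of floating point operations per second", "millions of processors"), not measured Xtensa LX specifications:

```python
# Back-of-envelope estimate of aggregate throughput and power for a
# massively parallel system built from many small, low-power cores.
# All constants are assumed values within the ranges the release quotes.

WATTS_PER_CORE = 0.3     # "a few hundred milliwatts" per core (assumed)
GFLOPS_PER_CORE = 2.0    # "billions of FLOP/s" per core (assumed)
NUM_CORES = 2_000_000    # "millions of processors" (assumed)

system_gflops = GFLOPS_PER_CORE * NUM_CORES          # aggregate throughput
system_megawatts = WATTS_PER_CORE * NUM_CORES / 1e6  # aggregate core power
gflops_per_watt = GFLOPS_PER_CORE / WATTS_PER_CORE   # efficiency metric

print(f"Aggregate throughput: {system_gflops / 1e6:.1f} PFLOP/s")
print(f"Aggregate core power: {system_megawatts:.2f} MW")
print(f"Efficiency: {gflops_per_watt:.1f} GFLOP/s per watt")
```

Under these assumed numbers, two million cores would stay well under a few megawatts while delivering petaflop-class aggregate throughput, which is the feasibility claim the paragraph makes; real system power would also include memory, interconnect, and cooling overheads not modeled here.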

The co-design effort will use automatic generation of processor designs, including simulation models, FPGA-based hardware implementation, and software tools to enable rapid prototyping and evaluation of processor instruction sets, interfaces, multi-processor communication mechanisms, and application enhancements.

The research effort also will address the challenges of optimizing memory and communication bandwidth to the massive array of processors, distribution of application functions across the array, and development of suitable prototyping and software development methods for large-scale application-optimized systems.
