Parallel programming may not be so daunting

This story is from the category Libraries and Components

Date posted: 25/03/2014

Computer chips have all but stopped getting faster: the regular performance improvements we’ve come to expect are now the result of chipmakers adding more cores, or processing units, to their chips, rather than increasing their clock speed.

In theory, doubling the number of cores doubles a chip’s processing capacity, but splitting up computations so that they run efficiently in parallel isn’t easy. On the other hand, say a trio of computer scientists from MIT, Israel’s Technion, and Microsoft Research, neither is it as hard as had been feared.

Commercial software developers writing programs for multicore chips frequently use so-called “lock-free” parallel algorithms, which are relatively easy to generate from standard sequential code. In fact, in many cases the conversion can be done automatically.

Yet lock-free algorithms don’t come with very satisfying theoretical guarantees: All they promise is that at least one core will make progress on its computational task in a fixed span of time. But if they don’t exceed that standard, they squander all the additional computational power that multiple cores provide.
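
To make that guarantee concrete, here is a minimal sketch of the lock-free pattern applied to a shared counter, written in C++ (an illustrative example, not code from the paper). A thread reads the current value and retries a compare-and-swap until its update wins; whenever threads race, some compare-and-swap succeeds, so the system as a whole makes progress, but any individual thread can lose the race indefinitely.

#include <atomic>

// Lock-free sketch: an atomic compare-and-swap retry loop on a shared
// counter. If several threads race, at least one compare-exchange
// succeeds, so the system always makes progress, but any single
// thread may retry indefinitely. Illustrative example only.
std::atomic<long> counter{0};

void lock_free_increment() {
    long observed = counter.load(std::memory_order_relaxed);
    // On failure, compare_exchange_weak reloads 'observed' with the
    // counter's current value, and we try again.
    while (!counter.compare_exchange_weak(observed, observed + 1,
                                          std::memory_order_relaxed)) {
    }
}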

In recent years, theoretical computer scientists have demonstrated ingenious alternatives called “wait-free” algorithms, which guarantee that all cores will make progress in a fixed span of time. But deriving them from sequential code is extremely complicated, and commercial developers have largely neglected them.
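
For contrast, a wait-free version of the same counter can lean on an atomic fetch-and-add, which completes in a bounded number of steps for every calling thread on hardware that supports it. This is only a sketch of the simplest possible case; general wait-free constructions are far harder to derive from sequential code, which is the complexity the article refers to.

#include <atomic>

// Wait-free sketch: fetch_add completes in a bounded number of steps
// for every calling thread (on hardware with atomic fetch-and-add),
// so every thread makes progress, the wait-free guarantee.
// Illustrative example only.
std::atomic<long> counter{0};

void wait_free_increment() {
    counter.fetch_add(1, std::memory_order_relaxed);
}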

In a paper to be presented at the Association for Computing Machinery’s Annual Symposium on Theory of Computing in May, Nir Shavit, a professor in MIT’s Department of Electrical Engineering and Computer Science; his former student Dan Alistarh, who’s now at Microsoft Research; and Keren Censor-Hillel of the Technion demonstrate a new analytic technique suggesting that, in a wide range of real-world cases, lock-free algorithms actually give wait-free performance.

“In practice, programmers program as if everything is wait-free,” Shavit says. “This is a kind of mystery. What we are exposing in the paper is this little-talked-about intuition that programmers have about how [chip] schedulers work, that they are actually benevolent.”

The researchers’ key insight was that the chip’s performance as a whole could be characterized more simply than the performance of the individual cores. That’s because the allocation of different “threads,” or chunks of code executed in parallel, is symmetric. “It doesn’t matter whether thread 1 is in state A and thread 2 is in state B or if you just swap the states around,” says Alistarh, who contributed to the work while at MIT. “What we noticed is that by coalescing symmetric states, you can simplify this a lot.”
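
A rough, hypothetical illustration of why coalescing symmetric states pays off (an expository sketch, not the paper’s actual analysis): if n interchangeable threads can each be in one of k local states, tracking which thread is in which state gives k^n configurations, while tracking only how many threads occupy each state gives the far smaller number of multisets, C(n+k-1, k-1).

#include <cstdint>
#include <iostream>

// Expository sketch of state coalescing (not the paper's construction):
// with n interchangeable threads, each in one of k local states,
// ordered configurations number k^n, but coalescing by symmetry leaves
// only the multiset counts, C(n+k-1, k-1).

uint64_t ipow(uint64_t base, unsigned exp) {
    uint64_t result = 1;
    while (exp--) result *= base;
    return result;
}

uint64_t binom(uint64_t n, uint64_t k) {
    uint64_t result = 1;
    for (uint64_t i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;  // exact: each partial product is an integer
    return result;
}

int main() {
    unsigned n = 16, k = 4;  // e.g. 16 threads, 4 local states each
    std::cout << "ordered states:   " << ipow(k, n) << "\n";               // 4^16 = 4294967296
    std::cout << "coalesced states: " << binom(n + k - 1, k - 1) << "\n";  // C(19,3) = 969
    return 0;
}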

See the full story via external site: web.mit.edu



Most recent stories in this category (Libraries and Components):

17/02/2015: New algorithms Geolocate a video from its images and sounds

25/03/2014: Parallel programming may not be so daunting

24/01/2014: Stanford scientists use 'virtual earthquakes' to forecast Los Angeles quake risk

14/04/2013: The mathematical method for simulating the evolution of the solar system has been improved by UPV/EHU researchers

13/02/2013: 3D Printing on the Micrometer Scale

07/02/2013: Gap geometry grasped: A new algorithm could help understand the structure of liquids, and how they flow through porous media

03/12/2012: The advantages of 3D printing are now being put to the test in soil science laboratories

02/12/2012: Preventing 'Cyber Pearl Harbor'