Publication
PACT 2009
Conference paper

Exploiting parallelism with dependence-aware scheduling

Abstract

It is well known that a large fraction of applications cannot be parallelized at compile time due to unpredictable data dependences, such as indirect memory accesses or memory accesses guarded by data-dependent conditional statements. A significant body of prior work attempts to parallelize such applications using runtime data-dependence analysis and scheduling. Performance then depends heavily on the ratio of the dependence-analysis overhead to the amount of parallelism actually available in the code. Evaluating applications on a modern multicore processor, we have found that the overheads are often high and the available parallelism is often low. We propose a novel software-based approach called dependence-aware scheduling to parallelize loops with unknown data dependences. Unlike prior work, our main goal is to reduce the negative impact of dependence computation, so that when there is no opportunity for speedup, the code still runs without much slowdown. When there is an opportunity, dependence-aware scheduling yields substantial speedup. Our results indicate that dependence-aware scheduling can greatly improve performance, with speedups of up to 4x, for a number of computation-intensive applications. Furthermore, the results show negligible slowdown in a stress test where parallelism is continuously detected but never exploited.
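For context, the sketch below illustrates the general inspector-executor idea that runtime dependence analysis builds on, not the paper's actual scheduler: an inspector scans the index array of a loop with an indirect write and groups iterations into conflict-free "waves", which an executor then runs in parallel. All names (idx, wave, last_wave) and the OpenMP parallelization are illustrative assumptions.

#include <stdio.h>
#include <omp.h>

#define N 8
#define M 5

int main(void) {
    /* Loop body is a[idx[i]] += b[i]: iterations i and j conflict
       exactly when idx[i] == idx[j], which is unknown at compile
       time and must be discovered by inspecting idx at run time. */
    int idx[N] = {0, 1, 2, 0, 3, 1, 4, 2};
    double a[M] = {0}, b[N];
    for (int i = 0; i < N; i++) b[i] = (double)i;

    /* Inspector: place each iteration one wave after the latest
       prior iteration that wrote the same element of a. */
    int wave[N], last_wave[M] = {-1, -1, -1, -1, -1};
    int num_waves = 0;
    for (int i = 0; i < N; i++) {
        wave[i] = last_wave[idx[i]] + 1;
        last_wave[idx[i]] = wave[i];
        if (wave[i] + 1 > num_waves) num_waves = wave[i] + 1;
    }

    /* Executor: iterations within a wave write distinct elements
       by construction, so each wave runs in parallel; waves run
       in order to respect the discovered dependences. */
    for (int w = 0; w < num_waves; w++) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            if (wave[i] == w)
                a[idx[i]] += b[i];
    }

    for (int k = 0; k < M; k++) printf("a[%d] = %g\n", k, a[k]);
    return 0;
}

The inspector pass is the overhead the paper targets: when idx turns out to be mostly serial (num_waves close to N), the executor degenerates to sequential order and the inspection cost is pure slowdown, which is the case dependence-aware scheduling aims to keep cheap.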

Date

23 Nov 2009