Addressing Logical Deadlocks through Task-Parallel Language Design
Abstract
Task-parallel programming languages offer a variety of high-level
mechanisms for synchronization that trade flexibility against
deadlock safety. Some approaches are deadlock-free by construction but
support only limited synchronization patterns, while others make
deadlocks trivially easy to introduce. In high-level task-parallel
programming, it is imperative that language features offer both the
flexibility to avoid over-synchronization and sufficient protection
against logical deadlocks. A lack of flexibility leads to code that
does not exploit the full parallelism available in the computation. A
lack of deadlock protection leads to error-prone code in which a
single bug can involve arbitrarily many tasks, making the code
difficult to reason about. We make advances in both flexibility and
deadlock protection
for existing synchronization mechanisms by carefully designing
dynamically verifiable usage policies and language constructs.
We first define a deadlock-freedom policy for futures. The
rules of the policy follow naturally from the semantics of
asynchronous task closures and correspond to a preorder traversal of
the task tree. The policy admits a class of deadlock-free
programs that past approaches reject. Each blocking wait for a future can be
verified by a stateless, lock-free algorithm, resulting in low time
and memory overheads at runtime.
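As an illustration (a minimal Java sketch, not drawn from this work),
the program below builds the simplest wait cycle a future can form: a
task closure that blocks on its own future. CompletableFuture stands
in for a pure future here, and the timeout exists only so the
demonstration terminates; a verifier enforcing the policy would
instead reject the offending join outright.

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicReference;

    public class FutureCycle {
        public static void main(String[] args) throws Exception {
            AtomicReference<CompletableFuture<Integer>> self =
                new AtomicReference<>();
            CountDownLatch published = new CountDownLatch(1);
            // The task closure joins the future of its own task,
            // a wait cycle that can never resolve.
            self.set(CompletableFuture.supplyAsync(() -> {
                try {
                    published.await();      // wait until self is set
                } catch (InterruptedException e) {
                    throw new CompletionException(e);
                }
                return self.get().join();   // blocks on itself forever
            }));
            published.countDown();
            try {
                self.get().get(1, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                System.out.println("cycle observed only as a timeout");
            }
            // Common-pool workers are daemon threads, so the JVM exits
            // even though the task above remains blocked.
        }
    }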
In order to define and identify deadlocks for promises, we
introduce a mechanism for promises to be owned by tasks. Simple
annotations make it possible to ensure that each promise is eventually
fulfilled by the responsible task or handed off to another
task. Ownership semantics allows us to formally define two kinds of
promise bugs: omitted sets and deadlock cycles. We present novel
detection algorithms for both bugs. We further introduce an
approximate deadlock-freedom policy for promises that, instead of
precisely detecting cycles, raises an alarm when the synchronization
dependences between trees of tasks are at risk of deadlock. To
establish both the safety and the flexibility of the
approach, we prove that this over-approximation safely identifies all
deadlocks, and we prove that deadlock-free programs can be made to
comply with the policy without loss of parallelism through the use of
a novel language feature, the guard block, which acts as a hint
to the verifier.
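As a concrete illustration of the first bug class (a hypothetical
Java sketch using CompletableFuture in its promise role; the ownership
annotations and guard blocks themselves are not modeled), the owning
task below may return without fulfilling its promise or handing it
off, leaving every waiter blocked:

    import java.util.concurrent.*;

    public class OmittedSet {
        public static void main(String[] args) throws Exception {
            // A promise: a handle some task must explicitly fulfill.
            CompletableFuture<Integer> p = new CompletableFuture<>();
            CompletableFuture.runAsync(() -> {
                if (ThreadLocalRandom.current().nextBoolean()) {
                    p.complete(42);          // the owner fulfills p
                }
                // Otherwise the owner terminates without fulfilling p
                // or handing it off: an omitted set. Every wait on p
                // now blocks forever, with no cycle for a detector to
                // find.
            });
            try {
                System.out.println("p = " + p.get(1, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                System.out.println("p never fulfilled: an omitted set");
            }
        }
    }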
Finally, we identify a lack of flexibility in the phaser, a
synchronization primitive that, under certain restrictions, is
deadlock-free by construction. The traditional restrictions cause
undesirable interaction between unrelated phasers and tasks, leading
to poor program design and unnecessary synchronization. We extend the
semantics of phasers by introducing the concept of a
subphase. By organizing phasers and their phases more carefully, we
eliminate some of the over-synchronization and anti-modularity that
occur in phaser programs, recovering performance while retaining
deadlock freedom.
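For background on the source of that over-synchronization, the sketch
below uses java.util.concurrent.Phaser, a standard-library analogue of
the phaser primitive rather than the extended semantics proposed here.
Because every registered task must arrive before any may advance, two
logically unrelated tasks sharing one phaser still wait for each
other; subphases are intended to remove exactly this coupling.

    import java.util.concurrent.Phaser;

    public class SharedPhaser {
        public static void main(String[] args) {
            final int tasks = 4;
            Phaser ph = new Phaser(tasks);  // all tasks on one phaser
            for (int i = 0; i < tasks; i++) {
                final int id = i;
                new Thread(() -> {
                    System.out.println("task " + id
                        + " in phase " + ph.getPhase());
                    // No task proceeds until every registered task has
                    // arrived, even tasks doing unrelated work.
                    ph.arriveAndAwaitAdvance();
                    System.out.println("task " + id
                        + " in phase " + ph.getPhase());
                }).start();
            }
        }
    }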