Semantics of well synchronized programs on weakly consistent machines
The semantics of shared-memory concurrent programs are quite intricate. For one, optimizing compilers perform transformations that may be unsound for concurrent programs. Moreover, modern multiprocessors have relaxed semantics: the widespread assumption of sequential consistency is invalidated by common architectural optimizations. Processors execute instructions out of order, and techniques such as caching of memory data and buffering of write operations break the abstraction of a single global view of the memory shared by all threads. Programs that do not have data races, however, should execute as if they were running on a sequentially consistent machine.
In this work, we focus on a class of programs called well synchronized: access to some part of the shared memory is arbitrated by synchronizing operations, and threads do not compete with each other for that memory. Using the Coq proof assistant, we formalized this notion of well synchronized programs and used it to prove that the cooperative (yield on synchronizing operations) and preemptive (interleaving) semantics of such programs coincide. Furthermore, we proved that well synchronized programs do not have data races, and hence that their relaxed semantics coincides with their interleaving semantics. We proved this result in a generic and modular way, and demonstrated its applicability on the TSO memory model using the X86 ISA.