No, your definition is not quite equivalent to the definition of sequential consistency but would be closer to strict consistency. There are two relevant aspects: (a) there are multiple processors/processes involved, and (b) they only have to behave as if they were being executed in a total order. The atomicity is not required in reality, only in the “as if” model. That memory accesses are performed in program order is already true pretty much by definition.
There is a consistency model called *strict consistency* that requires that writes take effect in the order they were executed: if a process writes to `x` at time `t1`, then all reads from `x` at a later time `t2 > t1` should return the written value. This would require atomic writes that propagate immediately to all other processes. Even on a single multicore CPU this is impossible due to speed-of-light constraints, unless exclusive locks are used, effectively enforcing that reads/writes actually obey a global order – no “as if”.
Sequential consistency is a weaker consistency model. We do not require that writes take effect immediately/atomically, but merely that all processes observe writes in the same order, i.e. that they agree on a total order of operations.
Let's look at some processes that manipulate a shared global variable `x`. I'll write `W(x)a` when a process writes the value `a` to `x`, and `R(x)a` when the process reads from `x` and gets `a`. We'll have four processes that perform a sequence of operations:

- process 1: `W(x)a`
- process 2: `W(x)b`
- process 3: `R(x)_`, `R(x)_`
- process 4: `R(x)_`, `R(x)_`
Here's some ASCII-Art that illustrates the different times at which the operations could be executed by the processes. The different processes are shown above each other, and the time axis increases to the right.
Here's a strictly consistent execution where all writes take effect immediately:
```
-----------------------------> time
1: W(x)a
2:          W(x)b
3:                  R(x)b   R(x)b
4:                  R(x)b   R(x)b
```
But here is an execution that is sequentially consistent, yet not strictly consistent:
```
-----------------------------> time
1: W(x)a
2:          W(x)b
3:                  R(x)b   R(x)a
4:                  R(x)b   R(x)a
```
Here, process 1 first performs the `W(x)a` operation. However, this operation does not take effect immediately. Now, process 2 executes `W(x)b`, and this write *is* observed by the other processes. Processes 3 and 4 execute `R(x)b`, i.e. they get the value `b` that was just written by process 2. At this point, the write from process 1 takes effect and is observed by the next reads `R(x)a` of processes 3 and 4.
The important point is that although the writes were not executed atomically, all processes agreed on an order of events, in particular write events. The behaviour of all processes was as if they had been executed in a particular order, although this agreed-upon order was different from the actual temporal order. We don't care about when a write is observed, only about the order between writes.
In contrast, here's an execution where processes 3 and 4 don't agree on the order of `W(x)a` and `W(x)b`:
```
-----------------------------> time
1: W(x)a
2:          W(x)b
3:                  R(x)b   R(x)a
4:                  R(x)a   R(x)b
```
Here, process 3 has observed the events `W(x)b, W(x)a`, whereas process 4 has observed `W(x)a, W(x)b`. This is not sequentially consistent.
In practice, sequential consistency is easily achievable for single variables, e.g. when using locks. The C memory model also provides sequential consistency for a memory location when all accesses to that location use the `memory_order_seq_cst` memory order (which is the default for `_Atomic` operations).
With multiple variables things tend to get tricky, in particular because sequential consistency is a fairly strong consistency guarantee. The Jepsen.io project has developed tools for fuzzing (distributed) databases in search of consistency violations, and has a good overview of consistency models.