Understanding Traces
18 Feb 2019
For DSLabs, one of the pieces I built was a post-processing phase that runs on traces generated by the model checker. It takes a trace (i.e., a list of events) and produces a semantically equivalent trace which is, hopefully, more understandable to humans.
To motivate the problem, let's look at an example. Suppose we have a distributed system with four processes, \(p_1\), \(p_2\), \(p_3\), and \(p_4\), and consider the events represented by the following space-time diagram:
Because there are concurrent events (i.e., events not ordered by the happens-before relation) in the above execution, there is more than one way to represent these events as a well-formed trace.1 Here, a well-formed trace is one in which the happens-before relation is respected by the ordering of events. However, not all traces are created equal. Without knowing the specifics of the protocol being run in this execution, I think most people, if asked to produce a trace, would write something like this:
- \(p_1\) sends \(m_1\) to \(p_2\)
- \(p_2\) receives \(m_1\), sends \(m_2\) to \(p_1\)
- \(p_1\) receives \(m_2\), sends \(m_3\) to \(p_2\)
- \(p_2\) receives \(m_3\)
- \(p_3\) sends \(m_4\) to \(p_4\)
- \(p_4\) receives \(m_4\), sends \(m_5\) to \(p_3\)
- \(p_3\) receives \(m_5\), sends \(m_6\) to \(p_4\)
- \(p_4\) receives \(m_6\)
They might, instead, pair up \(m_1\) and \(m_4\), \(m_2\) and \(m_5\), and \(m_3\) and \(m_6\). But they certainly wouldn’t write down the following:
- \(p_1\) sends \(m_1\) to \(p_2\)
- \(p_3\) sends \(m_4\) to \(p_4\)
- \(p_2\) receives \(m_1\), sends \(m_2\) to \(p_1\)
- \(p_1\) receives \(m_2\), sends \(m_3\) to \(p_2\)
- \(p_4\) receives \(m_4\), sends \(m_5\) to \(p_3\)
- \(p_2\) receives \(m_3\)
- \(p_3\) receives \(m_5\), sends \(m_6\) to \(p_4\)
- \(p_4\) receives \(m_6\)
Not only is the latter trace unnatural, it is also harder for a reader to follow: it has several unnecessary “context switches.” The DSLabs model checker, however, does not distinguish between these two traces; they both have the same length. If an invariant were violated upon completion of the execution in question, the concurrent events in the trace produced by the model checker would be randomly ordered.
The post-processing phase I implemented is based on a simple idea: events in a trace should be followed by their immediate consequences. First, I took the trace produced by the model checker and built the associated event graph. The event graph looks just like the space-time diagram above: vertices are events; edges connect corresponding send and receive events, as well as successive events at the same process.
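To make that concrete, here is a minimal sketch of the graph construction in Python. It assumes each trace entry records its process, the message it delivers (if any), and the messages it sends; the field names are illustrative, not the actual DSLabs representation.

```python
from collections import defaultdict

def build_event_graph(trace):
    """Build the event graph for a trace.

    Each entry of `trace` is assumed to be a dict with a 'process' field,
    an optional 'receives' field (the id of the message it delivers), and a
    'sends' field (the ids of the messages it sends).  These field names
    are illustrative, not the actual DSLabs representation.
    """
    edges = defaultdict(list)  # event index -> indices of successor events
    last_event_at = {}         # process -> index of its most recent event
    sender_of = {}             # message id -> index of the event that sent it

    for i, event in enumerate(trace):
        # Edge from the previous event at the same process, if any.
        p = event['process']
        if p in last_event_at:
            edges[last_event_at[p]].append(i)
        last_event_at[p] = i

        # Edge from the event that sent the message this event delivers.
        m = event.get('receives')
        if m is not None:
            edges[sender_of[m]].append(i)

        # Record which event sent each message, for later receives.
        for sent in event.get('sends', ()):
            sender_of[sent] = i

    return edges
```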
Then, I ran a depth-first topological sort of the event graph. By topologically sorting the graph, we get a well-formed trace. By doing it in a depth-first manner, we list events and their immediate consequences whenever possible.
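Continuing the sketch, one way to realize this is a reverse-postorder depth-first traversal of the event graph, started from the events with no predecessors; again, this is an illustration of the idea, not the DSLabs code.

```python
def depth_first_toposort(trace, edges):
    """Reorder `trace` via a depth-first topological sort of its event graph.

    `edges` maps each event index to its successors, as produced by
    build_event_graph above.  Emitting events in reverse postorder of a
    depth-first traversal started from the graph's roots yields a
    well-formed trace in which an event tends to be followed directly by
    its consequences.
    """
    has_predecessor = {j for succs in edges.values() for j in succs}
    roots = [i for i in range(len(trace)) if i not in has_predecessor]

    visited = set()
    postorder = []

    def dfs(i):
        visited.add(i)
        # Explore successors in reverse so that the final reversal restores
        # their original relative order.
        for j in reversed(edges.get(i, [])):
            if j not in visited:
                dfs(j)
        postorder.append(i)

    # Start from the last root so the earliest root's events come out first
    # after the final reversal.
    for r in reversed(roots):
        if r not in visited:
            dfs(r)

    return [trace[i] for i in reversed(postorder)]
```

Under these illustrative assumptions, `depth_first_toposort(trace, build_event_graph(trace))` would, for example, turn the second listing above into the first.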
To see what this looks like in practice, let's look at an example from the primary-backup lab. First, we show an animated trace demonstrating an invariant violation in a buggy solution.
Next, let's see what the same invariant violation looks like once the post-processing phase has been applied.
Of course, you can’t necessarily follow what’s going on without knowing the details of the protocol. But it should be much easier to follow the “flow” of the second video.
There are many possibilities for further improvement. You could bias the sorting of events towards message delivery or towards staying with the same process. You could even try to recognize group communication patterns (e.g., the “one→all→one” pattern) and order them in particular ways. For now, though, I think this solution works quite well.
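As a rough illustration of the first idea, such a bias could be expressed as an ordering over each event's successors: in the sketch above, whichever unvisited successor the DFS explores last is the one that tends to land immediately after its cause in the output. The helper below is hypothetical and uses the same illustrative fields as before.

```python
def prefer_same_process(trace, edges, i):
    """Order event i's successors so the DFS explores the same-process
    successor last; in the sketch above, the last-explored successor is the
    one that tends to land immediately after its cause in the final trace.
    A hypothetical tweak, not part of DSLabs.
    """
    p = trace[i]['process']
    # False sorts before True, so same-process successors go to the end.
    return sorted(edges.get(i, []), key=lambda j: trace[j]['process'] == p)
```

To use it, the loop inside `dfs` would iterate over `prefer_same_process(trace, edges, i)` instead of the reversed successor list.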
1. In fact, there are 20 such traces.