# Producers and Consumers

27 Aug 2018

At a recent talk, Leslie Lamport raised the question of how many possible executions there are for a bounded-buffer producer-consumer protocol on a given set of input values. Leslie’s point was that, from the standpoint of the values produced and consumed, there is only one execution: the only things we can say about the ordering of these events are that a value must be produced before it is consumed, and that before a value can be produced, enough values must have been consumed that there is room in the buffer.

I started wondering, though: just how many executions are there, exactly? For a given buffer size and length of execution, how many legal sequences of produce and consume events are there? Equivalently, how many strings of $P$s and $C$s of a given length are there such that, in every prefix, the number of $C$s is less than or equal to the number of $P$s, and the number of $P$s minus the number of $C$s is less than or equal to the buffer size? Writing down the recurrence for the general case is easy enough, and computers are very fast and good with numbers, so let’s see a few results.

```python
import functools

@functools.lru_cache(maxsize=None)
def f(B, S, U=0):
    if U < 0 or U > B: return 0
    if S == 0: return 1
    # The next event is either a consume (U - 1) or a produce (U + 1).
    return f(B, S - 1, U - 1) + f(B, S - 1, U + 1)
```


Here, $B$ is the size of the buffer, $S$ is the number of steps in the execution, and $U$ is the number of used slots in the buffer (initially 0).
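As a sanity check (my addition, with a hypothetical helper name `brute_force`), the recurrence can be compared against a direct enumeration of all $2^S$ strings of $P$s and $C$s:

```python
from itertools import product

def brute_force(B, S):
    # Count P/C strings of length S in which every prefix has at least
    # as many Ps as Cs, and never more than B unconsumed Ps.
    def valid(s):
        u = 0  # current buffer occupancy
        for e in s:
            u += 1 if e == 'P' else -1
            if not 0 <= u <= B:
                return False
        return True
    return sum(valid(s) for s in product('PC', repeat=S))

print(brute_force(3, 10))  # 89, matching f(3, 10) below
```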

```python
>>> print([ f(1, i) for i in range(15) ])
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```


When the buffer size is 1, there is only ever one execution, no matter how many steps we take: the $S$-length prefix of the infinite string $PCPCPCPCPC...$. What about when the buffer size is 2?

```python
>>> print([ f(2, i) for i in range(15) ])
[1, 1, 2, 2, 4, 4, 8, 8, 16, 16, 32, 32, 64, 64, 128]
```


Here, we see once-repeated increasing powers of 2. To see why this is the case, imagine the state of the buffer. When it’s empty or full, there is only one action that can be taken, a produce or consume, respectively. On the other hand, when there is a single value in the buffer, both actions are possible.
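Incidentally, this suggests the closed form $f(2, i) = 2^{\lfloor i/2 \rfloor}$ (my observation, not something proved above): a binary choice is made on every other step, and the remaining steps are forced. A quick check against the output above:

```python
# With B = 2, the buffer alternates between a forced state (0 or 2
# values, one legal action) and a free state (1 value, two legal
# actions), so an i-step execution contains floor(i / 2) free choices.
print([2 ** (i // 2) for i in range(15)])
# [1, 1, 2, 2, 4, 4, 8, 8, 16, 16, 32, 32, 64, 64, 128]
```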

Now, given both of those results, try to guess what the sequence looks like when $B = 3$.

```python
>>> print([ f(3, i) for i in range(15) ])
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
```


Did you guess the Fibonacci sequence? This one’s a little harder to see; it’s not obvious (to me, at least) why the Fibonacci recurrence should apply. If $f(3, i)$ is the number of executions possible with exactly $i$ steps and a buffer of size 3, let $f_{0, 3}(3, i)$ be the number of those executions where the final state of the buffer is full or empty. Then, let $f_{1, 2}(3, i)$ be the number of executions ending in the buffer having 1 or 2 values. First, we have the obvious identity:

$$f(3, i) = f_{0, 3}(3, i) + f_{1, 2}(3, i)$$

Then, we have a pair of recurrences that are easiest to see by picturing the buffer’s state transitions: from a full or empty buffer, only one action is possible, and it leaves the buffer with 1 or 2 values; from a buffer holding 1 or 2 values, one of the two possible actions leads to a full or empty buffer and the other to a buffer again holding 1 or 2 values. Namely, the following:

$$f_{0, 3}(3, i + 1) = f_{1, 2}(3, i)$$

$$f_{1, 2}(3, i + 1) = f_{0, 3}(3, i) + f_{1, 2}(3, i)$$

Then, the derivation is straightforward:

$$f(3, i + 1) = f_{0, 3}(3, i + 1) + f_{1, 2}(3, i + 1) = f(3, i) + f_{1, 2}(3, i) = f(3, i) + f(3, i - 1)$$

where the last step uses the second recurrence together with the identity.
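These recurrences can also be checked numerically by tracking how many executions end in each buffer state; `counts_by_state` below is my name for a hypothetical helper, not something from the original post:

```python
def counts_by_state(B, S):
    # v[u] = number of valid S-step executions that end with u used slots
    v = [1] + [0] * B
    for _ in range(S):
        # Extend every execution by one step: the last event was either
        # a produce (coming from u - 1) or a consume (coming from u + 1).
        v = [(v[u - 1] if u > 0 else 0) + (v[u + 1] if u < B else 0)
             for u in range(B + 1)]
    return v

for i in range(1, 13):
    w, v = counts_by_state(3, i), counts_by_state(3, i + 1)
    assert v[0] + v[3] == w[1] + w[2]                    # full/empty from 1-or-2
    assert v[1] + v[2] == (w[0] + w[3]) + (w[1] + w[2])  # 1-or-2 from either class
```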

Finally, let’s see what happens when the buffer is unbounded (by setting the size of the buffer to the number of steps in the execution).

```python
>>> print([ f(i, i) for i in range(15) ])
[1, 1, 2, 3, 6, 10, 20, 35, 70, 126, 252, 462, 924, 1716, 3432]
```


Here, we see a different sequence: the number of possible executions with $i$ steps is ${i \choose \lfloor i / 2 \rfloor}$. To see why, let $X$ be the set of $i$-length $P/C$ sequences with exactly $\left \lfloor \frac{i}{2} \right \rfloor$ $C$s (some of which represent invalid executions) and let $Y$ be the set of valid sequences.
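A quick numerical check of that claim (`math.comb` requires Python 3.8 or later):

```python
from math import comb

# Central binomial coefficients C(i, floor(i / 2)) for i = 0..14.
print([comb(i, i // 2) for i in range(15)])
# [1, 1, 2, 3, 6, 10, 20, 35, 70, 126, 252, 462, 924, 1716, 3432]
```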

First, given a sequence in $X$, we can transform it into a valid sequence in $Y$ by scanning left to right, keeping track of the number of values in the buffer. If a $C$ is invalid (i.e., it would make the number of values in the buffer negative), we change it to a $P$ and treat it as a null operation on the buffer (i.e., as neither a $C$ nor a $P$). So, $CPCCPCPP$ would become $PPCPPCPP$. Then, to invert that transformation, we count the number of $P$s that should be transformed back into $C$s and repeatedly scan from right to left, each time replacing the rightmost $P$ whose replacement by a $C$ would make the sequence invalid, and treating the new $C$ as neither a $P$ nor a $C$ for the remainder of the operation. Here are these two transformations in Python.

```python
def t(x):
    pc = 0  # number of values currently in the buffer
    r = ''
    for xi in x:
        if pc <= 0 and xi == 'C':
            # Invalid consume: emit a P instead, treated as a null
            # operation, so the buffer count is unchanged.
            r += 'P'
        else:
            r += xi
            if xi == 'P':
                pc += 1
            else:
                pc -= 1
    return r
```

```python
def ti(y):
    y = list(y)
    p = y.count('P')
    c = len(y) - p

    # Keep replacing Ps until the string has the right number of Cs.
    while p > len(y) / 2:
        pi, ci = p, c
        # Scan right to left for the rightmost P whose replacement by a
        # C would make the sequence invalid; pi and ci track the number
        # of unreplaced Ps and Cs strictly before position i.
        for i in range(len(y) - 1, -1, -1):
            if y[i] == 'C':
                ci -= 1

            if y[i] == 'P':
                pi -= 1

            if ci >= pi:
                p -= 1
                y[i] = 'X'  # X marks a replaced P; it counts as neither
                break

    return ''.join('C' if yi == 'X' else yi for yi in y)
```


Since these transformations are, indeed, inverses of each other, we get $|X| = |Y|$.
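To see this concretely, a brute-force roundtrip over all of $X$ for small lengths verifies that $t$ maps $X$ into $Y$ and that $ti$ undoes it; this sketch repeats the two functions from above so that it runs on its own:

```python
from itertools import product

def t(x):
    pc, r = 0, ''
    for xi in x:
        if pc <= 0 and xi == 'C':
            r += 'P'  # invalid consume: replace it, buffer count unchanged
        else:
            r += xi
            pc += 1 if xi == 'P' else -1
    return r

def ti(y):
    y = list(y)
    p = y.count('P')
    c = len(y) - p
    while p > len(y) / 2:
        pi, ci = p, c
        for i in range(len(y) - 1, -1, -1):
            if y[i] == 'C': ci -= 1
            if y[i] == 'P': pi -= 1
            if ci >= pi:  # flipping this P to a C would go invalid here
                p -= 1
                y[i] = 'X'
                break
    return ''.join('C' if yi == 'X' else yi for yi in y)

def is_valid(s):
    # Every prefix must have at least as many Ps as Cs.
    u = 0
    for e in s:
        u += 1 if e == 'P' else -1
        if u < 0:
            return False
    return True

for n in (2, 4, 6, 8):
    X = [''.join(s) for s in product('PC', repeat=n) if s.count('C') == n // 2]
    assert all(is_valid(t(x)) and ti(t(x)) == x for x in X)
```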

Of course, none of the above observations are new, and some or all are simple enough to put to undergraduates in a discrete math course, where these problems would typically be phrased in terms of sequences of 1s and -1s whose partial sums are all nonnegative and less than some bound. I just liked that you could describe interesting combinatorics problems in terms of a common synchronization primitive.

Furthermore, there are still more interesting sequences for different values of $B$. For instance, when $B = 4$, we get a sequence that settles into alternating powers of 3 and their doubles:

```python
>>> print([ f(4, i) for i in range(15) ])
[1, 1, 2, 3, 6, 9, 18, 27, 54, 81, 162, 243, 486, 729, 1458]
```

If anyone knows a closed-form expression for $f(B, S)$, I would be grateful to hear it.

### Acknowledgments

Thanks to Helgi for pointing out a bug in an earlier version of my t(x) and ti(y) functions.