arXiv:2007.06327v1 [cs.LO] 13 Jul 2020
Generating Functions for Probabilistic Programs⋆

Lutz Klinkenberg¹, Kevin Batz¹, Benjamin Lucien Kaminski¹,², Joost-Pieter Katoen¹, Joshua Moerman¹, and Tobias Winkler¹

¹ RWTH Aachen University, 52062 Aachen, Germany
² University College London, United Kingdom
{lutz.klinkenberg, kevin.batz, benjamin.kaminski, katoen, joshua, tobias.winkler}@cs.rwth-aachen.de
Abstract. This paper investigates the usage of generating functions (GFs) encoding measures over the program variables for reasoning about discrete probabilistic programs. To that end, we define a denotational GF-transformer semantics for probabilistic while-programs, and show that it instantiates Kozen's seminal distribution transformer semantics. We then study the effective usage of GFs for program analysis. We show that finitely expressible GFs enable checking super-invariants by means of computer algebra tools, and that they can be used to determine termination probabilities. The paper concludes by characterizing a class of — possibly infinite-state — programs whose semantics is a rational GF encoding a discrete phase-type distribution.

Keywords: probabilistic programs · quantitative verification · semantics · formal power series
1 Introduction
Probabilistic programs are sequential programs for which coin flipping is a first-class citizen. They are used e.g. to represent randomized algorithms, probabilistic graphical models such as Bayesian networks, cognitive models, or security protocols. Although probabilistic programs are typically rather small, their analysis is intricate. For instance, approximating expected values of program variables at program termination is as hard as the universal halting problem [15]. Determining higher moments such as variances is even harder. Deductive program verification techniques based on a quantitative version of weakest preconditions [18] enable reasoning about the outcomes of probabilistic programs, such as: what is the probability that a program variable equals a certain value? Dedicated analysis techniques have been developed to e.g. determine tail bounds [6], decide almost-sure termination [19,8], or compare programs [1].

⋆ This research was funded by the ERC AdG project FRAPPANT (787914) and the DFG RTG 2236 UnRAVeL.
This paper aims at exploiting the well-tried potential of probability generating functions (PGFs) [14] for the analysis of probabilistic programs. In our setting, PGFs are power series representations — generating functions — encoding discrete probability mass functions of joint distributions over program variables. PGF representations — in particular if finite — enable a simple extraction of important information from the encoded distributions, such as expected values, higher moments, termination probabilities, or stochastic independence of program variables.

To enable the usage of PGFs for program analysis, we define a denotational semantics of a simple probabilistic while-language akin to probabilistic GCL [18]. Our semantics is defined in a forward manner: given an input distribution over program variables as a PGF, it yields a PGF representing the resulting subdistribution. The "missing" probability mass represents the probability of non-termination. More accurately, our denotational semantics transforms formal power series (FPSs). Those form a richer class than PGFs, which allows for overapproximations of probability distributions. While-loops are given semantics as least fixed points of FPS transformers. It is shown that our semantics is in fact an instantiation of Kozen's seminal distribution-transformer semantics [16].

The semantics provides a sound basis for program analysis using PGFs. Using Park's Lemma, we obtain a simple technique to prove whether a given FPS overapproximates a program's semantics, i.e. whether an FPS is a so-called superinvariant. Such upper bounds can be quite useful: for almost-surely terminating programs, such bounds can provide exact program semantics, whereas, if the mass of an overapproximation is strictly less than one, the program is provably non-almost-surely terminating. This result is illustrated on a non-trivial random walk and on examples illustrating that checking whether an FPS is a superinvariant can be automated using computer algebra tools.

In addition, we characterize a class of — possibly infinite-state — programs whose PGF semantics is a rational function. These homogeneous bounded programs (HB programs) are characterized by loops in which each unbounded variable has no effect on the loop guard and is in each loop iteration incremented by a quantity independent of its own value. Operationally speaking, HB programs can be considered as finite-state Markov chains with rewards in which rewards can grow unboundedly large. It is shown that the rational PGF of any program that is equivalent to an almost-surely terminating HB program represents a multivariate discrete phase-type distribution [22]. We illustrate this result by obtaining a closed-form characterization for the well-studied infinite-state dueling cowboys example [18].
Related work. Semantics of probabilistic programs is a well-studied topic. This includes the seminal works by Kozen [16] and McIver and Morgan [18]. Other related semantics of discrete probabilistic while-programs are given in e.g. [18,24,10,23,4]. PGFs have received scant attention in the analysis of probabilistic programs. A notable exception is [5], in which generating functions of finite Markov chains are obtained by Padé approximation. Computer algebra systems have been used to transform probabilistic programs [7], and more recently in the automated generation of moment-based loop invariants [2].
Organization of this paper. After recapping FPSs and PGFs in Sections 2 and 3, we define our FPS transformer semantics in Section 4, discuss some elementary properties, and show that it instantiates Kozen's distribution transformer semantics [16]. Section 5 presents our approach for verifying upper bounds to loop invariants and illustrates this by various non-trivial examples. In addition, it characterizes programs that are representable as finite-state Markov chains equipped with rewards and presents the relation to discrete phase-type distributions. Section 6 concludes the paper. All proofs can be found in the appendix.
2 Formal Power Series
Our goal is to make the potential of probability generating functions available to the formal verification of probabilistic programs. The programs we consider will, without loss of generality, operate on a fixed set of k program variables. The valuations of those variables range over ℕ. A program state σ is hence a vector in ℕ^k. We denote the state (0, ..., 0) by 0.

A prerequisite for understanding probability generating functions are (multivariate) formal power series — a special way of representing a potentially infinite k-dimensional array. For k = 1, this amounts to representing a sequence.
Definition 1 (Formal Power Series). Let X = X_1, ..., X_k be a fixed sequence of k distinct formal indeterminates. For a state σ = (σ_1, ..., σ_k) ∈ ℕ^k, let X^σ abbreviate the formal multiplication X_1^{σ_1} · · · X_k^{σ_k}. The latter object is called a monomial and we denote the set of all monomials over X by Mon(X). A (multivariate) formal power series (FPS) is a formal sum

    F = Σ_{σ ∈ ℕ^k} [σ]_F · X^σ ,    where    [·]_F : ℕ^k → ℝ^∞_{≥0} ,

where ℝ^∞_{≥0} denotes the extended positive real line. We denote the set of all FPSs by FPS. Let F, G ∈ FPS. If [σ]_F < ∞ for all σ ∈ ℕ^k, we denote this fact by F ≪ ∞. The addition F + G and scaling r · F by a scalar r ∈ ℝ^∞_{≥0} are defined coefficient-wise by

    F + G = Σ_{σ ∈ ℕ^k} ([σ]_F + [σ]_G) · X^σ    and    r · F = Σ_{σ ∈ ℕ^k} r · [σ]_F · X^σ .

For states σ = (σ_1, ..., σ_k) and τ = (τ_1, ..., τ_k), we define σ + τ = (σ_1 + τ_1, ..., σ_k + τ_k). The multiplication F · G is given as their Cauchy product (or discrete convolution)

    F · G = Σ_{σ,τ ∈ ℕ^k} [σ]_F · [τ]_G · X^{σ+τ} .
Drawing coefficients from the extended reals enables us to define a complete lattice on FPSs in Section 4. Our analyses in Section 5 will, however, only consider
FPSs with F ≪ ∞.
3 Generating Functions
A generating function is a device somewhat similar to a bag. Instead of
carrying many little objects detachedly, which could be embarrassing, we
put them all in a bag, and then we have only one object to carry, the bag.
— George Pólya [25]
Formal power series pose merely a particular way of encoding an infinite k-dimensional array as yet another infinitary object, but we still carry all objects forming the array (the coefficients of the FPS) detachedly, and there seems to be no advantage in this particular encoding. It even seems more bulky. We will now, however, see that this bulky encoding can be turned into a one-object bag carrying all our objects: the generating function.
Definition 2 (Generating Functions). The generating function of a formal power series F = Σ_{σ ∈ ℕ^k} [σ]_F · X^σ ∈ FPS with F ≪ ∞ is defined as the partial function

    f : [0, 1]^k ⇀ ℝ_{≥0} ,    (x_1, ..., x_k) ↦ Σ_{σ = (σ_1,...,σ_k) ∈ ℕ^k} [σ]_F · x_1^{σ_1} · · · x_k^{σ_k} .
In other words: in order to turn an FPS into its generating function, we merely
treat every formal indeterminate Xi as an “actual” indeterminate xi , and the
formal multiplications and the formal sum also as “actual” ones. The generating
function f of F is uniquely determined by F as we require all coefficients of
F to be non-negative, and so the ordering of the summands is irrelevant: For
a given point x ∈ [0, 1]k , the sum defining f (x) either converges absolutely to
some positive real or diverges absolutely to ∞. In the latter case, f is undefined
at x and hence f may indeed be partial.
Since generating functions stem from formal power series, they are infinitely
often differentiable at 0 = (0, . . . , 0). Because of that, we can recover F from f
as the (multivariate) Taylor expansion of f at 0.
Definition 3 (Multivariate Derivatives and Taylor Expansions). For σ = (σ_1, ..., σ_k) ∈ ℕ^k, we write f^(σ) for the function f differentiated σ_1 times in x_1, σ_2 times in x_2, and so on. If f is infinitely often differentiable at 0, then the Taylor expansion of f at 0 is given by

    Σ_{σ ∈ ℕ^k} ( f^(σ)(0) / (σ_1! · · · σ_k!) ) · x_1^{σ_1} · · · x_k^{σ_k} .

If we replace every indeterminate x_i by the formal indeterminate X_i in the Taylor expansion of the generating function f of F, then we obtain the formal power series F. It is in precisely that sense that f generates F.
Example 1 (Formal Power Series and Generating Functions). Consider the infinite (1-dimensional) sequence 1/2, 1/4, 1/8, 1/16, .... Its (univariate) FPS — the entity carrying all coefficients detachedly — is given as

    1/2 + 1/4 X + 1/8 X^2 + 1/16 X^3 + 1/32 X^4 + 1/64 X^5 + 1/128 X^6 + 1/256 X^7 + ... .    (†)

On the other hand, its generating function — the bag — is given concisely by

    1 / (2 − x) .    (♭)

Figuratively speaking, (†) is itself the infinite sequence a_n := 1/2^n, whereas (♭) is a bag with the label "infinite sequence a_n := 1/2^n". The fact that (♭) generates (†) follows from the Taylor expansion of 1/(2 − x) at 0 being 1/2 + 1/4 x + 1/8 x^2 + ... .
The potential of generating functions is that manipulations of the functions — i.e. of the concise representations — are in one-to-one correspondence with the associated manipulations of FPSs [9]. For instance, if f(x) is the generating function of F encoding the sequence a_1, a_2, a_3, ..., then the function f(x) · x is the generating function of F · X, which encodes the sequence 0, a_1, a_2, a_3, .... As another example of the correspondence between operations on FPSs and generating functions, if f(x) and g(x) are the generating functions of F and G, respectively, then f(x) + g(x) is the generating function of F + G.
Example 2 (Manipulating Generating Functions). Revisiting Example 1, if we multiply 1/(2 − x) by x, we change the label on our bag from "infinite sequence a_n := 1/2^n" to "a 0 followed by an infinite sequence a_{n+1} := 1/2^n" and — just by changing the label — the bag will now contain what it says on its label. Indeed, the Taylor expansion of x/(2 − x) at 0 is 0 + 1/2 x + 1/4 x^2 + 1/8 x^3 + 1/16 x^4 + ..., encoding the sequence 0, 1/2, 1/4, 1/8, 1/16, ....
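Readers who want to experiment with this correspondence can reproduce Examples 1 and 2 with a few lines of SymPy; the following is merely an illustrative sketch of ours, not part of the paper's development:

from sympy import symbols, series

x = symbols('x')
f = 1 / (2 - x)                # the "bag" (b) for the sequence a_n = 1/2^n
print(series(f, x, 0, 5))      # 1/2 + x/4 + x**2/8 + x**3/16 + x**4/32 + O(x**5)

# multiplying by x relabels the bag: the encoded sequence is shifted by one
print(series(x * f, x, 0, 5))  # x/2 + x**2/4 + x**3/8 + x**4/16 + O(x**5)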
Due to the close correspondence of FPSs and generating functions [9], we use
both concepts interchangeably, as is common in most mathematical literature.
We mostly use FPSs for definitions and semantics, and generating functions in
calculations and examples.
Probability Generating Functions. We now use formal power series to represent probability distributions.
Definition 4 (Probability Subdistribution). A probability subdistribution (or simply subdistribution) over ℕ^k is a function

    µ : ℕ^k → [0, 1] ,    such that    |µ| = Σ_{σ ∈ ℕ^k} µ(σ) ≤ 1 .

We call |µ| the mass of µ. We say that µ is a (full) distribution if |µ| = 1, and a proper subdistribution if |µ| < 1. The set of all subdistributions on ℕ^k is denoted by D_≤(ℕ^k) and the set of all full distributions by D(ℕ^k).
We need subdistributions for capturing non-termination. The "missing" probability mass 1 − |µ| precisely models the probability of non-termination.
The generating function of a (sub-)distribution is called a probability generating function. Many properties of a distribution µ can be read off from its
generating function Gµ in a simple way. We demonstrate how to extract a few
common properties in the following example.
Example 3 (Geometric Distribution PGF). Recall Example 1. The presented formal power series encodes a geometric distribution µ_geo with parameter 1/2 of a single variable X. The fact that µ_geo is a proper probability distribution, for instance, can easily be verified by computing G_geo(1) = 1/(2 − 1) = 1. The expected value of X is given by G'_geo(1) = 1/(2 − 1)^2 = 1.
Extracting Common Properties. Important information about probability distributions is, for instance, the first and higher moments. In general, the k-th factorial moment of variable X_i can be extracted from a PGF G by computing ∂^k G / ∂X_i^k (1, ..., 1) (in general, one must take the limit X_i → 1 from below). This includes the mass |G| as the 0-th moment. The marginal distribution of variable X_i can simply be extracted from G as G(1, ..., X_i, ..., 1). We also note that PGFs can treat stochastic independence. For instance, for a bivariate PGF H we can check for stochastic independence of the variables X and Y by checking whether H(X, Y) = H(X, 1) · H(1, Y).
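As an illustration of these recipes, the following SymPy sketch extracts the mass, an expected value, a marginal, and performs the independence check on a small bivariate PGF; the example distribution is ours, not taken from the paper:

from sympy import symbols, diff, simplify

X, Y = symbols('X Y')
G = 1 / ((2 - X) * (2 - Y))                # hypothetical joint PGF of two variables

mass     = G.subs({X: 1, Y: 1})            # 0-th moment: total mass, here 1
mean_x   = diff(G, X).subs({X: 1, Y: 1})   # first factorial moment of X, here 1
marginal = G.subs(Y, 1)                    # marginal PGF of X: 1/(2 - X)
indep    = simplify(G - G.subs(Y, 1) * G.subs(X, 1)) == 0   # True here
print(mass, mean_x, marginal, indep)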
4 FPS Semantics for pGCL
In this section, we give denotational semantics to probabilistic programs in terms of FPS transformers and establish some elementary properties useful for program analysis. We begin by endowing FPSs and PGFs with an order structure:

Definition 5 (Order on FPS). For all F, G ∈ FPS, let

    F ⪯ G    iff    ∀ σ ∈ ℕ^k :  [σ]_F ≤ [σ]_G .

Lemma 1 (Completeness of ⪯ on FPS). (FPS, ⪯) is a complete lattice.
4.1 FPS Transformer Semantics
Recall that we assume programs to range over exactly k variables with valuations in ℕ^k. Our program syntax is similar to that of Kozen [16] and McIver & Morgan [18].

Definition 6 (Syntax of pGCL [16,18]). A program P in probabilistic Guarded Command Language (pGCL) adheres to the grammar

    P ::= skip | x_i := E | P ; P | if (B) {P} else {P} | {P} [p] {P} | while (B) {P} ,

where x_i ∈ {x_1, ..., x_k} is a program variable, E is an arithmetic expression over program variables, p ∈ [0, 1] is a probability, and B is a predicate (called guard) over program variables.
The FPS semantics of pGCL will be defined in a forward denotational style, where the program variables x_1, ..., x_k correspond to the formal indeterminates X_1, ..., X_k of FPSs.

For handling assignments, if-conditionals and while-loops, we need some auxiliary functions on FPSs: For an arithmetic expression E over program variables, we denote by eval_σ(E) the evaluation of E in program state σ. For a predicate B ⊆ ℕ^k and an FPS F, we define the restriction of F to B by

    ⟨F⟩_B := Σ_{σ ∈ B} [σ]_F · X^σ ,

i.e. ⟨F⟩_B is the FPS obtained from F by setting all coefficients [σ]_F where σ ∉ B to 0. Using these prerequisites, our FPS transformer semantics is given as follows:
Definition 7 (FPS Semantics of pGCL). The semantics ⟦P⟧ : FPS → FPS of a loop-free pGCL program P is given according to the upper part of Table 1. The unfolding operator Φ_{B,P} for the loop while (B) {P} is defined by

    Φ_{B,P} : (FPS → FPS) → (FPS → FPS) ,    ψ ↦ λF. ⟨F⟩_{¬B} + ψ( ⟦P⟧( ⟨F⟩_B ) ) .

The partial order (FPS, ⪯) extends to a partial order (FPS → FPS, ⊑) on FPS transformers by a point-wise lifting of ⪯. The least element of this partial order is the transformer 0 = λF. 0 mapping any FPS F to the zero series. The semantics of while (B) {P} is then given by the least fixed point (with respect to ⊑) of its unfolding operator, i.e.

    ⟦while (B) {P}⟧ = lfp Φ_{B,P} .
Table 1. FPS transformer semantics of pGCL programs.

    P                            ⟦P⟧(F)
    --------------------------------------------------------------------------------------------
    skip                         F
    x_i := E                     Σ_{σ ∈ ℕ^k} [σ]_F · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}
    {P_1} [p] {P_2}              p · ⟦P_1⟧(F) + (1 − p) · ⟦P_2⟧(F)
    if (B) {P_1} else {P_2}      ⟦P_1⟧(⟨F⟩_B) + ⟦P_2⟧(⟨F⟩_{¬B})
    P_1 ; P_2                    ⟦P_2⟧( ⟦P_1⟧(F) )
    while (B) {P}                (lfp Φ_{B,P})(F) ,  for  Φ_{B,P}(ψ) = λF. ⟨F⟩_{¬B} + ψ( ⟦P⟧( ⟨F⟩_B ) )
Example 4. Consider the program P = {x := 0} [1/2] {x := 1} ; c := c + 1 and the input PGF G = 1, which denotes a point mass on state σ = 0. Annotating each intermediate PGF between the statements, so that ⦇ G above and ⦇ G′ below a statement denote that it transforms G into G′, we calculate ⟦P⟧(G) as follows:

    ⦇ 1
    {x := 0} [1/2] {x := 1} ;
    ⦇ 1/2 + X/2
    c := c + 1
    ⦇ C/2 + CX/2

As for the semantics of c := c + 1, see Table 2.
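The loop-free transformer rules can be executed directly with computer algebra; the following sketch (our own encoding, using the obvious substitution semantics of the constant assignments) reproduces the calculation of Example 4:

from sympy import symbols, S, expand

X, C = symbols('X C')

def assign_x_const(k):
    # x := k on a PGF G(X, C): forget X (substitute 1), then scale by X**k
    return lambda G: G.subs(X, 1) * X**k

inc_c = lambda G: G * C          # c := c + 1, i.e. Table 2 with k = 1
body  = lambda G: inc_c(S(1)/2 * assign_x_const(0)(G) + S(1)/2 * assign_x_const(1)(G))

print(expand(body(S(1))))        # C*X/2 + C/2, matching the annotation above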
Before we study how our FPS transformers behave on PGFs in particular, we first argue that our FPS semantics is well-defined. While evident for loop-free programs, for loops we appeal to the Kleene Fixed Point Theorem [17], which requires ω-continuous functions.

Theorem 1 (ω-continuity of pGCL Semantics). The semantic functional ⟦ · ⟧ is ω-continuous, i.e. for all programs P ∈ pGCL and all increasing ω-chains F_1 ⪯ F_2 ⪯ ... in FPS,

    ⟦P⟧( sup_{n∈ℕ} F_n ) = sup_{n∈ℕ} ⟦P⟧(F_n) .

Theorem 2 (Well-definedness of FPS Semantics). The semantic functional ⟦ · ⟧ is well-defined, i.e. the semantics of any loop while (B) {P} exists uniquely and can be written as

    ⟦while (B) {P}⟧ = lfp Φ_{B,P} = sup_{n∈ℕ} Φ^n_{B,P}(0) .
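To see Theorem 2 at work, one can compute the Kleene iterates Φ^n_{B,P}(0) for a concrete loop. The sketch below (our own encoding, using the loop body of Example 4 with guard x = 1) shows the iterates converging to the geometric-distribution PGF C/(2 − C) on input X:

from sympy import symbols, S, expand

X, C = symbols('X C')

def body(G):                          # {x := 0} [1/2] {x := 1} ; c := c + 1
    return (G.subs(X, 1) * S(1)/2 + G.subs(X, 1) * X * S(1)/2) * C

def guard(G):                         # <G>_{x=1}: keep the X^1 part
    return G.coeff(X, 1) * X

def not_guard(G):                     # <G>_{not (x=1)}
    return expand(G - guard(G))

def Phi(psi):                         # the unfolding operator of Definition 7
    return lambda G: not_guard(G) + psi(body(guard(G)))

psi = lambda G: S(0)                  # the least transformer 0
for _ in range(5):
    psi = Phi(psi)
print(expand(psi(X)))   # C/2 + C**2/4 + C**3/8 + C**4/16: a truncation of C/(2-C)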
4.2 Healthiness Conditions of FPS Transformers
In this section we show basic, yet important, properties which follow from [16].
For instance, for any input FPS F , the semantics of a program cannot yield as
output an FPS with a mass larger than |F |, i.e. programs cannot create mass.
Table 2. Common assignments and their effects on the input PGF F(X, Y).

    P               ⟦P⟧(F)
    -----------------------------
    x := x + k      X^k · F(X, Y)
    x := k · x      F(X^k, Y)
    x := x + y      F(X, XY)
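The rules of Table 2 are plain substitutions and can be checked mechanically; here is a small sketch on a made-up input PGF (uniform on {0,1} × {0,1}, our own example):

from sympy import symbols, Rational, expand

X, Y = symbols('X Y')
F = Rational(1, 4) * (1 + X) * (1 + Y)

after_inc = expand(X**2 * F)          # x := x + 2
after_mul = expand(F.subs(X, X**2))   # x := 2 * x
after_add = expand(F.subs(Y, X*Y))    # x := x + y
print(after_add)   # 1/4 + X/4 + X*Y/4 + X**2*Y/4: state (1,1) moved to (2,1)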
Theorem 3 (Mass Conservation). For every P ∈ pGCL and F ∈ FPS, we have |⟦P⟧(F)| ≤ |F|.

A program P is called mass conserving if |⟦P⟧(F)| = |F| for all F ∈ FPS. Mass conservation has important implications for FPS transformers acting on PGFs: given as input a PGF, the semantics of a program yields a PGF.

Corollary 1 (PGF Transformers). For every P ∈ pGCL and G ∈ PGF, we have ⟦P⟧(G) ∈ PGF.
Restricted to PGF, our semantics hence acts as a subdistribution transformer. Output masses may be smaller than input masses. The probability of non-termination of the program is captured by the "missing" probability mass.

As observed in [16], the semantics of probabilistic programs is fully defined by its effect on point masses, thus rendering probabilistic program semantics linear. In our setting, this generalizes to linearity of our FPS transformers.

Definition 8 (Linearity). Let F, G ∈ FPS and r ∈ ℝ^∞_{≥0} be a scalar. The function ψ : FPS → FPS is called a linear transformer (or simply linear), if

    ψ(r · F + G) = r · ψ(F) + ψ(G) .

Theorem 4 (Linearity of pGCL Semantics). For every program P and guard B, the functions ⟨ · ⟩_B and ⟦P⟧ are linear. Moreover, the unfolding operator Φ_{B,P} maps linear transformers onto linear transformers.
As a final remark, we can unroll while-loops:

Lemma 2 (Loop Unrolling). For any FPS F,

    ⟦while (B) {P}⟧(F) = ⟨F⟩_{¬B} + ⟦while (B) {P}⟧( ⟦P⟧( ⟨F⟩_B ) ) .

4.3 Embedding into Kozen's Semantics Framework
Kozen [16] defines a generic way of giving distribution transformer semantics based on an abstract measurable space (X^n, M^(n)). Our FPS semantics instantiates his generic semantics. The state space we consider is ℕ^k, so that (ℕ^k, P(ℕ^k)) is our measurable space.⁴ A measure on that space is a countably-additive function µ : P(ℕ^k) → [0, ∞] with µ(∅) = 0. We denote the set of all measures on our space by M. Although we represent measures by FPSs, the two notions are in bijective correspondence τ : FPS → M, given by

    τ(F) = λS. Σ_{σ ∈ S} [σ]_F .

This map preserves the linear structure and the order ⪯.

Kozen's syntax [16] is slightly different from pGCL. We compensate for this by a translation function T, which maps pGCL programs to Kozen's. The following theorem shows that our semantics agrees with Kozen's semantics.⁵

⁴ We note that we want each point σ to be measurable, which enforces a discrete measurable space.
⁵ Note that Kozen regards a program P itself as a function P : M → M.
Theorem 5. The FPS semantics of pGCL is an instance of Kozen's semantics, i.e. for all pGCL programs P, we have

    τ ∘ ⟦P⟧ = T(P) ∘ τ .

Equivalently, the following diagram commutes:

    FPS --⟦P⟧--> FPS
     |τ           |τ
     v            v
     M  --T(P)-->  M

For more details about the connection between FPSs and measures, as well as more information about the actual translation, see Appendix A.3.
5 Analysis of Probabilistic Programs
Our PGF semantics enables representing the effect of a pGCL program on a given PGF. As a next step, we investigate to what extent a program analysis can exploit such PGF representations. To that end, we consider overapproximation with loop invariants (Section 5.1) and provide examples showing that checking whether an FPS transformer overapproximates a loop can be automated with computer algebra tools. In addition, we determine a subclass of pGCL programs whose effect on an arbitrary input state is ensured to be a rational PGF encoding a phase-type distribution (Section 5.2).
5.1 Invariant-style Overapproximation of Loops
In this section, we seek to overapproximate loop semantics, i.e. for a given loop W = while (B) {P}, we want to find a (preferably simple) FPS transformer ψ such that ⟦W⟧ ⊑ ψ, meaning that for any input G we have ⟦W⟧(G) ⪯ ψ(G) (cf. Definition 7). Notably, even if G is a PGF, we do not require ψ(G) to be one. Instead, ψ(G) can have a mass larger than one. This is fine, because it still overapproximates the actual semantics coefficient-wise. Such overapproximations immediately carry over to reading off expected values (cf. Section 3), for instance

    ∂_X ⟦W⟧(G) (1)  ≤  ∂_X ψ(G) (1) .
We use invariant-style reasoning for verifying that a given ψ overapproximates the semantics ⟦W⟧. For that, we introduce the notion of a superinvariant and employ Park's Lemma, a well-known concept from fixed point theory, to obtain a conceptually simple proof rule for verifying overapproximations of while loops.

Theorem 6 (Superinvariants and Loop Overapproximations). Let Φ_{B,P} be the unfolding operator of while (B) {P} (cf. Definition 7) and ψ : FPS → FPS. Then

    Φ_{B,P}(ψ) ⊑ ψ    implies    ⟦while (B) {P}⟧ ⊑ ψ .
We call a ψ satisfying Φ_{B,P}(ψ) ⊑ ψ a superinvariant. We are interested in linear superinvariants, as our semantics is also linear (cf. Theorem 4). Furthermore, linearity allows to define ψ solely in terms of its effect on monomials, which makes reasoning considerably simpler:

Corollary 2. Given a function f : Mon(X) → FPS, let the linear extension f̂ of f be defined by

    f̂ : FPS → FPS ,    F ↦ Σ_{σ ∈ ℕ^k} [σ]_F · f(X^σ) .

Let Φ_{B,P} be the unfolding operator of while (B) {P}. Then

    ∀ σ ∈ ℕ^k :  Φ_{B,P}(f̂)(X^σ) ⪯ f̂(X^σ)    implies    ⟦while (B) {P}⟧ ⊑ f̂ .

We call an f satisfying the premise of the above corollary a superinvariantlet. Notice that superinvariantlets and their extensions agree on monomials, i.e. f(X^σ) = f̂(X^σ). Let us examine a few examples of superinvariantlet reasoning.
Example 5 (Verifying Precise Semantics). In Program 1.1, in each iteration, a fair coin flip determines the value of x. Subsequently, c is incremented by 1. Consider the following superinvariantlet:

    f(X^i C^j) = C^j · C/(2 − C) ,  if i = 1 ;        f(X^i C^j) = C^j · X^i ,  if i ≠ 1 .

To verify that f is indeed a superinvariantlet, we have to show that

    Φ_{B,P}(f̂)(X^i C^j) = ⟨X^i C^j⟩_{x≠1} + f̂( ⟦P⟧( ⟨X^i C^j⟩_{x=1} ) )  ⪯  f̂(X^i C^j) .

For i ≠ 1, we get

    Φ_{B,P}(f̂)(X^i C^j) = ⟨X^i C^j⟩_{x≠1} + f̂( ⟦P⟧(0) ) = X^i C^j = f(X^i C^j) = f̂(X^i C^j) .

For i = 1, we get

    Φ_{B,P}(f̂)(X^1 C^j)
    = f̂( 1/2 · X^0 C^{j+1} + 1/2 · X^1 C^{j+1} )
    = 1/2 · f(X^0 C^{j+1}) + 1/2 · f(X^1 C^{j+1})        (by linearity of f̂)
    = 1/2 · C^{j+1} + 1/2 · C^{j+1} · C/(2 − C)          (by definition of f)
    = C^{j+1} / (2 − C)  =  f(X^1 C^j)  =  f̂(X^1 C^j) .

Hence, Corollary 2 yields ⟦W⟧(X) ⪯ f(X) = C/(2 − C).

For this example, we can state even more. As the program is almost-surely terminating and |f(X^i C^j)| = 1 for all (i, j) ∈ ℕ², we conclude that f̂ is exactly the semantics of W, i.e. f̂ = ⟦W⟧.
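The case distinction above is easily discharged with a computer algebra system; here is a minimal sketch (our own encoding) for the interesting case i = 1, with the inert common factor C^j cancelled beforehand:

from sympy import symbols, simplify

X, C = symbols('X C')

def f1(i_):
    # f(X^i C^j) = C^j * f1(i); the factor C^j is dropped here
    return C / (2 - C) if i_ == 1 else X**i_

# Phi(f^)(X^1 C^j) = 1/2 f(X^0 C^{j+1}) + 1/2 f(X^1 C^{j+1}) = C^j * C/2 * (f1(0) + f1(1))
lhs = C / 2 * (f1(0) + f1(1))
print(simplify(lhs - f1(1)))    # 0: for i = 1 the invariant is even exact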
while (x = 1) {
    {x := 0} [1/2] {x := 1} ;
    c := c + 1
}
Program 1.1. Geometric distribution generator.
while (x > 0) {
    {x := x + 1} [1/2] {x := x - 1} ;
    c := c + 1
}
Program 1.2. Left-bounded 1-dimensional random walk.
Example 6 (Verifying Proper Overapproximations). Program 1.2 models a one-dimensional, left-bounded random walk. Given an input (i, j) ∈ ℕ², it is evident that this program can only terminate in an even (if i is even) or odd (if i is odd) number of steps. This information can be encoded into the following superinvariantlet:

    f(X^0 C^j) = C^j ,    and, for i ≥ 1,    f(X^i C^j) = C^j · C/(1 − C^2)  if i is odd ;
                                             f(X^i C^j) = C^j · 1/(1 − C^2)  if i is even .

It is straightforward to verify that f is a proper superinvariantlet (proper because C/(1 − C^2) = C + C^3 + C^5 + ... is not a PGF) and hence f properly overapproximates the loop semantics. Another superinvariantlet for Program 1.2 is given by

    h(X^i C^j) = C^j · ( (1 − √(1 − C^2)) / C )^i ,  if i ≥ 1 ;        h(X^0 C^j) = C^j .

Given that the program terminates almost-surely [11] and that h is a superinvariantlet yielding only PGFs, it follows that the extension of h is exactly the semantics of Program 1.2. An alternative derivation of this formula for the case h(X) can be found, e.g., in [12].

For both f and h, we were able to prove automatically that they are indeed superinvariantlets, using the computer algebra library SymPy [20]. The code is included in Appendix B (Program 1.5).
Example 7 (Proving Non-almost-sure Termination). In Program 1.3, the branching probability of the choice statement depends on the value of a program variable. This notation is just syntactic sugar, as this behavior can be mimicked by loop constructs together with coin flips [3, pp. 115f].

while (x > 0) {
    {x := x - 1} [1/x] {x := x + 1}
}
Program 1.3. A non-almost-surely terminating loop.
while (x < 1 and t < 2) {
    if (t = 0) {
        {x := 1} [a] {t := 1} ; c := c + 1
    } else {
        {x := 1} [b] {t := 0} ; d := d + 1
    }
}
Program 1.4. Dueling cowboys.
To prove that Program 1.3 does not terminate almost-surely, we consider the following superinvariantlet:

    f(X^i) = 1 − (1/e) · Σ_{n=0}^{i−2} 1/n! ,

where e = 2.71828... is Euler's number (the sum is empty for i ≤ 1). Again, the superinvariantlet property was verified automatically, here using Mathematica [13]. Now, consider for instance f(X^3) = 1 − (1/e) · (1/0! + 1/1!) = 1 − 2/e < 1. This proves that the program terminates on input X^3 with probability strictly smaller than 1, witnessing that the program is not almost-surely terminating. △
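This superinvariantlet can also be spot-checked numerically (a sketch of our own; the paper's verification uses Mathematica, cf. Appendix B):

from math import e, factorial

def f(i):
    # f(X^i) = 1 - (1/e) * sum_{n=0}^{i-2} 1/n!, with an empty sum for i <= 1
    return 1 - sum(1 / factorial(n) for n in range(i - 1)) / e

for i in range(1, 12):                       # check Phi(f^)(X^i) = f(X^i)
    lhs = (1 / i) * f(i - 1) + (1 - 1 / i) * f(i + 1)
    assert abs(lhs - f(i)) < 1e-12
print(f(3))                                  # 0.2642... = 1 - 2/e < 1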
5.2 Rational PGFs
In several of the examples from the previous sections, we considered PGFs which were rational functions, that is, fractions of two polynomials. Since those are a particularly simple class of PGFs, it is natural to ask which programs have rational semantics. In this section, we present a semantic characterization of a class of while-loops whose output distribution is a (multivariate) discrete phase-type distribution [21,22]. This implies that the resulting PGF of such programs is an effectively computable rational function for any given input state. Let us illustrate this by an example.
Example 8 (Dueling Cowboys). Program 1.4 models two dueling cowboys [18]. The hit chance of the first cowboy is a and the hit chance of the second cowboy is b, where a, b ∈ [0, 1].⁶ The cowboys shoot at each other in turns, as indicated by the variable t, until one of them gets hit (x is set to 1).

⁶ These are not program variables.
The variable c counts the number of shots of the first cowboy and d those of the second cowboy.

We observe that Program 1.4 is somewhat independent of the value of c, in the sense that moving the statement c := c + 1 to either immediately before or after the loop yields an equivalent program. In our notation, this is expressed as ⟦W⟧(C · H) = C · ⟦W⟧(H) for all PGFs H. By symmetry, the same applies to variable d. Unfolding the loop once on input 1 yields

    ⟦W⟧(1) = (1 − a) C · ⟦W⟧(T) + aCX .

A similar equation for ⟦W⟧(T) involving ⟦W⟧(1) on its right-hand side holds. This way we obtain a system of two linear equations, although the program itself is infinite-state. The linear equation system has a unique solution ⟦W⟧(1) in the field of rational functions over the variables C, D, T, and X, which is the PGF

    G := ( aCX + (1 − a) b CDTX ) / ( 1 − (1 − b)(1 − a) CD ) .

From G we can easily read off the following: The probability that the first cowboy wins (x = 1 and t = 0) equals a / (1 − (1 − a)(1 − b)), and the expected total number of shots of the first cowboy is ∂_C G(1) = 1 / (a + b − ab). Notice that this quantity equals ∞ if a and b are both zero, i.e. if both cowboys have zero hit chance.

If we write G_V for the PGF obtained by substituting all but the variables in V with 1, then we moreover see that G_C · G_D ≠ G_{C,D}. This means that C and D (as random variables) are stochastically dependent.
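The closed form G makes these quantities one-liners in a computer algebra system; a sketch (symbols ours):

from sympy import symbols, diff, simplify

a, b, C, D, T, X = symbols('a b C D T X')
G = (a*C*X + (1 - a)*b*C*D*T*X) / (1 - (1 - b)*(1 - a)*C*D)

win   = simplify(G.subs({C: 1, D: 1, X: 1, T: 0}))           # a/(a + b - a*b)
shots = simplify(diff(G, C).subs({C: 1, D: 1, T: 1, X: 1}))  # 1/(a + b - a*b)
print(win, shots)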
The distribution encoded in the PGF ⟦W⟧(1) is a discrete phase-type distribution. Such distributions are defined as follows: A Markov reward chain is a Markov chain where each state is augmented with a reward vector in ℕ^k. By definition, a (discrete) distribution on ℕ^k is of phase-type iff it is the distribution of the total accumulated reward vector until absorption in a Markov reward chain with a single absorbing state and a finite number of transient states. In fact, Program 1.4 can be described as a Markov reward chain with two states (X^0 T^0 and X^0 T^1) and 2-dimensional reward vectors corresponding to the "counters" (c, d): the reward in state X^0 T^0 is (1, 0) and (0, 1) in the other state.
Each pGCL program describes a Markov reward chain [10]. It is not clear which (non-trivial) syntactic restrictions to impose to guarantee that such chains are finite. In the remainder of this section, we give a characterization of while-loops that are equivalent to finite Markov reward chains. The idea of our criterion is that each variable has to fall into one of the following two categories:

Definition 9 (Homogeneous and Bounded Variables). Let P ∈ pGCL be a program, B be a guard and x_i be a program variable. Then:
– x_i is called homogeneous for P if ⟦P⟧(X_i · G) = X_i · ⟦P⟧(G) for all G ∈ PGF.
– x_i is called bounded by B if the set {σ_i | σ ∈ B} is finite.
Intuitively, homogeneity of x_i means that it does not matter whether one increments the variable before or after the execution of P. Thus, a homogeneous variable behaves like an increment-only counter, even if this may not be explicit in the syntax. In Example 8, the variables c and d in Program 1.4 are homogeneous (for both the loop body and the loop itself). Moreover, x and t are clearly bounded by the loop guard. We can now state our characterization.

Definition 10 (HB Loops). A loop while (B) {P} is called homogeneous-bounded (HB) if for all program states σ ∈ B, the PGF ⟦P⟧(X^σ) is a polynomial, and for all program variables x it either holds that
– x is homogeneous for P and the guard B is independent of x, or that
– x is bounded by the guard B.

In an HB loop, all the possible valuations of the bounded variables satisfying B span the finite transient state space of a Markov reward chain in which the dimension of the reward vectors equals the number of homogeneous variables. The additional condition that ⟦P⟧(X^σ) is a polynomial ensures that there is only a finite number of terminal (absorbing) states. Thus, we have the following:

Proposition 1. Let W be a while-loop. Then ⟦W⟧(X^σ) is the (rational) PGF of a multivariate discrete phase-type distribution if and only if W is equivalent to an HB loop that almost-surely terminates on input σ.
To conclude, we remark that there are various simple syntactic conditions for HB loops: For example, if P is loop-free, then ⟦P⟧(X^σ) is always a polynomial. Similarly, if x only appears in assignments of the form x := x + k with k ≥ 0, then x is homogeneous. Such updates of variables are e.g. essential in constant probability programs [?]. The crucial point is that such conditions are only sufficient but not necessary. Our semantic conditions thus capture the essence of phase-type distribution semantics more adequately while still being reasonably simple (albeit — being non-trivial semantic properties — undecidable in general).
6 Conclusion
We have presented a denotational distribution transformer semantics for probabilistic while-programs where the denotations are generating functions (GFs). Moreover, we have provided a simple invariant-style technique to prove that a given GF overapproximates the program's semantics, and identified a class of (possibly infinite-state) programs whose semantics is a rational GF encoding a phase-type distribution. Directions for future work include the (semi-)automated synthesis of invariants and the development of notions of how precise overapproximations by invariants actually are.
A Proofs of Section 4

A.1 Proofs of Section 4.1
Lemma 1 (Completeness of ⪯ on FPS). (FPS, ⪯) is a complete lattice.

Proof. We start by showing that (FPS, ⪯) is a partial order. Let F, G, H ∈ FPS. For reflexivity, consider the following:

    G ⪯ G    iff    ∀ σ ∈ ℕ^k : [σ]_G ≤ [σ]_G    iff    true .

For antisymmetry, consider the following:

    G ⪯ H and H ⪯ G
    implies    ∀ σ ∈ ℕ^k : [σ]_G ≤ [σ]_H  and  [σ]_H ≤ [σ]_G
    implies    ∀ σ ∈ ℕ^k : [σ]_G = [σ]_H
    implies    G = H .

For transitivity, consider the following:

    G ⪯ H and H ⪯ F
    implies    ∀ σ ∈ ℕ^k : [σ]_G ≤ [σ]_H  and  [σ]_H ≤ [σ]_F
    implies    ∀ σ ∈ ℕ^k : [σ]_G ≤ [σ]_F
    implies    G ⪯ F .

Next, we show that every set S ⊆ FPS has a supremum

    sup S = Σ_{σ ∈ ℕ^k} ( sup_{F ∈ S} [σ]_F ) · X^σ

in FPS. In particular, notice that sup ∅ = Σ_{σ ∈ ℕ^k} 0 · X^σ. The fact that sup S ∈ FPS is trivial since sup_{F ∈ S} [σ]_F ∈ ℝ^∞_{≥0} for every σ ∈ ℕ^k. Furthermore, the fact that sup S is an upper bound on S is immediate since ⪯ is defined coefficient-wise. Finally, sup S is also the least upper bound, since, by definition of ⪯, we have [σ]_{sup S} = sup_{F ∈ S} [σ]_F .
The following proofs rely on the Monotone Sequence Theorem (MST), which we recall here: If (a_n)_{n∈ℕ} is a monotonically increasing sequence in ℝ^∞_{≥0}, then sup_n a_n = lim_{n→∞} a_n. In particular, if (a_n)_{n∈ℕ} and (b_n)_{n∈ℕ} are monotonically increasing sequences in ℝ^∞_{≥0}, then

    sup_n a_n + sup_n b_n = lim_{n→∞} a_n + lim_{n→∞} b_n = lim_{n→∞} (a_n + b_n) = sup_n (a_n + b_n) .
Theorem 1 (ω-continuity of pGCL Semantics). The semantic functional ⟦ · ⟧ is ω-continuous, i.e. for all programs P ∈ pGCL and all increasing ω-chains F_1 ⪯ F_2 ⪯ ... in FPS,

    ⟦P⟧( sup_{n∈ℕ} F_n ) = sup_{n∈ℕ} ⟦P⟧(F_n) .

Proof. By induction on the structure of P. Let S = {F_1, F_2, ...} be an increasing ω-chain in FPS. First, we consider the base cases.

The case P = skip. We have

    ⟦P⟧(sup S) = sup S = sup_{F ∈ S} F = sup_{F ∈ S} ⟦P⟧(F) .

The case P = x_i := E. Let sup S = Ĝ = Σ_{σ ∈ ℕ^k} [σ]_Ĝ · X^σ, where for each σ ∈ ℕ^k we have [σ]_Ĝ = sup_{F ∈ S} [σ]_F. We calculate

    ⟦P⟧(sup S)
    = ⟦P⟧(Ĝ)
    = ⟦P⟧( Σ_{σ ∈ ℕ^k} [σ]_Ĝ · X_1^{σ_1} · · · X_i^{σ_i} · · · X_k^{σ_k} )
    = Σ_{σ ∈ ℕ^k} [σ]_Ĝ · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}
    = Σ_{σ ∈ ℕ^k} ( sup_{F ∈ S} [σ]_F ) · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}
    = sup_{F ∈ S} Σ_{σ ∈ ℕ^k} [σ]_F · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}    (sup on FPS is defined coefficient-wise)
    = sup_{F ∈ S} ⟦P⟧( Σ_{σ ∈ ℕ^k} [σ]_F · X_1^{σ_1} · · · X_i^{σ_i} · · · X_k^{σ_k} )
    = sup_{F ∈ S} ⟦P⟧(F) .
As the induction hypothesis, now assume that for some arbitrary, but fixed, programs P_1, P_2 and all increasing ω-chains S_1, S_2 in FPS it holds that both

    ⟦P_1⟧(sup S_1) = sup_{F ∈ S_1} ⟦P_1⟧(F)    and    ⟦P_2⟧(sup S_2) = sup_{F ∈ S_2} ⟦P_2⟧(F) .

We continue with the induction step.

The case P = {P_1} [p] {P_2}. We have

    ⟦P⟧(sup S)
    = p · ⟦P_1⟧(sup S) + (1 − p) · ⟦P_2⟧(sup S)
    = p · sup_{F ∈ S} ⟦P_1⟧(F) + (1 − p) · sup_{F ∈ S} ⟦P_2⟧(F)    (I.H. on P_1 and P_2)
    = sup_{F ∈ S} ( p · ⟦P_1⟧(F) ) + sup_{F ∈ S} ( (1 − p) · ⟦P_2⟧(F) )    (scalar multiplication is defined pointwise)
    = sup_{F ∈ S} ( p · ⟦P_1⟧(F) + (1 − p) · ⟦P_2⟧(F) )    (apply MST coefficient-wise)
    = sup_{F ∈ S} ⟦{P_1} [p] {P_2}⟧(F)
    = sup_{F ∈ S} ⟦P⟧(F) .

The case P = if (B) {P_1} else {P_2}. We have

    ⟦P⟧(sup S)
    = ⟦P_1⟧(⟨sup S⟩_B) + ⟦P_2⟧(⟨sup S⟩_{¬B})
    = ⟦P_1⟧( sup_{F ∈ S} ⟨F⟩_B ) + ⟦P_2⟧( sup_{F ∈ S} ⟨F⟩_{¬B} )    (restriction defined coefficient-wise)
    = sup_{F ∈ S} ⟦P_1⟧(⟨F⟩_B) + sup_{F ∈ S} ⟦P_2⟧(⟨F⟩_{¬B})    (I.H. on P_1 and P_2)
    = sup_{F ∈ S} ( ⟦P_1⟧(⟨F⟩_B) + ⟦P_2⟧(⟨F⟩_{¬B}) )    (apply MST coefficient-wise)
    = sup_{F ∈ S} ⟦if (B) {P_1} else {P_2}⟧(F)
    = sup_{F ∈ S} ⟦P⟧(F) .

The case P = while (B) {P_1}. Recall that for every G ∈ FPS,

    ⟦P⟧(G) = (lfp Φ_{B,P_1})(G) = ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(G) .

Hence, it suffices to show that

    ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(sup S) = sup_{F ∈ S} ( ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(F) ) .
Assume for the moment that for every n ∈ ℕ and all increasing ω-chains S in FPS,

    ( Φ^n_{B,P_1}(0) )(sup S) = sup_{F ∈ S} ( Φ^n_{B,P_1}(0) )(F) .    (1)

We then have

    ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(sup S)
    = sup_{n∈ℕ} ( Φ^n_{B,P_1}(0)(sup S) )    (sup for Φ_{B,P_1} is defined pointwise)
    = sup_{n∈ℕ} sup_{F ∈ S} ( Φ^n_{B,P_1}(0)(F) )    (Equation 1)
    = sup_{F ∈ S} sup_{n∈ℕ} ( Φ^n_{B,P_1}(0)(F) )    (swap suprema)
    = sup_{F ∈ S} ( ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(F) ) ,    (sup for Φ_{B,P_1} is defined pointwise)

which is what we have to show. It remains to prove Equation 1 by induction on n.

Base case n = 0. Since Φ^0_{B,P_1}(0) is the zero transformer 0, we have

    ( Φ^0_{B,P_1}(0) )(sup S) = 0 = sup_{F ∈ S} 0 = sup_{F ∈ S} ( Φ^0_{B,P_1}(0) )(F) .

Induction step. We have

    ( Φ^{n+1}_{B,P_1}(0) )(sup S)
    = ( Φ_{B,P_1}( Φ^n_{B,P_1}(0) ) )(sup S)
    = ⟨sup S⟩_{¬B} + Φ^n_{B,P_1}(0)( ⟦P_1⟧(⟨sup S⟩_B) )    (Def. of Φ_{B,P_1})
    = ⟨sup S⟩_{¬B} + Φ^n_{B,P_1}(0)( sup_{F ∈ S} ⟦P_1⟧(⟨F⟩_B) )    (I.H. on P_1)
    = ⟨sup S⟩_{¬B} + sup_{F ∈ S} Φ^n_{B,P_1}(0)( ⟦P_1⟧(⟨F⟩_B) )    (I.H. on n)
    = sup_{F ∈ S} ( ⟨F⟩_{¬B} + Φ^n_{B,P_1}(0)( ⟦P_1⟧(⟨F⟩_B) ) )    (apply MST)
    = sup_{F ∈ S} ( Φ^{n+1}_{B,P_1}(0) )(F) .    (Def. of Φ_{B,P_1})

This completes the proof.
Theorem 2 (Well-definedness of FPS Semantics). The semantic functional ⟦ · ⟧ is well-defined, i.e. the semantics of any loop while (B) {P} exists uniquely and can be written as

    ⟦while (B) {P}⟧ = lfp Φ_{B,P} = sup_{n∈ℕ} Φ^n_{B,P}(0) .
Proof. First, we show that the unfolding operator Φ_{B,P} is ω-continuous. For that, let f_1 ⊑ f_2 ⊑ ... be an ω-chain in FPS → FPS. Then,

    Φ_{B,P}( sup_{n∈ℕ} f_n )
    = λG. ⟨G⟩_{¬B} + ( sup_{n∈ℕ} f_n )( ⟦P⟧(⟨G⟩_B) )
    = λG. ⟨G⟩_{¬B} + sup_{n∈ℕ} ( f_n( ⟦P⟧(⟨G⟩_B) ) )    (sup on FPS → FPS is defined pointwise)
    = sup_{n∈ℕ} ( λG. ⟨G⟩_{¬B} + f_n( ⟦P⟧(⟨G⟩_B) ) )    (apply monotone sequence theorem coefficient-wise)
    = sup_{n∈ℕ} Φ_{B,P}(f_n) .    (Def. of Φ_{B,P})

Since Φ_{B,P} is ω-continuous and (FPS → FPS, ⊑) forms a complete lattice (Lemma 1), we get by the Kleene Fixed Point Theorem [17] that Φ_{B,P} has a unique least fixed point given by sup_{n∈ℕ} Φ^n_{B,P}(0).
Theorem 3 (Mass Conservation). For every P ∈ pGCL and F ∈ FPS, we have |⟦P⟧(F)| ≤ |F|.

Proof. By induction on the structure of P. For the loop-free cases, this is straightforward. For the case P = while (B) {P_1}, we proceed as follows. For every r ∈ ℝ^∞_{≥0}, we define the set

    FPS_r = { F ∈ FPS | |F| ≤ r }

of all FPSs whose mass is at most r. First, we define the restricted unfolding operator

    Φ_{B,P_1,r} : (FPS_r → FPS_r) → (FPS_r → FPS_r) ,    ψ ↦ Φ_{B,P_1}(ψ) .

Our induction hypothesis on P_1 implies that Φ_{B,P_1,r} is well-defined.

It is now only left to show that (FPS_r, ⪯) is an ω-complete partial order, because then Φ_{B,P_1,r} has a least fixed point for every r ∈ ℝ^∞_{≥0}. The theorem then follows by letting r = |G|, because

    (lfp Φ_{B,P_1})(G) = (lfp Φ_{B,P_1,|G|})(G)    implies    |(lfp Φ_{B,P_1})(G)| ≤ |G| .

(FPS_r, ⪯) is an ω-complete partial order. The fact that (FPS_r, ⪯) is a partial order is immediate. It remains to show ω-completeness. For that, let f_1 ⪯ f_2 ⪯ ... be an ω-chain in FPS_r. We have to show that sup_n f_n ∈ FPS_r, which is the case if and only if

    | sup_n f_n | = Σ_{σ ∈ ℕ^k} sup_n [σ]_{f_n} ≤ r .

Now let g : ℕ → ℕ^k be some bijection from ℕ to ℕ^k. We have

    Σ_{σ ∈ ℕ^k} sup_n [σ]_{f_n}
    = Σ_{i=0}^{∞} sup_n [g(i)]_{f_n}    (series converges absolutely)
    = sup_N Σ_{i=0}^{N} sup_n [g(i)]_{f_n}    (rewrite infinite series as supremum of partial sums)
    = sup_N sup_n Σ_{i=0}^{N} [g(i)]_{f_n}    (apply monotone sequence theorem)
    = sup_n sup_N Σ_{i=0}^{N} [g(i)]_{f_n} .    (swap suprema)

Now observe that sup_N Σ_{i=0}^{N} [g(i)]_{f_n} = |f_n|, which is a monotonically increasing sequence in n. Moreover, since f_n ∈ FPS_r, this sequence is bounded from above by r. Hence, the least upper bound sup_n |f_n| of the sequence |f_n| is no larger than r, too. This completes the proof.
A.2 Proofs of Section 4.2
Lemma 3 (Representation of ⟦while⟧). Let W = while (B) {P} be a pGCL program. An alternative representation of its semantics is:

    ⟦W⟧ = λG. Σ_{i=0}^{∞} ⟨φ^i(G)⟩_{¬B} ,    where    φ(G) = ⟦P⟧(⟨G⟩_B) .

Proof. First we show by induction that Φ^n_{B,P}(0)(G) = Σ_{i=0}^{n−1} ⟨φ^i(G)⟩_{¬B}.

Base case. We have

    Φ^0_{B,P}(0)(G) = 0 = Σ_{i=0}^{−1} ⟨φ^i(G)⟩_{¬B} .

Induction step. We have

    Φ^{n+1}_{B,P}(0)(G)
    = Φ_{B,P}( Φ^n_{B,P}(0) )(G)
    = ⟨G⟩_{¬B} + Φ^n_{B,P}(0)( ⟦P⟧(⟨G⟩_B) )
    = ⟨G⟩_{¬B} + Φ^n_{B,P}(0)( φ(G) )
    = ⟨G⟩_{¬B} + Σ_{i=0}^{n−1} ⟨φ^i(φ(G))⟩_{¬B}
    = ⟨G⟩_{¬B} + Σ_{i=1}^{n} ⟨φ^i(G)⟩_{¬B}
    = Σ_{i=0}^{n} ⟨φ^i(G)⟩_{¬B} .

Overall, we thus get

    ⟦W⟧(G)
    = ( sup_{n∈ℕ} Φ^n_{B,P}(0) )(G)
    = sup_{n∈ℕ} ( Φ^n_{B,P}(0)(G) )    (sup on FPS → FPS is defined pointwise)
    = sup_{n∈ℕ} Σ_{i=0}^{n−1} ⟨φ^i(G)⟩_{¬B}    (see above)
    = Σ_{i=0}^{∞} ⟨φ^i(G)⟩_{¬B} .
Theorem 4 (Linearity of pGCL Semantics). For every program P and guard B, the functions ⟨ · ⟩_B and ⟦P⟧ are linear. Moreover, the unfolding operator Φ_{B,P} maps linear transformers onto linear transformers.

Proof. Linearity of ⟨ · ⟩_B. For G = Σ_{σ ∈ ℕ^k} µ_σ X^σ and F = Σ_{σ ∈ ℕ^k} ν_σ X^σ, we have

    ⟨a · G + F⟩_B
    = ⟨ Σ_{σ ∈ ℕ^k} (a · µ_σ + ν_σ) X^σ ⟩_B
    = Σ_{σ ∈ B} (a · µ_σ + ν_σ) X^σ
    = a · Σ_{σ ∈ B} µ_σ X^σ + Σ_{σ ∈ B} ν_σ X^σ
    = a · ⟨G⟩_B + ⟨F⟩_B .
Linearity of ⟦P⟧. By induction on the structure of P. First, we consider the base cases.

The case P = skip. We have

    ⟦skip⟧(r · F + G) = r · F + G = r · ⟦skip⟧(F) + ⟦skip⟧(G) .

The case P = x_i := E. We have

    ⟦x_i := E⟧(r · F + G)
    = Σ_{σ ∈ ℕ^k} [σ]_{r·F+G} · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}
    = Σ_{σ ∈ ℕ^k} (r · [σ]_F + [σ]_G) · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}    (+ and · defined coefficient-wise)
    = r · Σ_{σ ∈ ℕ^k} [σ]_F · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k} + Σ_{σ ∈ ℕ^k} [σ]_G · X_1^{σ_1} · · · X_i^{eval_σ(E)} · · · X_k^{σ_k}
    = r · ⟦P⟧(F) + ⟦P⟧(G) .

Next, we consider the induction step.

The case P = P_1 ; P_2. We have

    ⟦P_1 ; P_2⟧(r · F + G)
    = ⟦P_2⟧( ⟦P_1⟧(r · F + G) )
    = ⟦P_2⟧( r · ⟦P_1⟧(F) + ⟦P_1⟧(G) )    (I.H. on P_1)
    = r · ⟦P_2⟧(⟦P_1⟧(F)) + ⟦P_2⟧(⟦P_1⟧(G)) .    (I.H. on P_2)

The case P = if (B) {P_1} else {P_2}. We have

    ⟦if (B) {P_1} else {P_2}⟧(r · F + G)
    = ⟦P_1⟧(⟨r · F + G⟩_B) + ⟦P_2⟧(⟨r · F + G⟩_{¬B})
    = ⟦P_1⟧(r · ⟨F⟩_B + ⟨G⟩_B) + ⟦P_2⟧(r · ⟨F⟩_{¬B} + ⟨G⟩_{¬B})    (linearity of ⟨·⟩_B and ⟨·⟩_{¬B})
    = r · ( ⟦P_1⟧(⟨F⟩_B) + ⟦P_2⟧(⟨F⟩_{¬B}) ) + ⟦P_1⟧(⟨G⟩_B) + ⟦P_2⟧(⟨G⟩_{¬B})    (I.H. on P_1 and P_2)
    = r · ⟦if (B) {P_1} else {P_2}⟧(F) + ⟦if (B) {P_1} else {P_2}⟧(G) .

The case P = {P_1} [p] {P_2}. We have

    ⟦{P_1} [p] {P_2}⟧(r · F + G)
    = p · ⟦P_1⟧(r · F + G) + (1 − p) · ⟦P_2⟧(r · F + G)
    = p · (r · ⟦P_1⟧(F) + ⟦P_1⟧(G)) + (1 − p) · (r · ⟦P_2⟧(F) + ⟦P_2⟧(G))    (I.H. on P_1 and P_2)
    = r · (p · ⟦P_1⟧(F) + (1 − p) · ⟦P_2⟧(F)) + p · ⟦P_1⟧(G) + (1 − p) · ⟦P_2⟧(G)    (reorder terms)
    = r · ⟦{P_1} [p] {P_2}⟧(F) + ⟦{P_1} [p] {P_2}⟧(G) .

The case P = while (B) {P_1}. We have

    ⟦while (B) {P_1}⟧(r · F + G)
    = ( sup_{n∈ℕ} Φ^n_{B,P_1}(0) )(r · F + G)
    = sup_{n∈ℕ} ( Φ^n_{B,P_1}(0)(r · F + G) )    (sup on FPS → FPS defined pointwise)
    = sup_{n∈ℕ} ( r · Φ^n_{B,P_1}(0)(F) + Φ^n_{B,P_1}(0)(G) )    (by straightforward induction on n using I.H. on P_1)
    = r · sup_{n∈ℕ} ( Φ^n_{B,P_1}(0)(F) ) + sup_{n∈ℕ} ( Φ^n_{B,P_1}(0)(G) )    (apply monotone sequence theorem coefficient-wise)
    = r · ⟦while (B) {P_1}⟧(F) + ⟦while (B) {P_1}⟧(G) .
Linearity of Φ_{B,P}(f) for linear f. We have

    Φ_{B,P}(f)( Σ_{σ ∈ ℕ^k} µ_σ X^σ )
    = ⟨ Σ_{σ ∈ ℕ^k} µ_σ X^σ ⟩_{¬B} + f( ⟦P⟧( ⟨ Σ_{σ ∈ ℕ^k} µ_σ X^σ ⟩_B ) )
    = Σ_{σ ∈ ℕ^k} µ_σ ⟨X^σ⟩_{¬B} + f( Σ_{σ ∈ ℕ^k} µ_σ ⟦P⟧(⟨X^σ⟩_B) )    (linearity of ⟨·⟩_B and ⟦P⟧)
    = Σ_{σ ∈ ℕ^k} µ_σ ⟨X^σ⟩_{¬B} + Σ_{σ ∈ ℕ^k} µ_σ · f( ⟦P⟧(⟨X^σ⟩_B) )    (f linear)
    = Σ_{σ ∈ ℕ^k} µ_σ · ( ⟨X^σ⟩_{¬B} + f( ⟦P⟧(⟨X^σ⟩_B) ) )
    = Σ_{σ ∈ ℕ^k} µ_σ · Φ_{B,P}(f)(X^σ) .
Lemma 2 (Loop Unrolling). For any FPS F,

    ⟦while (B) {P}⟧(F) = ⟨F⟩_{¬B} + ⟦while (B) {P}⟧( ⟦P⟧( ⟨F⟩_B ) ) .

Proof. Let W = while (B) {P}. We have

    ⟦W⟧(G) = (lfp Φ_{B,P})(G)
    = Φ_{B,P}(lfp Φ_{B,P})(G)
    = ⟨G⟩_{¬B} + (lfp Φ_{B,P})( ⟦P⟧(⟨G⟩_B) )
    = ⟨G⟩_{¬B} + ⟦W⟧( ⟦P⟧(⟨G⟩_B) )
    = ⟦if (B) {P ; W} else {skip}⟧(G) .
A.3 Proofs of Section 4.3
Lemma 4. The mapping τ is a bijection. The inverse τ^{−1} of τ is given by

    τ^{−1} : M → FPS ,    µ ↦ Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ .

Proof. We show this by showing τ^{−1} ∘ τ = id and τ ∘ τ^{−1} = id:

    τ^{−1} ∘ τ ( Σ_{σ ∈ ℕ^k} α_σ X^σ ) = τ^{−1}( λN. Σ_{σ ∈ N} α_σ ) = Σ_{σ ∈ ℕ^k} ( Σ_{s ∈ {σ}} α_s ) · X^σ = Σ_{σ ∈ ℕ^k} α_σ X^σ ,

    τ ∘ τ^{−1}(µ) = τ( Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ ) = λN. Σ_{σ ∈ N} µ({σ}) = λN. µ(N) = µ .
Lemma 5. The mappings τ and τ^{−1} are monotone linear maps.

Proof. First, we show that τ^{−1} is linear (and hence τ, due to bijectivity):

    τ^{−1}(µ + ν)
    = Σ_{σ ∈ ℕ^k} (µ + ν)({σ}) · X^σ
    = Σ_{σ ∈ ℕ^k} (µ({σ}) + ν({σ})) · X^σ    (as M forms a vector space with standard +)
    = Σ_{σ ∈ ℕ^k} (µ({σ}) · X^σ + ν({σ}) · X^σ)
    = Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ + Σ_{σ ∈ ℕ^k} ν({σ}) · X^σ
    = τ^{−1}(µ) + τ^{−1}(ν) .

Second, we show that τ is monotone. Assume G_µ ⪯ G_{µ′}. Then

    τ(G_µ) = τ( Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ ) = λS. Σ_{σ ∈ S} µ({σ})
    ≤ λS. Σ_{σ ∈ S} µ′({σ})    (as µ({σ}) ≤ µ′({σ}) per definition of ⪯)
    = τ( Σ_{σ ∈ ℕ^k} µ′({σ}) · X^σ ) = τ(G_{µ′}) .

Third, we show that τ^{−1} is monotone. Assume µ ≤ µ′. Then

    τ^{−1}(µ) = Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ
    ⪯ Σ_{σ ∈ ℕ^k} µ′({σ}) · X^σ    (as µ({σ}) ≤ µ′({σ}) per definition of ≤)
    = τ^{−1}(µ′) .
Lemma 6. Let f : (P, ≤) → (Q, ≤) be a monotone isomorphism for any partially ordered sets P and Q. Then

    f_* : Hom(P, P) → Hom(Q, Q) ,    ϕ ↦ f ∘ ϕ ∘ f^{−1}

is also a monotone isomorphism.

Proof. Let f be such a monotone isomorphism, and f_* the corresponding lifting. First, we note that f_* is also bijective. Its inverse is given by (f_*)^{−1} = (f^{−1})_*. Second, f_* is monotone, as shown in the following calculation:

    ϕ ≤ ψ
    ⟹ ∀x. ϕ(x) ≤ ψ(x)
    ⟹ ∀x. f ∘ ϕ( f^{−1} ∘ f (x) ) ≤ f ∘ ψ( f^{−1} ∘ f (x) )    (f monotone, f^{−1} ∘ f = id)
    ⟹ ∀y. f ∘ ϕ ∘ f^{−1}(y) ≤ f ∘ ψ ∘ f^{−1}(y)    (substitute y = f(x); f is surjective)
    ⟹ f_*(ϕ) ≤ f_*(ψ) .
Lemma 7. Let P, Q be complete lattices, and τ a monotone isomorphism. Also let lfp be the least fixed point operator. Then the following diagram commutes:

    Hom(P, P) --τ_*--> Hom(Q, Q)
        |lfp               |lfp
        v                  v
        P  ------τ------>  Q

Proof. Let ϕ ∈ Hom(P, P) be arbitrary. Recall that lfp ϕ = inf { p | ϕ(p) = p }. Hence

    τ(lfp ϕ) = τ( inf { p | ϕ(p) = p } )
    = inf { τ(p) | ϕ(p) = p }    (τ is a monotone isomorphism)
    = inf { τ(p) | ϕ( τ^{−1} ∘ τ (p) ) = τ^{−1} ∘ τ (p) }
    = inf { τ(p) | τ ∘ ϕ( τ^{−1} ∘ τ (p) ) = τ(p) }
    = inf { q | τ ∘ ϕ( τ^{−1}(q) ) = q }
    = inf { q | τ_*(ϕ)(q) = q }
    = lfp τ_*(ϕ) .
Definition 11. Let T be the program translation from pGCL to a modified Kozen syntax, defined inductively:

    T(skip) = skip
    T(x_i := E) = x_i := f_E(x_1, ..., x_k)
    T({P} [p] {Q}) = {T(P)} [p] {T(Q)}
    T(P ; Q) = T(P) ; T(Q)
    T(if (B) {P} else {Q}) = if B then T(P) else T(Q) fi
    T(while (B) {P}) = while B do T(P) od ,

where p is a probability, k = |Var(P)|, B is a Boolean expression and P, Q are pGCL programs. The extended constructs skip and {P} [p] {Q} are only syntactic sugar and can be simulated by the original Kozen semantics. The intended semantics of these constructs is

    [skip] = id    and    [{P} [p] {Q}] = p · [T(P)] + (1 − p) · [T(Q)] .
Lemma 8. For all guards B, the following identity holds: e_B ∘ τ = τ ∘ ⟨ · ⟩_B.

Proof. For all G_µ = Σ_{σ ∈ ℕ^k} µ({σ}) · X^σ ∈ FPS:

    e_B ∘ τ (G_µ) = e_B(µ) = λS. µ(S ∩ B) ,

    τ( ⟨G_µ⟩_B ) = τ( Σ_{σ ∈ B} µ({σ}) · X^σ + Σ_{σ ∉ B} 0 · X^σ ) = λS. µ(S ∩ B) .

Hence e_B ∘ τ (G_µ) = τ(⟨G_µ⟩_B) for all G_µ ∈ FPS.
Theorem 5. The FPS semantics of pGCL is an instance of Kozen's semantics, i.e. for all pGCL programs P, we have

    τ ∘ ⟦P⟧ = T(P) ∘ τ .

Proof. The proof is done via induction on the program structure. We omit the loop-free cases, as they are straightforward.

By definition, T(while (B) {P}) = while B do T(P) od. Hence, the corresponding Kozen semantics is equal to lfp T_{B,P}, where

    T_{B,P} : (M → M) → (M → M) ,    S ↦ e_{¬B} + ( S ∘ T(P) ∘ e_B ) .

First, we show that τ_*^{−1} ∘ T_{B,P} ∘ τ_* = Φ_{B,P}, where τ_* is the canonical lifting of τ, i.e. τ_*(S) = τ ∘ S ∘ τ^{−1} for all S ∈ (FPS → FPS):

    τ_*^{−1}( T_{B,P}( τ_*(S) ) )
    = τ_*^{−1}( T_{B,P}( τ ∘ S ∘ τ^{−1} ) )
    = τ_*^{−1}( e_{¬B} + τ ∘ S ∘ τ^{−1} ∘ T(P) ∘ e_B )
    = τ^{−1} ∘ e_{¬B} ∘ τ + τ^{−1} ∘ τ ∘ S ∘ τ^{−1} ∘ T(P) ∘ e_B ∘ τ
    = τ^{−1} ∘ e_{¬B} ∘ τ + S ∘ τ^{−1} ∘ T(P) ∘ e_B ∘ τ
    = ⟨·⟩_{¬B} + S ∘ τ^{−1} ∘ T(P) ∘ τ ∘ ⟨·⟩_B    (Lemma 8, applied twice)
    = ⟨·⟩_{¬B} + S ∘ τ^{−1} ∘ τ ∘ ⟦P⟧ ∘ ⟨·⟩_B    (I.H. on P)
    = ⟨·⟩_{¬B} + S ∘ ⟦P⟧ ∘ ⟨·⟩_B
    = Φ_{B,P}(S) .

Having this equality at hand, we can easily prove the correspondence of our while semantics to the one defined by Kozen in the following manner:

    τ ∘ ⟦while (B) {P}⟧ = T(while (B) {P}) ∘ τ
    ⟺ τ ∘ lfp Φ_{B,P} = (lfp T_{B,P}) ∘ τ
    ⟺ lfp Φ_{B,P} = τ^{−1} ∘ (lfp T_{B,P}) ∘ τ
    ⟺ lfp Φ_{B,P} = τ_*^{−1}( lfp T_{B,P} )    (definition of τ_*)
    ⟺ lfp Φ_{B,P} = lfp ( τ_*^{−1} ∘ T_{B,P} ∘ τ_* )    (cf. Lemma 7)
    ⟺ lfp Φ_{B,P} = lfp Φ_{B,P} .    (first part of this proof)

B Proofs of Section 5
Theorem 6 (Superinvariants and Loop Overapproximations). Let Φ_{B,P} be the unfolding operator of while (B) {P} (cf. Def. 7) and ψ : FPS → FPS. Then

    Φ_{B,P}(ψ) ⊑ ψ    implies    ⟦while (B) {P}⟧ ⊑ ψ .

Proof. Instance of Park's Lemma [?].
Corollary 2. Given a function f : Mon(X) → FPS, let the linear extension f̂ of f be defined by

    f̂ : FPS → FPS ,    F ↦ Σ_{σ ∈ ℕ^k} [σ]_F · f(X^σ) .

Let Φ_{B,P} be the unfolding operator of while (B) {P}. Then

    ∀ σ ∈ ℕ^k :  Φ_{B,P}(f̂)(X^σ) ⪯ f̂(X^σ)    implies    ⟦while (B) {P}⟧ ⊑ f̂ .

Proof. Let G ∈ FPS be arbitrary. Then

    Φ_{B,P}(f̂)(G)
    = Σ_{σ ∈ ℕ^k} [σ]_G · Φ_{B,P}(f̂)(X^σ)    (by Theorem 4)
    ⪯ Σ_{σ ∈ ℕ^k} [σ]_G · f(X^σ)    (by assumption)
    = f̂(G) ,

and hence ⟦W⟧ ⊑ f̂ by Theorem 6.
Proof of Example 6. For f, we have

    Φ_{B,P}(f̂)(X^i C^j) = ⟨X^i C^j⟩_{i=0} + f̂( (1/2) · ⟨X^i C^j⟩_{i>0} · (C/X) + (1/2) · ⟨X^i C^j⟩_{i>0} · XC ) .

Case i = 0: the above equals C^j + f̂(0) = C^j = f(X^0 C^j).

Case i > 0: the above equals

    (C/2) · ( f̂(X^{i−1} C^j) + f̂(X^{i+1} C^j) )  ⪯  C^j · { 1/(1 − C^2) ,  if i is even ;  C/(1 − C^2) ,  if i is odd }  =  f(X^i C^j) ,

where the inequality is checked separately for i = 1 (using f(X^0 C^j) = C^j ⪯ C^j/(1 − C^2)), for odd i ≥ 3 (where it is an equality), and for even i (where C · C/(1 − C^2) ⪯ 1/(1 − C^2)).

Hence Φ_{B,P}(f̂)(X^i C^j) ⪯ f(X^i C^j), and thus f̂ is a superinvariant.

For h, we have

    Φ_{B,P}(ĥ)(X^i C^j) = ⟨X^i C^j⟩_{i=0} + ĥ( (1/2) · ⟨X^i C^j⟩_{i>0} · (C/X) + (1/2) · ⟨X^i C^j⟩_{i>0} · XC ) .

Case i = 0: the above equals C^j + ĥ(0) = C^j = h(X^0 C^j).

Case i > 0: the above equals

    (C/2) · ( ĥ(X^{i−1} C^j) + ĥ(X^{i+1} C^j) )
    = (C^{j+1}/2) · ( g^{i−1} + g^{i+1} )    (with g = (1 − √(1 − C^2))/C, also covering i = 1 since g^0 = 1)
    = C^j · g^i    (since g + 1/g = 2/C)
    = h(X^i C^j) .

Hence Φ_{B,P}(ĥ)(X^i C^j) = h(X^i C^j), so ĥ is even a fixed point and in particular a superinvariant.
Verification Python Script

from sympy import symbols, sqrt, pprint, Poly, Mod
from sympy.polys.polyerrors import PolificationFailed

c = symbols('c')
i, j, m = symbols('i j m', integer=True, nonnegative=True)

# the higher-order transformer of Program 1.2 (for i > 0):
# Phi(f)(X^i C^j) = C/2 * (f(X^{i-1} C^j) + f(X^{i+1} C^j))
def Phi(f):
    return c/2 * (f.subs(i, i - 1) + f.subs(i, i + 1))

# check the two parities separately: i -> 2m+1 (odd) and i -> 2m+2 (even)
def compute_difference(f, parity):
    return (Phi(f) - f).subs(i, 2*m + 1 + parity).simplify()

# a superinvariant needs Phi(f) - f <= 0 coefficient-wise
def verify_poly(poly):
    print("Check coefficients for non-positivity:")
    return all(coeff <= 0 for coeff in poly.coeffs())

# actual verification method
def verify(f):
    print("Check invariant:")
    pprint(f)
    for parity in (0, 1):
        result = compute_difference(f, parity)
        if result.is_zero:
            print("Invariant is a fixpoint!")
            continue
        print("Invariant is not a fixpoint; check if remainder is a Poly")
        try:
            if not verify_poly(Poly(result)):
                return False
        except PolificationFailed:
            print("Invariant is not a Poly!")
            return False
    return True

# the loop invariant guess f for the (i != 0) case
f = c**j * (c / (1 - c**2))**Mod(i, 2) * (1 / (1 - c**2))**Mod(i + 1, 2)
# the second invariant h
h = c**j * ((1 - sqrt(1 - c**2)) / c)**i

print("Invariant verified" if verify(f) else "Unknown")
print("Invariant verified" if verify(h) else "Unknown")

Program 1.5. Python program checking the invariants.
Proof of Example 7. We have

    Φ_{B,P}(f̂)(X^i) = ⟨X^i⟩_{i=0} + f̂( (1/i) · ⟨X^i⟩_{i>0} · (1/X) + (1 − 1/i) · ⟨X^i⟩_{i>0} · X ) .

Case i = 0: the above equals 1 + ∞ · f̂(0) = 1 + ∞ · 0 = 1 = f(X^0).

Case i > 0: the above equals (for i ≥ 3; the cases i = 1, 2 follow by direct calculation, reading empty sums as 0)

    (1/i) · f̂(X^{i−1}) + (1 − 1/i) · f̂(X^{i+1})
    = (1/i) · ( 1 − (1/e) Σ_{n=0}^{i−3} 1/n! ) + (1 − 1/i) · ( 1 − (1/e) Σ_{n=0}^{i−1} 1/n! )
    = 1 − (1/e) Σ_{n=0}^{i−1} 1/n! + (1/(ei)) · ( Σ_{n=0}^{i−1} 1/n! − Σ_{n=0}^{i−3} 1/n! )
    = 1 − (1/e) Σ_{n=0}^{i−1} 1/n! + (1/(ei)) · ( 1/(i−2)! + 1/(i−1)! )
    = 1 − (1/e) Σ_{n=0}^{i−1} 1/n! + (1/e) · 1/(i−1)!    (since (1/i) · (1/(i−2)! + 1/(i−1)!) = 1/(i−1)!)
    = 1 − (1/e) Σ_{n=0}^{i−2} 1/n!
    = f(X^i) .

Hence Φ_{B,P}(f̂)(X^i) = f(X^i), so f̂ is even a fixed point and in particular a superinvariant.

Mathematica input query:

    Input:   (1/k) · ( 1 − (1/e) Σ_{n=0}^{k−3} 1/n! ) + (1 − 1/k) · ( 1 − (1/e) Σ_{n=0}^{k−1} 1/n! )
    Output:  1 − (1/e) Σ_{n=0}^{k−2} 1/n!