Conda-forge proposes sunsetting support for PyPy
Conda-forge has kindly been providing support for PyPy since 2019. The conda-forge team has been very patient and generous with resources, but it seems the uptake of PyPy has not justified the effort. Major packages are still not available on PyPy, and others find it hard to update versions. We don't get much feedback at all about people using PyPy, and even less about PyPy on conda-forge. The conda-forge team has proposed sunsetting PyPy going forward, which means current packages would remain but no new packages would be built. If you have an opinion, you can comment on that PR, or on this blog post.
Since conda-forge supports PyPy3.9 but not PyPy3.10, we have continued releasing PyPy3.9 even though we typically support only one version of PyPy3. With the sunsetting proposal, we will not release any more updates to PyPy3.9. I opened a poll about the intention to drop PyPy3.9. If you have an opinion, please chime in.
A Knownbits Abstract Domain for the Toy Optimizer, Correctly
After Max's introduction to abstract interpretation for the toy optimizer in the last post, I want to present a more complicated abstract domain in this post. This abstract domain reasons about the individual bits of a variable in a trace. Every bit can be either "known zero", "known one" or "unknown". The abstract domain is useful for optimizing integer operations, particularly the bitwise operations. The abstract domain follows quite closely the tristate abstract domain of the eBPF verifier in the Linux Kernel, as described by the paper Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers by Harishankar Vishwanathan, Matan Shachnai, Srinivas Narayana, and Santosh Nagarakatte.
The presentation in this post will still be in the context of the toy optimizer. We'll spend a significant part of the post convincing ourselves that the abstract domain transfer functions that we're writing are really correct, using both property-based testing and automated proofs (again using Z3).
PyPy has implemented and merged a more complicated version of the same abstract domain for the "real" PyPy JIT. A more thorough explanation of that real-world implementation will follow.
I'd like to thank Max Bernstein and Armin Rigo for lots of great feedback on drafts of this post. The PyPy implementation was mainly done by Nico Rittinghaus and me.
Contents:
 Motivation
 The Knownbits Abstract Domain
 Transfer Functions
 Property-based Tests with Hypothesis
 When are Transfer Functions Correct? How do we test them?
 Implementing Binary Transfer Functions
 Addition and Subtraction
 Proving correctness of the transfer functions with Z3
 Cases where this style of Z3 proof doesn't work
 Making Statements about Precision
 Using the Abstract Domain in the Toy Optimizer for Generalized Constant Folding
 Using the KnownBits Domain for Conditional Peephole Rewrites
 Conclusion
Motivation
In many programs that do bit-manipulation of integers, some of the bits of the integer variables of the program can be statically known. Here's a simple example:
```python
x = a | 1
...
if x & 1:
    ...
else:
    ...
```
After the assignment x = a | 1, we know that the lowest bit of x must be 1 (the other bits are unknown) and an optimizer could remove the condition x & 1 by constant-folding it to 1.
Another (more complicated) example is:
```python
assert i & 0b111 == 0 # check that i is a multiple of 8
j = i + 16
assert j & 0b111 == 0
```
This kind of code could e.g. happen in a CPU emulator, where i and j are integers that represent emulated pointers, and the asserts are alignment checks. The first assert implies that the lowest three bits of i must be 0. Adding 16 to such a number produces a result where the lowest three bits are again all 0, therefore the second assert is always true. So we would like a compiler to remove the second assert.
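Both claims are easy to spot-check over a small range with plain Python before building any machinery. This is just a quick sanity check, not part of the optimizer itself:

```python
# Claim 1: after x = a | 1, the lowest bit of x is always 1,
# no matter what a was.
for a in range(-1000, 1000):
    x = a | 1
    assert x & 1 == 1

# Claim 2: if i is a multiple of 8 (lowest three bits 0), then
# i + 16 is also a multiple of 8.
for i in range(-1000, 1000, 8):
    assert i & 0b111 == 0
    j = i + 16
    assert j & 0b111 == 0
```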
Both of these optimizations are doable with the help of the knownbits abstract domain that we'll discuss in the rest of the post.
The Knownbits Abstract Domain
An abstract value of the knownbits domain needs to be able to store, for every bit of an integer variable in a program, whether it is known 0, known 1, or unknown. To represent three different states, we need 2 bits, which we will call one and unknown.
Here's the encoding:
| one | unknown | knownbit |
|-----|---------|----------|
| 0   | 0       | 0        |
| 1   | 0       | 1        |
| 0   | 1       | ?        |
| 1   | 1       | illegal  |
The unknown bit is set if we don't know the value of the bit ("?"), the one bit is set if the bit is known to be a 1. Since two bits are enough to encode four different states, but we only need three, the combination of a set one bit and a set unknown bit is not allowed.
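The encoding table can be captured by a tiny standalone sketch (the helper names here are hypothetical, not part of the post's KnownBits class) that converts between a character and a (one, unknown) pair:

```python
# Hypothetical helpers illustrating the single-bit encoding table above.
def decode_bit(one, unknown):
    """ Map a (one, unknown) pair to the character '0', '1' or '?'. """
    if unknown:
        # a set one bit together with a set unknown bit is the illegal state
        assert not one
        return '?'
    return '1' if one else '0'

def encode_bit(c):
    """ Map '0' -> (0, 0), '1' -> (1, 0), '?' -> (0, 1). """
    return (int(c == '1'), int(c == '?'))

# the three legal rows of the table round-trip
for c in "01?":
    assert decode_bit(*encode_bit(c)) == c
```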
We don't just want to encode a single bit, however. Instead, we want to do this for all the bits of an integer variable. Therefore the instances of the abstract domain get two integer fields ones and unknowns, where each pair of corresponding bits encodes the knowledge about the corresponding bit of the integer variable in the program.
We can start implementing a Python class that works like this:
```python
from dataclasses import dataclass

@dataclass(eq=False)
class KnownBits:
    ones : int
    unknowns : int

    def __post_init__(self):
        if isinstance(self.ones, int):
            assert self.is_well_formed()

    def is_well_formed(self):
        # a bit cannot be both 1 and unknown
        return self.ones & self.unknowns == 0

    @staticmethod
    def from_constant(const : int):
        """ Construct a KnownBits corresponding to a constant, where all bits
        are known."""
        return KnownBits(const, 0)

    def is_constant(self):
        """ Check if the KnownBits instance represents a constant. """
        # it's a constant if there are no unknowns
        return self.unknowns == 0
```
We can also add some convenience properties. Sometimes it is easier to work with an integer where all the known bits are set, or one where the positions of all the known zeros have a set bit:
```python
class KnownBits:
    ...

    @property
    def knowns(self):
        """ return an integer where the known bits are set. """
        # the knowns are just the unknowns, inverted
        return ~self.unknowns

    @property
    def zeros(self):
        """ return an integer where the places that are known zeros have a bit
        set. """
        # it's a 0 if it is known, but not 1
        return self.knowns & ~self.ones
```
Also, for debugging and for writing tests we want a way to print the known bits in a human-readable form, and also to have a way to construct a KnownBits instance from a string. It's not important to understand the details of __str__ or from_str for the rest of the post, so I'm putting them into a fold:
KnownBits from and to string conversions
```python
class KnownBits:
    ...

    def __repr__(self):
        if self.is_constant():
            return f"KnownBits.from_constant({self.ones})"
        return f"KnownBits({self.ones}, {self.unknowns})"

    def __str__(self):
        res = []
        ones, unknowns = self.ones, self.unknowns
        # construct the string representation right to left
        while 1:
            if not ones and not unknowns:
                break # we leave off the leading known 0s
            if ones == -1 and not unknowns:
                # -1 has all bits set in two's complement, so the leading
                # bits are all 1
                res.append('1')
                res.append("...")
                break
            if unknowns == -1:
                # -1 has all bits set in two's complement, so the leading bits
                # are all ?
                assert not ones
                res.append("?")
                res.append("...")
                break
            if unknowns & 1:
                res.append('?')
            elif ones & 1:
                res.append('1')
            else:
                res.append('0')
            ones >>= 1
            unknowns >>= 1
        if not res:
            res.append('0')
        res.reverse()
        return "".join(res)

    @staticmethod
    def from_str(s):
        """ Construct a KnownBits instance from a string. String can start
        with ...1 to mean that all higher bits are 1, or ...? to mean that
        all higher bits are unknown. Otherwise it is assumed that the higher
        bits are all 0. """
        ones, unknowns = 0, 0
        startindex = 0
        if s.startswith("...?"):
            unknowns = -1
            startindex = 4
        elif s.startswith("...1"):
            ones = -1
            startindex = 4
        for index in range(startindex, len(s)):
            ones <<= 1
            unknowns <<= 1
            c = s[index]
            if c == '1':
                ones |= 1
            elif c == '?':
                unknowns |= 1
        return KnownBits(ones, unknowns)

    @staticmethod
    def all_unknown():
        """ convenience constructor for the "all bits unknown" abstract value
        """
        return KnownBits.from_str("...?")
```
And here's a pytest-style unit test for str:
```python
def test_str():
    assert str(KnownBits.from_constant(0)) == '0'
    assert str(KnownBits.from_constant(5)) == '101'
    assert str(KnownBits(5, 0b10)) == '1?1'
    assert str(KnownBits(~0b1111, 0b10)) == '...100?0'
    assert str(KnownBits(1, ~0b1)) == '...?1'
```
An instance of KnownBits represents a set of integers, namely those that match the known bits stored in the instance. We can write a method contains that takes a concrete int value and returns True if the value matches the pattern of the known bits:
```python
class KnownBits:
    ...

    def contains(self, value : int):
        """ Check whether the KnownBits instance contains the concrete integer
        `value`. """
        # check whether value matches the bit pattern. in the places where we
        # know the bits, the value must agree with ones.
        return value & self.knowns == self.ones
```
and a test:
```python
def test_contains():
    k1 = KnownBits.from_str('1?1')
    assert k1.contains(0b111)
    assert k1.contains(0b101)
    assert not k1.contains(0b110)
    assert not k1.contains(0b011)

    k2 = KnownBits.from_str('...?1') # all odd numbers
    for i in range(-101, 100):
        assert k2.contains(i) == (i & 1)
```
Transfer Functions
Now that we have implemented the basics of the KnownBits class, we need to start implementing the transfer functions. They are for computing what we know about the results of an operation, given the knowledge we have about the bits of the arguments.
We'll start with a simple unary operation, invert(x) (which is ~x in Python and C syntax), which flips all the bits of an integer. If we know some bits of the argument, we can compute the corresponding bits of the result. The unknown bits remain unknown.
Here's the code:
```python
class KnownBits:
    ...

    def abstract_invert(self):
        # self.zeros has bits set where the known 0s are in self
        return KnownBits(self.zeros, self.unknowns)
```
And a unit test:
```python
def test_invert():
    k1 = KnownBits.from_str('01?01?01?')
    k2 = k1.abstract_invert()
    assert str(k2) == '...10?10?10?'

    k1 = KnownBits.from_str('...?')
    k2 = k1.abstract_invert()
    assert str(k2) == '...?'
```
Before we continue with further transfer functions, we'll think about the correctness of the transfer functions and build up some test infrastructure. To test transfer functions, it's quite important to move beyond simple example-style unit tests. The state space for more complicated binary transfer functions is extremely large and it's too easy to get something wrong in a corner case. Therefore we'll look at property-based tests for KnownBits next.
Property-based Tests with Hypothesis
We want to do property-based tests of KnownBits, to try to make it less likely that we'll get a corner case in the implementation wrong. We'll use Hypothesis for that.
I can't give a decent introduction to Hypothesis here, but I want to give a few hints about the API. Hypothesis is a way to run unit tests with randomly generated input. It provides strategies to describe the data that the test functions expect. Hypothesis provides primitive strategies (for things like integers, strings, floats, etc.) and ways to build composite strategies out of the primitive ones.
To be able to write the tests, we need to generate random KnownBits instances, and we also want an int instance that is a member of the KnownBits instance. We generate tuples of (KnownBits, int) together, to ensure this property. We'll ask Hypothesis to generate us a random concrete int as the concrete value, and then we'll also generate a second random int to use as the unknowns mask (i.e. which bits of the concrete int we don't know in the KnownBits instance). Here's a function that takes two such ints and builds the tuple:
```python
def build_knownbits_and_contained_number(concrete_value : int, unknowns : int):
    # to construct a valid KnownBits instance, we need to mask off the unknown
    # bits
    ones = concrete_value & ~unknowns
    return KnownBits(ones, unknowns), concrete_value
```
We can turn this function into a Hypothesis strategy to generate input data using the strategies.builds function:
```python
from hypothesis import strategies, given, settings

ints = strategies.integers()

random_knownbits_and_contained_number = strategies.builds(
    build_knownbits_and_contained_number,
    ints, ints
)
```
One important special case of KnownBits are the constants, which contain only a single concrete value. We'll also generate some of those specifically, and then combine the random_knownbits_and_contained_number strategy with it:
```python
constant_knownbits = strategies.builds(
    lambda value: (KnownBits.from_constant(value), value),
    ints
)

knownbits_and_contained_number = constant_knownbits | random_knownbits_and_contained_number
```
Now we can write the first property-based test, for the KnownBits.contains method:
```python
@given(knownbits_and_contained_number)
def test_contains(t):
    k, n = t
    assert k.contains(n)
```
The @given decorator is used to tell Hypothesis which strategy to use to generate random data for the test function. Hypothesis will run the test with a number of random examples (100 by default). If it finds an error, it will try to minimize the example that demonstrates the problem, to make it easier to understand what is going wrong. It also saves all failing cases into an example database and tries them again on subsequent runs.
This test is as much a check for whether we got the strategies right as it is for the logic in KnownBits.contains. Here's an example output of random concrete and abstract values that we are getting here:
```
110000011001101 ...?0???1
...1011011 ...1011011
...1001101110101000010010011111011 ...1001101110101000010010011111011
...1001101110101000010010011111011 ...100110111010100001?010?1??1??11
1000001101111101001011010011111101000011000111011001011111101 1000001101111101001011010011111101000011000111011001011111101
1000001101111101001011010011111101000011000111011001011111101 1000001101111101001011010011111101000011000111????01?11?????1
1111100000010 1111100000010
1111100000010 ...?11111?00000??
110110 110110
110110 ...?00?00????11??10
110110 ??0??0
...100010111011111 ...?100?10111??111?
...1000100000110001 ...?000?00000??000?
110000001110 ...?0?0??000?00?0?0000000?00???0000?????00???000?0?00?01?000?0??1??
110000001110 ??000000???0
1011011010000001110101001111000010001001011101010010010001000000010101010010001101110101111111010101010010101100110000011110000 1011011010000001110101001111000010001001011101010010010001000000010101010010001101110101111111010101010010101100110000011110000
...1011010010010100 ...1011010010010100
...1011111110110011 ...1011111110110011
101000011110110 101000011?10?1?
100101 ?00?0?
```
That looks suitably random, but we might want to bias our random numbers a little bit towards common error values like small constants, powers of two, etc. Like this:
```python
INTEGER_WIDTH = 64

# some small integers
ints_special = set(range(100))
# powers of two
ints_special = ints_special.union(1 << i for i in range(INTEGER_WIDTH - 2))
# powers of two - 1
ints_special = ints_special.union((1 << i) - 1 for i in range(INTEGER_WIDTH - 2))
# negative versions of what we have so far
ints_special = ints_special.union(-x for x in ints_special)
# bit-flipped versions of what we have so far
ints_special = ints_special.union(~x for x in ints_special)
ints_special = list(ints_special)
# sort them (because hypothesis simplifies towards earlier elements in the list)
ints_special.sort(key=lambda element: (abs(element), element < 0))

ints = strategies.sampled_from(ints_special) | strategies.integers()
```
Now we get data like this:
```
1110 1110
...10000000000000000001 ...10000??0??0000??00?1
1 ??0??0000??00?1
1 ?
...10101100 ...10101100
110000000011001010111011111111111111011110010001001100110001011 ...?0?101?
110000000011001010111011111111111111011110010001001100110001011 ??00000000??00?0?0???0??????????????0????00?000?00??00??000?0??
...1011111111111111111111111111 ...?11?11??
...1011111111111111111111111111 ...?0??????????????????????????
0 ...?0??????????????????????????
101101 101101
111111111111111111111111111111111111111111111 111111111111111111111111111111111111111111111
10111 10111
...101100 ...1?111011?0
101000 ?001010?0
101000 ?0?000
110010 110010
...100111 ...100111
1111011010010 1111011010010
...1000000000000000000000000000000000000 ...1000000000000000000000000000000000000
```
We can also write a test that checks that the somewhat tricky logic in __str__ and from_str is correct, by making sure that the two functions roundtrip (i.e. converting a KnownBits instance to a string and then back to a KnownBits instance produces the same abstract value):
```python
@given(knownbits_and_contained_number)
def test_hypothesis_str_roundtrips(t1):
    k1, n1 = t1
    s = str(k1)
    k2 = KnownBits.from_str(s)
    assert k1.ones == k2.ones
    assert k1.unknowns == k2.unknowns
```
Now let's actually apply this infrastructure to test abstract_invert.
When are Transfer Functions Correct? How do we test them?
Abstract values, i.e. instances of KnownBits, represent sets of concrete values. We want the transfer functions to compute overapproximations of the concrete operations. So if we have an arbitrary abstract value k, with a concrete number n that is a member of the abstract value (i.e. k.contains(n) == True), then the result of the concrete operation op(n) must be a member of the result of the abstract operation k.abstract_op() (i.e. k.abstract_op().contains(op(n)) == True).
Checking the correctness/overapproximation property is a good match for Hypothesis. Here's what the test for abstract_invert looks like:
```python
@given(knownbits_and_contained_number)
def test_hypothesis_invert(t):
    k1, n1 = t
    n2 = ~n1 # compute the real result
    k2 = k1.abstract_invert() # compute the abstract result
    assert k2.contains(n2) # the abstract result must contain the real result
```
This is the only condition needed for abstract_invert to be correct. If abstract_invert fulfils this property for every combination of abstract and concrete value then abstract_invert is correct. Note however, that this test does not actually check whether abstract_invert gives us precise results. A correct (but imprecise) implementation of abstract_invert would simply return a completely unknown result, regardless of what is known about the input KnownBits.
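To make the correctness-versus-precision distinction concrete, here is such a deliberately imprecise invert, sketched as a standalone function on (ones, unknowns) pairs (hypothetical names, not code from the post). It is sound, because the all-unknown abstract value contains every integer, but it is useless for optimization:

```python
def imprecise_invert(ones, unknowns):
    # forget everything about the input: ones = 0, all bits unknown
    return (0, -1)

def contains(ones, unknowns, value):
    # same logic as KnownBits.contains: the known bits of value must agree
    # with ones (~unknowns is the mask of known bits)
    return value & ~unknowns == ones

# soundness spot check: the concrete result ~n is always contained in the
# abstract result, even though the abstract result tells us nothing
for n in range(-50, 50):
    res_ones, res_unknowns = imprecise_invert(n, 0)  # input: the constant n
    assert contains(res_ones, res_unknowns, ~n)
```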
The "proper" CS term for this notion of correctness is called soundness. The correctness condition on the transfer functions is called a Galois connection. I won't go into any mathematical/technical details here, but wanted to at least mention the terms. I found Martin Kellogg's slides to be quite an approachable introduction to the Galois connection and how to show soundness.
Implementing Binary Transfer Functions
Now we have infrastructure in place for testing transfer functions with random inputs. With that we can start thinking about the more complicated case, that of binary operations. Let's start with the simpler ones, and and or. For and, we can know a 0 bit in the result if either of the input bits is known 0; we can know a 1 bit in the result if both input bits are known 1. Otherwise the resulting bit is unknown. Let's look at all the combinations:
```
and
input1: 000111???
input2: 01?01?01?
result: 00001?0??
```
```python
class KnownBits:
    ...

    def abstract_and(self, other):
        ones = self.ones & other.ones # known ones
        knowns = self.zeros | other.zeros | ones
        return KnownBits(ones, ~knowns)
```
Here's an example unit test and a property-based test for and:
```python
def test_and():
    # test all combinations of 0, 1, ? in one example
    k1 = KnownBits.from_str('01?01?01?')
    k2 = KnownBits.from_str('000111???')
    res = k1.abstract_and(k2) # should be: 0...00001?0??
    assert str(res) == "1?0??"

@given(knownbits_and_contained_number, knownbits_and_contained_number)
def test_hypothesis_and(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    k3 = k1.abstract_and(k2)
    n3 = n1 & n2
    assert k3.contains(n3)
```
To implement or is pretty similar. The result is known 1 where either of the inputs is known 1. The result is known 0 where both inputs are known 0, and ? otherwise:
```
or
input1: 000111???
input2: 01?01?01?
result: 01?111?1?
```
```python
class KnownBits:
    ...

    def abstract_or(self, other):
        ones = self.ones | other.ones
        zeros = self.zeros & other.zeros
        knowns = ones | zeros
        return KnownBits(ones, ~knowns)
```
Here's an example unit test and a property-based test for or:
```python
def test_or():
    k1 = KnownBits.from_str('01?01?01?')
    k2 = KnownBits.from_str('000111???')
    res = k1.abstract_or(k2) # should be: 0...01?111?1?
    assert str(res) == "1?111?1?"

@given(knownbits_and_contained_number, knownbits_and_contained_number)
def test_hypothesis_or(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    k3 = k1.abstract_or(k2)
    n3 = n1 | n2
    assert k3.contains(n3)
```
Implementing support for abstract_xor is relatively simple, and left as an exercise :).
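If you want to compare notes after trying the exercise, here is one possible solution, sketched as a standalone function on (ones, unknowns) pairs rather than as a method (hypothetical name, not code from the post). A result bit is known only if both input bits are known, and its value is then the xor of the two known bits:

```python
def knownbits_xor(ones1, unknowns1, ones2, unknowns2):
    # a bit of the result is unknown if it is unknown in either input
    unknowns = unknowns1 | unknowns2
    # where both bits are known, the result is the xor of the ones;
    # mask with ~unknowns to keep the instance well-formed
    ones = (ones1 ^ ones2) & ~unknowns
    return ones, unknowns

# '1?1' xor the constant 0b110: lowest bit 1^0=1, middle bit ?, top bit
# 1^1=0, so the result is '?1'
assert knownbits_xor(0b101, 0b010, 0b110, 0) == (0b001, 0b010)

# soundness spot check against the concrete xor, for both members of '1?1'
for n1 in (0b101, 0b111):
    ones, unknowns = knownbits_xor(0b101, 0b010, 0b110, 0)
    assert (n1 ^ 0b110) & ~unknowns == ones
```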
Addition and Subtraction
invert, and, and or are relatively simple transfer functions to write, because they compose over the individual bits of the integers. The arithmetic functions add and sub are significantly harder, because of carries and borrows. Coming up with the formulas for them and gaining an intuitive understanding is quite tricky and involves carefully going through a few examples with pen and paper. When implementing this in PyPy, Nico and I didn't come up with the implementation ourselves, but instead took it from the Tristate Numbers paper. Here's the code, with example tests and hypothesis tests:
```python
class KnownBits:
    ...

    def abstract_add(self, other):
        sum_ones = self.ones + other.ones
        sum_unknowns = self.unknowns + other.unknowns
        all_carries = sum_ones + sum_unknowns
        ones_carries = all_carries ^ sum_ones
        unknowns = self.unknowns | other.unknowns | ones_carries
        ones = sum_ones & ~unknowns
        return KnownBits(ones, unknowns)

    def abstract_sub(self, other):
        diff_ones = self.ones - other.ones
        val_borrows = (diff_ones + self.unknowns) ^ (diff_ones - other.unknowns)
        unknowns = self.unknowns | other.unknowns | val_borrows
        ones = diff_ones & ~unknowns
        return KnownBits(ones, unknowns)

def test_add():
    k1 = KnownBits.from_str('0?10?10?10')
    k2 = KnownBits.from_str('0???111000')
    res = k1.abstract_add(k2)
    assert str(res) == "?????01?10"

def test_sub():
    k1 = KnownBits.from_str('0?10?10?10')
    k2 = KnownBits.from_str('0???111000')
    res = k1.abstract_sub(k2)
    assert str(res) == "...?11?10"

    k1 = KnownBits.from_str('...1?10?10?10')
    k2 = KnownBits.from_str('...10000???111000')
    res = k1.abstract_sub(k2)
    assert str(res) == "111?????11?10"

@given(knownbits_and_contained_number, knownbits_and_contained_number)
def test_hypothesis_add(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    k3 = k1.abstract_add(k2)
    n3 = n1 + n2
    assert k3.contains(n3)

@given(knownbits_and_contained_number, knownbits_and_contained_number)
def test_hypothesis_sub(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    k3 = k1.abstract_sub(k2)
    n3 = n1 - n2
    assert k3.contains(n3)
```
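To build some intuition for the carry computation, here's a hand-run of the abstract_add formulas on a small example, transcribed as straight-line code (a sketch following the same arithmetic as the method above, not the method itself):

```python
# abstract_add on k1 = '1?0' (the set {4, 6}) and the constant 1,
# following the Tristate Numbers formulas step by step
o1, u1 = 0b100, 0b010   # ones and unknowns of '1?0'
o2, u2 = 0b001, 0b000   # the constant 1

sum_ones = o1 + o2                    # 0b101: sum with all unknowns as 0
sum_unknowns = u1 + u2                # 0b010
all_carries = sum_ones + sum_unknowns # 0b111: sum with all unknowns as 1
ones_carries = all_carries ^ sum_ones # 0b010: where the carries differ
unknowns = u1 | u2 | ones_carries     # 0b010
ones = sum_ones & ~unknowns           # 0b101

# the result is '1?1', i.e. the set {5, 7} = {4 + 1, 6 + 1}
assert (ones, unknowns) == (0b101, 0b010)
for n in (4, 6):
    # both concrete results are contained in the abstract result
    assert (n + 1) & ~unknowns == ones
```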
Now we are in a pretty good situation, having implemented abstract versions of a bunch of important arithmetic and bitwise operations. What's also surprising is that the implementation of all of the transfer functions is quite efficient. We didn't have to write loops over the individual bits at all; instead we found closed-form expressions using primitive operations on the underlying integers ones and unknowns. This means that computing the results of abstract operations is quite efficient, which is important when using the abstract domain in the context of a JIT compiler.
Proving correctness of the transfer functions with Z3
As one can probably tell from my recent posts, I've been thinking about compiler correctness a lot. Getting the transfer functions absolutely correct is really crucial, because a bug in them would lead to miscompilation of Python code when the abstract domain is added to the JIT. While the randomized tests are great, it's still entirely possible for them to miss bugs. The state space for the arguments of a binary transfer function is 3**64 * 3**64, and if only a small part of that contains wrong behaviour it would be really unlikely for us to find it with random tests by chance. Therefore I was reluctant to merge the PyPy branch that contained the new abstract domain for a long time.
To increase our confidence in the correctness of the transfer functions further, we can use Z3 to prove their correctness, which gives us much stronger guarantees (not 100%, obviously). In this subsection I will show how to do that.
Here's an attempt to do this manually in the Python REPL:
```
>>>> import z3
>>>> solver = z3.Solver()
>>>> # like last blog post, proof by failing to find counterexamples
>>>> def prove(cond): assert solver.check(z3.Not(cond)) == z3.unsat
>>>>
>>>> # let's set up a z3 bitvector variable for an arbitrary concrete value
>>>> n1 = z3.BitVec('concrete_value', 64)
>>>> n1
concrete_value
>>>> # due to operator overloading we can manipulate z3 formulas
>>>> n2 = ~n1
>>>> n2
~concrete_value
>>>>
>>>> # now z3 bitvector variables for the ones and unknowns fields
>>>> ones = z3.BitVec('abstract_ones', 64)
>>>> unknowns = z3.BitVec('abstract_unknowns', 64)
>>>> # we construct a KnownBits instance with the z3 variables
>>>> k1 = KnownBits(ones, unknowns)
>>>> # due to operator overloading we can call the methods on k1:
>>>> k2 = k1.abstract_invert()
>>>> k2.ones
~abstract_unknowns & ~abstract_ones
>>>> k2.unknowns
abstract_unknowns
>>>> # here's the correctness condition that we want to prove:
>>>> k2.contains(n2)
~concrete_value & ~abstract_unknowns == ~abstract_unknowns & ~abstract_ones
>>>> # let's try
>>>> prove(k2.contains(n2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in prove
AssertionError
>>>> # it doesn't work! let's look at the counterexample to see why:
>>>> solver.model()
[abstract_unknowns = 0, abstract_ones = 0, concrete_value = 1]
>>>> # we can build a KnownBits instance with the values in the
>>>> # counterexample:
>>>> ~1 # concrete result
-2
>>>> counter_example_k1 = KnownBits(0, 0)
>>>> counter_example_k1
KnownBits.from_constant(0)
>>>> counter_example_k2 = counter_example_k1.abstract_invert()
>>>> counter_example_k2
KnownBits.from_constant(-1)
>>>> # let's check the failing condition
>>>> counter_example_k2.contains(~1)
False
```
What is the problem here? We didn't tell Z3 that n1 was supposed to be a member of k1. We can add this as a precondition to the solver, and then the proof works:
```
>>>> solver.add(k1.contains(n1))
>>>> prove(k2.contains(n2)) # works!
```
This is super cool! It's really a proof about the actual implementation, because we call the implementation methods directly, and due to the operator overloading that Z3 does we can be sure that we are actually checking a formula that corresponds to the Python code. This eliminates one source of errors in formal methods.
Doing the proof manually on the Python REPL is kind of annoying though, and we also would like to make sure that the proofs are redone when we change the code. What we would really like to do is to write the proofs as unit tests that we can run while developing and in CI. Doing this is possible, and the unit tests that really perform proofs look pleasingly similar to the Hypothesis-based ones.
First we need to set up a bit of infrastructure:
```python
INTEGER_WIDTH = 64

def BitVec(name):
    return z3.BitVec(name, INTEGER_WIDTH)

def BitVecVal(val):
    return z3.BitVecVal(val, INTEGER_WIDTH)

def z3_setup_variables():
    # instantiate a solver
    solver = z3.Solver()

    # a Z3 variable for the first concrete value
    n1 = BitVec("n1")
    # a KnownBits instance that uses Z3 variables as its ones and unknowns,
    # representing the first abstract value
    k1 = KnownBits(BitVec("n1_ones"), BitVec("n1_unkowns"))
    # add the precondition to the solver that the concrete value n1 must be a
    # member of the abstract value k1
    solver.add(k1.contains(n1))

    # a Z3 variable for the second concrete value
    n2 = BitVec("n2")
    # a KnownBits instance for the second abstract value
    k2 = KnownBits(BitVec("n2_ones"), BitVec("n2_unkowns"))
    # add the precondition linking n2 and k2 to the solver
    solver.add(k2.contains(n2))
    return solver, k1, n1, k2, n2

def prove(cond, solver):
    z3res = solver.check(z3.Not(cond))
    if z3res != z3.unsat:
        assert z3res == z3.sat # can't be timeout, we set no timeout
        # make the model with the counterexample global, to make inspecting the
        # bug easier when running pytest --pdb
        global model
        model = solver.model()
        print(f"n1={model.eval(n1)}, n2={model.eval(n2)}")
        counter_example_k1 = KnownBits(model.eval(k1.ones).as_signed_long(),
                                       model.eval(k1.unknowns).as_signed_long())
        counter_example_k2 = KnownBits(model.eval(k2.ones).as_signed_long(),
                                       model.eval(k2.unknowns).as_signed_long())
        print(f"k1={counter_example_k1}, k2={counter_example_k2}")
        print(f"but {cond=} evaluates to {model.eval(cond)}")
        raise ValueError(solver.model())
```
And then we can write proof unit tests like this:
```python
def test_z3_abstract_invert():
    solver, k1, n1, _, _ = z3_setup_variables()
    k2 = k1.abstract_invert()
    n2 = ~n1
    prove(k2.contains(n2), solver)

def test_z3_abstract_and():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    k3 = k1.abstract_and(k2)
    n3 = n1 & n2
    prove(k3.contains(n3), solver)

def test_z3_abstract_or():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    k3 = k1.abstract_or(k2)
    n3 = n1 | n2
    prove(k3.contains(n3), solver)

def test_z3_abstract_add():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    k3 = k1.abstract_add(k2)
    n3 = n1 + n2
    prove(k3.contains(n3), solver)

def test_z3_abstract_sub():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    k3 = k1.abstract_sub(k2)
    n3 = n1 - n2
    prove(k3.contains(n3), solver)
```
It's possible to write a bit more Python metaprogramming magic and unify the Hypothesis and Z3 tests into the same test definition.^{1}
Cases where this style of Z3 proof doesn't work
Unfortunately the approach described in the previous section only works for a very small number of cases. It breaks down as soon as the KnownBits methods that we're calling contain any if conditions (including hidden ones like the short-circuiting and and or in Python). Let's look at an example and implement abstract_eq. eq is supposed to be an operation that compares two integers and returns 0 or 1 if they are different or equal, respectively. Implementing this in knownbits looks like this (with example and hypothesis tests):
```python
class KnownBits:
    ...

    def abstract_eq(self, other):
        # the result is a 0, 1, or ?
        # if they are both the same constant, they must be equal
        if self.is_constant() and other.is_constant() and self.ones == other.ones:
            return KnownBits.from_constant(1)
        # check whether we have known disagreeing bits, then we know the result
        # is 0
        if self._disagrees(other):
            return KnownBits.from_constant(0)
        return KnownBits(0, 1) # an unknown boolean

    def _disagrees(self, other):
        # check whether the bits disagree in any place where both are known
        both_known = self.knowns & other.knowns
        return self.ones & both_known != other.ones & both_known

def test_eq():
    k1 = KnownBits.from_str('...?')
    k2 = KnownBits.from_str('...?')
    assert str(k1.abstract_eq(k2)) == '?'
    k1 = KnownBits.from_constant(10)
    assert str(k1.abstract_eq(k1)) == '1'
    k1 = KnownBits.from_constant(10)
    k2 = KnownBits.from_constant(20)
    assert str(k1.abstract_eq(k2)) == '0'

@given(knownbits_and_contained_number, knownbits_and_contained_number)
def test_hypothesis_eq(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    k3 = k1.abstract_eq(k2)
    assert k3.contains(int(n1 == n2))
```
Trying to do the proof in the same style as before breaks:
```
>>>> k3 = k1.abstract_eq(k2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "knownbits.py", line 246, in abstract_eq
    if self._disagrees(other):
  File "venv/site-packages/z3/z3.py", line 381, in __bool__
    raise Z3Exception("Symbolic expressions cannot be cast to concrete Boolean values.")
z3.z3types.Z3Exception: Symbolic expressions cannot be cast to concrete Boolean values.
```
We cannot call abstract_eq on a KnownBits instance with Z3 variables as fields, because once we hit an if statement, the whole approach of relying on operator overloading breaks down. Z3 doesn't actually parse the Python code or anything advanced like that; we rather build an expression only by running the code and letting the Z3 formulas build up.
To still prove the correctness of abstract_eq we need to manually transform the control flow logic of the function into a Z3 formula that uses the z3.If expression, using a small helper function:
def z3_cond(b, trueval=1, falseval=0):
    return z3.If(b, BitVecVal(trueval), BitVecVal(falseval))

def z3_abstract_eq(k1, k2):
    # follow the *logic* of abstract_eq, we can't call it due to the ifs in it
    case1cond = z3.And(k1.is_constant(), k2.is_constant(), k1.ones == k2.ones)
    case2cond = k1._disagrees(k2)
    # ones is 1 in the first case, 0 otherwise
    ones = z3_cond(case1cond, 1, 0)
    # in the first two cases, unknowns is 0, 1 otherwise
    unknowns = z3_cond(z3.Or(case1cond, case2cond), 0, 1)
    return KnownBits(ones, unknowns)

def test_z3_abstract_eq_logic():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    n3 = z3_cond(n1 == n2) # concrete result
    k3 = z3_abstract_eq(k1, k2)
    prove(k3.contains(n3), solver)
This proof works. It is a lot less satisfying than the previous ones, though, because we could have made an error in the manual transcription from Python code to Z3 formulas (there are possibly more heavy-handed approaches where we do this transformation more automatically, e.g. using the ast module to analyze the source code, but that's a much more complicated, research-y project). To lessen this problem somewhat, we can factor out the parts of the logic that don't have any conditions into small helper methods (like _disagrees in this example) and use them in the manual conversion of the code to Z3 formulas.^{2}
The final condition that Z3 checks, btw, is this one:
If(n1 == n2, 1, 0) &
~If(Or(And(n1_unknowns == 0, n2_unknowns == 0, n1_ones == n2_ones),
       n1_ones & ~n1_unknowns & ~n2_unknowns !=
       n2_ones & ~n1_unknowns & ~n2_unknowns),
   0, 1) ==
If(And(n1_unknowns == 0, n2_unknowns == 0, n1_ones == n2_ones), 1, 0)
Making Statements about Precision
So far we have only used Z3 to prove statements about correctness, i.e. that our abstract operations overapproximate what can happen with concrete values. While proving this property is essential if we want to avoid miscompilation, correctness alone is not a very strong constraint on the implementation of our abstract transfer functions. We could simply return KnownBits.all_unknown() from every abstract_* method and the resulting overapproximation would be correct, but useless in practice.
It's much harder to make statements about whether the transfer functions are maximally precise. Still, there are two aspects of precision I want to discuss in this section.
The first aspect is that we would really like the transfer functions to compute maximally precise results for singleton sets. If all abstract arguments of an operation are constants, i.e. contain only a single concrete element, then we know that the resulting set also has only a single element. We can prove that all our transfer functions have this property:
def test_z3_prove_constant_folding():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    k3 = k1.abstract_invert()
    prove(z3.Implies(k1.is_constant(),
                     k3.is_constant()), solver)

    k3 = k1.abstract_and(k2)
    prove(z3.Implies(z3.And(k1.is_constant(), k2.is_constant()),
                     k3.is_constant()), solver)

    k3 = k1.abstract_or(k2)
    prove(z3.Implies(z3.And(k1.is_constant(), k2.is_constant()),
                     k3.is_constant()), solver)

    k3 = k1.abstract_sub(k2)
    prove(z3.Implies(z3.And(k1.is_constant(), k2.is_constant()),
                     k3.is_constant()), solver)

    k3 = z3_abstract_eq(k1, k2)
    prove(z3.Implies(z3.And(k1.is_constant(), k2.is_constant()),
                     k3.is_constant()), solver)
Proving with Z3 that the transfer functions are maximally precise for non-constant arguments seems to be relatively hard. I tried a few completely rigorous approaches and failed. The paper Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers contains an optimality proof for the transfer functions of addition and subtraction, so we can be certain that they are as precise as possible.
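The addition transfer function from that paper is simple enough to sketch here. The following is my own transcription of tristate addition (the Linux kernel's tnum_add), renamed from the kernel's value/mask convention to the ones/unknowns naming used in this post; it is a sketch for illustration, not the code used in PyPy:

```python
def abstract_add_tnum(ones1, unknowns1, ones2, unknowns2):
    # tnum_add from the tristate numbers paper / Linux kernel,
    # with value renamed to ones and mask renamed to unknowns
    sm = unknowns1 + unknowns2        # sum of the unknown masks
    sv = ones1 + ones2                # sum of the known-ones parts
    sigma = sm + sv
    chi = sigma ^ sv                  # positions where carries can differ
    mu = chi | unknowns1 | unknowns2  # all possibly-unknown result bits
    return sv & ~mu, mu               # (ones, unknowns) of the result

def members(ones, unknowns, width=4):
    # all width-bit concrete values contained in the abstract value
    return [n for n in range(1 << width) if n & ~unknowns == ones]

# adding two constants gives a constant result
assert abstract_add_tnum(1, 0, 2, 0) == (3, 0)
```

A brute-force check over all well-formed 4-bit abstract values (exhaustive, unlike the randomized Hypothesis tests) confirms that every concrete sum is contained in the abstract result.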
I still want to show an approach for trying to find concrete examples of abstract values that are less precise than they could be, using a combination of Hypothesis and Z3. The idea is to use Hypothesis to pick random abstract values. Then we compute the abstract result using our transfer function. Afterwards we can ask Z3 to find us an abstract result that is better than the one our transfer function produced. If Z3 finds a better abstract result, we have a concrete example of imprecision for our transfer function. Those tests aren't strict proofs, because they rely on generating random abstract values, but they can still be valuable (not for the transfer functions in this blog post, which are all optimal).
Here is what the code looks like (this is a little bit bonus content, I'll not explain the details and can only hope that the comments are somewhat helpful):
@given(random_knownbits_and_contained_number,
       random_knownbits_and_contained_number)
@settings(deadline=None)
def test_check_precision(t1, t2):
    k1, n1 = t1
    k2, n2 = t2
    # apply transfer function
    k3 = k1.abstract_add(k2)
    example_res = n1 + n2

    # try to find a better version of k3 with Z3
    solver = z3.Solver()
    solver.set("timeout", 8000)

    var1 = BitVec('v1')
    var2 = BitVec('v2')

    ones = BitVec('ones')
    unknowns = BitVec('unknowns')
    better_k3 = KnownBits(ones, unknowns)
    print(k1, k2, k3)

    # we're trying to find an example for a better k3, so we use check, without
    # negation:
    res = solver.check(z3.And(
        # better_k3 should be a valid knownbits instance
        better_k3.is_well_formed(),
        # it should be better than k3, ie there are known bits in better_k3
        # that we don't have in k3
        better_k3.knowns & ~k3.knowns != 0,
        # now encode the correctness condition for better_k3 with a ForAll:
        # for all concrete values var1 and var2, it must hold that if
        # var1 is in k1 and var2 is in k2 it follows that var1 + var2 is in
        # better_k3
        z3.ForAll(
            [var1, var2],
            z3.Implies(
                z3.And(k1.contains(var1), k2.contains(var2)),
                better_k3.contains(var1 + var2)))))
    # if this query is satisfiable, we have found a better result for the
    # abstract_add
    if res == z3.sat:
        model = solver.model()
        rk3 = KnownBits(model.eval(ones).as_signed_long(),
                        model.eval(unknowns).as_signed_long())
        print("better", rk3)
        assert 0
    if res == z3.unknown:
        print("timeout")
It does not actually fail for abstract_add (nor for the other abstract functions). To see the test failing, we can add some imprecision to the implementation of abstract_add and watch Hypothesis and Z3 find examples of values that are not optimally precise (for example by unconditionally setting some bits of unknowns in the implementation of abstract_add).
Using the Abstract Domain in the Toy Optimizer for Generalized Constant Folding
Now after all this work we can finally actually use the knownbits abstract domain in the toy optimizer. The code for this follows Max' intro post about abstract interpretation quite closely.
For completeness' sake, in the fold are the basic infrastructure classes that make up the IR again (they are identical, or at least extremely close, to the previous toy posts).
toy infrastructure
class Value:
    def find(self):
        raise NotImplementedError("abstract")

@dataclass(eq=False)
class Operation(Value):
    name : str
    args : list[Value]
    forwarded : Optional[Value] = None

    def find(self) -> Value:
        op = self
        while isinstance(op, Operation):
            next = op.forwarded
            if next is None:
                return op
            op = next
        return op

    def arg(self, index):
        return self.args[index].find()

    def make_equal_to(self, value : Value):
        self.find().forwarded = value

@dataclass(eq=False)
class Constant(Value):
    value : object

    def find(self):
        return self

class Block(list):
    def __getattr__(self, opname):
        def wraparg(arg):
            if not isinstance(arg, Value):
                arg = Constant(arg)
            return arg
        def make_op(*args):
            op = Operation(opname,
                [wraparg(arg) for arg in args])
            self.append(op)
            return op
        return make_op

def bb_to_str(l : Block, varprefix : str = "var"):
    def arg_to_str(arg : Value):
        if isinstance(arg, Constant):
            return str(arg.value)
        else:
            return varnames[arg]

    varnames = {}
    res = []
    for index, op in enumerate(l):
        # give the operation a name used while
        # printing:
        var = f"{varprefix}{index}"
        varnames[op] = var
        arguments = ", ".join(
            arg_to_str(op.arg(i))
                for i in range(len(op.args))
        )
        strop = f"{var} = {op.name}({arguments})"
        res.append(strop)
    return "\n".join(res)
Now we can write some first tests, the first one simply checking constant folding:
def test_constfold_two_ops():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.int_add(5, 4)
    var2 = bb.int_add(var1, 10)
    var3 = bb.int_add(var2, var0)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = int_add(19, optvar0)"""
Calling the transfer functions on constant KnownBits produces constant results, as we have seen. Therefore "regular" constant folding should hopefully be achieved by optimizing with the KnownBits abstract domain too.
The next two tests are slightly more complicated and can't be optimized by regular constantfolding. They follow the motivating examples from the start of this blog post, a hundred years ago:
def test_constfold_via_knownbits():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.int_or(var0, 1)
    var2 = bb.int_and(var1, 1)
    var3 = bb.dummy(var2)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = int_or(optvar0, 1)
optvar2 = dummy(1)"""

def test_constfold_alignment_check():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.int_invert(0b111)
    # mask off the lowest three bits, thus var2 is aligned
    var2 = bb.int_and(var0, var1)
    # add 16 to aligned quantity
    var3 = bb.int_add(var2, 16)
    # check alignment of result
    var4 = bb.int_and(var3, 0b111)
    var5 = bb.int_eq(var4, 0)
    # var5 should be const-folded to 1
    var6 = bb.dummy(var5)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = int_and(optvar0, -8)
optvar2 = int_add(optvar1, 16)
optvar3 = dummy(1)"""
Here is simplify
to make these tests pass:
def unknown_transfer_functions(*abstract_args):
    return KnownBits.all_unknown()

def simplify(bb: Block) -> Block:
    abstract_values = {} # dict mapping Operation to KnownBits
    def knownbits_of(val : Value):
        if isinstance(val, Constant):
            return KnownBits.from_constant(val.value)
        return abstract_values[val]

    opt_bb = Block()
    for op in bb:
        # apply the transfer function on the abstract arguments
        name_without_prefix = op.name.removeprefix("int_")
        method_name = f"abstract_{name_without_prefix}"
        transfer_function = getattr(KnownBits, method_name,
                                    unknown_transfer_functions)
        abstract_args = [knownbits_of(arg.find()) for arg in op.args]
        abstract_res = abstract_values[op] = transfer_function(*abstract_args)
        # if the result is a constant, we optimize the operation away and make
        # it equal to the constant result
        if abstract_res.is_constant():
            op.make_equal_to(Constant(abstract_res.ones))
            continue
        # otherwise emit the op
        opt_bb.append(op)
    return opt_bb
The code follows the approach from the previous blog post very closely. The only difference is that we apply the transfer function first, to be able to detect whether the abstract domain can tell us that the result has to always be a constant. This code makes all three tests pass.
Using the KnownBits Domain for Conditional Peephole Rewrites
So far we are only using the KnownBits domain to find out that certain operations have to produce a constant. We can also use the KnownBits domain to check whether certain operation rewrites are correct. Let's use one of the examples from the Mining JIT traces for missing optimizations with Z3 post, where Z3 found the inefficiency (x << 4) & -0xf == x << 4 in PyPy JIT traces. We don't have shift operations, but we want to generalize this optimization anyway. The general form of this rewrite is that under some circumstances x & y == x, and we can use the KnownBits domain to detect situations where this must be true.
To understand when x & y == x is true, we can think about individual pairs of bits a and b. If a == 0, then a & b == 0 & b == 0 == a. If b == 1, then a & b == a & 1 == a. So if either a == 0 or b == 1 is true, a & b == a follows. And if either of these conditions is true for all the bits of x and y, we can know that x & y == x.
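This bit-by-bit argument can be cross-checked exhaustively for small bit-widths with a few lines of plain Python (a quick standalone sanity check, not part of the optimizer):

```python
def and_is_identity(x, y, width):
    # x & y == x should hold iff, at every bit position,
    # x has a 0 bit or y has a 1 bit
    return all((x >> i) & 1 == 0 or (y >> i) & 1 == 1
               for i in range(width))

# compare the per-bit condition against the direct computation
WIDTH = 6
assert all(
    and_is_identity(x, y, WIDTH) == (x & y == x)
    for x in range(1 << WIDTH)
    for y in range(1 << WIDTH)
)
```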
We can write a method on KnownBits
to check for this condition:
class KnownBits:
    ...

    def is_and_identity(self, other):
        """ Return True if n1 & n2 == n1 for any n1 in self and n2 in other.
        (or, equivalently, return True if n1 | n2 == n2) """
        return self.zeros | other.ones == -1
Since my reasoning about this feels ripe for errors, let's check that our understanding is correct with Z3:
def test_prove_is_and_identity():
    solver, k1, n1, k2, n2 = z3_setup_variables()
    prove(z3.Implies(k1.is_and_identity(k2), n1 & n2 == n1), solver)
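For readers without Z3 at hand, the same implication can also be checked exhaustively at a small bit-width. This standalone sketch re-implements just enough of the KnownBits logic over 4-bit values (the helper names are mine):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1

def members(ones, unknowns):
    # all WIDTH-bit concrete values whose known bits match
    return [n for n in range(1 << WIDTH) if n & ~unknowns & MASK == ones]

def is_and_identity(ones1, unknowns1, ones2):
    # every bit position must be known zero in the first value
    # or known one in the second value
    zeros1 = MASK & ~ones1 & ~unknowns1
    return (zeros1 | ones2) == MASK

# whenever the condition holds, n1 & n2 == n1 for all contained values
for o1 in range(16):
    for u1 in range(16):
        if o1 & u1:
            continue
        for o2 in range(16):
            for u2 in range(16):
                if o2 & u2:
                    continue
                if is_and_identity(o1, u1, o2):
                    for n1 in members(o1, u1):
                        for n2 in members(o2, u2):
                            assert n1 & n2 == n1
```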
Now let's use this in the toy optimizer. Here are two tests for this rewrite:
def test_remove_redundant_and():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.int_invert(0b1111)
    # mask off the lowest four bits
    var2 = bb.int_and(var0, var1)
    # applying the same mask again is redundant
    var3 = bb.int_and(var2, var1)
    var4 = bb.dummy(var3)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = int_and(optvar0, -16)
optvar2 = dummy(optvar1)"""

def test_remove_redundant_and_more_complex():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.getarg(1)
    # var2 has bit pattern ????
    var2 = bb.int_and(var0, 0b1111)
    # var3 has bit pattern ...?1111
    var3 = bb.int_or(var1, 0b1111)
    # var4 is just var2
    var4 = bb.int_and(var2, var3)
    var5 = bb.dummy(var4)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = getarg(1)
optvar2 = int_and(optvar0, 15)
optvar3 = int_or(optvar1, 15)
optvar4 = dummy(optvar2)"""
The first test could also be made to pass by implementing a reassociation optimization that turns (x & c1) & c2 into x & (c1 & c2) and then constant-folds the second int_and. But here we want to use KnownBits and conditionally rewrite int_and to its first argument. To make the tests pass, we can change simplify like this:
def simplify(bb: Block) -> Block:
    abstract_values = {} # dict mapping Operation to KnownBits
    def knownbits_of(val : Value):
        ...

    opt_bb = Block()
    for op in bb:
        # apply the transfer function on the abstract arguments
        name_without_prefix = op.name.removeprefix("int_")
        method_name = f"abstract_{name_without_prefix}"
        transfer_function = getattr(KnownBits, method_name,
                                    unknown_transfer_functions)
        abstract_args = [knownbits_of(arg.find()) for arg in op.args]
        abstract_res = abstract_values[op] = transfer_function(*abstract_args)
        # if the result is a constant, we optimize the operation away and make
        # it equal to the constant result
        if abstract_res.is_constant():
            op.make_equal_to(Constant(abstract_res.ones))
            continue
        # <<<< new code
        # conditionally rewrite int_and(x, y) to x
        if op.name == "int_and":
            k1, k2 = abstract_args
            if k1.is_and_identity(k2):
                op.make_equal_to(op.arg(0))
                continue
        # >>>> end changes
        opt_bb.append(op)
    return opt_bb
And with that, the new tests pass as well. A real implementation would also check the other argument order, but we leave that out for the sake of brevity.
This rewrite also generalizes the rewrites int_and(0, x) -> 0 and int_and(x, -1) -> x, so let's add a test for those:
def test_remove_and_simple():
    bb = Block()
    var0 = bb.getarg(0)
    var1 = bb.getarg(1)
    var2 = bb.int_and(0, var0) # == 0
    var3 = bb.int_invert(var2) # == -1
    var4 = bb.int_and(var1, var3) # == var1
    var5 = bb.dummy(var4)
    opt_bb = simplify(bb)
    assert bb_to_str(opt_bb, "optvar") == """\
optvar0 = getarg(0)
optvar1 = getarg(1)
optvar2 = dummy(optvar1)"""
This test just passes. And that's it for this post!
Conclusion
In this post we've seen the implementation, testing and proofs about a 'known bits' abstract domain, as well as its use in the toy optimizer to generalize constant folding, and to implement conditional peephole rewrites.
In the next posts I'll write about the real implementation of a knownbits domain in PyPy's JIT, its combination with the existing interval abstract domain, how to deal with gaining information from conditions in the program, and some loose ends.
Sources:
 Known bits in LLVM
 Tristate numbers for known bits in Linux eBPF
 Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers
 Verifying the Verifier: eBPF Range Analysis Verification
 Bit-Twiddling: Addition with Unknown Bits is a super readable blog post by Dougall J. I've taken the ones and unknowns naming from this post, which I find significantly clearer than value and mask, which the Linux kernel uses.
 Bits, Math and Performance(?), a fantastic blog by Harold Aptroot. There are a lot of relevant posts about known bits, range analysis etc. Harold is also the author of Haroldbot, a website that can be used for bit-vector calculations, and also checks bit-vector identities.
 Sharpening Constraint Programming approaches for BitVector Theory
 Deriving Abstract Transfer Functions for Analyzing Embedded Software
 Synthesizing Abstract Transformers

There's a subtlety about the Z3 proofs that I'm sort of glossing over here. Python integers are of arbitrary width, and the KnownBits code is actually carefully written to work for integers of any size. This property is tested by the Hypothesis tests, which don't limit the sizes of the generated random integers. However, the Z3 proofs only check bitvectors of a fixed bit-width of 64. There are various ways to deal with this situation. For most "real" compilers, the bit-width of integers would be fixed anyway. Then the components ones and unknowns of the KnownBits class would use the number of bits the corresponding integer variable has, and the Z3 proofs would use the same width. This is what we do in the PyPy JIT. ↩
The less close connection between implementation and proof for abstract_eq is one of the reasons why it makes sense to do unit testing in addition to proofs. For a more detailed explanation of why both tests and proofs are good to have, see Jeremy Siek's blog post, as well as the Knuth quote. ↩
Abstract interpretation in the Toy Optimizer
This is a cross-post from Max Bernstein from his excellent blog where he writes about programming languages, compilers, optimizations, and virtual machines. He's looking for a (dynamic language runtime or compiler related) job too.
CF BolzTereick wrote some excellent posts in which they introduce a small IR and optimizer and extend it with allocation removal. We also did a live stream together in which we did some more heap optimizations.
In this blog post, I'm going to write a small abstract interpreter for the Toy IR and then show how we can use it to do some simple optimizations. It assumes that you are familiar with the little IR, which I have reproduced unchanged in a GitHub Gist.
Abstract interpretation is a general framework for efficiently computing properties that must be true for all possible executions of a program. It's a widely used approach both in compiler optimizations as well as offline static analysis for finding bugs. I'm writing this post to pave the way for CF's next post on proving abstract interpreters correct for range analysis and known bits analysis inside PyPy.
Before we begin, I want to note a couple of things:
 The Toy IR is in SSA form, which means that every variable is defined exactly once. This means that abstract properties of each variable are easy to track.
 The Toy IR represents a linear trace without control flow, meaning we won't talk about meet/join or fixpoints. They only make sense if the IR has a notion of conditional branches or back edges (loops).
Alright, let's get started.
Welcome to abstract interpretation
Abstract interpretation means a couple different things to different people. There's rigorous mathematical formalism thanks to Patrick and Radhia Cousot, our favorite power couple, and there's also sketchy handwavy stuff like what will follow in this post. In the end, all people are trying to do is reason about program behavior without running it.
In particular, abstract interpretation is an overapproximation of the behavior of a program. Correctly implemented abstract interpreters never lie, but they might be a little bit pessimistic. This is because instead of using real values and running the program, which would produce a concrete result and some real-world behavior, we "run" the program with a parallel universe of abstract values. This abstract run gives us information about all possible runs of the program.^{1}
Abstract values always represent sets of concrete values. Instead of literally storing a set (in the world of integers, for example, it could get pretty big...there are a lot of integers), we group them into a finite number of named subsets.^{2}
Let's learn a little about abstract interpretation with an example program and example abstract domain. Here's the example program:
v0 = 1
v1 = 2
v2 = add(v0, v1)
And our abstract domain is "is the number positive" (where "positive" means nonnegative, but I wanted to keep the words distinct):
       top
      /   \
positive   negative
      \   /
      bottom
The special top value means "I don't know" and the special bottom value means "empty set" or "unreachable". The positive and negative values represent the sets of all positive and negative numbers, respectively.
We initialize all the variables v0
, v1
, and v2
to bottom and then walk
our IR, updating our knowledge as we go.
# here
v0:bottom = 1
v1:bottom = 2
v2:bottom = add(v0, v1)
In order to do that, we have to have transfer functions for each operation. For constants, the transfer function is easy: determine if the constant is positive or negative. For other operations, we have to define a function that takes the abstract values of the operands and returns the abstract value of the result.
In order to be correct, transfer functions for operations have to be compatible with the behavior of their corresponding concrete implementations. You can think of them having an implicit universal quantifier forall in front of them.
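One way to make that implicit forall concrete is a brute-force compatibility check over sample integers. This is a standalone sketch with made-up names (bottom omitted for brevity), not code from the post:

```python
TOP, POSITIVE, NEGATIVE = "top", "positive", "negative"

def sign_add(a, b):
    # transfer function for add: only two known-positive or two
    # known-negative inputs give a known-sign result
    if a == b and a in (POSITIVE, NEGATIVE):
        return a
    return TOP

def contains(abstract, n):
    if abstract == TOP:
        return True
    if abstract == POSITIVE:
        return n >= 0  # "positive" means non-negative in this post
    return n < 0

# the implicit forall, spelled out over a range of sample integers:
# whenever x is in a and y is in b, x + y must be in sign_add(a, b)
for a in (TOP, POSITIVE, NEGATIVE):
    for b in (TOP, POSITIVE, NEGATIVE):
        res = sign_add(a, b)
        for x in range(-8, 9):
            for y in range(-8, 9):
                if contains(a, x) and contains(b, y):
                    assert contains(res, x + y)
```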
Let's step through the constants at least:
v0:positive = 1
v1:positive = 2
# here
v2:bottom = add(v0, v1)
Now we need to figure out the transfer function for add. It's kind of tricky right now because we haven't specified our abstract domain very well. I keep saying "numbers", but what kinds of numbers? Integers? Real numbers? Floating point? Some kind of fixed-width bit vector (int8, uint32, ...) like an actual machine "integer"?
For this post, I am going to use the mathematical definition of integer, which
means that the values are not bounded in size and therefore do not overflow.
Actual hardware memory constraints aside, this is kind of like a Python int
.
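These unbounded Python integers still have well-defined bit-level behavior: they act like infinitely sign-extended two's-complement bit strings, which matters once we get to bit-level domains later. A small aside illustrating this (low_bits is a made-up helper, just for display):

```python
def low_bits(n, width):
    # view the conceptually infinite two's-complement bit string of n,
    # truncated to its lowest `width` bits
    return n & ((1 << width) - 1)

# -1 is the all-ones bit string at every width
assert low_bits(-1, 8) == 0b11111111
assert low_bits(-1, 16) == 0xffff
# ~0b111 == -8: every bit is set except the lowest three
assert ~0b111 == -8
assert low_bits(-8, 8) == 0b11111000
```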
So let's look at what happens when we add two abstract numbers:
|          | top    | positive | negative | bottom |
|----------|--------|----------|----------|--------|
| top      | top    | top      | top      | bottom |
| positive | top    | positive | top      | bottom |
| negative | top    | top      | negative | bottom |
| bottom   | bottom | bottom   | bottom   | bottom |
As an example, let's try to add two numbers a and b, where a is positive and b is negative. We don't know anything about their values other than their signs. They could be 5 and -3, where the result is 2, or they could be 1 and -100, where the result is -99. This is why we can't say anything about the result of this operation and have to return top.
The short of this table is that we only really know the result of an addition if both operands are positive or both operands are negative. Thankfully, in this example, both operands are known positive. So we can learn something about v2:
v0:positive = 1
v1:positive = 2
v2:positive = add(v0, v1)
# here
This may not seem useful in isolation, but analyzing more complex programs even with this simple domain may be able to remove checks such as if (v2 < 0) { ... }.
Let's take a look at another example using a sample absval (absolute value) IR operation:
v0 = getarg(0)
v1 = getarg(1)
v2 = absval(v0)
v3 = absval(v1)
v4 = add(v2, v3)
v5 = absval(v4)
Even though we have no constant/concrete values, we can still learn something about the states of values throughout the program. Since we know that absval always returns a positive number, we learn that v2, v3, and v4 are all positive. This means that we can optimize out the absval operation on v5:
v0:top = getarg(0)
v1:top = getarg(1)
v2:positive = absval(v0)
v3:positive = absval(v1)
v4:positive = add(v2, v3)
v5:positive = v4
Other interesting lattices include:
 Constants (where the middle row is pretty wide)
 Range analysis (bounds on min and max of a number)
 Known bits (using a bitvector representation of a number, which bits are always 0 or 1)
For the rest of this blog post, we are going to do a very limited version of "known bits", called parity. This analysis only tracks the least significant bit of a number, which indicates if it is even or odd.
Parity
The lattice is pretty similar to the positive/negative lattice:
      top
     /   \
  even   odd
     \   /
    bottom
Let's define a data structure to represent this in Python code:
class Parity:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name
And instantiate the members of the lattice:
TOP = Parity("top")
EVEN = Parity("even")
ODD = Parity("odd")
BOTTOM = Parity("bottom")
Now let's write a forward flow analysis of a basic block using this lattice.
We'll do that by assuming that a method on Parity
is defined for each IR
operation. For example, Parity.add
, Parity.lshift
, etc.
def analyze(block: Block) -> None:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    for op in block:
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)
For every operation, we compute the abstract value, the parity, of the arguments and then call the corresponding method on Parity to get the abstract result.
We need to special-case Constants due to a quirk of how the Toy IR is constructed: the constants don't appear in the instruction stream and instead are free-floating.
Let's start by looking at the abstraction function for concrete values, i.e. constants:
class Parity:
    # ...

    @staticmethod
    def const(value):
        if value.value % 2 == 0:
            return EVEN
        else:
            return ODD
Seems reasonable enough. Let's pause on operations for a moment and consider an example program:
v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v5 = dummy(v4)
This function (which is admittedly a little contrived) takes two inputs, shifts
them left by one bit, adds the result, and then checks the least significant
bit of the addition result. It then passes that result into a dummy
function,
which you can think of as "return" or "escape".
To do some abstract interpretation on this program, we'll need to implement the
transfer functions for lshift
and add
(dummy
will just always return
TOP
). We'll start with add
. Remember that adding two even numbers returns
an even number, adding two odd numbers returns an even number, and mixing even
and odd returns an odd number.
class Parity:
    # ...

    def add(self, other):
        if self is BOTTOM or other is BOTTOM:
            return BOTTOM
        if self is TOP or other is TOP:
            return TOP
        if self is EVEN and other is EVEN:
            return EVEN
        if self is ODD and other is ODD:
            return EVEN
        return ODD
We also need to fill in the other cases where the operands are top or bottom. In this case, they are both "contagious"; if either operand is bottom, the result is as well. If neither is bottom but either operand is top, the result is as well.
Now let's look at lshift
. Shifting any number left by a nonzero number of
bits will always result in an even number, but we need to be careful about the
zero case! Shifting by zero doesn't change the number at all. Unfortunately,
since our lattice has no notion of zero, we have to overapproximate here:
class Parity:
    # ...

    def lshift(self, other):
        # self << other
        if other is ODD:
            return EVEN
        return TOP
This means that we will miss some opportunities to optimize, but it's a tradeoff that's just part of the game. (We could also add more elements to our lattice, but that's a topic for another day.)
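The same kind of brute-force compatibility check as for the sign domain confirms that this over-approximation is at least sound. A standalone sketch, mirroring Parity.lshift with plain strings instead of the Parity instances above:

```python
EVEN, ODD, TOP = "even", "odd", "top"

def parity_of(n):
    return EVEN if n % 2 == 0 else ODD

def parity_lshift(self_parity, other_parity):
    # mirror of Parity.lshift: an odd shift amount is certainly
    # non-zero, so the result must be even; otherwise give up
    return EVEN if other_parity == ODD else TOP

# soundness: the concrete result is always contained in the abstract one
for x in range(32):
    for shift in range(6):
        res = parity_lshift(parity_of(x), parity_of(shift))
        assert res == TOP or res == parity_of(x << shift)
```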
Now, if we run our abstract interpretation, we'll collect some interesting
properties about the program. If we temporarily hack on the internals of
bb_to_str
, we can print out parity information alongside the IR operations:
v0:top = getarg(0)
v1:top = getarg(1)
v2:even = lshift(v0, 1)
v3:even = lshift(v1, 1)
v4:even = add(v2, v3)
v5:top = dummy(v4)
This is pretty awesome, because we can see that v4
, the result of the
addition, is always even. Maybe we can do something with that information.
Optimization
One way that a program might check if a number is odd is by checking the least
significant bit. This is a common pattern in C code, where you might see code
like y = x & 1
. Let's introduce a bitand
IR operation that acts like the
&
operator in C/Python. Here is an example of use of it in our program:
v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v5 = bitand(v4, 1) # new!
v6 = dummy(v5)
We'll hold off on implementing the transfer function for it (that's left as an exercise for the reader) and instead do something different.
Instead, we'll see if we can optimize operations of the form bitand(X, 1)
. If
we statically know the parity as a result of abstract interpretation, we can
replace the bitand
with a constant 0
or 1
.
We'll first modify the analyze
function (and rename it) to return a new
Block
containing optimized instructions:
def simplify(block: Block) -> Block:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    result = Block()
    for op in block:
        # TODO: Optimize op
        # Emit
        result.append(op)
        # Analyze
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)
    return result
We're approaching this the way that PyPy does things under the hood, which is all in roughly a single pass. It tries to optimize an instruction away, and if it can't, it copies it into the new block.
Now let's add in the bitand optimization. It's mostly some gross-looking pattern matching that checks if the right-hand side of a bitwise and operation is 1 (TODO: the left-hand side, too). CF had some neat ideas on how to make this more ergonomic, which I might save for later.^{3}
Then, if we know the parity, optimize the bitand
into a constant.
def simplify(block: Block) -> Block:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    result = Block()
    for op in block:
        # Try to simplify
        if isinstance(op, Operation) and op.name == "bitand":
            arg = op.arg(0)
            mask = op.arg(1)
            if isinstance(mask, Constant) and mask.value == 1:
                if parity_of(arg) is EVEN:
                    op.make_equal_to(Constant(0))
                    continue
                elif parity_of(arg) is ODD:
                    op.make_equal_to(Constant(1))
                    continue
        # Emit
        result.append(op)
        # Analyze
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)
    return result
Remember: because we use unionfind to rewrite instructions in the optimizer
(make_equal_to
), later uses of the same instruction get the new
optimized version "for free" (find
).
Let's see how it works on our IR:
v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v6 = dummy(0)
Hey, neat! bitand
disappeared and the argument to dummy
is now the constant
0
because we know the lowest bit.
Wrapping up
Hopefully you have gained a little bit of an intuitive understanding of abstract interpretation. Last year, being able to write some code made me more comfortable with the math. Now being more comfortable with the math is helping me write the code. It's a nice upward spiral.
The two abstract domains we used in this post are simple and not very useful in practice, but it's possible to get very far using slightly more complicated abstract domains. Common domains include: constant propagation, type inference, range analysis, effect inference, liveness, etc. For example, here is a sample lattice for constant propagation:
It has multiple levels to indicate more and less precision. For example, you
might learn that a variable is either 1
or 2
and be able to encode that as
nonnegative
instead of just going straight to top
.
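To make the "multiple levels" idea concrete, here is a tiny illustrative join function for such a constant-propagation lattice. The level names (BOTTOM, NONNEGATIVE, TOP) are made up for this sketch and not taken from any real analyzer:

```python
# Illustrative join for a small constant-propagation lattice with an extra
# "nonnegative" level sitting between the constants and top (the level
# names are invented for this sketch, not taken from any real analyzer).
BOTTOM, NONNEGATIVE, TOP = "bottom", "nonnegative", "top"

def is_nonneg(v):
    return v == NONNEGATIVE or (isinstance(v, int) and v >= 0)

def join(a, b):
    # the join moves up the lattice: the result is at least as imprecise
    # as both inputs
    if a == b:
        return a
    if a == BOTTOM:
        return b
    if b == BOTTOM:
        return a
    if is_nonneg(a) and is_nonneg(b):
        # two different nonnegative values generalize to the middle level
        return NONNEGATIVE
    return TOP
```

Joining the constants 1 and 2 yields NONNEGATIVE rather than jumping straight to TOP, which is exactly the extra precision an intermediate level buys.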
Check out some real-world abstract interpretation in open source projects:
- Known bits in LLVM
- Constant range in LLVM
  - But I am told that the ranges don't form a lattice (see Interval Analysis and Machine Arithmetic: Why Signedness Ignorance Is Bliss)
- Tristate numbers for known bits in Linux eBPF
- Range analysis in Linux eBPF
- GDB prologue analysis of assembly to understand the stack and find frame pointers without using DWARF (some docs)
If you have some readable examples, please share them so I can add them.
Acknowledgements
Thank you to CF BolzTereick for the toy optimizer and helping edit this post!

In the words of abstract interpretation researchers Vincent Laviron and Francesco Logozzo in their paper Refining Abstract Interpretationbased Static Analyses with Hints (APLAS 2009):
The three main elements of an abstract interpretation are: (i) the abstract elements ("which properties am I interested in?"); (ii) the abstract transfer functions ("which is the abstract semantics of basic statements?"); and (iii) the abstract operations ("how do I combine the abstract elements?").
We don't have any of these "abstract operations" in this post because there's no control flow but you can read about them elsewhere! ↩

These abstract values are arranged in a lattice, which is a mathematical structure with some properties but the most important ones are that it has a top, a bottom, a partial order, a meet operation, and values can only move in one direction on the lattice.
Using abstract values from a lattice promises two things:
- The analysis will terminate
- The analysis will be correct for any run of the program, not just one sample run

Something about __match_args__ and @property... ↩
Mining JIT traces for missing optimizations with Z3
In my last post I described how to use Z3 to find simple local peephole optimization patterns for the integer operations in PyPy's JIT. An example is int_and(x, 0) -> 0. In this post I want to scale up the problem of identifying possible optimizations to much bigger instruction sequences, also using Z3. For that, I am starting with the JIT traces of real benchmarks, after they have been optimized by the optimizer of PyPy's JIT. Then we can ask Z3 to find inefficient integer operations in those traces.
Starting from the optimized traces of real programs has some big advantages over the "classical" superoptimization approach of generating and then trying all possible sequences of instructions. It avoids the combinatorial explosion that happens with the latter approach. Also, starting from the traces of benchmarks or (even better) actual programs makes sure that we actually care about the missing optimizations that are found in this way. And because the traces are analyzed after they have been optimized by PyPy's optimizer, we only get reports for missing optimizations that the JIT isn't able to do (yet).
The techniques and experiments I describe in this post are again the result of a bunch of discussions with John Regehr at a conference a few weeks ago, as well as reading his blog posts and papers. Thanks John! Also thanks to Max Bernstein for super helpful feedback on the drafts of this blog post (and for poking me to write things in general).
HighLevel Approach
The approach that I took works as follows:
- Run benchmarks or other interesting programs and then dump the IR of the JIT traces into a file. The traces have at that point been already optimized by the PyPy JIT's optimizer.
- For every trace, ignore all the operations on non-integer variables.
- Translate every integer operation into a Z3 formula.
- For every operation, use Z3 to find out whether the operation is redundant (how that is done is described below).
- If the operation is redundant, the trace is less efficient than it could have been, because the optimizer could also have removed the operation. Report the inefficiency.
- Minimize the inefficient programs by removing as many operations as possible to make the problem easier to understand.
In the post I will describe the details and show some pseudocode of the approach. I'll also make the proper code public eventually (but it needs a healthy dose of cleanups first).
Dumping PyPy Traces
PyPy will write its JIT traces into the file out if the environment variable PYPYLOG is set as follows:
PYPYLOG=jit-log-opt:out pypy <program.py>
This environment variable works for PyPy, but also for other virtual machines built with RPython.
(This is really a side point for the rest of the blog post, but since the question came up I wanted to clarify it: Operations on integers in the Python program that the JIT is running don't all correspond 1-to-1 with the int_... operations in the traces. The int_... trace operations always operate on machine words. The Python int type supports arbitrarily large integers. PyPy will optimistically try to lower the operations on Python integers into machine word operations, but adds the necessary guards into the trace to make sure that overflow outside of the range of machine words is caught. In case one of these guards fails the interpreter switches to a big integer heap-allocated representation.)
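To make the guard condition concrete, here is an illustrative sketch (not PyPy's actual code) of what an overflow check for a word-sized addition has to test, together with the two's-complement wrap-around that the machine operation itself performs:

```python
# Illustrative only: the condition an overflow guard for a machine-word
# addition has to test, plus the two's-complement wrap-around of the raw
# machine add. This is not PyPy's actual implementation.
WORD = 64
MININT = -(1 << (WORD - 1))
MAXINT = (1 << (WORD - 1)) - 1

def add_would_overflow(a, b):
    # the guard fails (and PyPy falls back to big integers) exactly when
    # the mathematically correct sum leaves the signed word range
    return not (MININT <= a + b <= MAXINT)

def wrap(x):
    # two's-complement wrap-around into the signed word range: what the
    # raw machine add would produce if no guard caught the overflow
    return (x + (1 << (WORD - 1))) % (1 << WORD) - (1 << (WORD - 1))
```

For example, adding 1 to MAXINT would wrap around to MININT, which is why the guard has to catch that case before the result is used.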
Encoding Traces as Z3 formulas
The last blog post already contained the code to encode the results of
individual trace operations into Z3 formulas, so we don't need to repeat that
here. To encode traces of operations we introduce a Z3 variable for every
operation in the trace and then call the z3_expression
function for every
single one of the operations in the trace.
For example, for the following trace:
[i1]
i2 = uint_rshift(i1, 32)
i3 = int_and(i2, 65535)
i4 = uint_rshift(i1, 48)
i5 = int_lshift(i4, 16)
i6 = int_or(i5, i3)
jump(i6, i2) # equal
We would get the Z3 formula:
z3.And(i2 == LShR(i1, 32), i3 == i2 & 65535, i4 == LShR(i1, 48), i5 == i4 << 16)
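As a quick concrete sanity check (pure Python, helper names mine, separate from the Z3 tooling), we can interpret the example trace on arbitrary 64-bit values and see why the two arguments of the jump are always equal:

```python
# A small concrete interpretation of the example trace, assuming 64-bit
# unsigned semantics. This is only to illustrate why the two jump
# arguments are equal; it is not part of the Z3-based tooling.
MASK = (1 << 64) - 1

def run_trace(i1):
    i2 = (i1 & MASK) >> 32   # uint_rshift(i1, 32): upper 32 bits of i1
    i3 = i2 & 65535          # int_and(i2, 65535): bits 32..47 of i1
    i4 = (i1 & MASK) >> 48   # uint_rshift(i1, 48): bits 48..63 of i1
    i5 = (i4 << 16) & MASK   # int_lshift(i4, 16)
    i6 = i5 | i3             # int_or(i5, i3): reassembles exactly i2
    return i6, i2
```

The trace splits the upper 32 bits of i1 into two halfwords and then puts them back together, so i6 always equals i2.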
Usually we won't ask for the formula of the whole trace at once. Instead we go through the trace operation by operation and try to find inefficiencies in the current one we are looking at. Roughly like this (pseudo)code:
def newz3var(name):
    return z3.BitVec(name, INTEGER_WIDTH)

def find_inefficiencies(trace):
    solver = z3.Solver()
    var_to_z3var = {}
    for input_argument in trace.inputargs:
        var_to_z3var[input_argument] = newz3var(input_argument)
    for op in trace:
        var_to_z3var[op] = z3resultvar = newz3var(op.resultvarname)
        arg0 = op.args[0]
        z3arg0 = var_to_z3var[arg0]
        if len(op.args) == 2:
            arg1 = op.args[1]
            z3arg1 = var_to_z3var[arg1]
        else:
            z3arg1 = None
        res, valid_if = z3_expression(op.name, z3arg0, z3arg1)
        # checking for inefficiencies, see the next sections
        ...
        if ...:
            return "inefficient", op
        # not inefficient, assert op into the solver and continue with the next op
        solver.add(z3resultvar == res)
    return None # no inefficiency found
Identifying constant booleans with Z3
To get started finding inefficiencies in a trace, we can first focus on boolean variables. For every operation in the trace that returns a bool we can ask Z3 to prove that this variable must be always True or always False. Most of the time, neither of these proofs will succeed. But if Z3 manages to prove one of them, we have found an inefficiency: instead of computing the boolean result (e.g. by executing a comparison) the JIT's optimizer could have replaced the operation with the corresponding boolean constant.
Here's an example of an inefficiency found that way: if x < y
and y < z
are
both true, PyPy's JIT could conclude that x < z
must also
be true. However, currently the JIT cannot make that conclusion because it
only reasons about the concrete ranges (lower and upper bounds) for every
integer variable, but it has no way to remember anything about relationships
between different variables. This kind of reasoning would quite often be useful
to remove list/string bounds checks. Here's a talk about how LLVM does
this (but it might be
too heavyweight for a JIT setting).
Here are some more examples found that way:

- x - 1 == x is always False
- x - (x == -1) == -1 is always False. The pattern x - (x == -1) happens a lot in PyPy's hash computations: to be compatible with the CPython hashes we need to make sure that no object's hash is -1 (CPython uses -1 as an error value on the C level).
Here's pseudocode for how to implement checking boolean operations for inefficiencies:
def find_inefficiencies(trace):
    ...
    for op in trace:
        ...
        res, valid_if = z3_expression(op.name, z3arg0, z3arg1)
        # check for boolean constant result
        if op.has_boolean_result():
            if prove(solver, res == 0):
                return "inefficient", op, 0
            if prove(solver, res == 1):
                return "inefficient", op, 1
        # checking for other inefficiencies, see the next sections
        ...
        # not inefficient, add op to the solver and continue with the next op
        solver.add(z3resultvar == res)
    return None # no inefficiency found
Identifying redundant operations
A more interesting class of redundancy is to try to find two operations in a trace that compute the same result. We can do that by asking Z3 to prove, for each pair of different operations in the trace, that their results are always the same. If a previous operation returns the same result, the JIT could have reused that result instead of recomputing it, saving time. Doing this search for equivalent operations with Z3 is quadratic in the number of operations, but since traces have a maximum length it is not too bad in practice.
This is the real workhorse of my script so far, it's what finds most of the inefficiencies. Here's a few examples:
- The very first and super useful example the script found is int_eq(b, 1) == b if b is known to be a boolean (i.e. an integer 0 or 1). I have already implemented this optimization in the JIT.
- Similarly, int_and(b, 1) == b for booleans.
- (x << 4) & -0xf == x << 4
- (((x >> 63) << 1) << 2) >> 3 == x >> 63. In general the JIT is quite bad at optimizing repeated shifts (the infrastructure for doing better with that is already in place, so this will be a relatively easy fix).
- (x & 0xffffffff) | ((x >> 32) << 32) == x. Having the JIT optimize this would maybe require first recognizing that (x >> 32) << 32 can be expressed as a mask: (x & 0xffffffff00000000), and then using (x & c1) | (x & c2) == x & (c1 | c2).
- A commonly occurring pattern is variations of this one: ((x & 1345) ^ 2048) - 2048 == x & 1345 (with different constants, of course). xor is add without carry, and x & 1345 does not have the bit 2048 set. Therefore the ^ 2048 is equivalent to + 2048, which the - 2048 cancels. More generally, if a & b == 0, then a + b == a | b == a ^ b. I don't understand at all why this appears so often in the traces, but I see variations of it a lot. LLVM can optimize this, but GCC can't, thanks to Andrew Pinski for filing the bug!
And here's some implementation pseudocode again:
def find_inefficiencies(trace):
    ...
    for op in trace:
        ...
        res, valid_if = z3_expression(op.name, z3arg0, z3arg1)
        # check for boolean constant result
        ...
        # searching for redundant operations
        for previous_op in trace:
            if previous_op is op:
                break # done, reached the current op
            previous_op_z3var = var_to_z3var[previous_op]
            if prove(solver, previous_op_z3var == res):
                return "inefficient", op, previous_op
        ...
        # more code here later
        ...
        # not inefficient, add op to the solver and continue with the next op
        solver.add(z3resultvar == res)
    return None # no inefficiency found
Synthesizing more complicated constants with existsforall
To find out whether some integer operations always return a constant result, we
can't simply use the same trick as for those operations that return boolean
results, because enumerating 2⁶⁴ possible constants and checking them all
would take too long. Like in the last post, we can use z3.ForAll
to find out
whether Z3 can synthesize a constant for the result of an operation for us.
If such a constant exists, the JIT could have removed the operation,
and replaced it with the constant that Z3 provides.
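To get an intuition for the shape of this exists-forall query, here is a brute-force stand-in on a toy 4-bit domain (illustrative only; the real query uses z3.ForAll instead of enumeration, which is what makes it feasible for 64-bit words):

```python
# Brute-force stand-in for the exists-forall query, on a toy 4-bit domain:
# does there exist a constant c such that op(x) == c for all x? The real
# script asks Z3 this with z3.ForAll instead of enumerating values.
def synthesize_constant(op, width=4):
    values = range(1 << width)
    for c in values:            # "exists c"
        if all(op(x) == c for x in values):  # "for all x"
            return c
    return None
```

For example, (x ^ 1) ^ x always evaluates to 1, so a constant is found for it, while the identity function has no such constant.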
Here are a few examples of inefficiencies found this way:

- (x ^ 1) ^ x == 1 (or, more generally: (x ^ y) ^ x == y)
- if x | y == 0, it follows that x == 0 and y == 0
- if x != MAXINT, then x + 1 > x
Implementing this is actually slightly annoying. The solver.add calls for non-inefficient ops add assertions to the solver, which are now confusing the z3.ForAll query. We could remove all assertions from the solver, then do the ForAll query, then add the assertions back. What I ended up doing instead was instantiating a second solver object that I'm using for the ForAll queries, which remains empty the whole time.
def find_inefficiencies(trace):
    solver = z3.Solver()
    empty_solver = z3.Solver()
    var_to_z3var = {}
    ...
    for op in trace:
        ...
        res, valid_if = z3_expression(op.name, z3arg0, z3arg1)
        # check for boolean constant result
        ...
        # searching for redundant operations
        ...
        # checking for constant results
        constvar = z3.BitVec('find_const', INTEGER_WIDTH)
        condition = z3.ForAll(
            list(var_to_z3var.values()),
            z3.Implies(
                z3.And(*solver.assertions()),
                res == constvar
            )
        )
        if empty_solver.check(condition) == z3.sat:
            model = empty_solver.model()
            const = model[constvar].as_signed_long()
            return "inefficient", op, const
        # not inefficient, add op to the solver and continue with the next op
        solver.add(z3resultvar == res)
    return None # no inefficiency found
Minimization
Analyzing an inefficiency by hand in the context of a larger trace is quite tedious. Therefore I've implemented a (super inefficient) script to try to make the examples smaller. Here's how that works:
- First throw out all the operations that occur after the inefficient operation in the trace.
- Then we remove all "dead" operations, i.e. operations whose results are not used (all the operations that we can analyze with Z3 are side-effect-free).
- Now we try to remove every guard in the trace one by one and check afterwards whether the resulting trace still has an inefficiency.
- We also try to replace every single operation with a new argument to the trace, to see whether the inefficiency is still present.
The minimization process is sort of inefficient and I should probably be using shrinkray or C-Reduce instead. However, it seems to work well in practice and the runtime isn't too bad.
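The removal steps above can be sketched as a greedy one-at-a-time reducer (an illustrative skeleton only; the real script works on trace operations and re-runs the Z3 inefficiency check as the interestingness test):

```python
# A greedy one-at-a-time reducer in the spirit of the minimization steps
# above. Illustrative skeleton only: the real script operates on trace
# operations and re-checks the inefficiency with Z3 after each removal.
def minimize(items, still_interesting):
    # repeatedly try dropping each element; keep the smaller list whenever
    # the property of interest is preserved
    changed = True
    while changed:
        changed = False
        for i in range(len(items)):
            candidate = items[:i] + items[i + 1:]
            if still_interesting(candidate):
                items = candidate
                changed = True
                break
    return items
```

This restarts from the front after every successful removal, which is simple but quadratic; tools like C-Reduce and shrinkray are much smarter about the search order.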
Results
So far I am using the JIT traces of three programs: 1) booting Linux on the Pydrofoil RISC-V emulator, 2) booting Linux on the Pydrofoil ARM emulator, and 3) running the PyPy bootstrap process on top of PyPy.
I picked these programs because most Python programs don't contain interesting amounts of integer operations, and the traces of the emulators contain a lot of them. I also used the bootstrap process because I still wanted to try a big Python program and personally care about the runtime of this program a lot.
The script identifies 94 inefficiencies in the traces; a lot of them come from repeating patterns. My next steps will be to manually inspect them all, categorize them, and implement easy optimizations identified that way. I also want a way to sort the examples by execution count in the benchmarks, to get a feeling for which of them are most important.
I didn't investigate the full set of Python benchmarks that PyPy uses yet, because I don't expect them to contain interesting amounts of integer operations, but maybe I am wrong about that? Will have to try eventually.
Conclusion
This was again much easier to do than I would have expected! Given that I had the translation of trace ops to Z3 already in place, it was a matter of about a day of programming to use this infrastructure to find the first problems and minimize them.
Reusing the results of existing operations or replacing operations by constants can be seen as "zero-instruction superoptimization". I'll probably be rather busy for a while adding the missing optimizations identified by my simple script. But later extensions to actually synthesize one or several operations in the attempt to optimize the traces more and find more opportunities should be possible.
Finding inefficiencies in traces with Z3 is significantly less annoying and also less error-prone than just manually inspecting traces and trying to spot optimization opportunities.
Random Notes and Sources
Again, John's blog posts:
- Let's Work on an LLVM Superoptimizer
- Early Superoptimizer Results
- A Few Synthesizing Superoptimizer Results
- Synthesizing Constants
and papers:
I remembered recently that I had seen the approach of optimizing the traces of a tracing JIT with Z3 a long time ago, as part of the (now long dead, I think) SPUR project. There's a workshop paper from 2010 about this. SPUR was trying to use Z3 built into the actual JIT (as opposed to using Z3 only to find places where the regular optimizers could be improved). In addition to bitvectors, SPUR also used the Z3 support for arrays to model the C# heap and remove redundant stores. This is still another future extension for all the Z3 work I've been doing in the context of the PyPy JIT.
Finding Simple Rewrite Rules for the JIT with Z3
In June I was at the PLDI conference in Copenhagen to present a paper I co-authored with Max Bernstein. I also finally met John Regehr, who I'd been talking to on social media for ages but had never met. John has been working on compiler correctness and better techniques for building compilers and optimizers for a very long time. The blog post Finding JIT Optimizer Bugs using SMT Solvers and Fuzzing was heavily inspired by this work. We talked a lot about his and his group's work on using Z3 for superoptimization and for finding missing optimizations. I have applied some of the things John told me about to the traces of PyPy's JIT, and wanted to blog about that. However, my draft felt quite hard to understand. Therefore I have now written this current post, to at least try to provide a somewhat gentler on-ramp to the topic.
In this post we will use the Python API to Z3 to find local peephole rewrite rules for the operations in the intermediate representation of PyPy's tracing JIT. The code for this is simple enough that we can go through all of it.
The PyPy JIT produces traces of machine level instructions, which are optimized and then turned into machine code. The optimizer uses a number of approaches to make the traces more efficient. For integer operations it applies a number of arithmetic simplification rules, for example int_add(x, 0) -> x. When implementing these rules in the JIT there are two problems: How do we know that the rules are correct? And how do we know that we haven't forgotten any rules? We'll try to answer both of these, but the first one in particular.
We'll be using Z3, a satisfiability modulo theories (SMT) solver which has good bitvector support and, most importantly, an excellent Python API. We can use the solver to reason about bitvectors, which are how we will model machine integers.
To find rewrite rules, we will consider the binary operations (i.e. those
taking two arguments) in PyPy traces that take and produce integers. The
completely general form op(x, y)
is not simplifiable on its own. But if
either x == y
or if one of the arguments is a constant, we can potentially simplify the
operation into a simpler form. The results are either the variable x
, or a
(potentially different) constant. We'll ignore constantfolding where both
arguments of the binary operation are constants. The possible results for a
simplifiable binary operation are the variable x
or another constant. This
leaves the following patterns as possibilities:
op(x, x) == x
op(x, x) == c1
op(x, c1) == x
op(c1, x) == x
op(x, c1) == c2
op(c1, x) == c2
Our approach will be to take every single supported binary integer operation,
instantiate all of these patterns, and try to ask Z3 whether the resulting
simplification is valid for all values of x
.
Quick intro to the Z3 Python API
Here's a terminal session showing the use of the Z3 Python API:
>>>> import z3
>>>> # construct a Z3 bitvector variable of width 8, with name x:
>>>> x = z3.BitVec('x', 8)
>>>> # construct a more complicated formula by using operator overloading:
>>>> x + x
x + x
>>>> x + 1
x + 1
Z3 checks the "satisfiability" of a formula. This means that it tries to find an example set of concrete values for the variables that occur in a formula, such that the formula becomes true. Examples:
>>>> solver = z3.Solver()
>>>> solver.check(x * x == 3)
unsat
>>>> # meaning no x fulfils this property
>>>>
>>>> solver.check(x * x == 9)
sat
>>>> model = solver.model()
>>>> model
[x = 253]
>>>> model[x].as_signed_long()
-3
>>>> # 253 is the same as -3 in two's complement arithmetic with 8 bits
In order to use Z3 to prove something, we can ask Z3 to find counterexamples for the statement, meaning concrete values that would make the negation of the statement true:
>>>> solver.check(z3.Not(x ^ -1 == ~x))
unsat
The result unsat means that we just proved that x ^ -1 == ~x is true for all x, because there is no value for x that makes not (x ^ -1 == ~x) true (this works because -1 has all the bits set).
If we try to prove something incorrect in this way, the following happens:
>>>> solver.check(z3.Not(x ^ -1 == x))
sat
sat shows that x ^ -1 == x is (unsurprisingly) not always true, and we can ask for a counterexample:
>>>> solver.model()
[x = 0]
This way of proving things works because the check calls try to solve an (implicit) "exists" quantifier over all the Z3 variables used in the formula. check will either return z3.unsat, which means that no concrete values make the formula true; or z3.sat, which means that you can get some concrete values that make the formula true by calling solver.model().
In math terms we prove things using check via de Morgan's rule for quantifiers:
$$ \lnot \exists x: \lnot f(x) \implies \forall x: f(x) $$
Now that we've seen the basics of using the Z3 API on a few small examples, we'll use it in a bigger program.
Encoding the integer operations of RPython's JIT into Z3 formulas
Now we'll use the API to reason about the integer operations of the PyPy JIT intermediate representation (IR). The binary integer operations are:
opnames2 = [
    "int_add", "int_sub", "int_mul",
    "int_and", "int_or", "int_xor",
    "int_eq", "int_ne",
    "int_lt", "int_le", "int_gt", "int_ge",
    "uint_lt", "uint_le", "uint_gt", "uint_ge",
    "int_lshift", "int_rshift", "uint_rshift",
    "uint_mul_high", "int_pydiv", "int_pymod",
]
There's not much special about the integer operations. Like in LLVM, most of them are signedness-independent: int_add, int_sub, int_mul, ... work correctly for unsigned integers but also for two's-complement signed integers. Exceptions to that are order comparisons like int_lt etc., for which we have unsigned variants uint_lt etc. All operations that produce a boolean result return a full-width integer 0 or 1 (the PyPy JIT supports only word-sized integers in its intermediate representation).
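To see what signedness-independence means concretely, here is a small exhaustive check on 8-bit words (illustrative, not PyPy code) showing that one wrapping add is simultaneously correct for the unsigned and the signed interpretation of its operands:

```python
# Illustration on 8-bit words (not PyPy code): a single wrapping add
# produces result bits that are correct under both the unsigned and the
# two's-complement signed reading of its operands.
WIDTH = 8
MOD = 1 << WIDTH

def to_signed(u):
    # reinterpret an unsigned word as a two's-complement signed value
    return u - MOD if u >= MOD // 2 else u

def check_add_signedness_independent():
    for a in range(MOD):
        for b in range(MOD):
            bits = (a + b) % MOD  # the wrapping add, on the raw bits
            # unsigned reading: bits == (a + b) mod 2**WIDTH by construction
            # signed reading: bits must also equal the wrapped signed sum
            signed_sum = to_signed(a) + to_signed(b)
            assert to_signed(bits) == (signed_sum + MOD // 2) % MOD - MOD // 2
    return True
```

The same argument works for int_sub and int_mul; the comparisons are where the two readings genuinely disagree, hence the separate uint_ variants.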
In order to reason about the IR operations, some ground work:
import z3

INTEGER_WIDTH = 64
solver = z3.Solver()
solver.set("timeout", 10000) # milliseconds, i.e. 10s
xvar = z3.BitVec('x', INTEGER_WIDTH)
constvar = z3.BitVec('const', INTEGER_WIDTH)
constvar2 = z3.BitVec('const2', INTEGER_WIDTH)
TRUEBV = z3.BitVecVal(1, INTEGER_WIDTH)
FALSEBV = z3.BitVecVal(0, INTEGER_WIDTH)
And here's the function to turn an integer IR operation of PyPy's JIT into Z3 formulas:
def z3_expression(opname, arg0, arg1=None):
    """ computes a tuple of (result, valid_if) of Z3 formulas. `result` is
    the formula representing the result of the operation, given argument
    formulas arg0 and arg1. `valid_if` is a precondition that must be true
    for the result to be meaningful. """
    result = None
    valid_if = True # the precondition is mostly True, with few exceptions
    if opname == "int_add":
        result = arg0 + arg1
    elif opname == "int_sub":
        result = arg0 - arg1
    elif opname == "int_mul":
        result = arg0 * arg1
    elif opname == "int_and":
        result = arg0 & arg1
    elif opname == "int_or":
        result = arg0 | arg1
    elif opname == "int_xor":
        result = arg0 ^ arg1
    elif opname == "int_eq":
        result = cond(arg0 == arg1)
    elif opname == "int_ne":
        result = cond(arg0 != arg1)
    elif opname == "int_lt":
        result = cond(arg0 < arg1)
    elif opname == "int_le":
        result = cond(arg0 <= arg1)
    elif opname == "int_gt":
        result = cond(arg0 > arg1)
    elif opname == "int_ge":
        result = cond(arg0 >= arg1)
    elif opname == "uint_lt":
        result = cond(z3.ULT(arg0, arg1))
    elif opname == "uint_le":
        result = cond(z3.ULE(arg0, arg1))
    elif opname == "uint_gt":
        result = cond(z3.UGT(arg0, arg1))
    elif opname == "uint_ge":
        result = cond(z3.UGE(arg0, arg1))
    elif opname == "int_lshift":
        result = arg0 << arg1
        valid_if = z3.And(arg1 >= 0, arg1 < INTEGER_WIDTH)
    elif opname == "int_rshift":
        result = arg0 >> arg1
        valid_if = z3.And(arg1 >= 0, arg1 < INTEGER_WIDTH)
    elif opname == "uint_rshift":
        result = z3.LShR(arg0, arg1)
        valid_if = z3.And(arg1 >= 0, arg1 < INTEGER_WIDTH)
    elif opname == "uint_mul_high":
        # zero-extend args to 2*INTEGER_WIDTH bit, then multiply and extract
        # highest INTEGER_WIDTH bits
        zarg0 = z3.ZeroExt(INTEGER_WIDTH, arg0)
        zarg1 = z3.ZeroExt(INTEGER_WIDTH, arg1)
        result = z3.Extract(INTEGER_WIDTH * 2 - 1, INTEGER_WIDTH, zarg0 * zarg1)
    elif opname == "int_pydiv":
        valid_if = arg1 != 0
        r = arg0 / arg1
        psubx = r * arg1 - arg0
        result = r + (z3.If(arg1 < 0, psubx, -psubx) >> (INTEGER_WIDTH - 1))
    elif opname == "int_pymod":
        valid_if = arg1 != 0
        r = arg0 % arg1
        result = r + (arg1 & z3.If(arg1 < 0, -r, r) >> (INTEGER_WIDTH - 1))
    elif opname == "int_is_true":
        result = cond(arg0 != FALSEBV)
    elif opname == "int_is_zero":
        result = cond(arg0 == FALSEBV)
    elif opname == "int_neg":
        result = -arg0
    elif opname == "int_invert":
        result = ~arg0
    else:
        assert 0, "unknown operation " + opname
    return result, valid_if

def cond(z3expr):
    """ helper function to turn a Z3 boolean result z3expr into a 1 or 0
    bitvector, using z3.If """
    return z3.If(z3expr, TRUEBV, FALSEBV)
We map the semantics of a PyPy JIT operation to Z3 with the z3_expression function. It takes the name of a JIT operation and its two (or one) arguments and turns them into a pair of Z3 formulas, result and valid_if. The resulting formulas are constructed with the operator overloading of Z3 variables/formulas.
The first element result
of the result of z3_expression
represents the result
of performing the operation. valid_if
is a bool that represents a condition that
needs to be True
in order for the result of the operation to be defined. E.g.
int_pydiv(a, b)
is only valid if b != 0
. Most operations are always valid,
so they return True
as that condition (we'll ignore valid_if
for a bit, but it
will become more relevant further down in the post).
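The int_pydiv encoding above converts C-style truncating division into Python's floor division by adding a correction term derived from a sign bit. Here is a pure-Python re-creation of that correction (helper names are mine) to see that it matches Python's // operator:

```python
# Pure-Python re-creation of the int_pydiv correction above (helper names
# are mine): it turns C-style truncating division into Python-style floor
# division. For the small values used here, Python's `>> (width - 1)` on
# unbounded ints behaves like the arithmetic shift in the bitvector formula.
def trunc_div(a, b):
    # C-style division: round the quotient towards zero
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def pydiv_via_correction(a, b, width=64):
    r = trunc_div(a, b)
    psubx = r * b - a
    # shifting a negative value right by width-1 yields -1, otherwise 0,
    # so the correction is -1 exactly when floor and truncation disagree
    correction = (psubx if b < 0 else -psubx) >> (width - 1)
    return r + correction
```

Floor and truncating division only differ when the operands have opposite signs and the division is inexact, which is exactly when the shift produces -1.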
We can define a helper function to prove things by finding counterexamples:
def prove(cond):
    """ Try to prove a condition cond by searching for counterexamples
    of its negation. """
    z3res = solver.check(z3.Not(cond))
    if z3res == z3.unsat:
        return True
    elif z3res == z3.unknown: # e.g. on timeout
        return False
    elif z3res == z3.sat:
        return False
    assert 0, "should be unreachable"
Finding rewrite rules
Now we can start finding our first rewrite rules, following the first pattern op(x, x) -> x. We do this by iterating over all the supported binary operation names, getting the Z3 expression for op(x, x) and then asking Z3 to prove op(x, x) == x.
for opname in opnames2:
    result, valid_if = z3_expression(opname, xvar, xvar)
    if prove(result == xvar):
        print(f"{opname}(x, x) -> x, {result}")
This yields the simplifications:
int_and(x, x) -> x
int_or(x, x) -> x
Synthesizing constants
Supporting the next patterns is harder: op(x, x) == c1
, op(x, c1) == x
, and
op(c1, x) == x
. We don't know which constants to pick to try to get Z3 to
prove the equality. We could iterate over common constants like 0
, 1
,
MAXINT
, etc, or even over all the 256 values for a bitvector of length 8.
However, we will instead ask Z3 to find the constants for us too.
This can be done by using quantifiers, in this case z3.ForAll
. The query we
pose to Z3 is "does there exist a constant c1
such that for all x
the
following is true: op(x, c1) == x
? Note that the constant c1
is not
necessarily unique, there could be many of them. We generate several matching constants, adding the constraint that the new constant must be different to the condition of the second and further queries.
We can express this in a helper function:
def find_constant(z3expr, number_of_results=5):
    condition = z3.ForAll(
        [xvar],
        z3expr
    )
    for i in range(number_of_results):
        checkres = solver.check(condition)
        if checkres == z3.sat:
            # if a solver check succeeds, we can ask for a model, which is
            # concrete values for the variable constvar
            model = solver.model()
            const = model[constvar].as_signed_long()
            yield const
            # make sure we don't generate the same constant again on the
            # next call
            condition = z3.And(constvar != const, condition)
        else:
            # no (more) constants found
            break
We can use this new function for the three mentioned patterns:
# try to find constants for op(x, x) == c
for opname in opnames2:
    result, valid_if = z3_expression(opname, xvar, xvar)
    for const in find_constant(result == constvar):
        print(f"{opname}(x, x) -> {const}")

# try to find constants for op(x, c) == x and op(c, x) == x
for opname in opnames2:
    result, valid_if = z3_expression(opname, xvar, constvar)
    for const in find_constant(result == xvar):
        print(f"{opname}(x, {const}) -> x")
    result, valid_if = z3_expression(opname, constvar, xvar)
    for const in find_constant(result == xvar):
        print(f"{opname}({const}, x) -> x")
# this code is not quite correct, we'll correct it later
Together this yields the following new simplifications:
# careful, these are not all correct!
int_sub(x, x) -> 0
int_xor(x, x) -> 0
int_eq(x, x) -> 1
int_ne(x, x) -> 0
int_lt(x, x) -> 0
int_le(x, x) -> 1
int_gt(x, x) -> 0
int_ge(x, x) -> 1
uint_lt(x, x) -> 0
uint_le(x, x) -> 1
uint_gt(x, x) -> 0
uint_ge(x, x) -> 1
uint_rshift(x, x) -> 0
int_pymod(x, x) -> 0
int_add(x, 0) -> x
int_add(0, x) -> x
int_sub(x, 0) -> x
int_mul(x, 1) -> x
int_mul(1, x) -> x
int_and(x, -1) -> x
int_and(-1, x) -> x
int_or(x, 0) -> x
int_or(0, x) -> x
int_xor(x, 0) -> x
int_xor(0, x) -> x
int_lshift(x, 0) -> x
int_rshift(x, 0) -> x
uint_rshift(x, 0) -> x
int_pydiv(x, 1) -> x
int_pymod(x, 0) -> x
Most of these look good at first glance, but the last one reveals a problem: we've been ignoring the valid_if expression up to now. We can stop doing that by changing the code like this, which adds z3.And(valid_if, ...) to the arguments of the calls to find_constant:
# try to find constants for op(x, x) == c
for opname in opnames2:
    result, valid_if = z3_expression(opname, xvar, xvar)
    for const in find_constant(z3.And(valid_if, result == constvar)):
        print(f"{opname}(x, x) -> {const}")

# try to find constants for op(x, c) == x and op(c, x) == x
for opname in opnames2:
    result, valid_if = z3_expression(opname, xvar, constvar)
    for const in find_constant(z3.And(result == xvar, valid_if)):
        print(f"{opname}(x, {const}) -> x")
    result, valid_if = z3_expression(opname, constvar, xvar)
    for const in find_constant(z3.And(result == xvar, valid_if)):
        print(f"{opname}({const}, x) -> x")
And we get this list instead:
int_sub(x, x) -> 0
int_xor(x, x) -> 0
int_eq(x, x) -> 1
int_ne(x, x) -> 0
int_lt(x, x) -> 0
int_le(x, x) -> 1
int_gt(x, x) -> 0
int_ge(x, x) -> 1
uint_lt(x, x) -> 0
uint_le(x, x) -> 1
uint_gt(x, x) -> 0
uint_ge(x, x) -> 1
int_add(x, 0) -> x
int_add(0, x) -> x
int_sub(x, 0) -> x
int_mul(x, 1) -> x
int_mul(1, x) -> x
int_and(x, -1) -> x
int_and(-1, x) -> x
int_or(x, 0) -> x
int_or(0, x) -> x
int_xor(x, 0) -> x
int_xor(0, x) -> x
int_lshift(x, 0) -> x
int_rshift(x, 0) -> x
uint_rshift(x, 0) -> x
int_pydiv(x, 1) -> x
Synthesizing two constants
For the patterns op(x, c1) == c2
and op(c1, x) == c2
we need to synthesize
two constants. We can again write a helper method for that:
def find_2consts(z3expr, number_of_results=5):
    condition = z3.ForAll(
        [xvar],
        z3expr
    )
    for i in range(number_of_results):
        checkres = solver.check(condition)
        if checkres == z3.sat:
            model = solver.model()
            const = model[constvar].as_signed_long()
            const2 = model[constvar2].as_signed_long()
            yield const, const2
            condition = z3.And(z3.Or(constvar != const, constvar2 != const2), condition)
        else:
            return
And then use it like this:
for opname in opnames2:
    # try to find constants c1, c2 such that op(c1, x) -> c2
    result, valid_if = z3_expression(opname, constvar, xvar)
    consts = find_2consts(z3.And(valid_if, result == constvar2))
    for const, const2 in consts:
        print(f"{opname}({const}, x) -> {const2}")
    # try to find constants c1, c2 such that op(x, c1) -> c2
    result, valid_if = z3_expression(opname, xvar, constvar)
    consts = find_2consts(z3.And(valid_if, result == constvar2))
    for const, const2 in consts:
        print("%s(x, %s) -> %s" % (opname, const, const2))
Which yields some straightforward simplifications:
int_mul(0, x) -> 0
int_mul(x, 0) -> 0
int_and(0, x) -> 0
int_and(x, 0) -> 0
uint_lt(x, 0) -> 0
uint_le(0, x) -> 1
uint_gt(0, x) -> 0
uint_ge(x, 0) -> 1
int_lshift(0, x) -> 0
int_rshift(0, x) -> 0
uint_rshift(0, x) -> 0
uint_mul_high(0, x) -> 0
uint_mul_high(1, x) -> 0
uint_mul_high(x, 0) -> 0
uint_mul_high(x, 1) -> 0
int_pymod(x, 1) -> 0
int_pymod(x, -1) -> 0
A few require a bit more thinking:
int_or(-1, x) -> -1
int_or(x, -1) -> -1
These are true because in two's complement, -1 has all bits set.
The following ones require recognizing that -9223372036854775808 == -2**63 is the most negative signed 64-bit integer, and 9223372036854775807 == 2**63 - 1 is the most positive one:
int_lt(9223372036854775807, x) -> 0
int_lt(x, -9223372036854775808) -> 0
int_le(-9223372036854775808, x) -> 1
int_le(x, 9223372036854775807) -> 1
int_gt(-9223372036854775808, x) -> 0
int_gt(x, 9223372036854775807) -> 0
int_ge(9223372036854775807, x) -> 1
int_ge(x, -9223372036854775808) -> 1
The following ones are true because the bit pattern for -1 is the largest unsigned number:
uint_lt(-1, x) -> 0
uint_le(x, -1) -> 1
uint_gt(x, -1) -> 0
uint_ge(-1, x) -> 1
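These four rules can be double-checked without Z3 by exhaustively enumerating a small bit width; here's a sanity check at 8 bits instead of 64 (the as_unsigned helper is defined here for illustration, it is not part of the post's code):

```python
def as_unsigned(x, width=8):
    # reinterpret a two's-complement bit pattern as an unsigned number
    return x & ((1 << width) - 1)

# the bit pattern of -1 is all ones, i.e. the largest unsigned value:
assert as_unsigned(-1) == 255

# exhaustively check the four rules for all 8-bit values of x:
for x in range(-128, 128):
    assert not (as_unsigned(-1) < as_unsigned(x))   # uint_lt(-1, x) -> 0
    assert as_unsigned(x) <= as_unsigned(-1)        # uint_le(x, -1) -> 1
    assert not (as_unsigned(x) > as_unsigned(-1))   # uint_gt(x, -1) -> 0
    assert as_unsigned(-1) >= as_unsigned(x)        # uint_ge(-1, x) -> 1
```

Of course an 8-bit check is no proof for the 64-bit case, which is exactly why the post uses Z3; but it is a quick way to build intuition for the unsigned rules.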
Strength Reductions
All the patterns so far only had a variable or a constant on the target of the rewrite. We can also use the machinery to do strength reductions, where we generate a single-argument operation op1(x) for input operations op(x, c1) or op(c1, x). To achieve this, we try all combinations of binary and unary operations. (We won't consider strength reductions where a binary operation gets turned into a "cheaper" other binary operation here.)
opnames1 = [
    "int_is_true",
    "int_is_zero",
    "int_neg",
    "int_invert",
]

for opname in opnames2:
    for opname1 in opnames1:
        result, valid_if = z3_expression(opname, xvar, constvar)
        # try to find a constant op(x, c) == g(x)
        result1, valid_if1 = z3_expression(opname1, xvar)
        consts = find_constant(z3.And(valid_if, valid_if1, result == result1))
        for const in consts:
            print(f"{opname}(x, {const}) -> {opname1}(x)")
        # try to find a constant op(c, x) == g(x)
        result, valid_if = z3_expression(opname, constvar, xvar)
        result1, valid_if1 = z3_expression(opname1, xvar)
        consts = find_constant(z3.And(valid_if, valid_if1, result == result1))
        for const in consts:
            print(f"{opname}({const}, x) -> {opname1}(x)")
Which yields the following new simplifications:
int_sub(0, x) -> int_neg(x)
int_sub(-1, x) -> int_invert(x)
int_mul(x, -1) -> int_neg(x)
int_mul(-1, x) -> int_neg(x)
int_xor(x, -1) -> int_invert(x)
int_xor(-1, x) -> int_invert(x)
int_eq(x, 0) -> int_is_zero(x)
int_eq(0, x) -> int_is_zero(x)
int_ne(x, 0) -> int_is_true(x)
int_ne(0, x) -> int_is_true(x)
uint_lt(0, x) -> int_is_true(x)
uint_lt(x, 1) -> int_is_zero(x)
uint_le(1, x) -> int_is_true(x)
uint_le(x, 0) -> int_is_zero(x)
uint_gt(x, 0) -> int_is_true(x)
uint_gt(1, x) -> int_is_zero(x)
uint_ge(x, 1) -> int_is_true(x)
uint_ge(0, x) -> int_is_zero(x)
int_pydiv(x, -1) -> int_neg(x)
Conclusions
With relatively little code we managed to generate a whole lot of local simplifications for integer operations in the IR of PyPy's JIT. The rules discovered that way are "simple", in the sense that they only require looking at a single instruction, and not at where the arguments of that instruction came from. They also don't require any knowledge about the properties of the arguments of the instructions (e.g. that they are positive).
The rewrites in this post have mostly been in PyPy's JIT already. But now we have mechanically confirmed that they are correct. I've also added the remaining useful-looking ones, in particular int_eq(x, 0) -> int_is_zero(x) etc.
If we wanted to scale this approach up, we would have to work much harder! There are a bunch of problems that come with generalizing the approach to looking at sequences of instructions:
- Combinatorial explosion: if we look at sequences of instructions, we very quickly get a combinatorial explosion and it becomes intractable to try all combinations.
- Finding non-minimal patterns: some complicated simplifications can be instances of simpler ones. For example, because int_add(x, 0) -> x, it's also true that int_add(int_sub(x, y), 0) -> int_sub(x, y). If we simply generate all possible sequences, we will find the latter simplification rule, which we would usually not care about.
- Unclear usefulness: if we simply generate all rewrites up to a certain number of instructions, we will get a lot of patterns that are useless in the sense that they typically aren't found in realistic programs. It would be much better to somehow focus on the patterns that real benchmarks are using.
In the next blog post I'll discuss an alternative approach to simply generating all possible sequences of instructions, that tries to address these problems. This works by analyzing the real traces of benchmarks and mining those for inefficiencies, which only shows problems that occur in actual programs.
Sources
I've been re-reading a lot of blog posts from John Regehr's blog:
- Let’s Work on an LLVM Superoptimizer
- Early Superoptimizer Results
- A Few Synthesizing Superoptimizer Results
- Synthesizing Constants
but also papers:
Another of my favorite blogs has been Philipp Zucker's blog in the last year or two, lots of excellent posts about/using Z3 on there.
Profiling PyPy using the Firefox profiler user interface
Introduction
If you ever wanted to profile your Python code on PyPy, you probably came across VMProf — a statistical profiler for PyPy.
VMProf's console output can already give some insights into where your code spends time, but it is far from showing all the information captured while profiling.
There have been some tools around to visualize VMProf's output. Unfortunately the vmprof.com user interface is no longer available and vmprof-server is not as easy to use; you may want to take a look at a local viewer or converter. Those tools can give you some general visualizations of your profile, but do not show any PyPy-related context like PyPy's log output (PyPyLog, which is produced when using the PYPYLOG environment variable to log JIT actions).
To bring all of those features together in one tool, you may take a look at the vmprof-firefox-converter.
Created in the context of my bachelor's thesis, the vmprof-firefox-converter is a tool for analyzing VMProf profiles with the Firefox profiler user interface. Instead of building a new user interface from scratch, this allows us to reuse the user interface work Mozilla put into the Firefox profiler. The Firefox profiler offers a timeline where you can zoom into profiles and work with different visualizations like a flame graph or a stack chart. To understand why there is time spent inside a function, you can revisit the source code and even dive into the intermediate representation of functions executed by PyPy's just-in-time compiler. Additionally, there is a visualization for PyPy's log output, to keep track of whether PyPy spent time inside the interpreter, JIT or GC throughout the profiling time.
Profiling word count
In this blog post, I want to show an example of how to use the vmprof-firefox-converter for a simple Python program. Based on Ben Hoyt's blog post Performance comparison: counting words in Python, Go, C++, C, AWK, Forth, and Rust, we will profile two Python versions of a word counter running on PyPy, one being a bit more optimized. For this, VMProf will be used, but instead of just going with the console output, we will use the Firefox profiler user interface.
At first, we are going to look at a simple way of counting words with collections.Counter. This will read one line from the standard input at a time and count the words with counter.update():
counts = collections.Counter()
for line in sys.stdin:
    words = line.lower().split()
    counts.update(words)
for word, count in counts.most_common():
    print(word, count)
To start profiling, simply execute:
pypy -m vmprofconvert -run simple.py < kjvbible_x10.txt
This will run the above code with vmprof, automatically capture and convert the results and finally open the Firefox profiler.
The input file is the King James Version of the Bible concatenated ten times.
To get started, we take a look at the call stack.
Here we see that most of the time is spent in native code (marked as blue), e.g., the counter.update() or split() C implementation.
Now let's proceed with the more optimized version.
This time we read 64 KB of data from the standard input at once and count the words with counter.update().
counts = collections.Counter()
remaining = ''
while True:
    chunk = remaining + sys.stdin.read(64*1024)
    if not chunk:
        break
    last_lf = chunk.rfind('\n')  # process to last LF character
    if last_lf == -1:
        remaining = ''
    else:
        remaining = chunk[last_lf+1:]
        chunk = chunk[:last_lf]
    counts.update(chunk.lower().split())
for word, count in counts.most_common():
    print(word, count)
As we did before, we are going to take a peek at the call stack.
Now there is more time spent in native code, caused by the larger chunks of text passed to counter.update().
This becomes even more clear by comparing the stack charts.
Here, in the unoptimized case, we only read in one line at each loop iteration. This results in small "spikes" in the stack chart.
But let's take an even closer look.
Zoomed in, we see the call stack alternating between _count_elements() and (unfortunately unsymbolized) native calls coming from reading and splitting the input text (e.g., decode()).
Let us now take a look at the optimized case.
And if we look closer at the same interval as before, we see some spikes, but slightly different.
Even though we do not want to compare the (amount of) milliseconds directly, we clearly see that the spikes are wider, i.e. the time spent in those function calls is longer.
You may already know where this comes from.
We read a 64 KB chunk of data from stdin and pass that to counter.update(), so both these tasks do more work and take longer.
Bigger chunks mean there is less alternating between reading and counting, so there is more time spent doing work than "doing" loop iterations.
Getting started
You can get the converter from GitHub.
Both VMProf and the vmprof-firefox-converter were created for profiling PyPy, but you can also use them with CPython.
This project is still somewhat experimental, so if you want to try it out, please let us know whether it worked for you.
PyPy v7.3.16 release
PyPy v7.3.16: release of Python 2.7, 3.9, and 3.10
The PyPy team is proud to release version 7.3.16 of PyPy.
This release includes security fixes from upstream CPython, and bugfixes to the garbage collector, described in a GC bug-hunt blog post.
The release includes three different interpreters:
- PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7 including the stdlib for CPython 2.7.18+ (the + is for backported security updates)
- PyPy3.9, which is an interpreter supporting the syntax and the features of Python 3.9, including the stdlib for CPython 3.9.19.
- PyPy3.10, which is an interpreter supporting the syntax and the features of Python 3.10, including the stdlib for CPython 3.10.14.
The interpreters are based on much the same codebase, thus the multiple release. This is a micro release; all APIs are compatible with the other 7.3 releases. It follows the 7.3.15 release on Jan 15, 2024.
We recommend updating. You can find links to download the v7.3.16 releases here:
We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work. If PyPy is helping you out, we would love to hear about it and encourage submissions to our blog via a pull request to https://github.com/pypy/pypy.org
We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: bug fixes, PyPy and RPython documentation improvements, or general help with making RPython's JIT even better.
If you are a Python library maintainer and use C-extensions, please consider making an HPy / CFFI / cppyy version of your library that would be performant on PyPy. In any case, both cibuildwheel and the multibuild system support building wheels for PyPy.
What is PyPy?
PyPy is a Python interpreter, a drop-in replacement for CPython. It's fast (PyPy and CPython 3.7.4 performance comparison) due to its integrated tracing JIT compiler.
We also welcome developers of other dynamic languages to see what RPython can do for them.
We provide binary builds for:
- x86 machines on most common operating systems (Linux 32/64 bits, Mac OS 64 bits, Windows 64 bits)
- 64-bit ARM machines running Linux (aarch64)
- Apple M1 arm64 machines (macos_arm64)
- s390x running Linux
PyPy supports Windows 32-bit, Linux PPC64 big- and little-endian, and Linux ARM 32-bit, but does not release binaries. Please reach out to us if you wish to sponsor binary releases for those platforms. Downstream packagers provide binary builds for debian, Fedora, conda, OpenBSD, FreeBSD, Gentoo, and more.
What else is new?
For more information about the 7.3.16 release, see the full changelog.
Please update, and continue to help us make PyPy better.
Cheers, The PyPy Team
Fixing a Bug in PyPy's Incremental GC
Introduction
Since last summer, I've been looking on and off into a weird and hard to reproduce crash bug in PyPy. It was manifesting only on CI, and it seemed to always happen in the AST rewriting phase of pytest, the symptoms being that PyPy would crash with a segfault. All my attempts to reproduce it locally failed, and my attempts to try to understand the problem by dumping the involved ASTs led nowhere.
A few weeks ago, we got two more bug reports, the last one by the authors of the nanobind binding generator, with the same symptoms: crash in AST rewriting, only on CI. I decided to make a more serious push to try to find the bug this time. Ultimately the problem turned out to be several bugs in PyPy's garbage collector (GC) that had been there since its inception in 2013. Understanding the situation turned out to be quite involved, additionally complicated by this being the first time that I was working on this particular aspect of PyPy's GC. Since the bug was so much work to find, I thought I'd write a blog post about it.
The blog post consists of three parts: first a chronological description of what I did to find the bug, a technical explanation of what goes wrong, some reflections on the bug (and then a bonus bug I also found in the process).
Finding the Bug
I started from the failing nanobind CI runs that ended with a segfault of the PyPy interpreter. This was only an intermittent problem, not every run was failing. When I tried to just run the test suite locally, I couldn't get it to fail. Therefore at first I tried to learn more about what was happening by looking on the CI runners.
Running on CI
I forked the nanobind repo and hacked the CI script in order to get it to use a PyPy build with full debug information and more assertions turned on. In order to increase the probability of seeing the crash I added an otherwise unused matrix variable to the CI script that just contained 32 parameters. This means every build is done 32 times (sorry Github for wasting your CPUs 😕). With that amount of repetition, I got at least one job of every build that was crashing.
Then I added the -X faulthandler option to the PyPy command, which will use the faulthandler module to try to print a Python stacktrace if the VM segfaults, to confirm that PyPy was indeed crashing in the AST rewriting phase of pytest, which pytest uses for nicer assertions.
I experimented with hacking our faulthandler implementation to also give me a C-level callstack, but that didn't work as well as I hoped. Then I tried to run gdb on CI to try to get it to print a C callstack at the crash point. You can get gdb to execute commands as if typed at the prompt with the -ex command-line option; I used something like this:
gdb -ex "set confirm off" -ex "set pagination off" -ex \
    "set debuginfod enabled off" -ex run -ex where -ex quit \
    --args <command> <arguments>
But unfortunately the crash never occurred when running in gdb.
Afterwards I tried the next best thing, which was configuring the CI runner to dump a core file and upload it as a build artifact, which worked. Looking at the cores locally only sort of worked, because I am running a different version of Ubuntu than the CI runners. So I used tmate to be able to log into the CI runner after a crash and interactively used gdb there. Unfortunately what I learned from that was that the bug was some kind of memory corruption, which is always incredibly unpleasant to debug. Basically the header word of a Python object had been corrupted somehow at the point of the crash, which means that its vtable wasn't usable any more.
(Sidenote: PyPy doesn't really use a vtable pointer, instead it uses half a word in the header for the vtable, and the other half for flags that the GC needs to keep track of the state of the object. Corrupting all this is still bad.)
Reproducing Locally
At that point it was clear that I had to push to reproduce the problem on my laptop, to allow me to work on the problem more directly and not to always have to go via the CI runner. Memory corruption bugs often have a lot of randomness (depending on which part of memory gets modified, things might crash or more likely just happily keep running). Therefore I decided to try to brute-force reproducing the crash by simply running the tests many many times. Since the crash happened in the AST rewriting phase of pytest, and that happens only if no pyc files of the bytecode-compiled rewritten ASTs exist, I made sure to delete them before every test run.
To repeat the test runs I used multitime, which is a simple program that runs a command repeatedly. It's meant for lightweight benchmarking purposes, but it also halts the execution of the command if that command exits with an error (and it sleeps a small random time between runs, which might help with randomizing the situation, maybe). Here's a demo:
(Max pointed out autoclave to me when reviewing this post, which is a more dedicated tool for this job.)
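If neither multitime nor autoclave is at hand, the essential "repeat until failure" loop is easy to approximate; here's a rough Python stand-in (purely illustrative, not the tooling actually used in the post):

```python
import random
import subprocess
import sys
import time

def run_until_failure(cmd, max_runs=100):
    """Run cmd repeatedly, stopping at the first non-zero exit status
    (roughly what halting-on-error with multitime does for us here).
    Returns the number of successful runs completed before the failure."""
    for i in range(max_runs):
        if subprocess.run(cmd).returncode != 0:
            return i
        # small random pause between runs, to shake up the timing a little
        time.sleep(random.uniform(0.0, 0.05))
    return max_runs

# a command that fails immediately stops the loop on the first run:
assert run_until_failure([sys.executable, "-c", "import sys; sys.exit(1)"]) == 0
```

For an intermittent crash, you would pass the pytest invocation as cmd and let the loop run until it catches a failing run.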
Thankfully, running the tests repeatedly eventually led to a crash, solving my "only happens on CI" problem. I then tried various variants to exclude possible sources of errors. The first source of errors to exclude in PyPy bugs is the just-in-time compiler, so I re-ran the tests with --jit off to see whether I could still get it to crash, and thankfully I eventually could (JIT bugs are often very annoying).
The next source of bugs to exclude were C-extensions. Since those were the tests of nanobind, a framework for creating C-extension modules, I was a bit worried that the bug might be in our emulation of CPython's C-API. But running PyPy with the -v option (which will print all the imports as they happen) confirmed that at the point of crash no C-extension had been imported yet.
Using rr
I still couldn't get the bug to happen in GDB, so the tool I tried next was rr, the "reverse debugger". rr can record the execution of a program and later replay it arbitrarily often. This gives you a time-traveling debugger that allows you to execute the program backwards in addition to forwards. Eventually I managed to get the crash to happen when running the tests with rr record --chaos (--chaos randomizes some decisions that rr takes, to try to increase the chance of reproducing bugs).
Using rr well is quite hard, and I'm not very good at it. The main approach I use with rr to debug memory corruption is to replay the crash, then set a watchpoint for the corrupted memory location, then use the command reverse-continue to find the place in the code that mutated the memory location. reverse-continue is like continue, except that it will execute the program backwards from the current point. Here's a little demo of this:
Doing this for my bug revealed that the object that was being corrupted was erroneously collected by the garbage collector. For some reason the GC had wrongly decided that the object was no longer reachable and therefore put the object into a freelist by writing a pointer to the next entry in the freelist into the first word of the object, overwriting the object's header. The next time the object was used things crashed.
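To illustrate why overwriting the first word with a free-list pointer is so destructive, here's a toy model of that mechanism (purely illustrative; the real GC works on raw memory, not Python lists):

```python
class FreeList:
    """Toy model: freed objects are chained through their first word."""
    def __init__(self):
        self.head = None

    def free(self, obj):
        # the pointer to the next free-list entry is written into the
        # object's first word, clobbering the header that lived there
        obj[0] = self.head
        self.head = obj

fl = FreeList()
obj = ["vtable-and-flags", "some payload"]  # slot 0 models the header word
fl.free(obj)  # the GC wrongly decided obj was dead
assert obj[0] != "vtable-and-flags"  # header gone: the next use of obj crashes
```

Any later attempt to dispatch through the clobbered header then reads a free-list pointer instead of type information, which is exactly the kind of crash observed.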
Sidequest: wrong GC assertions
At this point in the process, I got massively sidetracked. PyPy's GC has a number of debug modes that you can optionally turn on. Those slow down the program execution a lot, but they should in theory help to understand why the GC goes wrong. When I turned them on, I was getting a failing assertion really early in the test execution, complaining about an invariant violation in the GC logic. At first this made me very happy. I thought that this would help me fix the bug more quickly.
Extremely frustratingly, after two days of work I concluded that the assertion logic itself was wrong. I have fixed that in the meantime too, the details of that are in the bonus section at the end of the post.
Using GDB scripting to find the real bug
After that disaster I went back to the earlier rr recording without GC assertions and tried to understand in more detail why the GC decided to free an object that was still being referenced. To be able to do that I used the GDB Python scripting API to write some helper commands to understand the state of the GC heap (rr is an extension of GDB, so the GDB scripting API works in rr too).
The first (small) helper command I wrote with the GDB scripting API was a way to pretty-print the currently active GC flags of a random PyPy object, starting just from the pointer. The more complex command I wrote was an object tracer, which follows pointers to GC objects starting from a root object to explore the object graph. The object tracer isn't complete, it doesn't deal with all the complexities of PyPy's GC. But it was good enough to help me with my problem, and I found out that the corrupted object was stored in an array.
As an example, here's a function that uses the GDB API to walk one of the helper data structures of the GC, a stack of pointers:
def walk_addr_stack(obj):
    """ walk an instance of the AddressStack class (which is a linked list of
    arrays of 1019 pointers).

    the first of the arrays is only partially filled with used_in_last_chunk
    items, all the other chunks are full."""
    if obj.type.code == gdb.TYPE_CODE_PTR:
        obj = obj.dereference()
    used_in_last_chunk = lookup(obj, "used_in_last_chunk")
    chunk = lookup(obj, "inst_chunk").dereference()
    while 1:
        items = lookup(chunk, "items")
        for i in range(used_in_last_chunk):
            yield items[i]
        chunk = lookup(chunk, "next")
        if not chunk:
            break
        chunk = chunk.dereference()
        used_in_last_chunk = 1019
The full file of supporting code I wrote can be found in this gist. This is pretty rough throwaway code, however.
In the following recording I show a staged debugging session with some of the extra commands I wrote with the Python API. The details aren't important, I just wanted to give a bit of a flavor of what inspecting objects looks like:
The next step was to understand why the array content wasn't being correctly traced by the GC, which I eventually managed with some conditional breakpoints, more watchpoints, and using reverse-continue. It turned out to be a bug that occurs when the content of one array is memcopied into another array. The technical details of why the array wasn't traced correctly are described in detail in the next section.
Writing a unit test
To try to make sure I really understood the bug correctly, I then wrote a GC unit test that shows the problem. Like most of PyPy, our GC is written in RPython, a (somewhat strange) subset/dialect of Python2, which can be compiled to C code. However, since it is also valid Python2 code, it can be unit-tested on top of a Python2 implementation (which is one of the reasons why we keep maintaining PyPy2).
In the GC unit tests you have a lot of control over what order things happen in, e.g. how objects are allocated, when garbage collection phases happen, etc. After some trying I managed to write a test that crashes with the same kind of memory corruption that my original crash exhibited: an object that is still reachable via an array is collected by the GC. To give you a flavor of what this kind of test looks like, here's an (edited for clarity) version of the test I eventually managed to write:
def test_incrementality_bug_arraycopy(self):
    source = self.malloc(VAR, 8)  # first array
    # the stackroots list emulates the C stack
    self.stackroots.append(source)
    target = self.malloc(VAR, 8)  # second array
    self.stackroots.append(target)
    node = self.malloc(S)  # unrelated object, will be collected
    node.x = 5
    # store reference into source array, calling the write barrier
    self.writearray(source, 0, node)
    val = self.gc.collect_step()
    source = self.stackroots[0]  # reload arrays, they might have moved
    target = self.stackroots[1]
    # this GC step traces target
    val = self.gc.collect_step()

    # emulate what a memcopy of arrays does
    res = self.gc.writebarrier_before_copy(source, target, 0, 0, 2)
    assert res
    target[0] = source[0]  # copy two elements of the arrays
    target[1] = source[1]
    # now overwrite the reference to node in source
    self.writearray(source, 0, lltype.nullptr(S))
    # this GC step traces source
    self.gc.collect_step()
    # some more collection steps, crucially target isn't traced again
    # but node is deleted
    for i in range(3):
        self.gc.collect_step()
    # used to crash, node got collected
    assert target[0].x == 5
One of the good properties of testing our GC that way is that all the memory is emulated. The crash in the last line of the test isn't a segfault at all, instead you get a nice exception saying that you tried to access a freed chunk of memory and you can then debug this with a python2 debugger.
Fixing the Bug
With the unit test in hand, fixing the bug was relatively straightforward (the diff in its simplest form is anyway only a single line change). After this first version of my fix, I talked to Armin Rigo who helped me find a different case that was still wrong, in the same area of the code.
I also got help from the developers at PortaOne, who are using PyPy on their servers and had seen some mysterious PyPy crashes recently that looked related to the GC. They did test deployments of my fixes in their various staging environments to try to see whether stability improved for them. Unfortunately in the end it turned out that their crashes are an unrelated GC bug related to object pinning, which we haven't resolved yet.
Writing a GC fuzzer/property based test
Finding bugs in the GC is always extremely disconcerting, particularly since this one managed to hide for so long (more than ten years!). Therefore I wanted to use these bugs as motivation to try to find more problems in PyPy's GC. Given the ridiculous effectiveness of fuzzing, I used hypothesis to write a property-based test. Every test performs a sequence of randomly chosen steps from the following list:
- allocate an object
- read a random field from a random object
- write a random reference into a random object
- drop a random stack reference
- perform one GC step
- allocate an array
- read a random index from a random array
- write to an array
- memcopy between two arrays
This approach of doing a sequence of steps is pretty close to the stateful testing approach of hypothesis, but I just implemented it manually with the data strategy.
Every one of those steps is always performed on both the tested GC, and on some regular Python objects. The Python objects provide the "ground truth" of what the heap should look like, so we can compare the state of the GC objects with the state of the Python objects to find out whether the GC made a mistake.
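This ground-truth idea can be illustrated with a tiny self-contained differential fuzz loop; here plain random stands in for hypothesis, and a toy heap of Python lists stands in for the real GC (all names are illustrative, nothing here is PyPy's actual test code):

```python
import random

class RefHeap:
    """Ground truth: plain Python lists stand in for GC arrays."""
    def __init__(self):
        self.objects = []

    def alloc(self, size):
        self.objects.append([None] * size)
        return len(self.objects) - 1  # a handle is just an index

    def write(self, obj, idx, val):
        self.objects[obj][idx] = val

    def read(self, obj, idx):
        return self.objects[obj][idx]

def fuzz(heap_under_test, n_steps=1000, seed=0):
    """Apply the same random steps to the tested heap and to the
    ground-truth heap, comparing results after every read."""
    rng = random.Random(seed)
    ref = RefHeap()
    handles = []  # pairs of (reference handle, tested handle) plus size
    for _ in range(n_steps):
        if not handles or rng.random() < 0.3:
            size = rng.randrange(1, 8)
            handles.append((ref.alloc(size), heap_under_test.alloc(size), size))
        else:
            h_ref, h_test, size = rng.choice(handles)
            idx = rng.randrange(size)
            if rng.random() < 0.5:
                val = rng.randrange(100)
                ref.write(h_ref, idx, val)
                heap_under_test.write(h_test, idx, val)
            else:
                # any divergence here indicates a bug in the tested heap
                assert ref.read(h_ref, idx) == heap_under_test.read(h_test, idx)

fuzz(RefHeap())  # a correct heap survives the whole random sequence
```

In the real property-based test the "tested heap" is the actual GC, the step list includes GC steps and memcopies, and hypothesis takes care of generating and shrinking the step sequences.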
In order to check whether the test is actually useful, I reverted my bug fixes and made sure that the test re-finds both the spurious GC assertion error and the problems with memcopying an array.
In addition, the test also found corner cases in my fix: there was a situation that I hadn't accounted for, which the test eventually found. I also plan on adding a bunch of other GC features as steps in the test to stress them too (for example weakrefs, identity hashes, pinning, maybe finalization).
At the point of publishing this post, the fixes got merged to the 2.7/3.9/3.10 branches of PyPy, and will be part of the next release (v7.3.16).
The technical details of the bug
In order to understand the technical details of the bug, I need to give some background explanations about PyPy's GC.
PyPy's incremental GC
PyPy uses an incremental generational mark-sweep GC. It's generational and therefore has minor collections (where only young objects get collected) and major collections (collecting long-lived objects eventually, using a mark-and-sweep algorithm). Young objects are allocated in a nursery using a bump-pointer allocator, which makes allocation quite efficient. They are moved out of the nursery by minor collections. In order to find references from old to young objects the GC uses a write barrier to detect writes into old objects.
The GC is also incremental, which means that its major collections aren't done all at once (which would lead to long pauses). Instead, major collections are sliced up into small steps, which are done directly after a minor collection (the GC isn't concurrent though, which would mean that the GC does work in a separate thread).
The incremental GC uses tri-color marking to reason about the reachable part of the heap during the marking phase, where every old object can be:
- black: already marked, reachable, definitely survives the collection
- grey: will survive, but still needs to be marked
- white: potentially dead
The color of every object is encoded by setting flags in the object header.
The GC maintains the invariant that black objects must never point to white objects. At the start of a major collection cycle the stack roots are turned gray. During the mark phase of a major collection cycle, the GC will trace gray objects, until none are left. To trace a gray object, all the objects it references have to be marked grey if they are white so far. After a grey object is traced, it can be marked black (because all the referenced objects are now either black or gray). Eventually, there are no gray objects left. At that point (because no white object can be reached from a black one) all the white objects are known to be unreachable and can therefore be freed.
The GC is incremental because every collection step will only trace a limited number of gray objects, before giving control back to the program. This leads to a problem: if an already traced (black) object is changed between two marking steps of the GC, the program can mutate that object and write a new reference into one of its fields. This could lead to an invariant violation, if the referenced object is white. Therefore, the GC uses the write barrier (which it needs anyway to find references from old to young objects) to mark all black objects that are modified gray, and then trace them again at one of the later collection steps.
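The interplay of incremental marking steps and the write barrier can be sketched with a toy model (purely illustrative; PyPy's real GC encodes colors as header flags and is far more involved):

```python
WHITE, GREY, BLACK = "white", "grey", "black"

class Obj:
    def __init__(self):
        self.color = WHITE
        self.refs = []

class IncrementalMarker:
    """Toy model of incremental tri-color marking."""
    def __init__(self, roots):
        self.gray = list(roots)
        for obj in self.gray:
            obj.color = GREY

    def mark_step(self, budget=2):
        # trace a limited number of grey objects, then give control back
        while self.gray and budget:
            obj = self.gray.pop()
            for child in obj.refs:
                if child.color == WHITE:
                    child.color = GREY
                    self.gray.append(child)
            obj.color = BLACK  # all children are now grey or black
            budget -= 1

    def write_barrier(self, obj, newref):
        # a mutated black object may now point to a white one:
        # turn it grey again so it gets re-traced in a later step
        if obj.color == BLACK:
            obj.color = GREY
            self.gray.append(obj)
        obj.refs.append(newref)
```

Without the re-graying in write_barrier, a reference written into an already-black object after it was traced would never be seen by the marker, and the referenced object would be freed while still reachable, which is precisely the shape of the bug described below.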
The special write barrier of memcopy
Arrays use a different kind of write barrier than normal objects. Since they can be arbitrarily large, tracing them can take a long time. Therefore it's potentially wasteful to trace them fully at a minor collection. To fix this, the array write barrier keeps more granular information about which parts of the array have been modified since the last collection step. Then only the modified parts of the array need to be traced, not the whole array.
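The post doesn't spell out the data structure, but a common way to implement this kind of granularity is card marking, where the array is divided into fixed-size chunks ("cards") and only dirty cards are re-traced. A toy sketch (illustrative only, not PyPy's actual implementation):

```python
CARD_SIZE = 128  # elements per card; an illustrative value

class ArrayWithCards:
    """Toy card-marking barrier: track which fixed-size chunks of an
    array were written since the last collection step."""
    def __init__(self, length):
        self.items = [None] * length
        ncards = (length + CARD_SIZE - 1) // CARD_SIZE
        self.cards = [False] * ncards

    def setitem(self, index, value):
        self.items[index] = value
        self.cards[index // CARD_SIZE] = True  # mark only this card dirty

    def dirty_ranges(self):
        # the GC only needs to trace the dirty cards, not the whole array
        for cardno, dirty in enumerate(self.cards):
            if dirty:
                start = cardno * CARD_SIZE
                yield start, min(start + CARD_SIZE, len(self.items))
```

With this scheme, writing two elements of a million-element array dirties at most two cards, so the next collection step traces a few hundred slots instead of the whole array.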
In addition, there is another optimization for arrays, which is that memcopy is treated specially by the GC. If memcopy is implemented by simply writing a loop that copies the content of one array to the other, that will invoke the write barrier every single loop iteration for the write of every array element, costing a lot of overhead. Here's some pseudocode:
def arraycopy(source, dest, source_start, dest_start, length):
    for i in range(length):
        value = source[source_start + i]
        dest[dest_start + i] = value # <- write barrier inserted here
Therefore the GC has a special memcopy-specific write barrier that will perform the GC logic once before the memcopy loop, and then use a regular (typically SIMD-optimized) memcopy implementation from libc. Roughly like this:
def arraycopy(source, dest, source_start, dest_start, length):
    gc_writebarrier_before_array_copy(source, dest,
            source_start, dest_start, length)
    raw_memcopy(cast_to_voidp(source) + source_start,
                cast_to_voidp(dest) + dest_start,
                sizeof(itemtype(source)) * length)
(This is really a rough sketch; the real code is much more complicated.)
The bug
The bugs turned out to be precisely in this memcopy write barrier. When we implemented the current GC, we adapted our previous GC, which was a generational marksweep GC but not incremental. We started with most of the previous GC's code, including the write barriers. The regular write barriers were adapted to the new incremental assumptions, in particular the need for the write barrier to also turn black objects back to gray when they are modified during a marking phase. This was simply not done at all for the memcopy write barrier, at least in two of the code paths. Fixing this problem fixes the unit tests and stops the crashes.
Reflections
The way the bug was introduced is really typical. A piece of code (the memcopy write barrier) was written under a set of assumptions. Then those assumptions changed later. Not all the code pieces that relied on these assumptions to be correct were updated. It's pretty hard to prevent this in all situations.
I still think we could have done more to prevent the bug occurring. Writing a property-based test for the GC would have been a good idea given the complexity of the GC, and it is definitely something we did in other parts of our code at the time (mostly just using the random module; we started using hypothesis later).
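Such a property-based test can be quite small. Here is a sketch in the spirit of that approach, using only the random module; the mark-sweep collector and the property checked ("exactly the reachable objects survive") are simplified stand-ins, not PyPy's GC:

```python
import random

def mark_sweep(heap, roots):
    # toy non-incremental mark-sweep: `heap` maps object id -> list of
    # referenced object ids; everything unmarked is swept
    marked = set()
    todo = list(roots)
    while todo:
        obj = todo.pop()
        if obj in marked:
            continue
        marked.add(obj)
        todo.extend(heap[obj])
    return {obj: refs for obj, refs in heap.items() if obj in marked}

def reachable(heap, roots):
    # independent reachability computation, used as the test oracle
    seen = set()
    todo = list(roots)
    while todo:
        obj = todo.pop()
        if obj not in seen:
            seen.add(obj)
            todo.extend(heap[obj])
    return seen

def check_random_heap(seed):
    # generate a random heap graph and random roots, then check the
    # property: exactly the reachable objects survive collection
    rng = random.Random(seed)
    n = rng.randint(1, 30)
    heap = {i: [rng.randrange(n) for _ in range(rng.randint(0, 3))]
            for i in range(n)}
    roots = [rng.randrange(n) for _ in range(rng.randint(1, 3))]
    survivors = mark_sweep(heap, roots)
    assert set(survivors) == reachable(heap, roots)

for seed in range(200):
    check_random_heap(seed)
```

A test like this would run the real collector in many random incremental schedules, which is exactly the kind of exploration that could have hit the memcopy code path.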
It's a bit of a mystery to me why this bug managed to go undetected for so long. Memcopy happens in a lot of pretty core operations of e.g. lists in Python (list.extend, to name just one example). To speculate, I would suspect that all the other preconditions for the bug occurring made it pretty rare:
- the content of an old list that is not yet marked needs to be copied into another old list that is marked already
- the source of the copy needs to also store an object that has no other references
- the source of the copy then needs to be overwritten with other data
- then the next collection steps need to happen at the right points
- ...
Given the complexity of the GC logic I also wonder whether some lightweight formal methods would have been a good idea. Formalizing some of the core invariants in B or TLA+ and then model checking them up to some number of objects would have found this problem pretty quickly. There are also correctness proofs for GC algorithms in some research papers, but I don't have a good overview of the literature to point to any that are particularly good or bad. Going down such a more formal route might have fixed this and probably a whole bunch of other bugs, but of course it's a pretty expensive (and tedious) approach.
While it was super annoying to track this down, it was definitely good to learn a bit more about how to use rr and the GDB scripting interface.
Bonus Section: The Wrong Assertion
Some more technical information about the wrong assertion is in this section.
Background: prebuilt objects
PyPy's VM-building bootstrapping process can "freeze" a bunch of heap objects into the final binary. This allows the VM to start up quickly, because those frozen objects are loaded by the OS as part of the binary.
Those frozen prebuilt objects are parts of the 'roots' of the garbage collector and need to be traced. However, tracing all the prebuilt objects at every collection would be very expensive, because there are a lot of them (about 150,000 in a PyPy 3.10 binary). Tracing them all is also not necessary, because most of them are never modified. Unmodified prebuilt objects can only reference other prebuilt objects, which can never be deallocated anyway. Therefore we have an optimization that uses the write barrier (which we need anyway to find old-to-young pointers) to notice when a prebuilt object gets modified for the very first time. If that happens, it gets added to the set of prebuilt objects that gets counted as a root, and is traced as a root at collections from then on.
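The first-write scheme can be sketched like this (illustrative names and dict-based objects, not the real flag-bit encoding):

```python
# Sketch of the first-write optimization for prebuilt objects. Every
# prebuilt object starts flagged as never modified; the write barrier
# clears the flag on the first write and registers it as an extra root.
class GC:
    def __init__(self):
        self.prebuilt_roots = []  # modified prebuilt objects, traced as roots

    def write_barrier(self, obj):
        if obj.get("never_modified"):
            obj["never_modified"] = False
            # from now on this object is traced at every collection
            self.prebuilt_roots.append(obj)

gc = GC()
prebuilt = {"never_modified": True, "field": None}
gc.write_barrier(prebuilt)   # first mutation: becomes a root
gc.write_barrier(prebuilt)   # later writes don't add it again
```

Unmodified prebuilt objects never enter the root set, so the common case stays free of tracing cost.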
The wrong assertion
The assertion that triggered when I turned on the GC debug mode was saying that the GC found a reference from a black to a white object, violating its invariant. Unmodified prebuilt objects count as black, and they aren't roots, because they can only ever reference other prebuilt objects. However, when a prebuilt object gets modified for the first time, it becomes part of the root set and will be marked gray. This logic works fine.
The wrong assertion triggers if a prebuilt object is mutated for the very first time in the middle of an incremental marking phase. While the prebuilt object gets added to the root set just fine, and will get traced before the marking phase ends, this is encoded slightly differently for prebuilt objects, compared to "regular" old objects. Therefore, the invariant checking code wrongly reported a black->white pointer in this situation.
To fix it I also wrote a unit test checking the problem, made sure that the GC hypothesis test also found the bug, and then fixed the wrong assertion to take the color encoding of prebuilt objects into account.
The bug managed to be invisible because we don't tend to turn on the GC assertions very often. We only do that when we find a GC bug, which is of course also when we need it the most to be correct.
Acknowledgements
Thanks to Matti Picus, Max Bernstein, Wouter van Heyst for giving me feedback on drafts of the post. Thanks to Armin Rigo for reviewing the code and pointing out holes in my thinking. Thanks to the original reporters of the various forms of the bug, including Lily Foote, David Hewitt, Wenzel Jakob.
PyPy v7.3.15 release
PyPy v7.3.15: release of python 2.7, 3.9, and 3.10
The PyPy team is proud to release version 7.3.15 of PyPy.
This is primarily a bugfix release, and includes work done to migrate PyPy to Git and GitHub.
The release includes three different interpreters:
- PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7 including the stdlib for CPython 2.7.18+ (the + is for backported security updates)
- PyPy3.9, which is an interpreter supporting the syntax and the features of Python 3.9, including the stdlib for CPython 3.9.18.
- PyPy3.10, which is an interpreter supporting the syntax and the features of Python 3.10, including the stdlib for CPython 3.10.13.
The interpreters are based on much the same codebase, thus the multiple release. This is a micro release, all APIs are compatible with the other 7.3 releases. It follows the 7.3.14 release on Dec 25, 2023.
We recommend updating. You can find links to download the v7.3.15 releases here:
We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work. If PyPy is helping you out, we would love to hear about it and encourage submissions to our blog via a pull request to https://github.com/pypy/pypy.org
We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: bug fixes, PyPy and RPython documentation improvements, or general help with making RPython's JIT even better.
If you are a python library maintainer and use C-extensions, please consider making a HPy / CFFI / cppyy version of your library that would be performant on PyPy. In any case, both cibuildwheel and the multibuild system support building wheels for PyPy.
What is PyPy?
PyPy is a Python interpreter, a drop-in replacement for CPython. It's fast (see the PyPy and CPython 3.7.4 performance comparison) due to its integrated tracing JIT compiler.
We also welcome developers of other dynamic languages to see what RPython can do for them.
We provide binary builds for:
- x86 machines on most common operating systems (Linux 32/64 bits, Mac OS 64 bits, Windows 64 bits)
- 64-bit ARM machines running Linux (aarch64)
- Apple M1 arm64 machines (macos_arm64)
- s390x running Linux
PyPy supports Windows 32-bit, Linux PPC64 big- and little-endian, and Linux ARM 32-bit, but does not release binaries. Please reach out to us if you wish to sponsor binary releases for those platforms. Downstream packagers provide binary builds for debian, Fedora, conda, OpenBSD, FreeBSD, Gentoo, and more.
What else is new?
For more information about the 7.3.15 release, see the full changelog.
Please update, and continue to help us make pypy better.
Cheers, The PyPy Team
PyPy has moved to Git, GitHub
PyPy has moved its canonical repo and issue tracker from https://foss.heptapod.net/pypy/pypy to https://github.com/pypy/pypy. Obviously, this means development will now be tracked in Git rather than Mercurial.
Motivation
We still feel Mercurial is a better version control system. The named branch model and user interface are superior. But:

- foss.heptapod.net is not well indexed in google/bing/duckduckgo search, so people find it harder to search for issues in the project.
- Since Heptapod has tightened its spam control, we get reports that users create issues only to have them flagged as spam.
- Open Source has become synonymous with GitHub, and we are too small to change that.
- Much of the current development comes as a reaction to fixing issues. Tracking interlocking issues is easier if all the code is on the same platform.
- The FAQ presents two arguments against the move. GitHub notes solve much of point (1), the difficulty of discovering the provenance of commits, although not entirely. But the main problem is point (2): it turns out that not moving to GitHub is an impediment to contribution and issue reporting.
- People who wish to continue to use Mercurial can use the same method below to push to GitHub.
- GitHub is more resource-rich than foss.heptapod.net. We could add CI jobs to replace some of our aging buildbot infrastructure.
Method
The migration required two parts: migrating the code and then migrating the issues and merge requests.
Code migration 1: code and notes
I used a fork of git-remote-hg to create a local Git repo with all the changesets. Then I wanted to add a Git note to each commit with the branch it came from. So I prepared a file with two columns: the Git commit hash, and the corresponding branch from Mercurial. Mercurial can describe each commit in two ways: either by the commit hash or by a number index. I used hg log to convert an index i to the Mercurial hash, and then git-hg-helper from git-remote-hg to convert the Mercurial hash to a Git hash:
$(cd pypy-git; git-hg-helper git-rev $(cd ../pypy-hg; hg log -r $i -T"{node}\n"))
Then I used hg log again to print the Mercurial branch for the index i:
$(cd pypy-hg; hg log -r $i -T'{branch}\n')
Putting these two together, I could loop over all the commits by their numerical index to prepare the file. Then I iterated over each line in the file, and added the Git note. Since the git notes add command works on the current HEAD, I needed to check out each commit in turn and then add the note:
git checkout -q <hash> && git notes --ref refs/notes/branch add -m branch:<branch>
I could then use git push --all to push to GitHub.
Code migration 2: prepare the branches
PyPy has almost 500 open branches. The code migration created all the branch HEADs, but git push --all did not push them. I needed to check them out and push each one. So I created a file with all the branch names:
cd pypy-hg; hg branches | cut -f1 -d" " > branches.txt
and then pushed each one to the GitHub repo:
while read branch; do git checkout branches/$branch && git push origin branches/$branch; done < branches.txt
Note that the branches were named branches/XXX by the migration, not branch/XXX. This confuses the merge request migration, more about that later.
Issue and merge request migration
I used the solution from node-gitlab-2-github which worked almost perfectly. It is important to do the conversion on a private repo, otherwise every mention of a successfully mapped user name notifies the user about the transfer. This can be quite annoying for a repo the size of PyPy, with 600 merge requests and over 4000 issues. Issues transferred without a problem: the script properly retained the issue numbers. However, the script does not convert the Mercurial hashes to Git hashes, so the bare hashes in comments show up without a link to the commit. Merge requests are more of a problem:
- The Mercurial named branch "disappears" once it is merged, so a merge request to a merged branch does not find the target branch name in Git. The conversion creates an issue instead, with the label "gitlab merge request".
- For some reason, the branches created by git-remote-hg are called branches/XXX and not branch/XXX as expected by GitLab. This messes up the merge request/PR conversion. For some of the branches (open PRs and main target branches) I manually created additional branches without the "es".

The net result is that open merge requests became open PRs, merged merge requests became issues, and closed-not-merged merge requests were not migrated.
Layered conversions
PyPy already migrated once from Bitbucket to Heptapod. Many of the issues reflect the multiple transitions: they have lines like "Created originally on Bitbucket by XXX" from the first transition, and an additional line "In Heptapod" from this transition.
Credits
We would like to express our gratitude to the Octobus team who support Heptapod. The transition from Bitbucket was quite an effort, and they have generously hosted our development since then. We wish them all the best, and still believe that Mercurial should have "won".
Next steps
While the repo at GitHub is live, there are still a few more things we need to do:
- Documentation needs an update for the new repo, and the build automation from readthedocs must be adjusted.
- The wiki should be copied from Heptapod.
- buildbot.pypy.org should also look at the new repo. I hope the code is up to the task of interacting with a Git repo.
- speed.pypy.org tracks changes, so it too needs to reference the new location.
- To keep tracking branches with Git notes on new commits, I activated a GitHub action by Julian to add a Git branch note to each commit. Please see the README there for directions on using Git notes.
- Some of the merge requests were not migrated. If someone wants to, they could migrate those once they figure out the branch naming problems.
Additionally, now is the time for all of you to prove the move is worthwhile:
- Star the repo, let others know how to find it,
- Help fix some of the open issues or file new ones,
- Take advantage of the more familiar workflow to get involved in the project,
- Suggest ways to improve the migration: are there things I missed or could have done better?
How will development change?
Heptapod did not allow personal forks, so we were generous with a commit bit to the main repo. Additionally, we (well, I) have been using a commit-directly-to-main workflow. We will now be adopting a more structured workflow. Please fork the repo and submit a pull request for any changes. We can now add some pre-merge CI to check that the PR at least passes the first stage of translation. The live and active branches will be:

- main: what was "default" in Mercurial; it is the Python 2.7 interpreter and the base of the RPython interpreter,
- py3.9: the Python 3.9 interpreter, which also includes all RPython changes from main. This is exactly like on Mercurial, and
- py3.10: the Python 3.10 interpreter, which also includes all RPython changes from main and all bugfixes from py3.9. This is exactly like on Mercurial.
Working between the repos
Finding commits
If you want to figure out how a Mercurial commit relates to a Git commit, you can use git-hg-helper. You run it in the Git repo. It takes the full long hash from one repo and gives you the corresponding hash of the other repo:
$ git-hg-helper git-rev d64027c4c2b903403ceeef2c301f5132454491df
4527e62ad94b0e940a5b0f9f20d29428672f93f7
$ git-hg-helper hg-rev 4527e62ad94b0e940a5b0f9f20d29428672f93f7
d64027c4c2b903403ceeef2c301f5132454491df
Finding branches
Branches migrated from Mercurial will have a branches prefix, not branch. While GitLab uses branch for its prefix, the git-remote-hg script uses branches. New work should be in a PR targeting main, py3.9 or py3.10.
Thanks for helping to make PyPy better.
Matti
Update
In the meantime we found out that unfortunately something went wrong in the migration of the issues. The old issue 3655 got lost in the migration. This means that after number 3655 the numbers differ between GitHub and Heptapod, with the Heptapod number being one larger. E.g. issue 3700 on Heptapod is issue 3699 on GitHub. We are investigating options.