
Consolidating quantified field and predicate chunks #860

Merged (6 commits) on Sep 12, 2024

Conversation

superaxander
Contributor

I noticed that state consolidation only merges basic chunks. I've been running into issues where, if I have a lot of quantified field chunks, things become very slow. In particular, I have some files where a method verifies fine on its own, but its well-definedness cannot be re-verified at the call site because the heap becomes too fragmented.

A similar issue is highlighted in #831: if we're unlucky with the chunk ordering in removePermissions, we have to "remove" some amount of permission from a lot of unrelated chunks, which obviously scales badly when there are many chunks.
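To illustrate why the ordering matters there, here is a rough, self-contained sketch of a greedy removal loop over a list of chunks. It is not Silicon's actual removePermissions (the Chunk type and the use of plain doubles for permission amounts are made up for illustration), but it shows how the position of the chunks that actually hold the permission determines how many unrelated chunks get inspected first; in the real implementation each such inspection typically involves prover checks, which is where the bad scaling comes from.

```scala
// Hypothetical, simplified chunk: just an id and a permission amount.
final case class Chunk(id: String, perm: Double)

// Greedily take `needed` permission from the chunks in list order.
// Returns the remaining chunks and the amount that could not be covered.
def removeGreedily(chunks: List[Chunk], needed: Double): (List[Chunk], Double) =
  chunks.foldLeft((List.empty[Chunk], needed)) { case ((kept, still), c) =>
    if (still <= 0) (kept :+ c, still) // already covered: keep the chunk untouched
    else {
      val taken = math.min(c.perm, still) // in Silicon, deciding this costs prover checks
      val rest  = c.perm - taken
      (if (rest > 0) kept :+ c.copy(perm = rest) else kept, still - taken)
    }
  }
```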

This PR enables merging quantified chunks (with the exception of magic wands) if their condition and arguments are the same. This seems to significantly speed up the files mentioned above (the first example now verifies, and the example with 19 quantifiers from #831 now runs in 21 seconds) without a (significant) negative effect on other files. I've run the Silicon test suite and the VerCors test suite and saw no significant differences.
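Concretely, the criterion amounts to something like the following sketch. The case classes are made up for illustration and are not Silicon's real chunk or term types, and the exact set of compared components is just my reading of the description above: two quantified chunks are merged only if they are for the same resource and have syntactically equal quantified variables, conditions, and arguments, in which case their permission amounts are summed.

```scala
sealed trait Term
case class Var(name: String) extends Term
case class PermPlus(left: Term, right: Term) extends Term

// Hypothetical simplified quantified chunk (snapshot maps omitted for brevity;
// in the real implementation those would also have to be combined).
case class QuantChunk(
    resource: String,
    quantifiedVars: Seq[Var],
    condition: Term,
    arguments: Seq[Term],
    permission: Term)

def tryMerge(c1: QuantChunk, c2: QuantChunk): Option[QuantChunk] =
  if (c1.resource == c2.resource &&
      c1.quantifiedVars == c2.quantifiedVars &&
      c1.condition == c2.condition &&
      c1.arguments == c2.arguments)
    Some(c1.copy(permission = PermPlus(c1.permission, c2.permission)))
  else
    None
```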

I also tried running Gobra on the router package from the VerifiedSCION repository, which results in 1 error after ~30 minutes:

Error at: <./router/dataplane.go:937:15> Loop invariant might not be preserved. Permission to msgs[i].Mem() might not suffice.

Since I cannot find another example where this PR has a negative impact, and I don't know how to get a small Viper file from Gobra to test this specific case, I thought I'd submit the PR anyway.

There are also still some open questions:

  • When merging singleton chunks, I'm currently only keeping the singletonArguments/singletonReceiver of the most recently added chunk, which might have a negative impact on chunk ordering because we cannot match syntactically on the receiver that we throw away. Would it make sense to store a set of (sequences of) singleton arguments instead?
  • When merging chunks I'm also throwing away the hints, since I couldn't think of a good way to merge them. In the examples I have from VerCors, the hint tends to be rather useless anyway because, for example, every array access is done by applying the aloc function. Is it worth finding a way to preserve these hints?
  • To determine if two chunks can be merged, I'm checking whether their conditions are the same. However, the decision of what is and isn't part of a chunk's condition is somewhat arbitrary. For example, the expression forall i: Int :: 0 <= i && i < n ==> acc(aloc(x,i), write) would yield a chunk with the condition 0 <= i && i < n and the permission value write, whereas the expression forall i: Int :: true ==> acc(aloc(x, i), 0 <= i && i < n ? write : none) would yield a chunk with the condition true and the permission value 0 <= i && i < n ? write : none. There is a trade-off here regarding when we do and don't want to merge chunks. In fact, if we conditionalized all the permissions (i.e. pushed the condition into the permission value), we could merge all chunks referring to the same field; but doing that would yield a single, very complicated quantified heap chunk for which we can no longer use heuristics to find a good chunk ordering. Somewhere there is a good balance between merging chunks to simplify the state (e.g. instead of two chunks each containing half of the permission, requiring 4 checks in removePermissions in the best case, one chunk containing all the permission requires only 2 checks in the best case) and keeping chunks separate to enable reasoning about them separately. (See the sketch after this list for an illustration of the conditionalizing trade-off.)
  • I have not attempted to merge quantified magic wands, since I felt there would be little benefit and I wasn't sure what assumptions surround these chunks.
  • I mentioned above that this PR speeds up Multiple quantifiers to memory locations of same type #831, but it seems to also do this if I disable merging quantified heap chunks by returning None before the equality check in findChunk, and I am not sure why. Even without merging the chunks, the removePermissions calls now seem to select the right ordering every time. For the other example file I've been testing with, merging the quantified heap chunks is crucial, though.
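As a small illustration of the conditionalizing trade-off from the third point above, here is a self-contained sketch with made-up term and chunk types (not Silicon's): pushing the condition into the permission value makes every chunk for a field carry the trivial condition true, so an equality-based merging criterion would merge them all, at the cost of one big conditional permission expression that ordering heuristics can no longer pick apart.

```scala
sealed trait Term
case object True extends Term
case object NoPerm extends Term
case class Sym(text: String) extends Term                // opaque condition/permission
case class Ite(cond: Term, thn: Term, els: Term) extends Term

// Hypothetical simplified quantified chunk.
case class QChunk(resource: String, condition: Term, permission: Term)

// Push the chunk's condition into its permission value; the resulting chunk
// always has the trivial condition True.
def conditionalize(c: QChunk): QChunk =
  QChunk(c.resource, True, Ite(c.condition, c.permission, NoPerm))

// The two source-level forms from the third bullet correspond roughly to:
//   forall i: Int :: 0 <= i && i < n ==> acc(aloc(x, i), write)
//     ~> QChunk("aloc", Sym("0 <= i && i < n"), Sym("write"))
//   forall i: Int :: true ==> acc(aloc(x, i), 0 <= i && i < n ? write : none)
//     ~> QChunk("aloc", True, Ite(Sym("0 <= i && i < n"), Sym("write"), NoPerm))
// and conditionalize maps the first onto the shape of the second.
```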

Let me know what you think of these changes and whether I can improve them in some way.

@marcoeilers
Contributor

Hey, thanks a lot! To be honest, I don't think I was 100% certain that we never merge quantified chunks. We should definitely do that at least sometimes.
I'll need a couple of days to think about this (as you said, there are a couple of points that are somewhat arbitrary where decisions need to be made); we'll either merge this or some other version of this before the next release.

@marcoeilers
Contributor

marcoeilers commented Sep 5, 2024

I finally started looking into this; sorry it took so long.
Regarding this last point:

it seems to also [speed up] if I disable merging quantified heap chunks by returning None before the equality check in findChunk, I am not sure why. Even without merging the chunks it seems that the removePermissions calls now select the right ordering every time

I was really confused by this, but it really seems to be just an artifact of the chunk order that results after a consolidation (even if nothing actually gets merged), which just happens to be the perfect order for that example.

I'll merge some recent changes into this (the debugging code I just added essentially required changes everywhere, so there is a huge number of conflicts), do some benchmarks myself, and then merge this.

@viper-admin
Member

Could you maybe send me one or two of your examples with lots of QP chunks that get sped up by this?
I'm trying out different options for when to merge chunks.

@superaxander
Contributor Author

Here you go: https://gist.github.com/superaxander/b7a80e5de28cbb8a4597acf72387b782
With Z3 4.8.7 on the master branch with an assertTimeout of 10 seconds, this takes around 50 seconds (with a timeout of 5 s it sometimes fails), and on the branch for this PR it takes 30 seconds with the same settings. This example can also be obtained on the class-by-value-2 branch at https://github.com/superaxander/vercors/tree/class-by-value-2 by running bin/vct examples/concepts/c/structs.c --backend-file-base structs.

@marcoeilers
Contributor

Thanks!
