# Spot issues (https://gitlab.lrde.epita.fr/spot/spot/-/issues)

## [Issue #249: Streett changes for HOA 1.1](https://gitlab.lrde.epita.fr/spot/spot/-/issues/249) (Alexandre Duret-Lutz, 2018-11-08)

As discussed in [HOA issue #62](https://github.com/adl/hoaf/issues/62) and its [associated pull request](https://github.com/adl/hoaf/pull/66), there are plans to reverse the current definition of Streett, from
```
acc-name: Streett 3
Acceptance: 6 (Fin(0)|Inf(1))&(Fin(2)|Inf(3))&(Fin(4)|Inf(5))
```
to
```
acc-name: Streett 3
Acceptance: 6 (Inf(0)|Fin(1))&(Inf(2)|Fin(3))&(Inf(4)|Fin(5))
```
This is so that Streett becomes the dual of Rabin, and so that we can introduce generalized Streett in a way that is both a generalization of Streett and the dual of generalized Rabin.
This change has not been accepted yet.
We have very little Streett-specific code in Spot so far, so here is what I think we should change:
1. [x] we have nothing to change on input, because we currently ignore the `acc-name:` line.
2. [x] we should change `streett_to_generalized_buchi()` to work with any Streett-like acceptance condition; by this I mean that we should allow each Streett pair to list Fin and Inf in either order. (BTW, can we generalize this algorithm to generalized Streett acceptance? Think about it.)
3. [ ] we should change the output order of `to_generalized_streett()`
4. [ ] `parse_acc()` is tricky, because it does not know what HOA format it is reading so it does not know what `Streett n` means. Should we add an optional argument?
5. [ ] we should change `print_hoa()` to detect the flavor of Streett that is compatible with the selected HOA version before we output `acc-name: Streett n`.
6. [ ] we should also change `print_hoa()` to recognize `generalized Streett` (I would use the definition from HOA 1.1 in HOA 1.0 as well since it is not defined there).
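Point 2 amounts to normalizing each pair regardless of the order in which Fin and Inf are written. A minimal sketch in plain Python (illustrative only; the function name and the atom encoding are invented, not Spot's API):

```python
def streett_pairs(conjuncts):
    """Try to view an acceptance condition as Streett-like.

    `conjuncts` is a list of disjunctions, each given as a list of
    ('Fin', n) / ('Inf', m) atoms.  Fin and Inf may appear in either
    order (HOA 1.0 vs HOA 1.1 style); the result is a canonical list
    of (fin_set, inf_set) pairs, or None if not Streett-like.
    """
    pairs = []
    for atoms in conjuncts:
        kinds = dict(atoms)  # e.g. {'Fin': 0, 'Inf': 1}
        if len(atoms) != 2 or set(kinds) != {'Fin', 'Inf'}:
            return None  # not a Fin/Inf pair: not Streett-like
        pairs.append((kinds['Fin'], kinds['Inf']))
    return pairs
```

With this normalization, downstream code such as `streett_to_generalized_buchi()` never has to care which order the input used.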
I think points 2 & 3 can be done right away, and that 4–6 should not be merged before the aforementioned HOA issue is resolved.

Milestone: Spot 2.7.

## [Issue #264: translation to improve](https://gitlab.lrde.epita.fr/spot/spot/-/issues/264) (Alexandre Duret-Lutz, 2017-09-02)

This was sent by František Blahoudek.
The translation of `FGa | GFb&GFc` was recently improved (#260) to a 3-state TGBA:
```
% ltl2tgba 'FGa | GFb&GFc'
```
![notgood](/uploads/271bd4adc58fb362ab7616701215a69d/notgood.png)
However it could be reduced further by merging states 1 and 2:
![better](/uploads/e3a161c4753aead23f25f28444d1946f/better.png)

## [Issue #265: more families of LTL formulas for genltl](https://gitlab.lrde.epita.fr/spot/spot/-/issues/265) (Alexandre Duret-Lutz, 2017-06-09)

The page http://www.schuppan.de/viktor/atva11/ has a link to a tarball with 3723 formulas (each of them is expressed in various syntaxes and also in negated form, so the total appears to be much larger). All these formulas come with references to their sources, and many of them are actually scalable patterns (manually?) instantiated for multiple values. I suspect that many of those scalable patterns would fit well in `genltl`; and it might make sense to also consider some of the non-scalable satisfiability formulas even if these are not model-checking properties.

## [Issue #266: Transform Alternating Automata into Weak Alternating Automata](https://gitlab.lrde.epita.fr/spot/spot/-/issues/266) (Thomas Medioni, 2017-10-01)

A construction for translating co-Büchi alternating automata into weak alternating automata is introduced in [KV97].
Another construction translating Rabin and Parity alternating automata into weak alternating automata is introduced in [KV98].
The goal is to implement those constructions in Spot:
- [x] Implement co-Büchi alternating automata -> weak alternating automata.
- [x] Support Büchi alternating automata -> Weak alternating automata by complementing a Büchi automaton, translating it into weak and then complementing it back.
- [x] See if it is possible to improve the co-Büchi -> weak construction so that it does not always exhibit its worst-case complexity.
- [ ] Implement Rabin alternating -> Weak alternating
[KV97] O. Kupferman and M. Vardi. Weak alternating automata are not that weak. In *Proc. 5th Israeli Symposium on Theory of Computing and Systems*, pages 147-158. IEEE Computer Society Press, 1997.
[KV98] O. Kupferman and M. Vardi. Weak alternating automata and tree automata emptiness. In *Proc. 30th ACM Symposium on Theory of Computing*, pages 224-233, Dallas, 1998.

## [Issue #272: streett-like to streett](https://gitlab.lrde.epita.fr/spot/spot/-/issues/272) (Alexandre Duret-Lutz, 2017-06-21)

We need a `streett_like_to_streett(aut, reversed=false)` that converts Streett-like acceptance into Streett as defined in HOA 1.1 (if `reversed=false`) and as defined in HOA 1.0 (if `reversed=true`).
For the record, HOA 1.1 (which might be renamed as 2.0) wants `(Inf(0)|Fin(1))&(Inf(2)|Fin(3))&...` while HOA 1.0 wants `(Fin(0)|Inf(1))&(Fin(2)|Inf(3))&...`.
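The two orderings are easy to generate mechanically. A small illustrative sketch (plain Python, not Spot code; the function name is invented):

```python
def streett_acceptance(pairs, hoa10=False):
    """Build the HOA `Acceptance:` formula body for `Streett <pairs>`.

    hoa10=False produces the HOA 1.1 ordering (Inf|Fin);
    hoa10=True produces the HOA 1.0 ordering (Fin|Inf).
    """
    a, b = ("Fin", "Inf") if hoa10 else ("Inf", "Fin")
    # Pair i uses acceptance sets 2i and 2i+1.
    return "&".join(f"({a}({2 * i})|{b}({2 * i + 1}))" for i in range(pairs))
```

For example, `streett_acceptance(3)` yields the HOA 1.1 formula shown in issue #249, and `streett_acceptance(3, hoa10=True)` the HOA 1.0 one.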
We also need `rabin_like_to_rabin(aut)`, but there is no ordering difference in HOA in this case.

## [Issue #275: is_inherently_weak should support alternating automata](https://gitlab.lrde.epita.fr/spot/spot/-/issues/275) (Thomas Medioni, 2017-08-09)

The current implementation of `is_inherently_weak()` does not support alternating automata.
Once it does, there is a shortcut in `spot::to_weak_alternating()` that can be taken.

## [Issue #173: missed simplifications](https://gitlab.lrde.epita.fr/spot/spot/-/issues/173) (Alexandre Duret-Lutz, 2017-09-06)

```
ltl2tgba -f '(XXG!b M F!b) W (!a | !b)'
```
![output](/uploads/e743ce948b8fb8ee3ec3a18456f1f499/out.png)
```
ltl2tgba -f '(XXG!b M F!b) W (!a | !b)' | autfilt --small
```
![out2](/uploads/9c2c60f66f23205974158fb4df975131/out2.png)
I'm really surprised the first command does not produce exactly the second output.

## [Issue #232: SPOT_PARSE_LOCATION](https://gitlab.lrde.epita.fr/spot/spot/-/issues/232) (Alexandre Duret-Lutz, 2017-09-06)

The emacs mode for HOA supports multiple automata in the same buffer. `C-c C-c` will display the automaton at point: this works by copying the surrounding HOA automaton, and piping it to `autfilt` and then `dot`. Of course, when `autfilt` reports a syntax error, it will use line and column numbers that are relative to its input (just one automaton), so this will not make a lot of sense in the context of a buffer with many automata.
I would like to have an environment variable called `SPOT_PARSE_LOCATION` that could be used to "shift" the diagnostics of the HOA or LTL parsers in this kind of context. The variable would be read by the command-line tools in `bin/` (I don't think it's good to have such a variable checked by the library), but the two parsers still need to provide the machinery necessary to shift error messages (or maybe simply set the original location).
For instance I'd like to have:
```sh
% autfilt -q < foo.hoa
3.9: syntax error, unexpected identifier
11.3: syntax error, unexpected ']'
11.2-3: ignoring this invalid label
% SPOT_PARSE_LOCATION=bar:11.1 autfilt -q < foo.hoa
bar:13.9: syntax error, unexpected identifier
bar:21.3: syntax error, unexpected ']'
bar:21.2-3: ignoring this invalid label
```
(Note that `bar:11.1` describes the actual location of line 1, column 1 of the input.)
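Shifting the diagnostics themselves is mostly string surgery. A rough sketch of the intended arithmetic (illustrative Python, not Spot code; it assumes the origin column is 1, as in the example above, so only line numbers need shifting):

```python
import re

def shift_location(diag, origin):
    """Prefix and shift a 'line.col' (or 'line.col-col') location.

    `origin` has the form 'file:L.C', giving the real position of
    line 1, column 1 of the parsed input.
    """
    fname, pos = origin.rsplit(":", 1)
    base_line = int(pos.split(".")[0])

    def shift(m):
        # Shift the line number; keep the column unchanged.
        return f"{int(m.group(1)) + base_line - 1}.{m.group(2)}"

    return f"{fname}:" + re.sub(r"(\d+)\.(\d+)", shift, diag)
```

With `origin='bar:11.1'`, the diagnostic location `3.9` becomes `bar:13.9`, matching the example above.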
## [Issue #225: Add other logics to Spot](https://gitlab.lrde.epita.fr/spot/spot/-/issues/225) (Maximilien Colange, 2017-09-06)

Spot currently supports LTL and PSL. It would be nice to be able to read/write other logics of equivalent expressive power.
In particular, first-order logic (FO) is equivalent to LTL, and monadic second-order logic (MSO) is equivalent to omega-automata.
The translation between FO and LTL is easier when using past operators, which are not supported yet.
- [ ] Add support for past operators in LTL. This may require to update translation algorithms.
- [ ] Add support for mu-calculus: input/output, transformations to/from PSL/automata
- [ ] Add support for FO: input/output, transformation to/from LTL
- [ ] Add support for MSO: input/output, transformation to/from PSL/automata

## [Issue #222: autfilt --when EXPR](https://gitlab.lrde.epita.fr/spot/spot/-/issues/222) (Alexandre Duret-Lutz, 2017-09-06)

I frequently want to find some examples of automata that get reduced by `autfilt --small`, or that do not get enlarged by `autfilt --deterministic` or whatever. My current way to do that is to use `--stats=%S,%s,%h,...` to get the input and output sizes in a CSV file along with the automaton, and then somehow (perl, awk, etc.) keep only the lines that match my criterion.
It would be a lot easier if I could write simply `autfilt --small --when='%s<%S'` and have `autfilt` output only the automata for which the given predicate is true. We have almost everything needed to implement this, except for a small arithmetic parser that would evaluate an expression to true or false.
Note that such a feature would have some overlap with some existing options. E.g. `--when='%[v]C<=10'` would be the same as `--triv-sccs=..10`. But it would be more powerful: for instance, we currently have no easy way to implement a more complex rule such as `--when='(%[v]C<=10)||((%[a]C>3) && (%C<8))'`.
I think this `--when` also makes sense for `ltlfilt`.
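Such a predicate evaluator can be tiny. A sketch of the idea (illustrative Python, nothing here is existing Spot code: %-sequences are substituted first, then a restricted expression is evaluated; only numbers, comparisons, `+`, `-`, `*`, `&&`, and `||` are handled):

```python
import ast
import operator as op

_OPS = {ast.Lt: op.lt, ast.LtE: op.le, ast.Gt: op.gt, ast.GtE: op.ge,
        ast.Eq: op.eq, ast.NotEq: op.ne,
        ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul}

def _ev(node):
    if isinstance(node, ast.Expression):
        return _ev(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return _OPS[type(node.op)](_ev(node.left), _ev(node.right))
    if isinstance(node, ast.Compare):
        left, result = _ev(node.left), True
        for o, comp in zip(node.ops, node.comparators):
            right = _ev(comp)
            result = result and _OPS[type(o)](left, right)
            left = right
        return result
    if isinstance(node, ast.BoolOp):
        vals = [_ev(v) for v in node.values]
        return all(vals) if isinstance(node.op, ast.And) else any(vals)
    raise ValueError("unsupported construct in --when expression")

def when(expr, stats):
    """`stats` maps %-sequences such as '%s' or '%S' to numbers."""
    # Substitute longest keys first so overlapping sequences behave.
    for key, val in sorted(stats.items(), key=lambda kv: -len(kv[0])):
        expr = expr.replace(key, str(val))
    expr = expr.replace("&&", " and ").replace("||", " or ")
    return bool(_ev(ast.parse(expr, mode="eval")))
```

For instance, with input size `%S=7` and output size `%s=4`, the predicate `%s<%S` holds, so the automaton would be printed.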
## [Issue #281: overhaul the formula simplification routines](https://gitlab.lrde.epita.fr/spot/spot/-/issues/281) (Alexandre Duret-Lutz, 2017-09-15)

The current way to implement LTL simplifications has a few issues:
1. The number of rules is huge, and the `spot/twaalgos/simplify.cc` file implementing them is the largest of all handwritten source files in Spot. Reading `doc/tl/tl.pdf` in parallel is almost mandatory to understand what is being done.
2. These rules are hard-coded.
3. They use nested switches to match the structure of some patterns, sometimes relying on formulas properties. But sometimes we have a set of rules that should only be applied in some context (e.g., star normal form, or rules that apply inside `GF(...)`). These nesting patterns are quite painful to follow, and are often implemented twice for the dual rules.
4. Some of generic options can also affect the way some rules are applied (sometimes they affect the direction of some rewriting).
5. Dealing with n-ary operators is quite painful, especially when multiple rules compete to combine the operands.
All these points make it very hard to implement new simplification rules. It's also impossible to test a new simplification rule without implementing it. Finally, adding a new operator to the syntax would have an enormous cost, since we would also need to alter many rules to support that new operator.
Our goal is therefore to change the way we do LTL simplifications. We need at least:
- [ ] a [DSL](https://en.wikipedia.org/wiki/Domain-specific_language) to express all our simplification rules in a way that is simple and unambiguous
- [ ] the (hopefully) complete list of rules expressed in that language
- [ ] a parser for that language
- [ ] a function that applies all the reductions on some given formula
- [ ] some abstract representation of a set of rules, such that the above function can be implemented efficiently
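To make the wish-list concrete, here is a toy illustration (not a design proposal: the tuple encoding and the rules shown are invented for the example) of "a set of rules plus a function that applies them", matching patterns with metavariables bottom-up:

```python
# Formulas as nested tuples: ('F', ('F', 'a')) means "F F a".
# Metavariables start with '?'.
RULES = [
    (('F', ('F', '?x')), ('F', '?x')),                 # F F x  =>  F x
    (('G', ('G', '?x')), ('G', '?x')),                 # G G x  =>  G x
    (('X', ('F', ('G', '?x'))), ('F', ('G', '?x'))),   # X F G x => F G x
]

def match(pat, f, env):
    """Match pattern `pat` against formula `f`, binding metavariables."""
    if isinstance(pat, str) and pat.startswith('?'):
        env[pat] = f
        return True
    if isinstance(pat, tuple) and isinstance(f, tuple) and len(pat) == len(f):
        return all(match(p, g, env) for p, g in zip(pat, f))
    return pat == f

def subst(pat, env):
    """Instantiate a right-hand side with the bindings from `env`."""
    if isinstance(pat, str):
        return env.get(pat, pat)
    return tuple(subst(p, env) for p in pat)

def simplify(f):
    """Apply the rules bottom-up until a fixpoint is reached."""
    if isinstance(f, tuple):
        f = tuple(simplify(g) for g in f)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            env = {}
            if match(lhs, f, env):
                f = simplify(subst(rhs, env))
                changed = True
    return f
```

The real design would of course need contexts (rules valid only inside `GF(...)`, star normal form, etc.) and n-ary operators, which is exactly where the DSL has to earn its keep.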
Once that is done, we should evaluate ways to make it faster. Parsing the list of all rules before the first simplification is likely to incur some overhead. We could try to generate some C++ code from this list at compile time to avoid the parsing at run time, but we would lose the ability to edit the list of rules at run time.
An option would be to store the abstract (parsed) representation of the set of rules into some kind of bytecode that can be loaded quickly and used readily without costly syntax checks.
It would be useful if the users of the command-line tools could be allowed to supply their own rules. (This implies that the DSL will need to be documented.)
I suggest keeping the current simplification function around until all of the above is working, so that the results can be compared (if possible). When we are confident that we can switch to the new function, we still need to maintain backward compatibility of the API.

Assignee: Florian Perlié-Long.

## [Issue #177: connect tgba_determinize() and dtwa_sat_minimize() in ltl2tgba](https://gitlab.lrde.epita.fr/spot/spot/-/issues/177) (Alexandre Duret-Lutz, 2017-11-15)

The framework illustrated by the figure at https://spot.lrde.epita.fr/satmin.html#sec-4 could be improved since we can now generate deterministic parity automata for any LTL formula.
In short, I'd like
```
ltl2tgba -D -x sat-minimize formula
```
to work for any formula for which a deterministic TGBA exists.

## [Issue #209: option to limit stack depth in emptiness checks](https://gitlab.lrde.epita.fr/spot/spot/-/issues/209) (Alexandre Duret-Lutz, 2017-11-15)

It would be nice to be able to limit the depth of the DFS used in the various emptiness checks. This can be used as a form of bounded model checking; and it would have been useful to the people preparing the model-checking contest and looking for cases where the emptiness check is still undecided after having explored a large part of the state space.
Note that when the emptiness check does not find a counterexample, it should be able to tell whether it hit the depth limit (in which case a counterexample could exist nonetheless) or not (full state space explored).
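The intended tri-state answer can be sketched as follows (illustrative Python; a real emptiness check searches for accepting cycles or SCCs, not merely accepting states, but the bookkeeping for "did we hit the bound?" is the same):

```python
def bounded_dfs(succ, init, accepting, limit):
    """Depth-limited search for an accepting state.

    Returns 'nonempty' if one is found, 'empty' if the whole
    reachable space fits within the bound, and 'unknown' if the
    depth limit was hit before the search could conclude.
    """
    seen, hit_limit = set(), False

    def dfs(s, depth):
        nonlocal hit_limit
        if accepting(s):
            return True
        if depth == limit:
            hit_limit = True   # some part of the space was cut off
            return False
        seen.add(s)
        return any(t not in seen and dfs(t, depth + 1) for t in succ(s))

    if dfs(init, 0):
        return 'nonempty'
    return 'unknown' if hit_limit else 'empty'
```

The key point is the third outcome: 'unknown' is sound only because we record whether any branch was actually truncated.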
## [Issue #169: ltlcross -D should allow setting a limit to the size of the deterministic automaton](https://gitlab.lrde.epita.fr/spot/spot/-/issues/169) (Alexandre Duret-Lutz, 2017-11-15)

I'd like to use for example `ltlcross -D1000` and not have `ltlcross` waste too much time determinizing automata that will become too large, especially since those deterministic automata will ultimately be passed through `remove_fin()`.
This would require `tgba_determinize()` to accept an upper bound on the number of states it can generate, and return `nullptr` when that bound is reached.

## [Issue #201: macport](https://gitlab.lrde.epita.fr/spot/spot/-/issues/201) (Etienne Renault, 2017-11-15)

We would like to have a package of Spot for macOS users. Ideally it would distribute sources as well as a dmg.

## [Issue #163: Rewrite testing automata as Fin(!0)](https://gitlab.lrde.epita.fr/spot/spot/-/issues/163) (Etienne Renault, 2017-11-15)

Testing automata can be rewritten using the acceptance `Fin(!0)` for livelock accepting states.
Thus testing automata will be consistent with the rest of the library.

## [Issue #151: Produce fair Kripke when loading DVE file](https://gitlab.lrde.epita.fr/spot/spot/-/issues/151) (Etienne Renault, 2017-11-15)

When loading a DVE model, we are (sometimes) unable to detect the process that progresses. Thus we cannot check fair properties on these models. In the code attached, we cannot express that P or Q progresses. We could directly inherit fair Kripke to annotate transitions.
[file.dve](/uploads/63897a6731830c1aa8640e4bcac92f3e/file.dve)
## [Issue #140: case where ltlcross consumes too much memory](https://gitlab.lrde.epita.fr/spot/spot/-/issues/140) (Alexandre Duret-Lutz, 2017-11-15)

This is a very large automaton created by `ltl2dstar`, and `remove_fin` creates even larger automata. Maybe there is something to improve.
```
% ltlcross --verbose -f 'F(Ga | Gb | Gc | Gd | Ge)' 'ltl2dstar --union=no --ltl2nba=spin:ltl2tgba@-Ds --output-format=hoa %L %O'
F | | | | G p0 G p1 G p2 G p3 G p4
Running [P0]: ltl2dstar --union=no --ltl2nba=spin:ltl2tgba@-Ds --output-format=hoa 'lcr-i0-kcDtFC' 'lcr-o0-DaCj7N'
Running [N0]: ltl2dstar --union=no --ltl2nba=spin:ltl2tgba@-Ds --output-format=hoa 'lcr-i0-iJErXM' 'lcr-o0-P5qANL'
Performing sanity checks and gathering statistics...
info: getting rid of any Fin acceptance...
info: P0 (58852 st., 1883264 ed., 18 sets) -> (250133 st., 5917479 ed., 1 sets)
info: Comp(P0) (58852 st., 1883264 ed., 18 sets) -> (3896448 st., 103304326 ed., 18 sets)
info: Comp(N0) (11 st., 352 ed., 1 sets) -> (16 st., 610 ed., 1 sets)
info: check_empty P0*N0
^C
```

## [Issue #183: improve detection and handling of weak automata in remove_fin()](https://gitlab.lrde.epita.fr/spot/spot/-/issues/183) (Alexandre Duret-Lutz, 2017-11-15)

`remove_fin()` currently has just
```c++
// FIXME: we should check whether the automaton is weak.
if (aut->prop_inherently_weak().is_true() && is_deterministic(aut))
return remove_fin_det_weak(aut);
```

## [Issue #127: add an intersect() method to twa_graph](https://gitlab.lrde.epita.fr/spot/spot/-/issues/127) (Alexandre Duret-Lutz, 2018-06-28)

The use of
```
bool res = product(l->translation, g->translation)->is_empty();
```
in `language_containment_checker::incompatible_(l,g)` is bad for two reasons:
1. `product()` builds the entire product, even in the cases where the product is non-empty and `is_empty()` does not need to explore it fully.
2. `is_empty()` calls an emptiness check that uses the "on-the-fly" interface, therefore allocating states and iterators all over the place.
The product could be done on-the-fly with `otf_product()`. But `otf_product()` would also use the on-the-fly interface of the two input automata, so allocating memory. In the case where `is_empty()` has to explore the entire product, this would be a net loss.
So the proposal is to implement a function
`bool intersects(const twa_graph_ptr& left, const twa_graph_ptr& right);`
that would do an emptiness check over the product of left and right, computed on-the-fly. Because the two automata are `twa_graph_ptr`, the emptiness check will be easier to implement efficiently, because it can refer to states by their numbers. Then we would also add a method `bool twa_graph::intersect(const twa_graph_ptr& right)`.
This should be useful in other contexts. For instance `ltlcross` and `autfilt --intersect` would benefit from that as well (although in the case of `ltlcross`, a counterexample would be better than `bool`).
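The early-exit idea can be illustrated over explicit graphs (plain Python sketch; it is simplified to state-based acceptance on both sides, whereas the real `intersects()` must run a full accepting-cycle emptiness check on the on-the-fly product):

```python
from collections import deque

def intersects(succ_a, acc_a, init_a, succ_b, acc_b, init_b):
    """Explore the synchronized product on the fly, stopping early.

    succ_* map a state to a list of (label, target) edges; acc_* are
    state-acceptance predicates (a simplification of twa acceptance).
    """
    seen = {(init_a, init_b)}
    todo = deque(seen)
    while todo:
        a, b = todo.popleft()
        if acc_a(a) and acc_b(b):
            return True  # found early; the full product was never built
        for la, ta in succ_a(a):
            for lb, tb in succ_b(b):
                # Only synchronize edges whose labels agree.
                if la == lb and (ta, tb) not in seen:
                    seen.add((ta, tb))
                    todo.append((ta, tb))
    return False
```

Because both automata are explicit graphs, product states are just pairs of integers, which is what makes the efficient implementation possible.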
For benchmarking, `reduc.test` seems to be a nice candidate. This test currently takes several minutes, and the checks for LTL equivalence spend approximately 20% of the time building products, and 13% testing those products for emptiness.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/182
implement simulation-based inclusion checks (2017-11-15, Alexandre Duret-Lutz)

- compute the signatures and classes of the states of both automata,
- if the class of the initial state of A is implied by the class of the initial state of B, then L(B) is included in L(A).

That is only a sufficient condition, but it may help us avoid computing the complement of A to decide B ⊂ A.
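A toy illustration of such a sufficient check, using a direct-simulation relation as a stand-in for Spot's signature/class machinery (the dict encoding and all names are hypothetical): if the initial state of B is directly simulated by the initial state of A, then L(B) ⊆ L(A); a negative answer is inconclusive.

```python
def direct_simulation(transB, accB, transA, accA, letters):
    """Greatest direct simulation: pairs (s, t) where state t of A
    simulates state s of B, computed as a shrinking fixpoint."""
    statesB = set(transB) | {q for m in transB.values() for qs in m.values() for q in qs}
    statesA = set(transA) | {q for m in transA.values() for qs in m.values() for q in qs}
    # start from all acceptance-compatible pairs, then remove violations
    rel = {(s, t) for s in statesB for t in statesA
           if (s not in accB) or (t in accA)}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            for a in letters:
                for s2 in transB.get(s, {}).get(a, ()):
                    if not any((s2, t2) in rel
                               for t2 in transA.get(t, {}).get(a, ())):
                        rel.discard((s, t))
                        changed = True
                        break
                else:
                    continue
                break
    return rel

def maybe_included(initB, transB, accB, initA, transA, accA, letters):
    # Sufficient only: True means L(B) is included in L(A); False is inconclusive.
    return (initB, initA) in direct_simulation(transB, accB, transA, accA, letters)
```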
https://gitlab.lrde.epita.fr/spot/spot/-/issues/124
Python API documentation (2017-11-15, Alexandre Duret-Lutz)

It would be nice to have a complete documentation of the Python API.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/180
missed simplification (2017-11-15, Alexandre Duret-Lutz)

```
ltl2tgba 'G(!p0 | ((!p1 | X((!p2 U (p3 & Fp4)) | (p2 R !p3))) W p2))'
```
Gives a 10-state TGBA:
![tgba](/uploads/ea69267bd849dd97fdab8b20e8cd9506/tgba.png)
```
ltl2tgba -B 'G(!p0 | ((!p1 | X((!p2 U (p3 & Fp4)) | (p2 R !p3))) W p2))'
```
Gives a 9-state BA:
![ba](/uploads/70f6dd19c1d890b7d2513dba59a1d5fd/ba.png)
Looks like a case where simulation is more efficient on BA than on TGBA.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/121
minimize_wdba() should have improved detection of terminal automata in its output (2017-11-15, Alexandre Duret-Lutz)

But this should not be done at the cost of computing another SCC decomposition. Can it be done inside `minimize_dfa()`?

https://gitlab.lrde.epita.fr/spot/spot/-/issues/120
minimize_wdba() should be specialized for terminal automata (2017-11-15, Alexandre Duret-Lutz)

In particular, the call to `wdba_scc_is_accepting()` seems overkill.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/111
Automatic build for various distributions (2017-11-15, Alexandre Duret-Lutz)

Register on the https://build.opensuse.org/ service, and use it to build Spot for many distributions.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/102
named temporal operators (2017-11-15, Alexandre Duret-Lutz)

Spin's LTL syntax allows `always`, `equivalent`, `eventually`, `until`, `next` on the command line (i.e., `spin -f`). The syntax for LTL formulas specified inside a Promela model additionally accepts `stronguntil`, `weakuntil`, `release`.
PSL has `always`, `never`, `next!` and `next`, `eventually!`, `until!`, `until`, `until!_`, `until_`, `before!`, `before`, `before!_`, `before_`, where the `!` denotes the strong variant.
It would be nice to support these operators as well. Unfortunately, there is one semantics incompatibility: `until` means `U` for Spin, and `W` for PSL.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/75
An example of BA translation that is better than its original TGBA (2020-12-14, Alexandre Duret-Lutz)

In this example a TGBA with 2 acceptance sets is degeneralized into a BA with one transition less (a self-loop is removed).
```
% ltl2tgba -H 'GF(!a & X!a & Fa)'
HOA: v1
name: "GF(!a & X!a & Fa)"
States: 3
Start: 0
AP: 1 "a"
acc-name: generalized-Buchi 2
Acceptance: 2 Inf(0)&Inf(1)
properties: trans-labels explicit-labels trans-acc
--BODY--
State: 0
[t] 0 {0}
[!0] 1 {1}
State: 1
[!0] 1 {1}
[!0] 2
State: 2
[0] 0 {0}
--END--
```
```
% ltl2tgba -BH 'GF(!a & X!a &Fa)'
HOA: v1
name: "GF(!a & X!a & Fa)"
States: 3
Start: 2
AP: 1 "a"
acc-name: Buchi
Acceptance: 1 Inf(0)
properties: trans-labels explicit-labels state-acc
--BODY--
State: 0
[!0] 1
State: 1 {0}
[0] 2
State: 2
[!0] 0
[t] 2
--END--
```
If we disable the simulation on the BA, we get a BA with the same structure as the original TGBA.
```
% ltl2tgba -x ba-simul=0 -BH 'GF(!a & X!a &Fa)'
HOA: v1
name: "GF(!a & X!a & Fa)"
States: 3
Start: 0
AP: 1 "a"
acc-name: Buchi
Acceptance: 1 Inf(0)
properties: trans-labels explicit-labels state-acc
--BODY--
State: 0
[t] 0
[!0] 1
State: 1
[!0] 1
[!0] 2
State: 2 {0}
[0] 0
--END--
```
It would be nice to have a similar reduction on the TGBA.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/74
'Step by step' mode for algorithms in Python (2017-11-15, Etienne Renault)

If Spot has educational objectives it could be interesting to
have a step by step mode in order to run algorithms such as degeneralisation,
emptiness check, ...
How can we do this without modifying the C++ interface?
https://gitlab.lrde.epita.fr/spot/spot/-/issues/58
are_isomorphic and canonicalize should also consider reorderings of acceptance sets (2017-11-15, Alexandre Duret-Lutz)

For instance
```
HOA: v1
States: 2
Start: 0
Acceptance: 2 Inf(0)&Inf(1)
--BODY--
State: 0 {0} [t] 1
State: 1 {1} [t] 0
--END--
```
should be considered as isomorphic to
```
HOA: v1
States: 2
Start: 0
Acceptance: 2 Inf(0)&Inf(1)
--BODY--
State: 0 {1} [t] 1
State: 1 {0} [t] 0
--END--
```
since we just flipped the acceptance set numbers.
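A brute-force sketch of the missing check, assuming the state numbering has already been canonicalized. The encoding is invented for illustration (marks are placed on edges here, while the example above carries them on states): try every permutation of the acceptance-set numbers and compare the relabeled edge sets.

```python
from itertools import permutations

def isomorphic_up_to_acc_permutation(edges1, edges2, n_acc):
    """edges: a set of (src, dst, marks) triples, where marks is a
    frozenset of acceptance-set numbers.  Returns True if some
    permutation of the acceptance-set numbers maps edges1 onto edges2."""
    for perm in permutations(range(n_acc)):
        remapped = {(s, d, frozenset(perm[m] for m in marks))
                    for (s, d, marks) in edges1}
        if remapped == edges2:
            return True
    return False
```

This is exponential in the number of acceptance sets, so a real implementation would want to prune using invariants (e.g., per-set edge counts) before enumerating permutations.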
https://gitlab.lrde.epita.fr/spot/spot/-/issues/54
hoa: use acc-name to validate Acceptance (2017-11-15, Alexandre Duret-Lutz)

With version 1.1.1, `ltl3ba -H2 -f Fa` outputs
```
acc-name: generalized-Buchi 1
Acceptance: 1 t
```
instead of
```
acc-name: generalized-Buchi 1
Acceptance: 1 Inf(0)
```
Since tonight our parser ignores useless acceptance sets (see [this HOA issue](https://github.com/adl/hoaf/issues/36)), the first example is silently converted to
```
acc-name: all
Acceptance: 0 t
```
To catch the error in the first example, we should probably look at `acc-name:` and match the values we recognize against the `Acceptance:` line.
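A sketch of what such a validation could look like. The handled names and helper functions are illustrative, not Spot's internals; only a couple of `acc-name:` values are covered, and unknown names are accepted unchecked, as the HOA format requires.

```python
import re

def expected_acceptance(acc_name):
    """Return the expected `Acceptance:` payload for the few
    `acc-name:` values handled here, or None if the name is unknown."""
    m = re.fullmatch(r'generalized-Buchi (\d+)', acc_name)
    if m:
        n = int(m.group(1))
        if n == 0:
            return '0 t'
        return f'{n} ' + '&'.join(f'Inf({i})' for i in range(n))
    if acc_name == 'Buchi':
        return '1 Inf(0)'
    if acc_name == 'all':
        return '0 t'
    return None

def acc_name_matches(acc_name, acceptance):
    expected = expected_acceptance(acc_name)
    return expected is None or expected == acceptance
```

With this check, the faulty `ltl3ba` output above (`generalized-Buchi 1` with `Acceptance: 1 t`) would be rejected instead of silently rewritten.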
https://gitlab.lrde.epita.fr/spot/spot/-/issues/51
fusion-star support (2017-11-15, Alexandre Duret-Lutz)

This keeps track of the support for the `[:*i..j]` operator. This is currently on branch `adl/si-psl`.
* [x] documentation of semantics, parsing, pretty-printing, trivial simplifications, translation
* [x] update `ltl2tgba.html` to mention the operator
* [ ] non-trivial simplifications
* [x] `is_syntactic_stutter_invariant()` following Def.2 of [Dax et al. (ATVA'09)][]
* [x] generalize `remove_x()` following function κ and τ in [Dax et al. (ATVA'09)][] (but this seems bogus, see comment below)
* [ ] investigate why we need a bidirectional check in our implementation of `is_stutter_invariant()` using Etessami's rewriting, while [Dax et al. (ATVA'09)] claim that their τ function (based on the paper of Peled & Wilke) is an overapproximation, and would therefore need only one inclusion check. Is it the case that Etessami's function is also an overapproximation and we are doing too much work, or is it the case that Etessami's is not an overapproximation?
* [ ] generalize the syntactic version of `is_stutter_invariant()` to use `stutterize_psl()`
[Dax et al. (ATVA'09)]: http://www.daxc.de/eth/paper/09atva.pdf
Assignee: Alexandre Duret-Lutz

https://gitlab.lrde.epita.fr/spot/spot/-/issues/38
improve failure reporting of ltlcross (2017-11-15, Alexandre Duret-Lutz)

Improve failure reporting of `ltlcross`. Currently it only reports
counterexamples for errors in cross products. We should improve
diagnostics in the other checks.
Also we could have heuristics to highlight the most pertinent
error to look at (e.g., the smallest automaton causing a problem),
and to hint at probable erroneous translators (e.g., if the
$P_i\otimes N_i$ is non-empty, or if $P_i\otimes N_j$ is non-empty
for one $i$ and multiple $j$).
https://gitlab.lrde.epita.fr/spot/spot/-/issues/37
implement a feed-back arc set heuristic (2017-11-15, Alexandre Duret-Lutz)

This is needed to improve conversion from DRA to NBA, as well as
complementation of DBA.
Alexandre L has a branch for this, but it is out-of-date w.r.t. the current code base, and as far as I know the FAS hasn't been integrated into both algorithms.
Assignee: Alexandre Lewkowicz

https://gitlab.lrde.epita.fr/spot/spot/-/issues/29
PSL and persistent formulas (2017-11-15, Alexandre Duret-Lutz)

When translating syntactically persistent formulas, we do not have to use more than one acceptance condition, and the marking of operators should not be required.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/28
Finalize PSL support (2017-11-15, Alexandre Duret-Lutz)
- [ ] Finish integrating PSL rewriting rules from Cimatti et al.
- [ ] Rewrite the SERE-FA translator to use transition-based FA.
- [ ] Look into syntactic sugar like `between`.
- [ ] Add eventual/universal rules. For instance when translating `{(a;b)*;c*;e}[]->Gf`, we only have to translate the equivalent of `(a&X(b&Gf))|(c->Gf)|(e->Gf)`. This could be done by aborting the translation of `{e}[]->Gf` as soon as `Gf` is emitted (ignoring any other prefixes of `e`).
https://gitlab.lrde.epita.fr/spot/spot/-/issues/10
improve syntactic characterization to take simplification rules into account (2017-11-15, Alexandre Duret-Lutz)

`!((G(X(p0))) & (F(G(!(p1)))))` is a syntactic recurrence, but it simplifies to `XF(!p0 | GFp1)` which is not detected as such.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/27
improve remove-x (2017-11-15, Alexandre Duret-Lutz)

Implement Tian & Duan, "A note on stutter-invariant PLTL", so that `ltlfilt --remove-x` gives smaller formulas than it currently does.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/2
find a way to reduce {b*;r} = (b W r) when r is satisfiable (2017-11-15, Alexandre Duret-Lutz)

Commit 11568666 removed the following rules
```
{b*;r} = b W r
!{b*;r} = !b M !{r}
```
because they are only true when `r` is satisfiable, but the code did not check that.
Currently the only way we have to ensure that `r` is satisfiable is to translate it into an automaton. Doing so seems unlikely to be efficient.
But there are many formulas that we can tell are satisfiable without any translation: for instance any SERE that does not involve `&`, `&&`, or `false` is satisfiable. So maybe each formula could have a Boolean property that tells when it is trivially satisfiable.
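The proposed syntactic check is a simple walk over the formula tree. A sketch, with a hypothetical tuple encoding of SEREs (operator name first, operands after; atoms are plain strings); a `False` answer only means "not trivially satisfiable", not "unsatisfiable".

```python
def trivially_satisfiable(f):
    """Sufficient syntactic check: a SERE that contains no `&`, no `&&`,
    and no `false` is satisfiable.  Formulas are nested tuples such as
    ('concat', 'b', ('star', 'b')); atomic propositions are strings."""
    if isinstance(f, str):
        return f != 'false'
    op, *args = f
    if op in ('and', 'and_nlm'):      # `&&` and `&` (names are illustrative)
        return False
    return all(trivially_satisfiable(g) for g in args)
```

This would let the simplifier apply `{b*;r} = b W r` whenever `trivially_satisfiable(r)` holds, and fall back to the current behaviour otherwise.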
https://gitlab.lrde.epita.fr/spot/spot/-/issues/26
improve speed of wdba-minimization (2017-11-15, Alexandre Duret-Lutz)

Here is a formula (reported by Étienne Renault) where
WDBA-minimization takes a lot of time, yet it is useless!
```
% f='(Fa U (Fb xor ((c xor Fd) W e))) U Gf'
% ltl2tgba -x wdba-minimize=0 "$f" --stats '%s states, %r seconds'
81 states, 0.377423 seconds
% ltl2tgba -x wdba-minimize=1 "$f" --stats '%s states, %r seconds'
81 states, 41.5798 seconds
```
https://gitlab.lrde.epita.fr/spot/spot/-/issues/24
ltlfilt --ltl=subclass (2017-11-15, Alexandre Duret-Lutz)

I'd like to filter formulas that belong to some specific subclasses.
For instance `subclass` could be `-X` (no X), `-GU` (no G nor U), `GF` (only G and F), `GXFX` (only GX and FX).
https://gitlab.lrde.epita.fr/spot/spot/-/issues/18
rewrite the web page ltl2tgba.html so its interface is similar to that of the command-line ltl2tgba (2017-11-15, Alexandre Duret-Lutz)

I wish we would also have `ltlfilt.html`, `autfilt.html`, etc.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/13
better compilation options (2017-11-15, Alexandre Duret-Lutz)

`-mtune=native` and `-march=native` will produce code optimized for the host architecture as detected using the cpuid instruction.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/12
even more LTL simplification for GF formulas (2017-11-15, Alexandre Duret-Lutz)

We could implement those rules:
```
GF(a & Fb) = G(Fa & Fb)
GF(a & XXXXXFb) = G(Fa & Fb)
GF(a & eventual) = G(Fa & eventual)
```
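A sketch of these rules on a hypothetical tuple-based AST (operator first, operands after; atoms are strings). The third rule subsumes the first two: whenever the right conjunct is purely eventual (`F g` under any number of `X`'s), the leading `X`'s can be dropped, since `XFg` and `Fg` are equivalent under the outer `G`.

```python
def is_eventual(f):
    """Purely eventual formulas handled here: F g, or X...X F g."""
    if isinstance(f, str):
        return False
    if f[0] == 'F':
        return True
    if f[0] == 'X':
        return is_eventual(f[1])
    return False

def rewrite_gf(f):
    """Apply GF(a & e) = G(Fa & e') when e is eventual, where e'
    strips the leading X's (e.g. GF(a & XXFb) = G(Fa & Fb))."""
    if (isinstance(f, tuple) and f[0] == 'G'
            and isinstance(f[1], tuple) and f[1][0] == 'F'
            and isinstance(f[1][1], tuple) and f[1][1][0] == 'and'):
        a, b = f[1][1][1], f[1][1][2]
        if is_eventual(b):
            while b[0] == 'X':
                b = b[1]
            return ('G', ('and', ('F', a), b))
    return f
```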
https://gitlab.lrde.epita.fr/spot/spot/-/issues/308
accessibility properties (2017-12-06, Alexandre Duret-Lutz)

We just discussed with @max and @aga the fact that some algorithms call `scc_filter()` as a preprocessing step, and some don't. For instance currently `tgba_determinize()` calls `scc_filter()` only if simulation-based reductions are activated, and it is unclear why this is done only in this case.
It might be interesting to add a property recording whether an automaton has been trimmed, in order to skip duplicate calls to such a function. I would suggest using two properties: `accessible` (all states are reachable from the initial state) and `coaccessible` (all states can reach some accepting cycle). `accessible` would for instance be set by `purge_unreachable_states()`, and both properties would be set by `scc_filter_states()` and similar functions. Note that `is_empty()` can be simplified when run on automata that are both `accessible` and `coaccessible`, but I figure this is unlikely to be very useful in practice.
One issue is that these proposed properties are not always enough to decide if `scc_filter()` can be skipped, because `scc_filter()` does more than just removing useless states: it also simplifies the acceptance sets in each SCC.
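The two properties can be sketched on a toy graph encoding (all names are hypothetical, not Spot's API): `accessible` is plain forward reachability from the initial state, and `coaccessible` is backward reachability from accepting states that lie on a cycle.

```python
def trim(init, edges, acc):
    """Keep only states that are accessible (reachable from init) and
    coaccessible (able to reach an accepting state lying on a cycle).
    edges: iterable of (src, dst) pairs; acc: set of accepting states."""
    succ, pred = {}, {}
    for s, d in edges:
        succ.setdefault(s, set()).add(d)
        pred.setdefault(d, set()).add(s)

    def closure(starts, edge_map):
        seen, todo = set(starts), list(starts)
        while todo:
            s = todo.pop()
            for d in edge_map.get(s, ()):
                if d not in seen:
                    seen.add(d)
                    todo.append(d)
        return seen

    accessible = closure({init}, succ)
    # accepting states on a cycle: reachable from one of their own successors
    on_cycle = {a for a in acc if a in closure(succ.get(a, set()), succ)}
    coaccessible = closure(on_cycle, pred)
    return accessible & coaccessible
```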
https://gitlab.lrde.epita.fr/spot/spot/-/issues/310
complementation of semi-deterministic automata (2020-07-30, Alexandre Duret-Lutz)

There are many places where we need to complement automata, for instance to prove equivalence. Currently we just make a distinction between deterministic and non-deterministic (the latter are first determinized), but it would make sense to implement simpler (or at least more efficient) complementations for subclasses of non-deterministic automata.
@xblahoud suggests we implement a [complementation of semi-deterministic Büchi automata](http://www.fi.muni.cz/~xstrejc/publications/tacas2016coSDBA_preprint.pdf).
We already have an `is_semi_deterministic()` test.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/322
evaluation of formula on word (2018-03-24, Alexandre Duret-Lutz)

It would be useful to have a method or function such as
```
bool match_word(formula f, twa_word_ptr w);
```
that tells if a formula `f` would be satisfied by word `w`.
This could be used by `ltlcross` to decide which side of a failing intersection test is incorrect. It could also be added as an option to `ltlfilt`, similar to the `--accept-word` and `--reject-word` options we already have in `autfilt`. Finally it can be useful in some translation algorithms (e.g., Müller & Sickert, GandALF'17).
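A sketch of `match_word()` for a small fragment (atoms, `not`, `and`, `or`, `X`, `F`, `G`); the encoding is invented for illustration, with words given as prefix + cycle and letters as sets of true atomic propositions. It exploits the fact that an ultimately periodic word has only `len(prefix) + len(cycle)` distinct suffixes, so every temporal operator can be evaluated over that finite set of positions.

```python
def match_word(f, prefix, cycle):
    """Evaluate an LTL formula (fragment: atoms, 'not', 'and', 'or',
    'X', 'F', 'G') on the ultimately periodic word prefix . cycle^w."""
    p, c = len(prefix), len(cycle)
    word = prefix + cycle

    def nxt(i):                       # successor position, wrapping in the cycle
        return i + 1 if i + 1 < p + c else p

    def reach(i):                     # positions of all suffixes visible from i
        return range(i, p + c) if i < p else range(p, p + c)

    def ev(f, i):
        if isinstance(f, str):
            return f in word[i]
        op = f[0]
        if op == 'not':
            return not ev(f[1], i)
        if op == 'and':
            return ev(f[1], i) and ev(f[2], i)
        if op == 'or':
            return ev(f[1], i) or ev(f[2], i)
        if op == 'X':
            return ev(f[1], nxt(i))
        if op == 'F':
            return any(ev(f[1], j) for j in reach(i))
        if op == 'G':
            return all(ev(f[1], j) for j in reach(i))
        raise ValueError(op)

    return ev(f, 0)
```

A full version would also need `U`, `R`, and the PSL operators, and would likely memoize `(subformula, position)` pairs.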
Assignee: Maxime Mehio

https://gitlab.lrde.epita.fr/spot/spot/-/issues/344
better handling of silly histories in sat_minimize() (2018-04-20, Alexandre Duret-Lutz)

Currently if the acceptance is `Fin(0)&Inf(1)` the SAT-minimization code detects that any history containing `0` is rejecting and cannot be augmented into an accepting history. Hence it removes `1` from any history where `0` is present.
The code generalizes this example from the DNF of the acceptance. However we should at least also implement the dual version and simplify histories that are accepting and cannot be augmented into a rejecting history. For instance if the acceptance is `Fin(0)|Inf(1)`, we do not need to distinguish between history `{0,1}` and history `{1}`.
I suspect that doing both might help the case of parity acceptance, where the only histories that matter have at most 1 element (the min or max seen so far).
https://gitlab.lrde.epita.fr/spot/spot/-/issues/364
partition an automaton into safety and liveness (2018-11-05, Alexandre Duret-Lutz)

https://www.cs.cornell.edu/fbs/publications/RecSafeLive.pdf
The safety closure is what we use to build monitors, I think. We need the part building the liveness extension.
We should also provide a liveness detector, and add a safety/liveness classification in the on-line application.
See also ftp://www-cs.stanford.edu/cs/theory/amir/hierarchy.ps for some discussion of "liveness" vs. "uniform liveness". Maybe we could detect that as well.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/365
a curious case of smaller "unambiguous" automaton (2018-10-01, Alexandre Duret-Lutz)

This was discovered by playing with the online translator's random formulas.
The formula is `p1 <-> !((X(p0 <-> Xp0) xor G(p1 & p2)) W Fp0)`, and the unambiguous version of the automaton is smaller.
![unambig](/uploads/14cc0d13048271bc0c9862cb8f68d8af/unambig.png)
It would be better to generate that unambiguous automaton right away.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/369
add acc_cond::is_hyper_rabin() and acc_cond::is_hyper_streett() functions (2018-11-07, Alexandre Duret-Lutz)

Hyper-Rabin is a disjunction of Streett (as used by Emerson & Lei), and Hyper-Streett is a conjunction of Rabin. These two names were introduced by http://www.faculty.idc.ac.il/udiboker/files/AutomataTypes.pdf

https://gitlab.lrde.epita.fr/spot/spot/-/issues/370
Enumerate all possible SCCs/automata of size N (2018-11-13, Etienne Renault)

A tool enumerating all SCC structures of size N would be useful in many situations (for instance, to run an algorithm on all small instances and ensure that no case is missed).
Such a tool may also produce all automata structures of size N.
Such a tool may later be extended to generate all automata according to a given acceptance mark.
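A brute-force sketch of such an enumeration (names invented for illustration). It is only practical for very small N, since there are 2^(N²) digraphs on N states, but that is often enough for exhaustive testing of small cases.

```python
from itertools import combinations, product

def strongly_connected(n, edges):
    """True iff the digraph on states 0..n-1 with the given edge set
    is strongly connected (every state reaches every state)."""
    succ = {i: set() for i in range(n)}
    for s, d in edges:
        succ[s].add(d)

    def reach(i):
        seen, todo = {i}, [i]
        while todo:
            s = todo.pop()
            for d in succ[s]:
                if d not in seen:
                    seen.add(d)
                    todo.append(d)
        return seen

    return all(len(reach(i)) == n for i in range(n))

def all_sccs(n):
    """Yield every edge set over n states that forms a single SCC."""
    all_edges = list(product(range(n), repeat=2))
    for k in range(len(all_edges) + 1):
        for edges in combinations(all_edges, k):
            if strongly_connected(n, edges):
                yield edges
```

For example, for N = 2 this yields the four digraphs that contain both cross edges 0→1 and 1→0, with or without each self-loop.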
https://gitlab.lrde.epita.fr/spot/spot/-/issues/385
ltl2tgba -G cases that can be improved (2020-07-22, Alexandre Duret-Lutz)

On Sun, Apr 21, 2019 at 9:46 PM Juraj Major <major@fi.muni.cz> wrote:
>
> Hi Alexandre,
>
> Jan asked me to send you some formulae from LTL(F,G,X) where Delag
> produces smaller automata than ltl2tgba. I use Spot 2.7.3 and call
> `ltl2tgba -G 'formula'`.
[...]
> GFX((e & XXXa) -> (e & X!d)) (ltl2tgba 2 states vs. Delag 1 state)
> G(Gc | Fa | F!G(c | Ge)) (6 vs. 3)
These look like cases where Spot is simply not applying rewritings
like GF(a | b) = GF(a)|GF(b). Those rules would make sense for -G,
but they usually don't make sense when building other types of
automata, so I haven't worked on them yet. It's unfortunate that the
rewriting rules in Spot are not very modular.
> Ge | XGF(Ge & X(c & Fd)) (5 vs. 2)
> G!GXXe -> GF(b & c & Gc) - this one is interesting, because ltl2tgba
> -DG produces a smaller automaton (2 states) than ltl2tgba -G (3 states).
> However, Delag still wins with 1 state
These are cases where Spot is missing the rule GF(a & Gb) = GFa & FGb.
Because GF(b & c & Gc) is not a GF(guarantee), it's determinized via
Safra if -D is given, and this is not tried without -D. The
non-deterministic automaton output by -G for GF(b & c & Gc) is a
2-state incomplete automaton, and the deterministic automaton from
-G -D is a 2-state complete automaton, so the incompleteness is what makes the
difference during the product, as you need complete automata to make a
disjunction.
> G!(Gd & (c | Fb)) (4 vs. 2)
Running delag with "-f ltl2tgba" will output the same automaton as
ltl2tgba, so this is a case where the fallback translator used by
delag (I'm not sure what that is) does a better job. In their paper,
they used Spot as the fallback, I think, but it seems they are now using
something else.
The tool ltl2da, distributed with Owl, also produces a 2-state (but
nicer IMHO) TELA for this.
> XF(F(Gc & Xb) -> a) (6 vs. 3)
Again we are missing whatever rule needed to reduce this to:
GF(!c) | FG(!b) | XF(a)
(in which case it's 3 states if deterministic, or 2 if non-deterministic)
> From the genltl's patterns, there's a --sejk-f pattern where Delag
> performs better than Spot but those are not LTL(F,G,X) formulae.
I'm not sure how that one works. Looks like some kind of suspension
would be useful.
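The missing GF-distribution rules could be prototyped outside Spot first. A minimal sketch of GF(a | b) → GF(a) | GF(b), using nested tuples as an ad-hoc formula AST (this representation and the function name are invented here, not Spot's API):

```python
# Toy LTL AST: ('G', x), ('F', x), ('|', x, y), or a plain string for
# an atomic proposition.  Only illustrates GF(a | b) = GF(a) | GF(b).

def rewrite_gf(f):
    """Recursively distribute GF over disjunctions."""
    if isinstance(f, str):
        return f
    if (f[0] == 'G' and isinstance(f[1], tuple) and f[1][0] == 'F'
            and isinstance(f[1][1], tuple) and f[1][1][0] == '|'):
        # GF(x | y)  -->  GF(x) | GF(y), then recurse into both halves
        _, x, y = f[1][1]
        return ('|',
                rewrite_gf(('G', ('F', x))),
                rewrite_gf(('G', ('F', y))))
    # Otherwise rewrite the subformulas in place.
    return (f[0],) + tuple(rewrite_gf(g) for g in f[1:])
```

In Spot itself such a rule would presumably have to be guarded so it only fires when building generalized (-G) acceptance, which is exactly the modularity problem mentioned above.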
So it seems a large part of these issues would be solved by LTL rewritings.
https://gitlab.lrde.epita.fr/spot/spot/-/issues/386
postprocessing twice may improve things (2019-07-31, Alexandre Duret-Lutz)

This example was sent by Fanda.
```sh
% cat p.hoa
HOA: v1 States: 6 Start: 0 AP: 3 "a" "b" "c" acc-name: Buchi
Acceptance: 1 Inf(0) properties: trans-labels explicit-labels trans-acc
semi-deterministic --BODY-- State: 0 [t] 0 [!0&1&2] 1 [0&2 | !1] 2
[0&!1] 3 State: 1 [!0&1&2] 1 [0&2 | !1] 2 [0&!1] 3 State: 2 [!0&1&2]
1 [0&2 | !1] 2 [0&!1] 3 State: 3 [!1] 3 [!1] 4 State: 4 [0&!1] 4 {0}
[!0&!1] 5 {0} State: 5 [!0&!1] 5 [0&!1] 5 {0} --END--
% autfilt --stats=%s p.hoa
6
% autfilt --small --stats=%s p.hoa
4
% autfilt --small p.hoa | autfilt --small --stats=%s
3
```
Here are the three automata:
![p](/uploads/96b378748b2648f8fb492dd4caf05007/p.png)![pp](/uploads/63a59faefe4cdab366a76da3e64622d3/pp.png)![ppp](/uploads/d3a7fc0c31a9cb12f4082f9b256a336f/ppp.png)
It would be nice if the first postprocessing already gave the smallest result.
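Until the root cause is found, a cheap workaround is to iterate the postprocessor until a fixpoint. A hedged sketch of such a driver (generic Python, not Spot's API; `simplify` would wrap whatever `autfilt --small` or `spot.postprocess` does):

```python
def iterate_postprocess(aut, simplify, size, max_rounds=10):
    """Re-run `simplify` until the automaton stops shrinking
    (with `max_rounds` as a safety net against oscillation)."""
    for _ in range(max_rounds):
        smaller = simplify(aut)
        if size(smaller) >= size(aut):
            break                 # fixpoint reached, keep current result
        aut = smaller
    return aut
```

On the example above this would reproduce the 6 → 4 → 3 sequence and stop once the size no longer shrinks, at the cost of one extra (wasted) postprocessing pass.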
https://gitlab.lrde.epita.fr/spot/spot/-/issues/394
use graphviz layers for interactive display (2019-11-28, Alexandre Duret-Lutz)

Using [the `layer` and `layerselect` attributes of GraphViz](https://www.graphviz.org/faq/#FaqOverlays) it should be possible to write an interactive widget with a slider showing the different layers of an automaton. This could be used to replay the progression of an algorithm interactively while preserving the global layout, for instance.
I believe this could be implemented by adding some named properties "layer-edges" and "layer-states", which would be vectors of layer numbers, and letting `print_dot()` know about them. We would then also need some Python functions to set those layer attributes.
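A hedged sketch of the kind of output `print_dot()` might produce, assuming per-state layer numbers (the whole interface below is invented; only the GraphViz `layers`/`layer` attributes come from the FAQ linked above):

```python
def dot_with_layers(states, edges, state_layer, n_layers):
    """Emit a GraphViz digraph where each state and edge carries a
    `layer` attribute, so a layered viewer can reveal them step by step.

    `states` is a list of state names, `edges` a list of (src, dst)
    pairs, and `state_layer` maps a state to the first layer where it
    should become visible.
    """
    last = n_layers - 1
    names = ':'.join(f'l{i}' for i in range(n_layers))
    lines = ['digraph aut {', f'  layers="{names}";']
    for s in states:
        k = state_layer[s]
        # Visible from its first layer up to the last one.
        lines.append(f'  "{s}" [layer="l{k}:l{last}"];')
    for src, dst in edges:
        # An edge appears once both endpoints are visible.
        k = max(state_layer[src], state_layer[dst])
        lines.append(f'  "{src}" -> "{dst}" [layer="l{k}:l{last}"];')
    lines.append('}')
    return '\n'.join(lines)
```

An interactive widget would then only have to re-render the same layout with a different `layerselect`, which is what keeps the global layout stable across steps.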
https://gitlab.lrde.epita.fr/spot/spot/-/issues/397
a case when -G -D produces a parity automaton more than twice smaller than -P -D (2020-01-16, Alexandre Duret-Lutz)

The following formula comes from https://www.ruediger-ehlers.de/blog/deterministicparity.html
```sh
% f='!g & (GFr -> GFg) & FGinit & G(XXg -> (init & Xinit & XXinit)) & !Xg & G(!g | X!g) & G((init & !Xinit) -> (!r & !Xr))'
```
With Spot 2.8.5.dev
```sh
% ltl2tgba -G -D -f "$f" --stats="%s states, %[o]g"
5 states, parity min odd 3
% ltl2tgba -P -D -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
It's related to LTL-splitting:
```sh
% ltl2tgba -G -D -x ltl-split=0 -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
Once we remove the obligation terms of the formula, we can reduce the case to this:
```sh
% ltl2tgba -P -D "(GFr -> GFg) & FGinit" --stats=%s
4
% ltl2tgba -G -D "(GFr -> GFg) & FGinit" --stats=%s
1
% ltl2tgba -G -D -x ltl-split=0 "(GFr -> GFg) & FGinit" --stats=%s
4
```
It seems it's just "luck" that the construction of the formula follows a "parity pattern" and leads to a parity acceptance with `-G`. Can we detect those cases?
```sh
% f='!g & (GFr -> GFg) & FGinit & G(XXg -> (init & Xinit & XXinit)) & !Xg & G(!g | X!g) & G((init & !Xinit) -> (!r & !Xr))'
```
With Spot 2.8.5.dev
```sh
% ltl2tgba -G -D -f "$f" --stats="%s states, %[o]g"
5 states, parity min odd 3
% ltl2tgba -P -D -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
It's related to LTL-splitting:
```sh
% ltl2tgba -G -D -x ltl-split=0 -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
Once we remove the obligation terms of the formula, we can reduce the case to this:
```sh
% ltl2tgba -P -D "(GFr -> GFg) & FGinit" --stats=%s
4
% ltl2tgba -G -D "(GFr -> GFg) & FGinit" --stats=%s
1
% ltl2tgba -G -D -x ltl-split=0 "(GFr -> GFg) & FGinit" --stats=%s
4
```
It's seem it's just "luck" that the construction of the formula follows a "parity pattern" and leads to a parity acceptance with `-G`. Can we detect those cases?https://gitlab.lrde.epita.fr/spot/spot/-/issues/400random conjunctions of patterns2020-03-12T10:56:36+01:00Alexandre Duret-Lutzrandom conjunctions of patternshttps://www.cs.rice.edu/~vardi/papers/time13.pdf suggests building random LTL formulas from conjunctions of known patterns. They use the DAC patterns. We could offer such a service and even generalize it to all the patterns that `genltl` supports.
I'm not sure what the ideal interface would be. Maybe some `ltlcombine` tool that takes a list of formulas as input, some yet-to-be-specified parameters, and outputs combinations of the input formulas.
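As a starting point, the core of such a hypothetical `ltlcombine` (the tool name and interface are invented here) could be as simple as sampling k distinct input formulas and conjoining them:

```python
import random

def combine(formulas, k, seed=None):
    """Return a random conjunction of k distinct input formulas,
    each parenthesized so operator priorities cannot interfere."""
    rng = random.Random(seed)     # seedable, for reproducible benchmarks
    chosen = rng.sample(formulas, k)
    return ' & '.join(f'({f})' for f in chosen)
```

The yet-to-be-specified parameters would then control k, the number of combinations to output, whether to also negate or disjoin some conjuncts, and how atomic propositions are renamed apart between patterns.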
https://gitlab.lrde.epita.fr/spot/spot/-/issues/415
handling unconstrained I/O in ltlsynt (2020-05-29, Alexandre Duret-Lutz)

It's possible to call `ltlsynt` with `--ins` and `--outs` options that refer to APs that are not in the formula, or that disappear from the formula after simplification. Currently those APs are added to the automaton after translation, but it's probably more efficient to just add them to the output once the game has been solved.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/423
for ltlsynt, parity game solving could stop once the winner of the initial state is known (2020-09-10, Alexandre Duret-Lutz)

`ltlsynt` only needs to know whether the initial state is winning for player 0 or 1, so Zielonka's algorithm could possibly be aborted earlier, as soon as the initial state is known to be winning for either player.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/438
support for exclusive APs in ltl2tgba (2020-11-12, Alexandre Duret-Lutz)

It would be nice to have an option to support exclusive APs in `ltl2tgba`. Currently it requires chaining three commands, as in
http://lists.lrde.epita.fr/pipermail/spot/2020q3/000293.html

https://gitlab.lrde.epita.fr/spot/spot/-/issues/440
iterate over bit sets in bitvect or bitset (2020-11-17, Alexandre Duret-Lutz)

It would be nice to be able to iterate efficiently over the bits that are set in a `bitset` or `bitvect`. For the `bitset` case, which is used to implement `mark_t`, we already have such an implementation in `mark_container`, but this feature would be better implemented in `bitset` itself.

https://gitlab.lrde.epita.fr/spot/spot/-/issues/444
improve determinisation of automata matching string-like patterns (2020-12-09, Alexandre Duret-Lutz)

The following NBA for formula `(!(G({(a)} |=> {(b)[*10]})))` is derived from #443.
![nba](/uploads/84f009e5d0bca9e1072129f567ba546f/nba.png)
It can be represented by a DBA with as many states, where the states essentially keep track of the number of `!a & b` seen since the last `a`.
![dba](/uploads/d3508ab2c8242272a744abef017ce983/dba.png)
However the WDBA-minimization code used to obtain the above is suboptimal. Currently it:
1. calls the powerset construction to build a 2048-state automaton without looking at the acceptance condition,
2. fixes the accepting states of the DBA by comparing it with the NBA (using a product to locate the accepting SCCs),
3. minimizes it to get the above 12 states.
Changing the formula from `*10` to something larger will lead to very large intermediate automata. (The original formula used in #443 was with `*32`...)
However in this case most of these steps seem overkill.
- [X] The powerset construction could be taught about the notion of "accepting sink": if a powerstate contains an accepting sink, it can be reduced to this sink. Doing so (I have a patch already) reduces the intermediate automaton to 1025 states.
- [X] Since we have a terminal NBA as input, the output should be terminal as well, and we could avoid running the product in step 2.
- [ ] Finally, I suspect there should be a way to build this DBA from the input NBA right away, but it is not yet clear how.
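The accepting-sink shortcut from the first item can be sketched as a toy subset construction (dict-based, with an invented interface; this is an illustration of the idea, not the actual patch):

```python
def powerset_with_sink(init, delta, sinks, alphabet):
    """Subset construction where any powerstate containing an accepting
    sink collapses to that sink alone, since the sink dominates every
    other component of the subset.

    `delta[(state, letter)]` is a set of successor states, and `sinks`
    is the set of accepting sink states.
    """
    start = frozenset([init])
    states, todo = {start}, [start]
    trans = {}
    while todo:
        cur = todo.pop()
        for a in alphabet:
            nxt = set()
            for s in cur:
                nxt |= delta.get((s, a), set())
            hit = nxt & sinks
            if hit:                      # collapse to a single sink
                nxt = {min(hit)}
            dst = frozenset(nxt)
            trans[(cur, a)] = dst
            if dst not in states:
                states.add(dst)
                todo.append(dst)
    return states, trans
```

For the `*10` example above, every powerstate that would otherwise carry the accepting sink alongside up to 10 counter states collapses to the sink alone, which is what shrinks the intermediate automaton from 2048 to 1025 states.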
https://gitlab.lrde.epita.fr/spot/spot/-/issues/450
generalize defrag_state() (2021-01-20, Alexandre Duret-Lutz)

Currently defrag_state() has the undocumented requirement that the new number should be smaller than the old one.
To allow users to renumber states arbitrarily, we could detect cases where that's not true and use a slightly slower implementation.
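The slower fallback is essentially a rebuild rather than an in-place defrag. A sketch of renumbering by an arbitrary permutation on a toy edge-list automaton (not the real `defrag_state()` interface):

```python
def renumber(n_states, edges, init, perm):
    """Apply an arbitrary permutation `perm` (old number -> new number)
    to a toy automaton given as an edge list.  Unlike an in-place
    defrag, this rebuilds the containers, which is safe for any
    permutation at the cost of extra allocation."""
    assert sorted(perm) == list(range(n_states)), "perm must be a bijection"
    new_edges = sorted((perm[s], perm[d], lab) for (s, d, lab) in edges)
    return new_edges, perm[init]
```

Detecting whether `perm` only maps numbers downward (the currently supported case) is a single pass, so the fast in-place path could be kept for that case and this rebuild used otherwise.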