# Spot issues
https://gitlab.lrde.epita.fr/spot/spot/-/issues (feed updated 2020-11-17)

## #440: iterate over bit sets in bitvect or bitset (2020-11-17, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/440

It would be nice to be able to iterate efficiently over the bits that are set in a `bitset` or `bitvect`. For the `bitset` case, which is used to implement `mark_t`, we already have such an implementation in `mark_container`, but this feature would be better implemented in `bitset`.

## #438: support for exclusive APs in ltl2tgba (2020-11-12, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/438

It would be nice to have an option to support exclusive APs in `ltl2tgba`. Currently it requires chaining three commands, as in
http://lists.lrde.epita.fr/pipermail/spot/2020q3/000293.html

## #423: for ltlsynt, parity game solving could stop once the winner of the initial state is known (2020-09-10, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/423

`ltlsynt` only needs to know whether the initial state is winning for player 0 or 1, so Zielonka's algorithm could be aborted earlier, as soon as the initial state is known to be winning for either player.

## #415: handling unconstrained I/O in ltlsynt (2020-05-29, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/415

It's possible to call `ltlsynt` with `--ins` and `--outs` options that refer to APs that are not in the formula, or that disappear from the formula after simplification. Currently those APs are added to the automaton after translation, but it would probably be more efficient to just add them to the output once the game has been solved.

## #400: random conjunctions of patterns (2020-03-12, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/400

https://www.cs.rice.edu/~vardi/papers/time13.pdf suggests building random LTL formulas from conjunctions of known patterns. The authors use the DAC patterns. We could offer such a service and even generalize it to all the patterns that `genltl` supports.
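As a rough sketch of the idea (the `patterns` list below is a hypothetical stand-in for instantiated `genltl` patterns, and `random_conjunction` is not an existing Spot interface):

```python
import random

# Hypothetical stand-in for instantiated genltl patterns (e.g. DAC patterns).
patterns = [
    "G(p0 -> F p1)",
    "G(p2 -> (p3 U p4))",
    "F G p5",
    "G F p6",
]

def random_conjunction(rng, n):
    """Conjoin n distinct randomly chosen pattern instances."""
    picks = rng.sample(patterns, n)
    return " & ".join(f"({p})" for p in picks)

print(random_conjunction(random.Random(42), 2))
```

A real tool would presumably also randomize the pattern parameters and the atomic propositions, not just the choice of patterns.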
I'm not sure what the ideal interface would be. Maybe some `ltlcombine` tool that takes a list of formulas as input, some yet-to-be-specified parameters, and outputs combinations of the input formulas.

## #397: a case where -G -D produces a parity automaton less than half the size of the -P -D one (2020-01-16, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/397

The following formula comes from https://www.ruediger-ehlers.de/blog/deterministicparity.html
```sh
% f='!g & (GFr -> GFg) & FGinit & G(XXg -> (init & Xinit & XXinit)) & !Xg & G(!g | X!g) & G((init & !Xinit) -> (!r & !Xr))'
```
With Spot 2.8.5.dev
```sh
% ltl2tgba -G -D -f "$f" --stats="%s states, %[o]g"
5 states, parity min odd 3
% ltl2tgba -P -D -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
It's related to LTL-splitting:
```sh
% ltl2tgba -G -D -x ltl-split=0 -f "$f" --stats="%s states, %[o]g"
11 states, parity min odd 3
```
Once we remove the obligation terms of the formula, we can reduce the case to this:
```sh
% ltl2tgba -P -D "(GFr -> GFg) & FGinit" --stats=%s
4
% ltl2tgba -G -D "(GFr -> GFg) & FGinit" --stats=%s
1
% ltl2tgba -G -D -x ltl-split=0 "(GFr -> GFg) & FGinit" --stats=%s
4
```
It seems it's just "luck" that the construction of the formula follows a "parity pattern" and leads to a parity acceptance with `-G`. Can we detect those cases?

## #394: use graphviz layers for interactive display (2019-11-28, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/394

Using [the `layer` and `layerselect` attributes of GraphViz](https://www.graphviz.org/faq/#FaqOverlays), it should be possible to write an interactive widget with a slider showing the different layers of an automaton. This could be used, for instance, to replay the progression of an algorithm interactively while preserving the global layout.
I believe this could be implemented by adding some named properties "layer-edges" and "layer-states", which would be vectors of layer numbers, and letting `print_dot()` know about them. Then we also need some Python functions to set those layer attributes.

## #386: postprocessing twice may improve things (2019-07-31, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/386

This example was sent by Fanda.
```sh
% cat p.hoa
HOA: v1 States: 6 Start: 0 AP: 3 "a" "b" "c" acc-name: Buchi
Acceptance: 1 Inf(0) properties: trans-labels explicit-labels trans-acc
semi-deterministic --BODY-- State: 0 [t] 0 [!0&1&2] 1 [0&2 | !1] 2
[0&!1] 3 State: 1 [!0&1&2] 1 [0&2 | !1] 2 [0&!1] 3 State: 2 [!0&1&2]
1 [0&2 | !1] 2 [0&!1] 3 State: 3 [!1] 3 [!1] 4 State: 4 [0&!1] 4 {0}
[!0&!1] 5 {0} State: 5 [!0&!1] 5 [0&!1] 5 {0} --END--
% autfilt --stats=%s p.hoa
6
% autfilt --small --stats=%s p.hoa
4
% autfilt --small p.hoa | autfilt --small --stats=%s
3
```
Here are the three automata:<br/>
![p](/uploads/96b378748b2648f8fb492dd4caf05007/p.png)![pp](/uploads/63a59faefe4cdab366a76da3e64622d3/pp.png)![ppp](/uploads/d3a7fc0c31a9cb12f4082f9b256a336f/ppp.png)
It would be nice if the first postprocessing gave the smallest result already.

## #385: ltl2tgba -G cases that can be improved (2020-07-22, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/385

On Sun, Apr 21, 2019 at 9:46 PM Juraj Major <major@fi.muni.cz> wrote:
>
> Hi Alexandre,
>
> Jan asked me to send you some formulae from LTL(F,G,X) where Delag
> produces smaller automata than ltl2tgba. I use Spot 2.7.3 and call
> `ltl2tgba -G 'formula'`.
[...]
> GFX((e & XXXa) -> (e & X!d)) (ltl2tgba 2 states vs. Delag 1 state)<br/>
> G(Gc | Fa | F!G(c | Ge)) (6 vs. 3)
These look like cases where Spot is simply not applying rewritings
like GF(a | b) = GF(a)|GF(b). Those rules would make sense for -G,
but they usually don't make sense when building other types of
automata, so I haven't worked on those yet. It's unfortunate that the
rewriting rules in Spot are not very modular.
> Ge | XGF(Ge & X(c & Fd)) (5 vs. 2) <br/>
> G!GXXe -> GF(b & c & Gc) - this one is interesting, because ltl2tgba
> -DG produces a smaller automaton (2 states) than ltl2tgba -G (3 states).
> However, Delag still wins with 1 state
These are cases where Spot is missing the rule GF(a & Gb) = GFa & FGb.
Because GF(b & c & Gc) is not a GF(guarantee), it's determinized via
Safra if -D is given, and this is not tried without -D. The
non-deterministic automaton output by -G for GF(b & c & Gc) is a
2-state incomplete automaton, and the deterministic automaton from -G -D
is a 2-state complete automaton, so the incompleteness is what makes the
difference during the product, as you need complete automata to make a
disjunction.
> G!(Gd & (c | Fb)) (4 vs. 2)
Running delag with "-f ltl2tgba" will output the same automaton as
ltl2tgba, so this is a case where the fallback translator used by
delag (I'm not sure what that is) does a better job. In their paper,
they used Spot as fallback I think, but it seems they are now using
something else.
The tool ltl2da, distributed with Owl, also produces a 2-state (but
nicer IMHO) TELA for this.
> XF(F(Gc & Xb) -> a) (6 vs. 3)
Again we are missing whatever rule needed to reduce this to:
GF(!c) | FG(!b) | XF(a)
(in which case it's 3 states if deterministic, or 2 if non-deterministic)
> From the genltl's patterns, there's a --sejk-f pattern where Delag
> performs better than Spot but those are not LTL(F,G,X) formulae.
I'm not sure how that one works. Looks like some kind of suspension
would be useful.
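Several of the missed reductions above are distribution rules such as GF(a | b) = GF(a)|GF(b). On a toy tuple-based formula representation (not Spot's `formula` API; it handles only `ap`, `and`, `or`, `G`, `F`), that particular rewriting can be sketched as a bottom-up pass:

```python
# Toy formula AST: ('ap', name), ('and', f, g), ('or', f, g), ('G', f), ('F', f).
# This is NOT Spot's formula class, just a sketch of applying
# GF(a | b) = GF(a) | GF(b) bottom-up.

def distribute_gf(f):
    if f[0] in ('G', 'F'):
        sub = distribute_gf(f[1])
        # Detect the G(F(x | y)) shape and split it.
        if f[0] == 'G' and sub[0] == 'F' and sub[1][0] == 'or':
            left, right = sub[1][1], sub[1][2]
            return ('or',
                    distribute_gf(('G', ('F', left))),
                    distribute_gf(('G', ('F', right))))
        return (f[0], sub)
    if f[0] in ('or', 'and'):
        return (f[0], distribute_gf(f[1]), distribute_gf(f[2]))
    return f

gf_ab = ('G', ('F', ('or', ('ap', 'a'), ('ap', 'b'))))
print(distribute_gf(gf_ab))
# -> ('or', ('G', ('F', ('ap', 'a'))), ('G', ('F', ('ap', 'b'))))
```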
So it seems a large part of these issues would be solved by LTL rewritings.

## #370: Enumerate all possible SCCs/automata of size N (2018-11-13, Etienne Renault)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/370

A tool enumerating all SCC structures of size N would be useful in many situations (for instance, to run an algorithm on all small instances and ensure that no case is missed).
Such a tool could also produce all automaton structures of size N.
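A naive brute-force version of the idea (enumerating all directed-graph structures on N states and keeping those that form a single SCC) can be sketched as follows; it is only usable for very small N, since there are 2^(N²) graphs:

```python
from itertools import product

# Brute-force sketch: enumerate every directed graph on n states
# (as an adjacency matrix) and keep those forming a single SCC.

def strongly_connected(adj):
    n = len(adj)
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in range(n):
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    return all(len(reach(s)) == n for s in range(n))

def enumerate_sccs(n):
    for bits in product([0, 1], repeat=n * n):
        adj = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        if strongly_connected(adj):
            yield adj

print(sum(1 for _ in enumerate_sccs(2)))  # 4: both cross edges required
```

A real tool would want to enumerate only up to isomorphism, which is where the problem gets interesting.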
Such a tool could later be extended to generate all automata according to a given acceptance mark.

## #369: add acc_cond::is_hyper_rabin() and acc_cond::is_hyper_streett() functions (2018-11-07, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/369

Hyper-Rabin is a disjunction of Streett (as used by Emerson & Lei), and Hyper-Streett is a conjunction of Rabin. These two names were introduced by http://www.faculty.idc.ac.il/udiboker/files/AutomataTypes.pdf

## #365: a curious case of smaller "unambiguous" automaton (2018-10-01, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/365

This was discovered by playing with the online translator's random formulas.
The formula is `p1 <-> !((X(p0 <-> Xp0) xor G(p1 & p2)) W Fp0)`, and the unambiguous version of the automaton is smaller.
![unambig](/uploads/14cc0d13048271bc0c9862cb8f68d8af/unambig.png)
It would be better to generate that unambiguous automaton right away.

## #364: partition an automaton into safety and liveness (2018-11-05, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/364

https://www.cs.cornell.edu/fbs/publications/RecSafeLive.pdf
The safety closure is what we use to build monitors, I think. We need the part building the liveness extension.
We should also provide a liveness detector, and add a safety/liveness classification in the on-line application.
See also ftp://www-cs.stanford.edu/cs/theory/amir/hierarchy.ps for some discussion of "liveness" vs. "uniform liveness". Maybe we could detect that as well.

## #344: better handling of silly histories in sat_minimize() (2018-04-20, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/344

Currently, if the acceptance is `Fin(0)&Inf(1)`, the SAT-minimization code detects that any history containing `0` is rejecting and cannot be augmented into an accepting history. Hence it removes `1` from any history where `0` is present.
The code generalizes this example from the DNF of the acceptance. However we should at least also implement the dual version and simplify histories that are accepting and cannot be augmented into a rejecting history. For instance if the acceptance is `Fin(0)|Inf(1)`, we do not need to distinguish between history `{0,1}` and history `{1}`.
I suspect that doing both might help the case of parity acceptance, where the only histories that matter have at most 1 element (the min or max seen so far).

## #322: evaluation of formula on word (2018-03-24, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/322

It would be useful to have a method or function such as
```
bool match_word(formula f, twa_word_ptr w);
```
that tells if a formula `f` would be satisfied by word `w`.
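A minimal, Spot-independent sketch of such an evaluation on an ultimately periodic word (prefix plus non-empty cycle) might look like this; `U` is computed as a least fixpoint over the finitely many distinct positions of the lasso, and the tuple-based formula encoding is an illustration, not Spot's API:

```python
# Evaluate an LTL formula on the lasso word prefix·cycle^ω.
# Formulas are tuples: ('tt',), ('ap', a), ('not', f), ('and', f, g),
# ('or', f, g), ('X', f), ('U', f, g), ('F', f), ('G', f).

def match_word(f, prefix, cycle):
    """Does prefix·cycle^ω satisfy formula f? (cycle must be non-empty)"""
    n = len(prefix) + len(cycle)
    letters = prefix + cycle
    succ = lambda i: i + 1 if i + 1 < n else len(prefix)
    everything = set(range(n))

    def sat(f):  # set of positions where f holds
        op = f[0]
        if op == 'tt':
            return set(everything)
        if op == 'ap':
            return {i for i in range(n) if f[1] in letters[i]}
        if op == 'not':
            return everything - sat(f[1])
        if op == 'and':
            return sat(f[1]) & sat(f[2])
        if op == 'or':
            return sat(f[1]) | sat(f[2])
        if op == 'X':
            s = sat(f[1])
            return {i for i in range(n) if succ(i) in s}
        if op == 'F':
            return sat(('U', ('tt',), f[1]))
        if op == 'G':
            return sat(('not', ('F', ('not', f[1]))))
        if op == 'U':
            sf, sg = sat(f[1]), sat(f[2])
            s = set(sg)  # least fixpoint of  g ∨ (f ∧ X result)
            changed = True
            while changed:
                changed = False
                for i in range(n):
                    if i not in s and i in sf and succ(i) in s:
                        s.add(i)
                        changed = True
            return s
        raise ValueError(f'unknown operator {op!r}')

    return 0 in sat(f)

# GFp on a word whose cycle contains p infinitely often:
print(match_word(('G', ('F', ('ap', 'p'))), [set()], [{'p'}, set()]))  # True
```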
This could be used by `ltlcross` to decide which side of a failing intersection test is incorrect. It could also be added as an option to `ltlfilt`, similar to the `--accept-word` and `--reject-word` options we already have in `autfilt`. Finally, it can be useful in some translation algorithms (e.g., Müller & Sickert, GandALF'17).

Assignee: Maxime Mehio

## #310: complementation of semi-deterministic automata (2020-07-30, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/310

There are many places where we need to complement automata, for instance to prove equivalence. Currently we just make a distinction between deterministic and non-deterministic automata (the latter are first determinized), but it would make sense to implement simpler (or at least more efficient) complementations for subclasses of non-deterministic automata.
@xblahoud suggests we implement a [complementation of semi-deterministic Büchi automata](http://www.fi.muni.cz/~xstrejc/publications/tacas2016coSDBA_preprint.pdf).
We already have an `is_semi_deterministic()` test.

## #308: accessibility properties (2017-12-06, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/308

We just discussed with @max and @aga the fact that some algorithms call `scc_filter()` as a preprocessing step and some don't. For instance, `tgba_determinize()` currently calls `scc_filter()` only if simulation-based reductions are activated, and it is unclear why this is done only in that case.
It might be interesting to add a property recording whether an automaton has been trimmed, in order to skip duplicate calls to such a function. I would suggest using two properties: `accessible` (all states are reachable from the initial state) and `coaccessible` (all states can reach some accepting cycle). `accessible` would for instance be set by `purge_unreachable_states()`, and both properties would be set by `scc_filter_states()` and similar functions. Note that `is_empty()` can be simplified when run on automata that are both `accessible` and `coaccessible`, but I figure this is unlikely to be very useful in practice.
One issue is that these proposed properties are not always enough to decide whether `scc_filter()` can be skipped, because `scc_filter()` does more than just remove useless states: it also simplifies the acceptance sets in each SCC.

## #281: overhaul the formula simplification routines (2017-09-15, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/281

The current way to implement LTL simplifications has a few issues:
1. The number of rules is huge, and the `spot/twaalgos/simplify.cc` file implementing them is the largest of all handwritten source files in Spot. Reading `doc/tl/tl.pdf` in parallel is almost mandatory to understand what is being done.
2. These rules are hard-coded.
3. They use nested switches to match the structure of some patterns, sometimes relying on formula properties. But sometimes we have a set of rules that should only be applied in some context (e.g., star normal form, or rules that apply inside `GF(...)`). These nesting patterns are quite painful to follow, and are often implemented twice for the dual rules.
4. Some generic options can also affect the way some rules are applied (sometimes they affect the direction of some rewriting).
5. Dealing with n-ary operators is quite painful, especially when multiple rules compete to combine the operands.
All these points make it very hard to implement new simplification rules. It's also impossible to test a new simplification rule without implementing it. Finally, adding a new operator to the syntax would have an enormous cost, since we would also need to alter many rules to support that new operator.
Our goal is therefore to change the way we do LTL simplifications. We need at least:
- [ ] a [DSL](https://en.wikipedia.org/wiki/Domain-specific_language) to express all our simplification rules in a way that is simple and unambiguous
- [ ] the (hopefully) complete list of rules expressed in that language
- [ ] a parser for that language
- [ ] a function that applies all the reductions on some given formula
- [ ] some abstract representation of a set of rules, such that the above function can be implemented efficiently
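To make the wish list concrete, here is a minimal sketch (in Python rather than C++, with a made-up tuple representation for both formulas and rules) of what matching and applying declarative rewrite rules with pattern variables could look like:

```python
# Minimal sketch of declarative rewrite rules with pattern variables.
# Rules are (lhs, rhs) pairs over tuple formulas like ('F', ('ap', 'p'));
# bare strings in a pattern are variables.  A real DSL would need a
# textual syntax, guards, and a way to write literal leaf names.

def match(pattern, formula, binding):
    """Try to unify pattern against formula, extending binding."""
    if isinstance(pattern, str):          # pattern variable
        if pattern in binding:
            return binding[pattern] == formula
        binding[pattern] = formula
        return True
    if not isinstance(formula, tuple) or len(pattern) != len(formula):
        return False
    if pattern[0] != formula[0]:          # operator must match literally
        return False
    return all(match(p, f, binding) for p, f in zip(pattern[1:], formula[1:]))

def substitute(template, binding):
    if isinstance(template, str):
        return binding[template]
    return (template[0],) + tuple(substitute(t, binding) for t in template[1:])

def rewrite(rules, formula):
    """Normalize bottom-up, retrying rules until none applies."""
    if isinstance(formula, tuple) and len(formula) > 1:
        formula = (formula[0],) + tuple(rewrite(rules, a) for a in formula[1:])
    for lhs, rhs in rules:
        binding = {}
        if match(lhs, formula, binding):
            return rewrite(rules, substitute(rhs, binding))
    return formula

# F(F(x)) => F(x)  and  G(G(x)) => G(x)
rules = [(('F', ('F', 'x')), ('F', 'x')),
         (('G', ('G', 'x')), ('G', 'x'))]
print(rewrite(rules, ('F', ('F', ('F', ('ap', 'p'))))))  # ('F', ('ap', 'p'))
```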
Once that is done, we should evaluate ways to make it faster. Parsing the list of all rules before the first simplification is likely to incur some overhead. We could try to generate some C++ code from this list at compile time to avoid the parsing at run time, but we would lose the ability to edit the list of rules at run time.
An option would be to store the abstract (parsed) representation of the set of rules in some kind of bytecode that can be loaded quickly and used readily, without costly syntax checks.
It would be useful if the users of the command-line tools could be allowed to supply their own rules. (This implies that the DSL will need to be documented.)
I suggest keeping the current simplification function around until all of the above is working, so that the results can be compared (if possible). When we are confident that we can switch to the new function, we still need to maintain backward compatibility of the API.

Assignee: Florian Perlié-Long

## #275: is_inherently_weak should support alternating automata (2017-08-09, Thomas Medioni)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/275

The current implementation of `is_inherently_weak()` does not support alternating automata.
Once it does, there is a shortcut in `spot::to_weak_alternating()` that can then be taken.

## #272: streett-like to streett (2017-06-21, Alexandre Duret-Lutz)
https://gitlab.lrde.epita.fr/spot/spot/-/issues/272

We need a `streett_like_to_streett(aut, reversed=false)` function that converts Streett-like acceptance into Streett acceptance as defined in HOA 1.1 (if reversed=false) or as defined in HOA 1.0 (if reversed=true).
For the record, HOA 1.1 (which might be renamed 2.0) wants `(Inf(0)|Fin(1))&(Inf(2)|Fin(3))&...`, while HOA 1.0 wants `(Fin(0)|Inf(1))&(Fin(2)|Inf(3))&...`.
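The difference between the two orderings is easy to see by generating the acceptance formula for n pairs (the helper below is a hypothetical sketch, not part of Spot; `reversed` mirrors the parameter name proposed above):

```python
# Hypothetical helper: build the Streett acceptance formula for n pairs,
# in HOA 1.1 order (Inf|Fin) or HOA 1.0 order (Fin|Inf).

def streett_acceptance(n, reversed=False):
    a, b = ('Fin', 'Inf') if reversed else ('Inf', 'Fin')
    return '&'.join(f'({a}({2 * i})|{b}({2 * i + 1}))' for i in range(n))

print(streett_acceptance(2))                 # (Inf(0)|Fin(1))&(Inf(2)|Fin(3))
print(streett_acceptance(2, reversed=True))  # (Fin(0)|Inf(1))&(Fin(2)|Inf(3))
```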
We also need `rabin_like_to_rabin(aut)`, but there is no ordering difference in HOA in this case.