New in spot 2.4.0.dev (not yet released)

  Bugs fixed:

  - spot::scc_info::determine_unknown_acceptance() incorrectly
    considered some rejecting SCCs as accepting.

  - spot::streett_to_generalized_buchi() could generate automata with
    an empty language if some Fin set did not intersect all accepting
    SCCs.  As a consequence, some Streett-like automata were
    considered empty even though they were not.  Also, the same
    function could crash on input that had a Streett-like acceptance
    not using all declared sets.

  - The twa_graph::merge_edges() function relied on BDD IDs to sort
    edges.  This in turn caused some algorithms (like the
    degeneralization) to produce slightly different outputs (but
    still correct outputs) depending on the BDD operations performed
    before.

New in spot 2.4 (2017-09-06)

  Build:

  - Spot is now built in C++14 mode, so you need at least GCC 5 or
    clang 3.4.  The current versions of all major Linux distributions
    ship with at least GCC 6, which defaults to C++14, so this should
    not be a problem.  In *this* release of Spot, most of the header
    files are still C++11 compatible, so you should be able to
    include Spot in a C++11 project in case you do not yet want to
    upgrade.  There is also an --enable-c++17 option to configure in
    case you want to force a build of Spot in C++17 mode.

  Tools:

  - genaut is a new binary that produces families of automata defined
    in the literature (in the same way as we have genltl for LTL
    formulas).

  - autcross is a new binary that compares the output of tools
    transforming automata (in the same way as we have ltlcross for
    LTL translators).

  - genltl learned to generate two new families of formulas:

      --fxg-or=RANGE     F(p0 | XG(p1 | XG(p2 | ... XG(pn))))
      --gxf-and=RANGE    G(p0 & XF(p1 & XF(p2 & ... XF(pn))))

    The latter is a generalization of --eh-patterns=9, for which a
    single-state TGBA always exists (but previous versions of Spot
    would build larger automata).

  - autfilt learned to build the union (--sum) or the intersection
    (--sum-and) of two languages by putting two automata side-by-side
    and fiddling with the initial states.  This complements the
    already implemented intersection (--product) and union
    (--product-or), both based on a product.

  - autfilt learned to complement any alternating automaton with
    option --dualize.  (See spot::dualize() below.)

  - autfilt learned --split-edges to convert labels that are Boolean
    formulas into labels that are min-terms.  (See
    spot::split_edges() below.)

  - autfilt learned --simplify-acceptance to simplify some acceptance
    conditions.  (See spot::simplify_acceptance() below.)

  - autfilt --decompose-strength has been renamed to --decompose-scc
    because it can now extract the subautomaton leading to an SCC
    specified by number.  (The old name is still kept as an alias.)

  - The --stats=%c option of tools producing automata can now be
    restricted to count complete SCCs, using %[c]c.

  - Tools producing automata have a --stats=... option, and tools
    producing formulas have a --format=... option.  These two options
    work similarly; the use of different names is just historical.
    Starting with this release, all tools that recognize one of these
    two options also accept the other one as an alias.

  Library:

  - A new library, libspotgen, gathers all functions used to generate
    families of automata or LTL formulas, used by genltl and genaut.

  - spot::sum() and spot::sum_and() implement the union and the
    intersection of two automata by putting them side-by-side and
    using non-determinism or universal branching on the initial
    state.  (See the example below.)
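
    For instance, the command-line counterparts of these functions
    can be exercised as follows.  This is only a sketch: it assumes
    that, like --product, the --sum and --sum-and options take the
    right-hand automaton as their argument, and that left.hoa and
    right.hoa were produced beforehand.

      % ltl2tgba 'FGa' > left.hoa
      % ltl2tgba 'GFb' > right.hoa
      # union of the two languages, built side-by-side
      % autfilt left.hoa --sum right.hoa --stats='union: %s states'
      # intersection of the two languages
      % autfilt left.hoa --sum-and right.hoa --stats='inter: %s states'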

  - twa objects have a new property: prop_complete().  This obviously
    acts as a cache for the is_complete() function.

  - spot::dualize() complements any alternating automaton.  Since the
    dual of a deterministic automaton is still deterministic, the
    function spot::dtwa_complement() has been deprecated and simply
    calls spot::dualize().

  - spot::decompose_strength() was extended and renamed to
    spot::decompose_scc() as it can now also extract a subautomaton
    leading to a particular SCC.  A demonstration of this feature via
    the Python bindings can be found at
    https://spot.lrde.epita.fr/ipynb/decompose.html

  - The print_dot() function will now display names for well-known
    acceptance conditions under the formula when option 'a' is used.
    We plan to enable 'a' by default in a future release, so a new
    option 'A' has been added to hide the acceptance condition.

  - The print_dot() function has a new experimental option 'x' to
    output labels as LaTeX formulas.  This is meant to be used in
    conjunction with the dot2tex tool.  See
    https://spot.lrde.epita.fr/oaut.html#dot2tex

  - A new named property for automata called "original-states" can be
    used to record the origin of a state before transformation.  It
    is currently defined by the degeneralization algorithms, and by
    transform_accessible() and algorithms based on it (like
    remove_ap::strip(), decompose_scc()).  This is really meant as an
    aid for writing algorithms that need this mapping, but it can
    also be used to debug these algorithms: the "original-states"
    information is displayed by the dot printer when the 'd' option
    is passed.  For instance in

      % ltl2tgba 'GF(a <-> Fb)' --dot=s
      % ltl2tgba 'GF(a <-> Fb)' | autfilt -B --dot=ds

    the second command outputs an automaton with states that show
    references to the first one.

  - A new named property for automata called "degen-levels" keeps
    track of the level of a state in a degeneralization.  This
    information complements the one carried in "original-states".

  - A new named property for automata called "simulated-states" can
    be used to record the origin of a state through simulation.  The
    behavior is similar to "original-states" above.  Determinization
    takes advantage of this in its pretty-printing.

  - The new function spot::acc_cond::is_streett_like() checks whether
    an acceptance condition is a conjunction of disjunctive clauses
    containing at most one Inf and at most one Fin.  It builds a
    vector of pairs to use if we want to assume the automaton has
    Streett acceptance.  The dual function
    spot::acc_cond::is_rabin_like() works similarly.

  - The degeneralize() function has learned to consider acceptance
    marks common to all edges coming into a state to select its
    initial level.  A similar trick was already used in sbacc(), and
    saves a few states in some cases.

  - There is a new spot::split_edges() function that transforms edges
    (labeled by Boolean formulas over atomic propositions) into
    transitions (labeled by conjunctions where each atomic
    proposition appears either positively or negatively).  This can
    be used to preprocess automata before feeding them to algorithms
    or tools that expect transitions labeled by letters.  (See the
    command-line example below.)

  - spot::scc_info has two new methods to easily iterate over the
    edges of an SCC: edges_of() and inner_edges_of().

  - spot::scc_info can now be passed a filter function to ignore or
    cut some edges.

  - spot::scc_info now keeps track of acceptance sets that are common
    to all edges in an SCC.  These can be retrieved using
    scc_info::common_sets_of(scc), and they are used by scc_info to
    classify some SCCs as rejecting more easily.
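
  - For example, the corresponding autfilt options --dualize and
    --split-edges (see spot::dualize() and spot::split_edges() above)
    can be combined with --stats.  This is only a sketch; the input
    formula and the statistics printed are arbitrary.

      % ltl2tgba 'GF(a <-> Fb)' | autfilt --dualize --stats='%s states, %e edges'
      % ltl2tgba 'GF(a <-> Fb)' | autfilt --split-edges --stats='%e edges, %t transitions'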

  - The new function acc_code::remove() removes all the given
    acceptance sets from the acceptance condition.

  - It is now possible to change an automaton's acceptance condition
    directly by calling twa::set_acceptance().

  - spot::cleanup_acceptance_here now takes an additional Boolean
    parameter specifying whether to strip useless marks from the
    automaton.  This parameter defaults to true, in order to keep
    this modification backward-compatible.

  - The new function spot::simplify_acceptance() is able to perform
    some simplifications on an acceptance condition, and might lead
    to the removal of some acceptance sets.

  - The function spot::streett_to_generalized_buchi() is now able to
    work on automata with Streett-like acceptance.

  - The function for converting deterministic Rabin automata to
    deterministic Büchi automata (when possible), internal to the
    remove_fin() procedure, has been updated to work with
    transition-based acceptance and with Rabin-like acceptance.

  - spot::relabel_here(), previously used on automata to rename
    atomic propositions, can now replace atomic propositions by
    Boolean subformulas.  This makes it possible to use relabeling
    maps produced by relabel_bse() on formulas.

  - twa_graph::copy_state_names_from() can be used to copy the state
    names from another automaton, honoring "original-states" if
    present.

  - Building automata for LTL formulas with a large number N of
    atomic propositions can be costly, because several loops and
    data structures are exponential in N.  However a formula like
    ((a & b & c) | (d & e & f)) U ((d & e & f) | (g & h & i)) can be
    translated more efficiently by first building an automaton for
    (p0 | p1) U (p1 | p2), and then substituting p0, p1, p2 by the
    appropriate Boolean formulas.  Such a trick is now attempted for
    the translation of formulas with 4 atomic propositions or more
    (this threshold can be changed, see -x relabel-bool=N in the
    spot-x(7) man page, and the example below).

  - The LTL simplification routines learned that an LTL formula like
    G(a & XF(b & XFc & Fd)) can be simplified to G(a & Fb & Fc & Fd),
    and dually F(a | XG(b | XGc | Gd)) = F(a | Gb | Gc | Gd).
    When working with SERE, the simplification of "expr[*0..1]" was
    improved.  E.g. {{a[*]|b}[*0..1]} becomes {a[*]|b} instead of
    {{a[+]|b}[*0..1]}.

  - The new function spot::to_weak_alternating() is able to take an
    input automaton with generalized Büchi/co-Büchi acceptance and
    convert it to a weak alternating automaton.

  - spot::sbacc() can now also convert alternating automata to
    state-based acceptance.

  - spot::sbacc() and spot::degeneralize() learned to merge accepting
    sinks.

  - If the SPOT_BDD_TRACE environment variable is set, statistics
    about BDD garbage collection and table resizing are shown.

  - The & and | operators for acceptance conditions have been changed
    slightly to be more symmetrical.  In older versions, operator &
    would move Fin() terms to the front, but that is not the case
    anymore.  Also operator & was already grouping all Inf() terms
    (for efficiency reasons); in this version operator | is
    symmetrically grouping all Fin() terms.

  - The automaton parser is now reentrant, making it possible to
    process automata from different streams at the same time (i.e.,
    using multiple spot::automaton_stream_parser instances at once).
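
  - For instance, the relabeling threshold mentioned above can be
    adjusted from the command line.  A sketch, using the formula
    quoted above and an arbitrary threshold of 6:

      # only relabel when at least 6 atomic propositions are involved
      % ltl2tgba -x relabel-bool=6 --stats='%f: %s states' \
          '((a & b & c) | (d & e & f)) U ((d & e & f) | (g & h & i))'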

  - The print_hoa() and parse_automaton() functions have been updated
    to recognize the "exist-branch" property of the non-released HOA
    v1.1, as well as the new meaning of property "deterministic".
    (In HOA v1 "properties: deterministic" means that the automaton
    has no existential branching; in HOA v1.1 it disallows universal
    branching as well.)  The meaning of "deterministic" in Spot has
    been adjusted to these new semantics, see
    "Backward-incompatible changes" below.

  - The parser for HOA now recognizes and verifies correct use of the
    "univ-branch" property.  This is known to be a problem with
    option -H1 of ltl3ba 1.1.2 and ltl3dra 0.2.4, so the environment
    variable SPOT_HOA_TOLERANT can be set to disable the diagnostic.

  Python:

  - The 'spot.gen' package exports the functions from libspotgen.
    See https://spot.lrde.epita.fr/ipynb/gen.html for examples.

  Bugs fixed:

  - When the remove_fin() function was called on some automaton with
    Inf-less acceptance involving at least one disjunction (e.g.,
    generalized co-Büchi), it would sometimes output an automaton
    with transition-based acceptance but marked as state-based.

  - The complete() function could complete an empty co-Büchi
    automaton into an automaton accepting everything.

  Backward-incompatible changes:

  - spot::acc_cond::mark_t::operator bool() has been marked as
    explicit.  The implicit conversion to bool (and, via bool, to
    int) was a source of bugs.

  - spot::twa_graph::set_init_state(const state*) has been removed.
    It was never used.  You always want to use
    spot::twa_graph::set_init_state(unsigned) in practice.

  - The previous implementation of spot::is_deterministic() has been
    renamed to spot::is_universal().  The new version of
    spot::is_deterministic() requires the automaton to be both
    universal and existential.  This should not make any difference
    in existing code unless you work with the recently added support
    for alternating automata.

  - spot::acc_cond::mark_t::sets() now returns an internal iterable
    object instead of an std::vector.

  - The color palette optionally used by print_dot() has been
    extended from 9 to 16 colors.  While the first 8 colors are
    similar, they are a bit more saturated now.

  Deprecation notices:

  (These functions still work but compilers emit warnings.)

  - spot::decompose_strength() is deprecated, it has been renamed to
    spot::decompose_scc().

  - spot::dtwa_complement() is deprecated.  Prefer the more generic
    spot::dualize() instead.

  - The spot::twa::prop_deterministic() methods have been renamed to
    spot::twa::prop_universal() for consistency with the change to
    is_deterministic() listed above.  We have kept
    spot::twa::prop_deterministic() as a deprecated synonym for
    spot::twa::prop_universal() to help backward compatibility.

  - The spot::copy() function is deprecated.  Use
    spot::make_twa_graph() instead.

New in spot 2.3.5 (2017-06-22)

  Bugs fixed:

  - We have fixed new cases where translating multiple formulas in a
    single ltl2tgba run could produce automata different from those
    produced by individual runs.

  - The print_dot() function had a couple of issues when printing
    alternating automata: in particular, when using flag 's' (display
    SCCs) or 'y' (split universal destinations by colors), universal
    edges could be connected to undefined states.

  - Using --stats=%s, --stats=%e, or --stats=%t could take an
    unnecessarily long time on automata with many atomic
    propositions, due to a typo.  Furthermore, %s/%e/%t/%E/%T were
    printing the number of reachable states/edges/transitions, but %S
    was incorrectly counting all states, even unreachable ones.

  - Our version of BuDDy had an incorrect optimization for the biimp
    operator.

New in spot 2.3.4 (2017-05-11)

  Bugs fixed:

  - The transformation to state-based acceptance (spot::sbacc()) was
    incorrect on automata where the empty acceptance mark is
    accepting.

  - The --help output of randaut and ltl2tgba was showing an
    unsupported %b stat.

  - ltldo and ltlcross could leave temporary files behind when
    aborting on error.

  - The LTL simplification rule that turns F(f)|q into F(f|q) when q
    is a subformula that is both eventual and universal was
    documented but not applied in some forgotten cases.

  - Because of some caching inside of ltl2tgba, translating multiple
    formulas in a single ltl2tgba run could produce automata
    different from those produced by individual runs.

New in spot 2.3.3 (2017-04-11)

  Tools:

  - ltldo and ltlcross learned shorthands to talk to ltl2da, ltl2dpa,
    and ltl2ldba (from Owl) without needing to specify %f>%O.

  - genltl learned --spec-patterns as an alias for --dac-patterns; it
    also learned two new sets of LTL formulas under --hkrss-patterns
    (a.k.a. --liberouter-patterns) and --p-patterns
    (a.k.a. --beem-patterns).

  Bugs fixed:

  - In "lenient" mode the formula parser would fail to recover from a
    missing closing brace.

  - The output of 'genltl --r-left=1 --r-right=1 --format=%F' had
    typos.

  - 'ltl2tgba Fa | autfilt --complement' would incorrectly claim that
    the output is "terminal" because dtwa_complement() failed to
    reset that property.

  - spot::twa_graph::purge_unreachable_states() was misbehaving on
    alternating automata.

  - In bench/stutter/ the .cc files were not compiling due to
    warnings being caught as errors.

  - The code in charge of detecting DBA-type Rabin automata is
    actually written to handle a slightly larger class of acceptance
    conditions (e.g., Fin(0)|(Fin(1)&Inf(2))), however it failed to
    correctly detect DBA-typeness in some of these non-Rabin
    acceptance conditions.

New in spot 2.3.2 (2017-03-15)

  Tools:

  - In tools that output automata, the number of atomic propositions
    can be output using --stats=%x (output automaton) or --stats=%X
    (input automaton).  Additional options can be passed to list
    atomic propositions instead of counting them.  Tools that output
    formulas also support --format=%x for this purpose.

  Python:

  - The bdd_to_formula() and to_generalized_buchi() functions can now
    be called in Python.

  Documentation:

  - https://spot.lrde.epita.fr/tut11.html is a new page describing
    how to build monitors in Shell, Python, or C++.

  Bugs fixed:

  - The tests using LTSmin's patched version of divine would fail if
    the current (non-patched) version of divine was installed.

  - Because of a typo, the output of --stats='...%P...' was correct
    only if %p was used as well.

  - genltl was never meant to have (randomly attributed) short
    options for --positive and --negative.

  - A typo in the code for transforming transition-based acceptance
    to state-based acceptance could cause a superfluous initial state
    to be output in some cases (the result was still correct).

  - 'ltl2tgba --any -C -M ...' would not complete automata.

  - While not incorrect, the HOA properties output by 'ltl2tgba -M'
    could be 'inherently-weak' or 'terminal', while 'ltl2tgba -M -D'
    would always report 'weak' automata.  Both variants now report
    the most precise of 'weak' or 'terminal'.

  - spot::twa_graph::set_univ_init_state() could not be called with
    an initializer list.

  - The Python wrappers for spot::twa_graph::state_from_number and
    spot::twa_graph::state_acc_sets were broken in 2.3.

  - Instantiating an emptiness check on an automaton with an
    unsupported acceptance condition should throw an exception.
    This used to be just an assertion, disabled in release builds;
    the difference matters for the Python bindings.

  Deprecation notice:

  - Using --format=%a to print the number of atomic propositions in
    ltlfilt, genltl, and randltl still works, but it is not
    documented anymore and should be replaced by the newly introduced
    --format=%x for consistency with tools producing automata, where
    %a means something else.

New in spot 2.3.1 (2017-02-20)

  Tools:

  - ltldo learnt to act like a portfolio: --smallest and --greatest
    will select the best output automaton for each formula
    translated.  See https://spot.lrde.epita.fr/ltldo.html#portfolio
    for examples.

  - The colors used in the output of ltlcross have been adjusted to
    work better with white backgrounds and black backgrounds.

  - The option (y) has been added to --dot.  It splits the universal
    edges with the same targets but different colors.

  - genltl learnt three new families of formulas: --kr-n2=RANGE,
    --kr-nlogn=RANGE, and --kr-n=RANGE.  These formulas, from
    Kupferman & Rosenberg [MoChArt'10], are recognizable by
    deterministic Büchi automata with at least 2^2^n states.

  Library:

  - spot::twa_run::as_twa() has an option to preserve state names.

  - The method spot::twa::is_alternating(), introduced in Spot 2.3,
    was badly named and has been deprecated.  Use the negation of the
    new spot::twa::is_existential() instead.

  Bugs fixed:

  - spot::otf_product() was incorrectly registering atomic
    propositions.

  - spot::ltsmin_model::kripke() forgot to register the "dead"
    proposition.

  - The spot::acc_word type (used to construct acceptance conditions)
    was using some non-standard anonymous struct.  It is unlikely
    that this type was actually used outside Spot, but if you do use
    it, spot::acc_word::op and spot::acc_word::type had to be renamed
    as spot::acc_word::sub.op and spot::acc_word::sub.type.

  - alternation_removal() was not always reporting unsupported
    non-weak automata.

  - A long-standing typo in the configure code checking for Python
    caused any user-defined CPPFLAGS to be ignored while building
    Spot.

  - The display of clusters with universal edges was confused,
    because the intermediate node was not in the cluster even if one
    of the targets was in the same one.

New in spot 2.3 (2017-01-19)

  Build:

  * While Spot only relies on C++11 features, the configure script
    learned --enable-c++14 to compile in C++14 mode.  This allows us
    to check that nothing breaks when we eventually switch to C++14.

  * Spot is now distributed with PicoSAT 965, and uses it for
    SAT-based minimization of automata without relying on temporary
    files.  It is still possible to use an external SAT solver by
    setting the SPOT_SATSOLVER environment variable.

  * The development Debian packages for Spot now install static
    libraries as well.

  * We now install configuration files for users of pkg-config.

  Tools:

  * ltlcross supports translators that output alternating automata in
    the HOA format.  Cross-comparison checks will only work with weak
    alternating automata (not necessarily *very* weak), but
    "ltlcross --no-check --product=0 --csv=..." will work with any
    alternating automaton if you just want statistics.

  * autfilt can read alternating automata.  This is still
    experimental (see below).  Some of the algorithms proposed by
    autfilt will refuse to work because they have not yet been
    updated to work with alternating automata, but in any case they
    should display a diagnostic: if you see a crash, please report
    it.

  * autfilt has three new filters: --is-very-weak, --is-alternating,
    and --is-semi-deterministic.
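
    For example (a sketch; the formulas are arbitrary, and the
    filters simply let matching automata through):

      % ltl2tgba 'GFa -> GFb' | autfilt --is-semi-deterministic --stats='%s states'
      % ltl2tgba 'p U q' | autfilt --is-very-weak -c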

  * The --check option of autfilt/ltl2tgba/ltldo/dstar2tgba can now
    take "semi-determinism" as argument.

  * autfilt --highlight-languages will color states that recognize
    identical languages.  (Only works for deterministic automata.)

  * 'autfilt --sat-minimize' and 'ltl2tgba -x sat-minimize' have
    undergone some backward-incompatible changes.  They use binary
    search by default, and support different options than they used
    to.  See spot-x(7) for details.  The defaults are those that were
    best for the benchmark in bench/dtgbasat/.

  * ltlfilt learned --recurrence and --persistence to match formulas
    belonging to these two classes of the temporal hierarchy.  Unlike
    --syntactic-recurrence and --syntactic-persistence, the new
    checks are automata-based and will also match pathological
    formulas.  See https://spot.lrde.epita.fr/hierarchy.html

  * The --format option of ltlfilt/genltl/randltl/ltlgrind learned to
    print the class of a formula in the temporal hierarchy of Manna &
    Pnueli using %h.  See https://spot.lrde.epita.fr/hierarchy.html

  * ltldo and ltlcross learned a --relabel option to force the
    relabeling of atomic propositions to p0, p1, etc.  This is more
    useful with ltldo, as it allows calling a tool that restricts the
    atomic propositions it supports, and the output automaton will
    then be fixed to use the original atomic propositions.

  * ltldo and ltlcross have learned how to call ltl3hoa, so
    'ltl3hoa -f %f>%O' can be abbreviated to just 'ltl3hoa'.

  Library:

  * A twa is required to have at least one state, the initial state.
    An automaton can only be empty while it is being constructed, but
    should not be passed to other algorithms.

  * Couvreur's emptiness check has been rewritten to use the explicit
    interface when possible, to avoid overkill memory allocations.
    The new version has further optimizations for weak and terminal
    automata.  Overall, this new version is roughly 4x faster on
    explicit automata than the former one.  The old version has been
    kept for backward compatibility, but will be removed eventually.

  * The new version of the Couvreur emptiness check is now the
    default one, used by twa::is_empty() and twa::accepting_run().
    Always prefer these functions over an explicit call to Couvreur.

  * Experimental support for alternating automata:

    - twa_graph objects can now represent alternating automata.  Use
      twa_graph::new_univ_edge() and twa_graph::set_univ_init_state()
      to create universal edges and initial states; and use
      twa_graph::univ_dests() to iterate over the universal
      destinations of an edge.

    - The automaton parser will now read alternating automata in the
      HOA format.  The HOA and dot printers can output them.

    - The file twaalgos/alternation.hh contains a few algorithms
      specific to alternating automata:

      + remove_alternation() will transform *weak* alternating
        automata into TGBA.

      + the class outedge_combiner can be used to perform "and" and
        "or" on the outgoing edges of some alternating automaton.

    - scc_info has been adjusted to handle universal edges as if they
      were existential edges.  As a consequence, acceptance
      information is not accurate.

    - postprocessor will call remove_alternation() right away, so it
      can be used as a way to transform alternating automata into
      different sub-types of (generalized) Büchi automata.  Note that
      although postprocessor optimizes the resulting automata, it
      still has no simplification algorithms that work at the
      alternating automaton level.

    - See https://spot.lrde.epita.fr/tut23.html
      https://spot.lrde.epita.fr/tut24.html and
      https://spot.lrde.epita.fr/tut31.html for some code examples.

  * twa objects have two new properties, very-weak and
    semi-deterministic, that can be set or retrieved via
    twa::prop_very_weak()/twa::prop_semi_deterministic(), and that
    can be tested by
    is_very_weak_automaton()/is_semi_deterministic().

  * twa::prop_set has a new attribute used in twa::prop_copy() and
    twa::prop_keep() to indicate that determinism may be improved by
    an algorithm.  In other words, properties like
    deterministic/semi-deterministic/unambiguous should be preserved
    only if they are positive.

  * language_map() and highlight_languages() are new functions that
    implement autfilt's --highlight-languages option mentioned above.

  * dtgba_sat_minimize_dichotomy() and dwba_sat_minimize_dichotomy()
    use language_map() to estimate a lower bound for binary search.

  * The encoding part of SAT-based minimization consumes less memory.

  * SAT-based minimization of automata can now be done using two
    incremental techniques that take a solved minimization and
    attempt to forbid the use of some states.  This is done either by
    adding clauses, or by using assumptions.

  * If the environment variable "SPOT_XCNF" is set during incremental
    SAT-based minimization, XCNF files suitable for the incremental
    SAT competition will be generated.  This requires the use of an
    external SAT solver, set up with "SPOT_SATSOLVER".  See
    spot-x(7).

  * The new function mp_class(f) returns the class of the formula f
    in the temporal hierarchy of Manna & Pnueli.

  Python:

  * spot.show_mp_hierarchy() can be used to display the membership of
    a formula to the Manna & Pnueli hierarchy, in notebooks.  An
    example is in https://spot.lrde.epita.fr/ipynb/formulas.html

  * The on-line translator will now display the temporal hierarchy in
    the "Formula > property information" output.

  Bugs fixed:

  * The minimize_wdba() function was not correctly minimizing
    automata with useless SCCs.  This was not an issue for the LTL
    translation (where useless SCCs are always removed first), but it
    was an issue when deciding if a formula was safety or guarantee.
    As a consequence, some tricky safety or guarantee properties were
    only recognized as obligations.

  * When ltlcross was running a translator taking the Spin syntax as
    input (%s) it would not automatically relabel any unsupported
    atomic propositions as ltldo already does.

  * When running "autfilt --sat-minimize" on an automaton
    representing an obligation property, the result would always be a
    complete automaton even without the -C option.

  * ltlcross --products=0 --csv should not output any product-related
    column in the CSV output since it has nothing to display there.

New in spot 2.2.2 (2016-12-16)

  Build:

  * If the system has an installed libltdl library, use it instead of
    the one we distribute.

  Bug fixes:

  * scc_filter() had a left-over print statement that would print
    "names" when copying the names of the states.

  * is_terminal() should reject automata that have accepting
    transitions going into rejecting SCCs.  The whole point of being
    a terminal automaton is that reaching an accepting transition
    guarantees that any suffix will be accepted.

  * The HOA parser incorrectly read "Acceptance: 1 Bar(0)" as a valid
    way to specify "Acceptance: 1 Fin(0)" because it assumed that
    everything that was not Inf was Fin.  These errors are now
    diagnosed.

  * Some of the installed headers (spot/misc/fixpool.hh,
    spot/misc/mspool.hh, spot/twaalgos/emptiness_stats.hh) were not
    self-contained.

  * ltlfilt --from-ltlf should ensure that "alive" holds initially in
    order to reject empty traces.

  * The on-line translator had a bug where a long ltl3ba process
    would continue running even after the script had timed out.

New in spot 2.2.1 (2016-11-21)

  Bug fix:

  * The bdd_noderesize() function, as modified in 2.2, would always
    crash.

New in spot 2.2 (2016-11-14)

  Command-line tools:

  * ltlfilt has a new option --from-ltlf to help reducing LTLf (i.e.,
    LTL over finite words) model checking to LTL model checking.
    This is based on a transformation by De Giacomo & Vardi
    (IJCAI'13).

  * "ltldo --stats=%R", which used to display the serial number of
    the formula processed, was renamed to "ltldo --stats=%#" to free
    %R for the following feature.

  * autfilt, dstar2tgba, ltl2tgba, ltlcross, ltldo learned to measure
    CPU time (as opposed to wall time) using --stats=%R.  User or
    system time, for children or parent, can be measured separately
    by adding additional %[LETTER]R options.  The difference between
    %r (wall-clock time) and %R (CPU time) can also be used to detect
    unreliable measurements.  See
    https://spot.lrde.epita.fr/oaut.html#timing

  Library:

  * from_ltlf() is a new function implementing the --from-ltlf
    transformation described above.

  * is_unambiguous() was rewritten in a more efficient way.

  * scc_info learned to determine the acceptance of simple SCCs made
    of a single self-loop without resorting to remove_fin() for
    complex acceptance conditions.

  * remove_fin() has been improved to better deal with automata with
    "unclean" acceptance, i.e., acceptance sets that are declared but
    not used.  In particular, this helps scc_info to be more
    efficient at deciding the acceptance of SCCs in presence of Fin
    acceptance.

  * Simulation-based reductions now implement just bisimulation-based
    reductions on deterministic automata, to save time.  As an
    example, this halves the run time of

      genltl --rv-counter=10 | ltl2tgba

  * scc_filter() learned to preserve state names and highlighted
    states.

  * The BuDDy library has been slightly optimized: the initial setup
    of the unicity table can now be vectorized by GCC, and all calls
    to setjmp are now avoided when variable reordering is disabled
    (the default).

  * The twa class has three new methods:

      aut->intersects(other)
      aut->intersecting_run(other)
      aut->intersecting_word(other)

    Currently these are just convenient wrappers around

      !otf_product(aut, other)->is_empty()
      otf_product(aut, other)->accepting_run()->project(aut)
      otf_product(aut, other)->accepting_word()

    with any Fin-acceptance removal performed before the product.
    However the plan is to implement these more efficiently in the
    future.

  Bug fixes:

  * ltl2tgba was always using the highest settings for the LTL
    simplifier, ignoring the --low and --medium options.  Now

      genltl --go-theta=12 | ltl2tgba --low --any

    is instantaneous, as it should be.

  * The int-vector compression code could encode 6 and 22 in the same
    way, causing inevitable issues in our LTSmin interface.  (This
    affects tests/ltsmin/modelcheck with option -z, not -Z.)

  * str_sere() and str_utf8_sere() were not returning the same string
    that print_sere() and print_utf8_sere() would print.

  * Running the LTL parser in debug mode would crash.

  * tgba_determinize() could produce incorrect deterministic automata
    when run with use_simulation=true (the default) on non-simplified
    automata.

  * When the automaton_stream_parser reads an automaton from an
    already opened file descriptor, it will not close the file
    anymore.
    Before that, the spot.automata() Python function used to close a
    file descriptor twice when reading from a pipe, and this led to
    crashes of the 0MQ library used by Jupyter, killing the Python
    kernel.

  * remove_fin() could produce incorrect results on incomplete
    automata tagged as weak and deterministic.

  * Calling set_acceptance() several times on an automaton could
    result in unexpected behaviors, because set_acceptance(0,...)
    used to set the state-based acceptance flag automatically.

  * Some buffering issue caused syntax errors to be displayed out of
    place by the on-line translator.

New in spot 2.1.2 (2016-10-14)

  Command-line tools:

  * genltl learned 5 new families of formulas (--tv-f1, --tv-f2,
    --tv-g1, --tv-g2, --tv-uu) defined in Tabakov & Vardi's RV'10
    paper.

  * ltlcross's --csv and --json output was changed to not include
    information about the ambiguity or strength of the automata by
    default.  Computing those can be costly and is not needed by
    every user, so it should now be requested explicitly using
    options --strength and --ambiguous.

  Library:

  * New LTL simplification rule:
    - GF(f & q) = G(F(f) & q) if q is purely universal and a pure
      eventuality.  In particular GF(f & GF(g)) now ultimately
      simplifies to G(F(f) & F(g)).

  Bug fixes:

  * Fix spurious uninitialized read reported by valgrind when
    is_Kleene_star() is compiled by clang++.

  * Using "ltlfilt some-large-file --some-costly-filter" could take a
    lot of time before displaying the first results, because the
    output of ltlfilt is buffered: the buffer had to fill up before
    being flushed.  The issue did not manifest when the input is
    standard input, because of the C++ feature that reading std::cin
    should flush std::cout; however it was well visible when reading
    from files.  Flushing is now done more regularly.

  * Fix compilation warnings when -Wimplicit-fallthrough is enabled.

  * Fix Python errors on Darwin when using methods from the spot
    module inside of the spot.ltsmin submodule.

  * Fix ltlcross crash when combining --no-check with --verbose.

  * Adjust paths and options used in bench/dtgbasat/ to match the
    changes introduced in Spot 2.0.

New in spot 2.1.1 (2016-09-20)

  Command-line tools:

  * ltlfilt, randltl, genltl, and ltlgrind learned to display the
    size (%s), Boolean size (%b), and number of atomic propositions
    (%a) with the --format and --output options.  A typical use-case
    is to sort formulas by size:

      genltl --dac --format='%s,%f' | sort -n | cut -d, -f2

    or to group formulas by number of atomic propositions:

      genltl --dac --output='ap-%a.ltl'

  * autfilt --stats learned the missing %D, %N, %P, and %W sequences,
    to complete the existing %d, %n, %p, and %w.

  * The --stats %c option of ltl2tgba, autfilt, ltldo, and dstar2tgba
    now accepts options to filter the SCCs to count.  For instance
    --stats='%[awT]c' will count the SCCs that are (a)ccepting and
    (w)eak, but not (T)erminal.  See --help for all supported
    filters.

  Bugs fixed:

  * Fix several cases where command-line tools would fail to diagnose
    write errors (e.g. when writing to a filesystem that is full).

  * Typos in genltl --help and in the man page spot-x(7).

New in spot 2.1 (2016-08-08)

  Command-line tools:

  * All tools that input formulas or automata (i.e., autfilt,
    dstar2tgba, ltl2tgba, ltl2tgta, ltlcross, ltldo, ltlfilt,
    ltlgrind) now have a more homogeneous handling of the default
    input.

    - If no formula/automaton has been specified, and the standard
      input is not a tty, then the default is to read that.  This is
      a change for ltl2tgba and ltl2tgta.
      In particular, it simplifies

        genltl --dac | ltl2tgba -F- | autfilt ...

      into

        genltl --dac | ltl2tgba | autfilt ...

    - If standard input is a tty and no other input has been
      specified, then an error message is printed.  This is a change
      for autfilt, dstar2tgba, ltlcross, ltldo, ltlfilt, ltlgrind,
      which used to expect the user to type formulas or automata at
      the terminal, confusing people.

    - All tools now accept - as a shorthand for -F-, to force reading
      input from the standard input (regardless of whether it is a
      tty or not).  This is a change for ltl2tgba, ltl2tgta,
      ltlcross, and ltldo.

  * ltldo has a new option --errors=... to specify how to deal with
    errors from executed tools.

  * ltlcross and ltldo learned to bypass the shell when executing
    simple commands (with support for single- or double-quoted
    arguments, and redirection of stdin and stdout, but nothing
    more).

  * ltlcross and ltldo learned a new syntax to specify that an input
    formula should be written in some given syntax after rewriting
    some operators away.  For instance the default arguments passed
    to ltl2dstar have been changed from

      --output-format=hoa %L %O

    into

      --output-format=hoa %[WM]L %O

    where [WM] specifies that operators W and M should be rewritten
    away.  As a consequence, running

      ltldo ltl2dstar -f 'a M b'

    will now work, and call ltl2dstar on the equivalent formula
    'b U (a & b)' instead.  The operators that can be listed between
    brackets are the same as those of ltlfilt's --unabbreviate
    option.

  * ltlcross learned to show some counterexamples when diagnosing
    failures of cross-comparison checks against random state spaces.

  * autfilt learned to filter automata by count of SCCs
    (--sccs=RANGE) or by type of SCCs (--accepting-sccs=RANGE,
    --rejecting-sccs=RANGE, --trivial-sccs=RANGE,
    --terminal-sccs=RANGE, --weak-sccs=RANGE,
    --inherently-weak-sccs=RANGE).

  * autfilt learned --remove-unused-ap to remove atomic propositions
    that are declared in the input automaton, but not actually used.
    This of course makes sense only for input/output formats that
    declare atomic propositions (HOA & DSTAR).

  * autfilt learned two options to filter automata by count of used
    or unused atomic propositions: --used-ap=RANGE and
    --unused-ap=RANGE.  These differ from --ap=RANGE, which only
    considers *declared* atomic propositions, regardless of whether
    they are actually used.

  * autfilt learned to filter automata by count of nondeterministic
    states with --nondet-states=RANGE.

  * autfilt learned to filter automata representing stutter-invariant
    properties with --is-stutter-invariant.

  * autfilt learned two new options to highlight non-determinism:
    --highlight-nondet-states=NUM and --highlight-nondet-edges=NUM,
    where NUM is a color number.  Additionally --highlight-nondet=NUM
    is a shorthand for using the two.

  * autfilt learned to highlight a run matching a given word using
    the --highlight-word=[NUM,]WORD option.  However currently this
    only works on automata with Fin-less acceptance.

  * autfilt learned two options --generalized-rabin and
    --generalized-streett to convert the acceptance conditions.

  * genltl learned three new families: --dac-patterns=1..45,
    --eh-patterns=1..12, and --sb-patterns=1..27.  Unlike other
    options these do not output scalable patterns, but simply a list
    of formulas appearing in these three papers: Dwyer et al
    (FMSP'98), Etessami & Holzmann (Concur'00), Somenzi & Bloem
    (CAV'00).

  * genltl learned two options, --positive and --negative, to control
    whether formulas should be output after negation or not (or
    both).
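
    For example, the following sketch prints the first three
    Etessami & Holzmann patterns together with their negations (any
    of the pattern options above would work the same way):

      % genltl --eh-patterns=1..3 --positive --negative --format='%f'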

  * The formatter used by --format (for ltlfilt, ltlgrind, genltl,
    randltl) or --stats (for autfilt, dstar2tgba, ltl2tgba, ltldo,
    randaut) learned to recognize double-quoted fields and double the
    double-quotes output in between, as expected from
    RFC4180-compliant CSV files.  For instance

      ltl2tgba -f 'a U "b+c"' --stats='"%f",%s'

    will output

      "a U ""b+c""",2

  * The --csv-escape option of genltl, ltlfilt, ltlgrind, and randltl
    is now deprecated.  The option is still here, but hidden and
    undocumented.

  * The --stats option of autfilt, dstar2tgba, ltl2tgba, ltldo,
    randaut learned to display the output (or input if that makes
    sense) automaton as a HOA one-liner using %h (or %H), helping to
    create CSV files containing automata.

  * autfilt and dstar2tgba learned to read automata from columns in
    CSV files, specified using the same filename/COLUMN syntax used
    by tools reading formulas.

  * Arguments passed to -x (in ltl2tgba, ltl2tgta, autfilt,
    dstar2tgba) that are not used are now reported as they might be
    typos.  This occurred a couple of times in our test-suite.  A
    similar check is done for the arguments of
    autfilt --sat-minimize=...

  Library:

  * The print_hoa() function will now output version 1.1 of the HOA
    format when passed the "1.1" option (i.e., use -H1.1 from any
    command-line tool).  As far as Spot is concerned, this allows
    negated properties to be expressed.  Version 1 of the HOA format
    is still the default, but we plan to default to version 1.1 in
    the future.

  * The "highlight-states" and "highlight-edges" named properties,
    which were introduced in 1.99.8, will now be output using
    "spot.highlight.edges:" and "spot.highlight.states:" headers if
    version 1.1 of the HOA format is selected.  The automaton parser
    was secretly able to read those since 1.99.8, but that is now
    documented at https://spot.lrde.epita.fr/hoa.html#extensions

  * highlight_nondet_states() and highlight_nondet_edges() are new
    functions that define the above two named properties.

  * is_deterministic(), is_terminal(), is_weak(),
    is_inherently_weak(), count_nondet_states(),
    highlight_nondet_edges(), and highlight_nondet_states() will
    update the corresponding properties of the automaton as a
    side-effect of their check.

  * The sbacc() function, used by "ltl2tgba -S" and "autfilt -S" to
    convert automata to state-based acceptance, learned some tricks
    (using SCCs, pulling acceptance marks common to all outgoing
    edges, and pushing acceptance marks common to all incoming edges)
    to reduce the number of additional states needed.

  * to_generalized_rabin() and to_generalized_streett() are two new
    functions that convert the acceptance condition as requested
    without changing the transition structure.

  * language_containment_checker now has default values for all
    parameters of its constructor.

  * spot::twa_run has a new method, project(), that can be used to
    project a run found in a product onto one of the original
    operands.  This is for instance used by autfilt --highlight-word.
    The old spot::project_twa_run() function has been removed: it was
    not used anywhere in Spot, and had an obvious bug in its
    implementation, so it cannot be missed by anyone.

  * spot::twa has two new methods that supplement is_empty():
    twa::accepting_run() and twa::accepting_word().  They compute
    what their names suggest.  Note that twa::accepting_run(), unlike
    the two others, is currently restricted to automata with Fin-less
    acceptance.

  * spot::check_stutter_invariance() can now work on
    non-deterministic automata for which no corresponding formula is
    known.
    This implies that "autfilt --check=stutter" will now label all
    automata, not just deterministic automata.

  * New LTL and PSL simplification rules:
    - if e is a pure eventuality and g => e, then e U g = Fg
    - if u is purely universal and u => g, then u R g = Gg
    - {s[*0..j]}[]->b = {s[*1..j]}[]->b
    - {s[*0..j]}<>->b = {s[*1..j]}<>->b

  * spot::twa::succ_iterable was renamed to
    spot::internal::twa_succ_iterable to make it clear this is not
    for public consumption.

  * spot::fair_kripke::state_acceptance_conditions() was renamed to
    spot::fair_kripke::state_acceptance_mark() for consistency.  This
    is backward incompatible, but we are not aware of any actual use
    of this method.

  Python:

  * The __format__() method for formula supports the same
    operator-rewriting feature introduced in ltldo and ltlcross.  So
    "{:[i]s}".format(f) is the same as
    "{:s}".format(f.unabbreviate("i")).

  * Bindings for language_containment_checker were added.

  * Bindings for randomize() were added.

  * Iterating over edges via "aut.out(s)" or "aut.edges()" now allows
    modifying the edge fields.

  * Under IPython the spot.ltsmin module now offers a %%pml magic to
    define Promela models, compile them with spins, and dynamically
    load them.  This is akin to the %%dve magic that was already
    supported.

  * The %%dve and %%pml magics honor the SPOT_TMPDIR and TMPDIR
    environment variables.  This especially helps when the current
    directory is read-only.

  Documentation:

  * A new example page shows how to test the equivalence of two
    LTL/PSL formulas.  https://spot.lrde.epita.fr/tut04.html

  * A new page discusses explicit vs. on-the-fly interfaces for
    exploring automata in C++.  https://spot.lrde.epita.fr/tut50.html

  * Another new page shows how to implement an on-the-fly Kripke
    structure for a custom state space.
    https://spot.lrde.epita.fr/tut51.html

  * The concepts.html page now lists all named properties used by
    automata.

  Bug fixes:

  * When ltlcross found a bug using a product of complemented
    automata, the error message would report "Comp(Ni)*Comp(Pj)" as
    non-empty while the actual culprit was "Comp(Nj)*Comp(Pi)".

  * Fix some non-deterministic execution of minimize_wdba(), causing
    test-suite failures with the future G++ 7, and clang 3.9.

  * print_lbtt() had a memory leak when printing states without
    successors.

New in spot 2.0.3 (2016-07-11)

  Bug fixes:

  * The degen-lcache=1 option of the degeneralization algorithm
    (which is a default option) did not behave exactly as documented:
    instead of reusing the first level ever created for a state where
    the choice of the level is free, it reused the last level ever
    used.  This caused some posterior simulation-based reductions to
    be less efficient at reducing automata (on average).

  * The generalized testing automata displayed by the online
    translator were incorrect (those output by ltl2tgta were OK).

  * ltl2tgta should not offer options --ba, --monitor, --tgba and
    such.

  * The relabel() function could incorrectly unregister old atomic
    propositions even when they were still used in the output (e.g.,
    if a&p0 is relabeled to p0&p1).  This could cause ltldo and the
    online translator to report errors.

New in spot 2.0.2 (2016-06-17)

  Documentation:

  * We now have a citing page at
    https://spot.lrde.epita.fr/citing.html providing a list of
    references about Spot.

  * The Python examples have been augmented with the two examples
    from our ATVA'16 tool paper.

  Bug fixes:

  * Fix compilation error observed with Clang++ 3.7.1 and GCC 6.1.1
    headers.

  * Fix an infinite recursion in relabel_bse().

  * Various small typos and cosmetic cleanups.

New in spot 2.0.1 (2016-05-09)

  Library:

  * twa::unregister_ap() and twa_graph::remove_unused_ap() are new
    methods introduced to fix some of the bugs listed below.

  Documentation:

  * Add missing documentation for the option string passed to
    spot::make_emptiness_check_instantiator().

  * There is now a spot(7) man page listing all installed
    command-line tools.

  Python:

  * The tgba_determinize() function is now accessible in Python.

  Bug fixes:

  * The automaton parser would choke on comments like /******/.

  * check_strength() should also set negated properties.

  * Fix autfilt to apply --simplify-exclusive-ap only after the
    simplifications of (--small/--deterministic) have been performed.

  * The automaton parser did not fully register atomic propositions
    for automata read from never claims or as LBTT.

  * spot::ltsmin::kripke() had the same issue.

  * The sub_stats_reachable() function used to count the number of
    transitions based on the number of atomic propositions actually
    *used* by the automaton instead of using the number of APs
    declared.

  * print_hoa() will now output all the atomic propositions that have
    been registered, not only those that are used in the automaton.
    (Note that it will also throw an exception if the automaton uses
    an unregistered AP; this is how some of the above bugs were
    found.)

  * For Small or Deterministic preference, the postprocessor will now
    unregister atomic propositions that are no longer used in labels.
    Simplification of exclusive properties and remove_ap::strip()
    will do similarly.

  * bench/ltl2tgba/ was not working since the source code
    reorganization of 1.99.7.

  * Various typos and minor documentation fixes.

New in spot 2.0 (2016-04-11)

  Command-line tools:

  * ltlfilt now also supports the --accept-word=WORD and
    --reject-word=WORD options that were introduced in autfilt in the
    previous version.

  Python:

  * The output of spot.atomic_prop_collect() is printable and can now
    be passed directly to spot.ltsmin.model.kripke().

  Library:

  * digraph::valid_trans() renamed to digraph::is_valid_edge().

  Documentation:

  * The concepts page (https://spot.lrde.epita.fr/concepts.html) now
    includes a high-level description of the architecture, and some
    notes about automata properties.

  * More Doxygen documentation for spot::formula and spot::digraph.

New in spot 1.99.9 (2016-03-14)

  Command-line tools:

  * autfilt has two new options: --accept-word=WORD and
    --reject-word=WORD for filtering automata that accept or reject
    some word.  The options may be used multiple times.  (See the
    example at the end of this section.)

  Library:

  * The parse_word() function can be used to parse a lasso-shaped
    word and build a twa_word.  The twa_word::as_automaton() method
    can be used to create an automaton out of that.

  * twa::ap_var() renamed to twa::ap_vars().

  * emptiness_check_instantiator::min_acceptance_conditions() and
    emptiness_check_instantiator::max_acceptance_conditions() renamed
    to emptiness_check_instantiator::min_sets() and
    emptiness_check_instantiator::max_sets().

  * tgba_reachable_iterator (and subclasses) was renamed to
    twa_reachable_iterator for consistency.

  Documentation:

  * The page https://spot.lrde.epita.fr/upgrade2.html should help
    people migrating old C++ code written for Spot 1.2.x, and update
    it for (the upcoming) Spot 2.0.

  Bug fixes:

  * spot/twaalgos/gtec/gtec.hh was incorrectly installed as
    spot/tgbaalgos/gtec/gtec.hh.

  * The shared libraries should now compile again on Darwin.
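
  For example, the --accept-word/--reject-word options above can be
  combined as follows.  This is only a sketch: it assumes the
  lasso-shaped word syntax cycle{...} read by parse_word(), and the
  formula is arbitrary.

    # keep (and count) automata accepting the first word and
    # rejecting the second one
    % ltl2tgba 'GFa & GFb' |
        autfilt --accept-word='cycle{a & b}' --reject-word='cycle{!a}' -c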

New in spot 1.99.8 (2016-02-18)

  Command-line tools:

  * ltl2tgba now also supports the --generic option (already
    supported by ltldo and autfilt) to lift any restriction on the
    acceptance condition produced.  This option now has a short
    version: -G.

  * ltl2tgba and autfilt have learnt how to determinize automata.
    For this to work, --generic acceptance should be enabled (this is
    the default for autfilt, but not for ltl2tgba).

    "ltl2tgba -G -D" will now always output a deterministic
    automaton.  It can be an automaton with transition-based parity
    acceptance in case Spot could not find a deterministic automaton
    with (maybe generalized) Büchi acceptance.

    "ltl2tgba -D" is unchanged (the --tgba acceptance is the
    default), and will output a deterministic automaton with
    (generalized) Büchi acceptance only if one could be found.
    Otherwise a non-deterministic automaton is output, but this does
    NOT mean that no deterministic Büchi automaton exists for this
    formula.  It only means Spot could not find it.

    "autfilt -D" will determinize any automaton, because --generic
    acceptance is the default for autfilt.

    "autfilt -D --tgba" will behave like "ltl2tgba -D", i.e., it may
    fail to find a deterministic automaton (even if one exists) and
    return a nondeterministic automaton.

  * "autfilt --complement" now also works for non-deterministic
    automata but will output a deterministic automaton.
    "autfilt --complement --tgba" will likely output a
    nondeterministic TGBA.

  * autfilt has a new option, --included-in, to filter automata whose
    languages are included in the language of a given automaton.

  * autfilt has a new option, --equivalent-to, to filter automata
    that are equivalent (language-wise) to a given automaton.

  * ltlcross has a new option --determinize to instruct it to
    complement non-deterministic automata via determinization.  This
    option is not enabled by default as it can potentially be slow
    and generate large automata.  When --determinize is given, option
    --product=0 is implied, since the tests based on products with
    random state spaces are pointless for deterministic automata.

  * ltl2tgba and ltldo now support %< and %> in the string passed to
    --stats when reading formulas from a CSV file.

  * ltlfilt's options --size-min=N, --size-max=N, --bsize-min=N, and
    --bsize-max=N have been reimplemented as --size=RANGE and
    --bsize=RANGE.  The old names are still supported for backward
    compatibility, but they are not documented anymore.

  * ltlfilt's option --ap=N can now take a RANGE as parameter.

  * autfilt now has a --ap=RANGE option to filter automata by number
    of atomic propositions.

  Library:

  * Building products with different dictionaries now raises an
    exception instead of using an assertion that could be disabled.

  * The load_ltsmin() function has been split in two.  Now you should
    first call ltsmin_model::load(filename) to create an
    ltsmin_model, and then call the ltsmin_model::kripke(...) method
    to create an automaton that can be iterated on the fly.  The
    intermediate object can be queried about the supported variables
    and their types.

  * print_dot() now accepts several new options.

  * Spot's headers should now be included with a spot/ prefix (use
    #include <spot/...> instead of #include "...").  This implies
    that when Spot headers are installed in /usr/include/spot/...
    (the default when using the Debian packages) or
    /usr/local/include/spot/... (the default when compiling from
    source), then it is no longer necessary to add
    -I/usr/include/spot or -I/usr/local/include/spot when compiling,
    as /usr/include and /usr/local/include are usually searched by
    default.

    Inside the source distribution, the subdirectory src/ has been
    renamed to spot/, so that the root of the source tree can also be
    put on the preprocessor's search path to compile against a
    non-installed version of Spot.  Similarly, iface/ltsmin/ has been
    renamed to spot/ltsmin/, so that installed and non-installed
    directories can be used similarly.

  * twa::~twa() is now calling
    get_dict()->unregister_all_my_variables(this); so this does not
    need to be done in any subclass.

  * is_inherently_weak_automaton() is a new function, and
    check_strength() has been modified to also check inherently weak
    automata.

  * decompose_strength() is now extracting inherently weak SCCs
    instead of just weak SCCs.  This gets rid of some corner cases
    that used to require ad hoc handling.

  * acc_cond::acc_code's methods append_or(), append_and(), and
    shift_left() have been replaced by operators |=, &=, <<=, and for
    completeness the operators |, &, and << have been added.

  * Several methods have been removed from the acc_cond class because
    they were simply redundant with the methods of acc_cond::mark_t,
    and more complex to use:

      acc_cond::marks(...)      -> use acc_cond::mark_t(...)
      acc_cond::sets(m)         -> use m.sets()
      acc_cond::has(m, u)       -> use m.has(u)
      acc_cond::cup(l, r)       -> use l | r
      acc_cond::cap(l, r)       -> use l & r
      acc_cond::set_minus(l, r) -> use l - r

    Additionally, the following methods/functions have been renamed:

      acc_cond::is_tt() -> acc_cond::is_t()
      acc_cond::is_ff() -> acc_cond::is_f()
      parse_acc_code()  -> acc_cond::acc_code(...)

  * Automata property flags (those that tell whether the automaton is
    deterministic, weak, stutter-invariant, etc.) are now stored
    using three-valued logic: in addition to "maybe"/"yes" they can
    now also represent "no".  This is some preparation for the
    upcoming support of the HOA v1.1 format, but it also saves time
    in some algorithms (e.g., is_deterministic() can now return
    immediately on automata marked as not deterministic).

  * The automaton parser now accepts negated properties as they will
    be introduced in HOA v1.1, and will check for some
    inconsistencies.  These properties are stored and used when
    appropriate, but they are not yet output by the HOA printer.

  Python:

  * Iterating over the transitions leaving a state (the
    twa_graph::out() C++ function) is now possible in Python.  See
    https://spot.lrde.epita.fr/tut21.html for a demonstration.

  * Membership to acceptance sets can be specified using a Python
    list when calling for instance the Python version of
    twa_graph::new_edge().  See https://spot.lrde.epita.fr/tut22.html
    for a demonstration.

  * Automaton states can be named via the set_state_names() method.
    See https://spot.lrde.epita.fr/ipynb/product.html for an example.

  Documentation:

  * There is a new page explaining how to compile example programs
    and link them with Spot.  https://spot.lrde.epita.fr/compile.html

  * Python bindings for manipulating acceptance conditions are
    demonstrated by https://spot.lrde.epita.fr/ipynb/acc_cond.html,
    and a Python implementation of the product of two automata is
    illustrated by https://spot.lrde.epita.fr/ipynb/product.html

  Source code reorganisation:

  * A lot of directories have been shuffled around in the
    distribution:

      src/                  -> spot/  (see rationale above)
      iface/ltsmin/ (code)  -> spot/ltsmin/
      wrap/python/          -> python/
      src/tests/            -> tests/core/
      src/sanity/           -> tests/sanity/
      iface/ltsmin/ (tests) -> tests/ltsmin/
      wrap/python/tests     -> tests/python/

  Bug fixes:

  * twa_graph would incorrectly replace named states during
    purge_dead_states and purge_unreachable_states.

  * twa::ap() would contain duplicates when an atomic proposition was
    registered several times.

  * product() would incorrectly mark the product of two
    stutter-sensitive automata as stutter-sensitive as well.

  * complete() could incorrectly reuse an existing (but accepting!)
    state as a sink.

New in spot 1.99.6 (2015-12-04)

  Command-line tools:

  * autfilt has two new filters: --is-weak and --is-terminal.

  * autfilt has a new transformation: --decompose-strength,
    implementing the decomposition of our TACAS'13 paper.  A
    demonstration of this feature via the Python bindings can be
    found at https://spot.lrde.epita.fr/ipynb/decompose.html

  * All tools that output HOA files accept a --check=strength option
    to request automata to be marked as "weak" or "terminal" as
    appropriate.  Without this option, these properties may only be
    set as a side-effect of running transformations that use this
    information.

  Library:

  * Properties of automata (like the "properties:" line of the HOA
    format) are stored as bits whose interpretation is True=yes,
    False=maybe.  Having getters like "aut->is_deterministic()" or
    "aut->is_unambiguous()" was confusing, because there are separate
    functions "is_deterministic(aut)" and "is_unambiguous(aut)" that
    do actually check the automaton.  The getters have been renamed
    to avoid confusion, and get names more in line with the HOA
    format.

      - twa::has_state_based_acc()      -> twa::prop_state_acc()
      - twa::prop_state_based_acc(bool) -> twa::prop_state_acc(bool)
      - twa::is_inherently_weak()       -> twa::prop_inherently_weak()
      - twa::is_deterministic()         -> twa::prop_deterministic()
      - twa::is_unambiguous()           -> twa::prop_unambiguous()
      - twa::is_stutter_invariant()     -> twa::prop_stutter_invariant()
      - twa::is_stutter_sensitive()     -> twa::prop_stutter_sensitive()

    The setters have the same names as the getters, except they take
    a Boolean argument.  This argument used to be optional
    (defaulting to True), but it no longer is.

  * Automata now support the "weak" and "terminal" properties in
    addition to the previously supported "inherently-weak".

  * By default the HOA printer tries not to bloat the output with
    properties that are redundant and probably useless.  The HOA
    printer now has a new option "v" (use "-Hv" from the command
    line) to output more verbose "properties:".  This currently
    includes outputting "no-univ-branch", outputting "unambiguous"
    even for automata already tagged as "deterministic", and
    "inherently-weak" or "weak" even for automata tagged "weak" or
    "terminal".

  * The HOA printer has a new option "k" (use "-Hk" from the command
    line) to output automata using state labels whenever possible.
    This is useful to print Kripke structures.

  * The dot output will display pairs of states when displaying an
    automaton built as an explicit product.  This works in IPython
    with spot.product() or spot.product_or() and in the shell with
    autfilt's --product or --product-or options.

  * The print_dot() function supports a new option, +N, where N is a
    positive integer that will be added to all set numbers in the
    output.  This is convenient when displaying two automata before
    building their product: use +N to shift the displayed sets of the
    second automaton by the number of acceptance sets N of the first
    one.

  * The SAT minimization for DTωA now does a better job at selecting
    reference automata when the output acceptance is the same as the
    input acceptance.  This can provide nice speedups when trying to
    synthesize large automata with different acceptance conditions.
* Explicit Kripke structures (i.e., stored as explicit graphs) have been rewritten above the graph class, using an interface similar to the twa class. The new class is called kripke_graph. The ad hoc Kripke parser and printer have been removed, because we can now use print_hoa() with the "k" option to print Kripke structure in the HOA format, and furthermore the parse_aut() function now has an option to load such an HOA file as a kripke_graph. * The HOA parser now accepts identifier, aliases, and headernames containing dots, as this will be allowed in the next version of the HOA format. * Renamings: is_guarantee_automaton() -> is_terminal_automaton() tgba_run -> twa_run twa_word::print -> operator<< dtgba_sat_synthetize() -> dtwa_sat_synthetize() dtgba_sat_synthetize_dichotomy() -> dtwa_sat_synthetize_dichotomy() Python: * Add bindings for is_unambiguous(). * Better interface for sat_minimize(). Bug fixes: * the HOA parser was ignoring the "unambiguous" property. * --dot=Bb should work like --dot=b, allowing us to disable a B option set via an environment variable. New in spot 1.99.5 (2015-11-03) Command-line tools: * autfilt has gained a --complement option. It currently works only for deterministic automata. * By default, autfilt does not simplify automata (this has not changed), as if the --low --any options were used. But now, if one of --small, --deterministic, or --any is given, the optimization level automatically defaults to --high (unless specified otherwise). For symmetry, if one of --low, --medium, or --high is given, then the translation intent defaults to --small (unless specified otherwise). * autfilt, dstar2tgba, ltlcross, and ltldo now trust the (supported) automaton properties declared in any HOA file they read. This can be disabled with option --trust-hoa=no. * ltlgrind FILENAME[/COL] is now the same as ltlgrind -F FILENAME[/COL] for consistency with ltlfilt. Library: * dtgba_complement() was renamed to dtwa_complement(), moved to complement.hh, and its purpose was restricted to just completing the automaton and complementing its acceptance condition. Any further acceptance condition transformation can be done with to_generalized_buchi() or remove_fin(). * The remove_fin() has learnt how to better deal with automata that are declared as weak. This code was previously in dtgba_complement(). * scc_filter_states() has learnt to remove useless acceptance marks that are on transitions between SCCs, while preserving state-based acceptance. The most visible effect is in the output of "ltl2tgba -s XXXa": it used to have 5 accepting states, it now has only one. (Changing removing acceptance of those 4 states has no effect on the language, but it speeds up some algorithms like NDFS-based emptiness checks, as discussed in our Spin'15 paper.) * The HOA parser will diagnose any version that is not v1, unless it looks like a subversion of v1 and no parse error was detected. * The way to pass option to the automaton parser has been changed to make it easier to introduce new options. One such new option is "trust_hoa": when true (the default) supported properties declared in HOA files are trusted even if they cannot be easily be verified. Another option "raise_errors" now replaces the method automaton_stream_parser::parse_strict(). * The output of the automaton parser now include the list of parse errors (that does not have to be passed as a parameters) and the input filename (making the output of error messages easier). 
* The following renamings make the code more consistent: ltl_simplifier -> tl_simplifier tgba_statistics::transitions -> twa_statistics::edges tgba_sub_statistics::sub_transitions -> twa_sub_statistics::transitions tgba_run -> twa_run reduce_run -> twa_run::reduce replay_tgba_run -> twa_run::replay print_tgba_run -> operator<< tgba_run_to_tgba -> twa_run::as_twa format_parse_aut_errors -> parsed_aut::format_errors twa_succ_iterator::current_state -> twa_succ_iterator::dst twa_succ_iterator::current_condition -> twa_succ_iterator::cond twa_succ_iterator::current_acceptance_conditions -> twa_succ_iterator::acc ta_succ_iterator::current_state -> ta_succ_iterator::dst ta_succ_iterator::current_condition -> ta_succ_iterator::cond ta_succ_iterator::current_acceptance_conditions -> ta_succ_iterator::acc Python: * The minimum supported Python version is now 3.3. * Add bindings for complete() and dtwa_complement() * Formulas now have a custom __format__ function. See https://spot.lrde.epita.fr/tut01.html for examples. * The Debian package is now compiled for all Python3 versions supported by Debian, not just the default one. * Automata now have get_name()/set_name() methods. * spot.postprocess(aut, *options), or aut.postprocess(*options) simplify the use of the spot.postprocessor object. (Just like we have spot.translate() on top of spot.translator().) * spot.automata() and spot.automaton() now have additional optional arguments: - timeout: to restrict the runtime of commands that produce automata - trust_hoa: can be set to False to ignore HOA properties that cannot be easily verified - ignore_abort: can be set to False if you do not want to skip automata ended with --ABORT--. Documentation: * There is a new page showing how to use spot::postprocessor to convert any type of automaton to Büchi. https://spot.lrde.epita.fr/tut30.html Bugs fixed: * Work around some weird exception raised when using the randltlgenerator under Python 3.5. * Recognize "nullptr" formulas as None in Python. * Fix compilation of bench/stutter/ * Handle saturation of formula reference counts. * Fix typo in the Python code for the CGI server. * "randaut -Q0 1" used to segfault. * "ltlgrind -F FILENAME/COL" did not preserve other CSV columns. * "ltlgrind --help" did not document FORMAT. * unabbreviate could easily use forbidden operators. * "autfilt --is-unambiguous" could fail to detect the nonambiguity of some automata with empty languages. * When parsing long tokens (e.g, state labels representing very large strings) the automaton parser could die with "input buffer overflow, can't enlarge buffer because scanner uses REJECT" New in spot 1.99.4 (2015-10-01) New features: * autfilt's --sat-minimize now takes a "colored" option to constrain all transitions (or states) in the output automaton to belong to exactly one acceptance sets. This is useful when targeting parity acceptance. * autfilt has a new --product-or option. This builds a synchronized product of two (completed of needed) automata in order to recognize the *sum* of their languages. This works by just using the disjunction of their acceptance conditions (with appropriate renumbering of the acceptance sets). For consistency, the --product option (that builds a synchronized product that recognizes the *intersection* of the languages) now also has a --product-and alias. * the parser for ltl2dstar's format has been merged with the parser for the other automata formats. 
This implies two things: - autfilt and dstar2tgba (despite its name) can now both read automata written in any of the four supported syntaxes (ltl2dstar's, lbtt's, HOA, never claim). - "dstar2tgba some files..." now behaves exactly like "autfilt --tgba --high --small some files...". (But dstar2tgba does not offer all the filtering and transformations options of autfilt.) Major code changes and reorganization: * The class hierarchy for temporal formulas has been entirely rewritten. This change is actually quite massive (~13200 lines removed, ~8200 lines added), and brings some nice benefits: - LTL/PSL formulas are now represented by lightweight formula objects (instead of pointers to children of an abstract formula class) that perform reference counting automatically. - There is no hierachy anymore: all operators are represented by a single type of node in the syntax tree, and an enumerator is used to distinguish between operators. - Visitors have been replaced by member functions such as map() or traverse(), that take a function (usually written as a lambda function) and apply it to the nodes of the tree. - As a consequence, writing algorithms that manipulate formula is more friendly, and several algorithms that spanned a few pages have been reduced to a few lines. The page https://spot.lrde.epita.fr/tut03.html illustrates the new interface, in both C++ and Python. * Directories ltlast/, ltlenv/, and ltlvisit/, have been merged into a single tl/ directory (for temporal logic). This is motivated by the fact that these formulas are not restricted to LTL, and by the fact that we no longuer use the "visitor" pattern. * The LTL/PSL parser is now declared in tl/parse.hh (instead of ltlparse/public.hh). * The spot::ltl namespace has been merged with the spot namespace. * The dupexp_dfs() function has been renamed to copy(), and has learned to preserve named states if required. * Atomic propositions can be declared without going through an environment using the spot::formula::ap() static function. They can be registered for an automaton directly using the spot::twa::register_ap() method. The vector of atomic propositions used by an automaton can now be retrieved using the spot::twa::ap() method. New in spot 1.99.3 (2015-08-26) * The CGI script for LTL translation offers a HOA download link for each generated automaton. * The html documentation now includes a HTML copies of the man pages, and HTML copies of the Python notebooks. * scc_filter(aut, true) does not remove Fin marks from rejecting SCCs, but it now does remove Fin marks from transitions between SCCs. * All the unabbreviation functions (unabbreviate_ltl(), unabbreviate_logic(), unabbreviate_wm()) have been merged into a single unabbreviate() function that takes a string representing the list of operators to remove among "eFGiMRW^" where 'e', 'i', and '^' stand respectively for <->, ->, and xor. This feature is also available via ltlfilt --unabbreviate. * In LTL formulas, atomic propositions specified using double-quotes can now include \" and \\. (This is more consistent with the HOA format, which already allows that.) * All the conversion routines that were written specifically for ltl2dstar's output format (DRA->BA & DRA->TGBA) have been ported to the new TωA structure supporting the HOA format. The DRA->TGBA conversion was reimplemented in the previous release, and the DRA->BA conversion has been reimplemented in this release (but it is still restricted to state-based acceptance). 
All these conversions are called automatically by to_generalized_buchi() or remove_fin() so there should be no need to call them directly. As a consequence: - "autfilt --remove-fin" or "autfilt -B" is better at converting state-based Rabin automata: it will produce a DBA if the input is deterministic and DBA-realizable, but will preserve as much determinism as possible otherwise. - a lot of obsolete code that was here only to support the old conversion routines has been removed. (The number of lines removed by this release is twice the number of lines added.) - ltlcross now uses automata in ltl2dstar's format directly, without converting them to Büchi (this makes the statistics reported in CSV files more relevant). - ltlcross no longer outputs additional columns about the size of the input automaton in the case ltl2dstar's format is used. - ltldo uses results in ltl2dstar's format directly, without converting them to Büchi. - dstar2tgba has been greatly simplified and now uses the same output routines as all the other tools that output automata. This implies a few minor semantic changes, for instance --stats=%A used to output the number of acceptance *pairs* in the input automaton, while it now outputs the number of acceptance sets like in all the other tools. * Bugs fixed - Some acceptance conditions like Fin(0)|Fin(1)|Fin(2)&Inf(3) were not detected as generalized-Rabin. - Unknown arguments for print_hoa() (i.e., option -H in command-line tools) are now diagnosed. - The CGI script for LTL translation now forces transition-based acceptance on WDBA-minimized automata when TGBA is requested. - ltlgrind --help output had some options documented twice, or in the wrong place. - The man page for ltlcross had obsolete examples. - When outputting atomic propositions in double quotes, the escaping routine used by the two styles of LaTeX output was slightly wrong. For instance ^ was incorrectly escaped, and the double quotes where not always properly rendered. - A spurious assertion was triggered by streett_to_generalized_buchi(), but only when compiled in DEBUG mode. - LTL formula rewritten in Spin's syntax no longer have their -> and <-> rewritten away. - Fix some warnings reported by the development version of GCC 6. - The spot.translate() function of the Python binding had a typo preventing the use of 'low'/'medium'/'high' as argument. - Fix spurious failure of uniq.test under different locales. - ltlcross now recovers from out-of-memory errors during state-space products. - bitvect.test was failing on 32bit architectures with assertions enabled because of a bug in the test case. New in spot 1.99.2 (2015-07-18) * The scc_info object, used to build a map of SCCs while gathering additional information, has been simplified and speed up. One test case where ltlcross would take more than 13min (to check the translation of one PSL formula) now takes only 75s. * streett_to_generalized_buchi() is a new function that implements what its name suggests, with some SCC-based optimizations over the naive definition. It is used by the to_generalized_buchi() and remove_fin() functions when the input automaton is a Streett automaton with more than 3 pairs (this threeshold can be changed via the SPOT_STREETT_CONV_MIN environment variable -- see the spot-x(7) man page for details). This is mainly useful to ltlcross, which has to get rid of "Fin" acceptance to perform its checks. 
As an example, the Streett automaton generatated by ltl2dstar (configured with ltl2tgba) for the formula !((GFa -> GFb) & (GFc -> GFd)) has 4307 states and 14 acceptance sets. The new algorithm can translate it into a TGBA with 9754 states and 7 acceptance sets, while the default approch used for converting any acceptance to TGBA would produce 250967 states and 7 acceptance sets. * Bugs fixed: - p[+][:*2] was not detected as belonging to siPSL. - scc_filter() would incorrectly remove Fin marks from rejecting SCCs. - the libspotltsmin library is installed. - ltlcross and ltldo did not properly quote atomic propositions and temporary file names containing a single-quote. - a missing Python.h is now diagnosed at ./configure time, with the suggestion to either install python3-devel, or run ./configure --disable-python. - Debian packages for libraries have been split from the main Spot package, as per Debian guidelines. New in spot 1.99.1 (2015-06-23) * Major changes motivating the jump in version number - Spot now works with automata that can represent more than generalized Büchi acceptance. Older versions were built around the concept of TGBA (Transition-based Generalized Büchi Automata) while this version now deals with what we call TωA (Transition-based ω-Automata). TωA support arbitrary acceptance conditions specified as a Boolean formula of transition sets that must be visited infinitely often or finitely often. This genericity allows for instance to represent Rabin or Streett automata, as well as some generalized variants of those. - Spot has near complete support for the Hanoi Omega Automata format. http://adl.github.io/hoaf/ This formats supports automata with the generic acceptance condition described above, and has been implemented in a number of third-party tools (see http://adl.github.io/hoaf/support.html) to ease their interactions. The only part of the format not yet implemented in Spot is the support for alternating automata. - Spot is now compiling in C++11 mode. The set of C++11 features we use requires GCC >= 4.8 or Clang >= 3.5. Although GCC 4.8 is more than 2-year old, people with older installations won't be able to install this version of Spot. - As a consequence of the switches to C++11 and to TωA, a lot of the existing C++ interfaces have been renamed, and sometime reworked. This makes this version of Spot not backward compatible with Spot 1.2.x. See below for the most important API changes. Furtheremore, the reason this release is not called Spot 2.0 is that we have more of those changes planned. - Support for Python 2 was dropped. We now support only Python 3.2 or later. The Python bindings have been improved a lot, and include some conveniance functions for better integration with IPython's rich display system. User familiar with IPython's notebook should have a look at the notebook files in wrap/python/tests/*.ipynb * Major news for the command-line tools - The set of tools installed by spot now consists in the following 11 commands. Those marked with a '+' are new in this release. - randltl Generate random LTL/PSL formulas. - ltlfilt Filter and convert LTL/PSL formulas. - genltl Generate LTL formulas from scalable patterns. - ltl2tgba Translate LTL/PSL formulas into Büchi automata. - ltl2tgta Translate LTL/PSL formulas into Testing automata. - ltlcross Cross-compare LTL/PSL-to-Büchi translators. + ltlgrind Mutate LTL/PSL formula. - dstar2tgba Convert deterministic Rabin or Streett automata into Büchi. + randaut Generate random automata. 
+ autfilt Filter and convert automata. + ltldo Run LTL/PSL formulas through other tools. randaut does not need any presentation: it does what you expect. ltlgrind is a new tool that mutates LTL or PSL formulas. If you have a tool that is bogus on some formula that is too large to debug, you can use ltlgrind to generate smaller derived formulas and see if you can reproduce the bug on those. autfilt is a new tool that processes a stream of automata. It allows format conversion, filtering automata based on some properties, and general transformations (e.g., change of acceptance conditions, removal of useless states, product between automata, etc.). ltldo is a new tool that runs LTL/PSL formulas through other tools, but uses Spot's command-line interfaces for specifying input and output. This makes it easier to use third-party tool in a pipe, and it also takes care of some necessary format conversion. - ltl2tgba has a new option, -U, to produce unambiguous automata. In unambiguous automata any word is recognized by at most one accepting run, but there might be several ways to reject a word. This works for LTL and PSL formulas. - ltl2tgba has a new option, -S, to produce generalized-Büchi automata with state-based acceptance. Those are obtained by converting some transition-based GBA into a state-based GBA, so they are usually not as small as one would wish. The same option -S is also supported by autfilt. - ltlcross will work with translator producing automata with any acceptance condition, provided the output is in the HOA format. So it can effectively be used to validate tools producing Rabin or Streett automata. - ltlcross has several new options: --grind attempts to reduce the size of any bogus formula it discovers, while still exhibiting the bug. --ignore-execution-failures ignores cases where a translator exits with a non-zero status. --automata save the produced automata into the CSV or JSON file. Those automata are saved using the HOA format. ltlcross will also output two extra columns in its CSV/JSON output: "ambiguous_aut" and "complete_aut" are Boolean that respectively tells whether the automaton is ambiguous and complete. - "ltlfilt --stutter-invariant" will now work with PSL formulas. The implementation is actually much more efficient than our previous implementation that was only for LTL. - ltlfilt's old -q/--quiet option has been renamed to --ignore-errors. The new -q/--quiet semantic is the same as in grep (and also autfilt): disable all normal input, for situtations where only the exit status matters. - ltlfilt's old -n/--negate option can only be used as --negate now. The short '-n NUM' option is now the same as the new --max-count=N option, for consistency with other tools. - ltlfilt has a new --count option to count the number of matching automata. - ltlfilt has a new --exclusive-ap option to constrain formulas based on a list of mutually exclusive atomic propositions. - ltlfilt has a new option --define to be used in conjunction with --relabel or --relabel-bool to print the mapping between old and new labels. - all tools that produce formulas or automata now have an --output (a.k.a. -o) option to redirect that output to a file instead of standard output. The name of this file can be constructed using the same %-escape sequences that are available for --stats or --format. - all tools that output formulas have a -0 option to separate formulas with \0. This helps in conjunction with xargs -0. 
- all tools that output automata have a --check option that request extra checks to be performed on the output to fill in properties values for the HOA format. This options implies -H for HOA output. For instance ltl2tgba -H 'formula' will declare the output automaton as 'stutter-invariant' only if the formula is syntactically stutter-invariant (e.g., in LTL\X). With ltl2tgba --check 'formula' additional checks will be performed, and the automaton will be accurately marked as either 'stutter-invariant' or 'stutter-sensitive'. Another check performed by --check is testing whether the automaton is unambiguous. - ltlcross (and ltldo) have a list of hard-coded shorthands for some existing tools. So for instance running 'ltlcross spin ...' is the same as running 'ltlcross "spin -f %s>%N" ...'. This feature is much more useful for ltldo. - For options that take an output filename (i.e., ltlcross's --save-bogus, --grind, --csv, --json) you can force the file to be opened in append mode (instead of being truncated) by by prefixing the filename with ">>". For instance --save-bogus=">>bugs.ltl" will append to the end of the file. * Other noteworthy news - The web site moved to http://spot.lrde.epita.fr/. - We now have Debian packages. See http://spot.lrde.epita.fr/install.html - The documentation now includes some simple code examples for both Python and C++. (This is still a work in progress.) - The curstomized version of BuDDy (libbdd) used by Spot has be renamed as (libbddx) to avoid issues with copies of BuDDy already installed on the system. - There is a parser for the HOA format (http://adl.github.io/hoaf/) available as a spot::automaton_stream_parser object or spot::parse_aut() function. The former version is able to parse a stream of automata in order to do batch processing. This format can be output by all tools (since Spot 1.2.5) using the --hoa option, and it can be read by autfilt (by default) and ltlcross (using the %H specifier). The current implementation does not support alternation. Multiple initial states are converted into an extra initial state; complemented acceptance sets Inf(!x) are converted to Inf(x); explicit or implicit labels can be used; aliases are supported; "--ABORT--" can be used in a stream. - The above HOA parser can also parse never claims, and LBTT automata, so the never claim parser and the LBTT parser have been removed. This implies that autfilt can input a mix of HOA, never claims, and LBTT automata. ltlcross also use the same parser for all these output, and the old %T and %N specifiers have been deprecated and replaced by %O (for output). - While not all algorithms in the library are able to work with any acceptance condition supported by the HOA format, the following two new functions mitigate that: - remove_fin() takes a TωA whose accepting condition uses Fin(x) primitive, and produces an equivalent TωA without Fin(x): i.e., the output acceptance is a disjunction of generalized Büchi acceptance. This type of acceptance is supported by SCC-based emptiness-check, for instance. - similarly, to_tgba() converts any TωA into an automaton with generalized-Büchi acceptance. - randomize() is a new algorithm that randomly reorders the states and transitions of an automaton. It can be used from the command-line using "autfilt --randomize". - the interface in iface/dve2 has been renamed to iface/ltsmin because it can now interface the dynamic libraries created either by Divine (as patched by the LTSmin group) or by Spins (the LTSmin compiler for Promela). 
- LTL/PSL formulas can include /* comments */. - PSL SEREs support a new operator [:*i..j], the iterated fusion. [:*i..j] is to the fusion operator ':' what [*i..j] is to the concatenation operator ';'. For instance (a*;b)[:*3] is the same as (a*;b):(a*;b):(a*;b). The operator [:+], is syntactic sugar for [:*1..], and corresponds to the operator ⊕ introduced by Dax et al. (ATVA'09). - GraphViz output now uses an horizontal layout by default, and also use circular states (unless the automaton has more than 100 states, or uses named-states). The --dot option of the various command-line tools takes an optional parameter to fine-tune the GraphViz output (including vertical layout, forced circular or elliptic states, named automata, SCC information, ordered transitions, and different ways to colorize the acceptance sets). The environment variables SPOT_DOTDEFAULT and SPOT_DOTEXTRA can also be used to respectively provide a default argument to --dot, and add extra attributes to the output graph. - Never claims can now be output in the style used by Spin since version 6.2.4 (i.e., using do..od instead of if..fi, and with atomic statements for terminal acceptance). The default output is still the old one for compatibility with existing tools. The new style can be requested from command-line tools using option --spin=6 (or -s6 for short). - Support for building unambiguous automata. ltl_to_tgba() has a new options to produce unambiguous TGBA (used by ltl2tgba -U as discussed above). The function is_unambiguous() will check whether an automaton is unambigous, and this is used by autfilt --is-unmabiguous. - The SAT-based minimization algorithm for deterministic automata has been updated to work with ω-Automaton with any acceptance. The input and the output acceptance can be different, so for instance it is possible to create a minimal deterministic Streett automaton starting from a deterministic Rabin automaton. This functionnality is available via autfilt's --sat-minimize option. See doc/userdoc/satmin.html for details. - The on-line interface at http://spot.lrde.epita.fr/trans.html can be used to check stutter-invariance of any LTL/PSL formula. - The on-line interface will work around atomic propositions not supported by ltl3ba. (E.g. you can now translate F(A) or G("foo < bar").) * Noteworthy code changes - Boost is not used anymore. - Automata are now manipulated exclusively via shared pointers. - Most of what was called tgba_something is now called twa_something, unless it is really meant to work only for TGBA. This includes functions, classes, file, and directory names. For instance the class tgba originally defined in tgba/tgba.hh, has been replaced by the class twa defined in twa/twa.hh. - the tgba_explicit class has been completely replaced by a more efficient twa_graph class. Many of the algorithms that were written against the abstract tgba (now twa) interface have been rewritten using twa_graph instances as input and output, making the code a lot simpler. - The tgba_succ_iterator (now twa_succ_iterator) interface has changed. Methods next(), and first() should now return a bool indicating whether the current iteration is valid. - The twa base class has a new method, release_iter(), that can be called to give a used iterator back to its automaton. This iterator is then stored in a protected member, iter_cache_, and all implementations of succ_iter() can be updated to recycle iter_cache_ (if available) instead of allocating a new iterator. 
- The tgba (now called twa) base class has a new method, succ(), to support C++11' range-based for loop, and hide all the above change. Instead of the following syntax: tgba_succ_iterator* i = aut->succ_iter(s); for (i->first(); !i->done(); i->next()) { // use i->current_state() // i->current_condition() // i->current_acceptance_conditions() } delete i; We now prefer: for (auto i: aut->succ(s)) { // use i->current_state() // i->current_condition() // i->current_acceptance_conditions() } And the above syntax is really just syntactic suggar for twa_succ_iterator* i = aut->succ_iter(s); if (i->first()) do { // use i->current_state() // i->current_condition() // i->current_acceptance_conditions() } while (i->next()); aut->release_iter(i); // allow the automaton to recycle the iterator Where the virtual calls to done() and delete have been avoided. - twa::succ_iter() now takes only one argument. The optional global_state and global_automaton arguments have been removed. - The following methods have been removed from the TGBA interface and all their subclasses: - tgba::support_variables() - tgba::compute_support_variables() - tgba::all_acceptance_conditions() // use acc().accepting(...) - tgba::neg_acceptance_conditions() - tgba::number_of_acceptance_conditions() // use acc().num_sets() - Membership to acceptance sets are now stored using bit sets, currently limited to 32 bits. Each TωA has a acc() method that returns a reference to an acceptance object (of type spot::acc_cond), able to operate on acceptance marks (spot::acc_cond::mark_t). Instead of writing code like i->current_acceptance_conditions() == aut->all_acceptance_conditions() we now write aut->acc().accepting(i->current_acceptance_conditions()) (Note that for accepting(x) to return something meaningful, x should be a set of acceptance sets visitied infinitely often. So let's imagine that in the above example i is looking at a self-loop.) Similarly, aut->number_of_acceptance_conditions() is now aut->acc().num_sets() - All functions used for printing LTL/PSL formulas or automata have been renamed to print_something(). Likewise the various parsers should be called parse_something() (they haven't yet all been renamed). - All test suites under src/ have been merged into a single one in src/tests/. The testing tool that was called src/tgbatest/ltl2tgba has been renamed as src/tests/ikwiad (short for "I Know What I Am Doing") so that users should be less tempted to use it instead of src/bin/ltl2tgba. * Removed features - The long unused interface to GreatSPN (or rather, interface to a non-public, customized version of GreatSPN) has been removed. As a consequence, we could get rid of many cruft in the implementation of Couvreur's FM'99 emptiness check. - Support for symbolic, BDD-encoded TGBAs has been removed. This includes the tgba_bdd_concrete class and associated supporting classes, as well as the ltl_to_tgba_lacim() LTL translation algorithm. Historically, this TGBA implementation and LTL translation were the first to be implemented in Spot (by mistake!) and this resulted in many bad design decisions. In practice they were of no use as we only work with explicit automata (i.e. not symbolic) in Spot, and those produced by these techniques are simply too big. - All support for ELTL, i.e., LTL logic extended with operators represented by automata has been removed. 
It was never used in practice because it had no practical user interface, and the translation was a purely-based BDD encoding producing huge automata (when viewed explictely), using the above and non longuer supported tgba_bdd_concrete class. - Our implementation of the Kupferman-Vardi complementation has been removed: it was unused in practice beause of the size of the automata built, and it did not survive the conversion of acceptance sets from BDDs to bitsets. - The unused implementation of state-based alternating Büchi automata has been removed. - Input and output in the old, Spot-specific, text format for TGBA, has been fully removed. We now use HOA everywhere. (In case you have a file in this format, install Spot 1.2.6 and use "src/tgbatest/ltl2tgba -H -X file" to convert the file to HOA.) New in spot 1.2.6 (2014-12-06) * New features: - ltlcross --verbose is a new option to see what is being done * Bug fixes: - Remove one incorrect simplification rule for PSL discovered via checks on random formulaes. (The bug was very unlikely to trigger on non-random formulas, because it requires a SERE with an entire subexpression that is unsatisfiable.) - When the automaton resulting from the translation of a positive formula is deterministic, ltlcross will compute its complement to perform additional checks against other translations of the positive formula. The same procedure should be performed with automata obtained from negated formulas, but because of a typo this was not the case. - the neverclaim parser will now diagnose redefinitions of state labels. - the acceptance specification in the HOA format output have been adjusted to match recent changes in the format specifications. - atomic propositions are correctly escaped in the HOA output. - the build rules for documentation have been made compatible with version 8.0 of Org-mode. (This was only a problem if you build from the git repository, or if you want to edit the documentation.) - recent to changes to libstd++ (as shipped by g++ 4.9.2) have demonstrated that the order of transitions output by the LTL->TGBA translation used to be dependent on the implementation of the STL. This is now fixed. - some developpement version of libstd++ had a bug (PR 63698) in the assignment of std::set, and that was triggered in two places in Spot. The workaround (not assigning sets) is actually more efficient, so we can consider it as a bug fix, even though libstd++ has also been fixed. - all parsers would report wrong line numbers while processing files with DOS style newlines. - add support for SWIG 3.0. New in spot 1.2.5 (2014-08-21) * New features: - The online ltl2tgba translator will automatically attempt to parse a formula using LBT's syntax if it cannot parse it using the normal infix syntax. It also has an option to display formulas using LBT's syntax. - ltl2tgba and dstar2tgba have a new experimental option --hoaf to output automata in the Hanoï Omega Automaton Format whose current draft is at http://adl.github.io/hoaf/ The corresponding C++ function is spot::hoaf_reachable() in tgbaalgos/hoaf.hh. - 'randltl 4' is now a shorthand for 'randltl p0 p1 p2 p3'. - ltlcross has a new option --save-bogus=FILENAME to save any formula for which a problem (other than timeout) was detected during translation or using the resulting automatas. * Documentation: - The man page for ltl2tgba has some new notes and references about TGBA and about monitors. 
* Bug fixes: - Fix incorrect simplification of promises in the translation of the M operator (you may suffer from the bug even if you do not use this operator as some LTL patterns are automatically reduced to it). - Fix simplification of bounded repetition in SERE formulas. - Fix incorrect translation of PSL formulas of the form !{f} where f is unsatisifable. A similar bug was fixed for {f} in Spot 1.1.4, but for some reason it was not fixed for !{f}. - Fix parsing of neverclaims produced by Modella. - Fix a memory leak in the little-used conversion from transition-based alternating automata to tgba. - Fix a harmless uninitialized read in BuDDy. - When writing to the terminal, ltlcross used to display each formula in bright white, to make them stand out. It turns out this was actually hiding the formulas for people using a terminal with white background... This version displays formula in bright blue instead. - 'randltl -n -1 --seed 0' and 'randltl -n -1 --seed 1' used to generate nearly the same list of formulas, shifted by one, because the PRNG write reset with an incremented seed between each output formula. The PRNG is now reset only once. New in spot 1.2.4 (2014-05-15) * New features: - "-B -x degen-lskip" can be used to disable level-skipping in the degeralization procedure called by ltl2tgba and dstar2tgba. This is mostly meant for running experiments. - "-B -x degen-lcache=N" can be used to experiment with different type of level caching during degeneralization. * Bug fixes: - Change the Python bindings to make them compatible with Swig 3.0. - "ltl2tgta --ta" could crash in certain conditions due to the introduction of a simulation-based reduction after degeneralization. - Fix four incorrect formula-simplification rules, three were related to the factorization of Boolean subformulas in operands of the non-length-matching "&" SERE operator, and a fourth one could only be enabled by explicitely passing the favor_event_univ option to the simplifier (not the default). - Fix incorrect translation of the fusion operator (":") in SERE such as {xx;1}:yy[*] where the left operand has 1 as tail. New in spot 1.2.3 (2014-02-11) * New features: - The SPOT_SATLOG environment variable can be set to a filename to obtain statistics about the different iterations of the SAT-based minimization. For an example, see http://spot.lrde.epita.fr/satmin.html - The bench/dtgbasat/ benchmark has been updated to use SPOT_SATLOG and record more statistics. - The default value for the SPOT_SATSOLVER environment variable has been changed to "glucose -verb=0 -model %I >%O". This assumes that glucose 3.0 is installed. For older versions of glucose, remove the "-model" option. * Bug fixes: - More fixes for Python 3 compatibility. - Fix calculation of length_boolone(), were 'Xa|b|c' was considered as length 6 instead of 4 (because it is 'Xa|(b|a)' were (b|a) is Boolean). - Fix Clang-3.5 warnings. - randltl -S did not honor --boolean-priorities. - randltl had trouble generating formulas when all unary, or all binary/n-ary operators were disabled. - Fix spurious testsuite failure when using Pandas 0.13. - Add the time spent in child processes when measuring time with the timer class. - Fix determinism of the SAT-based minimization encoding. (It would sometimes produce different equivalent automata, because of a different encoding order.) - If the SAT-based minimization is asked for a 10-state automaton and returns a 6-state automaton, do not ask for a 9-state automaton in the next iteration... 
- Fix some compilation issue with the version of Apple's Clang that is installed with MacOS X 10.9. - Fix VPATH builds when building from the git repository. - Fix UP links in the html documentation for command-line tools. New in spot 1.2.2 (2014-01-24) * Bug fixes: - Fix compilation *and* behavior of bitvectors on 32-bit architectures. - Fix some compilation errors observed using the antique G++ 4.0.1. - Fix compatibility with Python 3 in the test suite. - Fix a couple of new clang warnings (like "unused private member"). - Add some missing #includes that are not included indirectly when the C++ compiler is in C++11 mode. - Fix detection of numbers that are too large in the ELTL parser. - Fix a memory leak in the ELTL parser, and avoid some unnecessary calls to strlen() at the same time. New in spot 1.2.1 (2013-12-11) * New features: - commands for translators specified to ltlcross can now be given "short names" to be used in the CSV or JSON output. For instance ltlcross '{small} ltl2tgba -s --small %f >%N' ... will run the command "ltl2tgba -s --small %f >%N", but only print "small" in output files. - ltlcross' CSV and JSON output now contains two additional columns: exit_status and exit_code, used to report failures of the translator. If the translation failed, only the time is reported, and the rest of the statistics, which are missing, area left empty (in CVS) or null (in JSON). A new option, --omit-missing can be used to remove lines for failed translations, and remove these two columns. - if ltlcross is used with --products=+5 instead of --products=5 then the stastics for each of the five products will be output separately instead of being averaged. - if ltlcross is used with tools that produce deterministic Streett or Rabin automata (as specified with %D), then the statistics output in CSV or JSON will have some extra columns to report the size of these input automata before ltlcross converts them into TGBA to perform its regular checks. - ltlfilt, ltl2tgba, ltl2tgta, and ltlcross can now read formulas from CSV files. Use option -F FILE/COL to read formulas from column COL of FILE. Use -F FILE/-COL if the first line of FILE be ignored. - when ltlfilt processes formulas from a CSV file, it will output each CSV line whose formula matches the given constraints, with the rewriten formula. The new escape sequence %< (text in columns before the formula) and %> (text after) can be used with the --format option to alter this output. - ltlfilt, genltl, randltl, and ltl2tgba have a --csv-escape option to help escape formulas in CSV files. - Please check http://spot.lrde.epita.fr/csv.html for some discussion and examples of the last few features. * Bug fixes: - ltlcross' CSV output has been changed to be more RFC 4180 compliant: it no longuer output useless cosmetic spaces, and use double-quotes with proper escaping for strings. The only RFC 4180 rule that it does not follow is that it will terminate lines with \n instead of \r\n because the latter cause issues with a couple of tools. - ltlcross failed to report missing input or output escape sequences on all but the first configured translator. New in spot 1.2 (2013-10-01) * Changes to command-line tools: - ltlcross has a new option --color to color its output. It is enabled by default when the output is a terminal. - ltlcross will give an example of infinite word accepted by the two automata when the product between a positive automaton and a negative automaton is non-empty. 
- ltlcross can now read the Rabin and Streett automata output by ltl2dstar. This type of output should be specified using '%D': ltlcross 'ltl2dstar --ltl2nba=spin:path/to/ltl2tgba@-s %L %D' However because Spot only supports Büchi acceptance, these Rabin and Streett automata are immediately converted to TGBAs before further processing by ltlcross. This is still interesting to search for bugs in translators to Rabin or Streett automata, but the statistics (of the resulting TGBAs) might not be very relevant. - When ltlcross obtains a deterministic automaton from a translator it will now complement this automaton to perform additional intersection checks. This is complementation is done only for deterministic automata (because that is cheap) and can be disabled with --no-complement. - To help with debugging problems detected by ltlcross, the environment variables SPOT_TMPDIR and SPOT_TMPKEEP control where temporary files are created and if they should be erased. Read the man page of ltlcross for details. - There is a new command, named dstar2tgba, that converts a deterministic Rabin or Streett automaton (expressed in the output format of ltl2dstar) into a TGBA, BA or Monitor. In the case of Rabin acceptance, the conversion will output a deterministic Büchi automaton if one such automaton exist. Even if no such automaton exists, the conversion will actually preserves the determinism of any SCC that can be kept deterministic. In the case of Streett acceptance, the conversion produces non-deterministic Büchi automata with Generalized acceptance. These are then degeneralized if requested. See http://spot.lrde.epita.fr/dstar2tgba.html for some examples, and the man page for more reference. - The %S escape sequence used by ltl2tgba --stats to display the number of SCCs in the output automaton has been renamed to %c. This makes it more homogeneous with the --stats option of the new dstar2tgba command. Additionally, the %p escape can now be used to show whether the output automaton is complete, and the %r escape will give the number of seconds spent building the output automaton (excluding the time spent parsing the input). - ltl2tgba, ltl2tgta, and dstar2tgba have a --complete option to output complete automata. - ltl2tgba, ltl2tgta, and dstar2tgba can use a SAT-solver to minimize deterministic automata. Doing so is only needed on properties that are stronger than obligations (for obligations our WDBA-minimization procedure will return a minimimal deterministic automaton more efficiently) and is disabled by default. See the spot-x(7) man page for documentation about the related options: sat-minimize, sat-states, sat-acc, state-based. See also http://spot.lrde.epita.fr/satmin.html for some examples. - ltlfilt, genltl, and randltl now have a --latex option to output formulas in a way that its easier to embed in a LaTeX document. Each operator is output as a command such as \U, \F, etc. doc/tl/spotltl.sty gives one possible definition for each macro. - ltlfilt, genltl, and randltl have a new --format option to indicate how to present the output formula, possibly with information about the input. - ltlfilt as a new option, --relabel-bool, to abstract independent Boolean subformulae as if they were atomic propositions. For instance "a & GF(c | d) & b & X(c | d)" would be rewritten as "p0 & GF(p1) & Xp1". * New functions and classes in the library: - dtba_sat_synthetize(): Use a SAT-solver to build an equivalent deterministic TBA with a fixed number of states. 
- dtba_sat_minimize(), dtba_sat_minimize_dichotomy(): Iterate dtba_sat_synthetize() to reduce the number of states of a TBA. - dtgba_sat_synthetize(), dtgba_sat_minimize(), dtgba_sat_minimize_dichotomy(): Likewise, for deterministic TGBA. - is_complete(): Check whether a TGBA is complete. - tgba_complete(): Complete an automaton by adding a sink state if needed. - dtgba_complement(): Complement a deterministic TGBA. - satsolver(): Run an (external) SAT solver, honoring the SPOT_SATSOLVER environment variable if set. - tba_determinize(): Run a power-set construction, and attempt to fix the acceptance simulation to build a deterministic TBA. - dstar_parse(): Read a Streett or Rabin automaton in ltl2dstar's format. Note that this format allows only deterministic automata. - nra_to_nba(): Convert a (possibly non-deterministic) Rabin automaton to a non-deterministic Büchi automaton. - dra_to_ba(): Convert a deterministic Rabin automaton to a Büchi automaton, preserving acceptance in all SCCs where this is possible. - nsa_to_tgba(): Convert a (possibly non-deterministic) Streett automaton to a non-deterministic TGBA. - dstar_to_tgba(): Convert any automaton returned by dstar_parse() into a TGBA. - build_tgba_mask_keep(): Build a masked TGBA that shows only a subset of states of another TGBA. - build_tgba_mask_ignore(): Build a masked TGBA that ignore a subset of states of another TGBA. - class tgba_proxy: Helps writing on-the-fly algorithms that delegate most of their methods to the original automaton. - class bitvect: A dynamic bit vector implementation. - class word: An infinite word, stored as prefix + cycle, with a simplify() methods to simplify cycle and prefix in obvious ways. - class temporary_file: A temporary file. Can be instanciated with create_tmp_file() or create_open_tmpfile(). - count_state(): Return the number of states of a TGBA. Implement a couple of specializations for classes where is can be know without exploration. - to_latex_string(): Output a formula using LaTeX syntax. - relabel_bse(): Relabeling of Boolean Sub-Expressions. Implements ltlfilt's --relabel-bool option describe above. * Noteworthy internal changes: - When minimize_obligation() is not given the formula associated to the input automaton, but that automaton is deterministic, it can still attempt to call minimize_wdba() and check the correcteness using dtgba_complement(). This allows dstar2tgba to apply WDBA-minimization on deterministic Rabin automata. - tgba_reachable_iterator_depth_first has been redesigned to effectively perform a DFS. As a consequence, it does not inherit from tgba_reachable_iterator anymore. - postproc::set_pref() was used to accept an argument among Any, Small or Deterministic. These can now be combined with Complete as Any|Complete, Small|Complete, or Deterministic|Complete. - operands of n-ary operators (like & and |) are now ordered so that Boolean terms come first. This speeds up syntactic implication checks slightly. Also, literals are now sorted using strverscmp(), so that p5 comes before p12. - Syntactic implication checks have been generalized slightly (for instance 'a & b & F(a & b)' is now reduced to 'a & b' while it was not changed in previous versions). - All the parsers implemented in Spot now use the same type to store locations. - Cleanup of exported symbols All symbols in the library now have hidden visibility on ELF systems. Public classes and functions have been marked explicitely for export with the SPOT_API macro. 
During this massive update, some of functions that should not have been made public in the first place have been moved away so that they can only be used from the library. Some old of unused functions have been removed. removed: - class loopless_modular_mixed_radix_gray_code hidden: - class acc_compl - class acceptance_convertor - class bdd_allocator - class free_list * Bug fixes: - Degeneralization was not indempotant on automata with an accepting initial state that was on a cycle, but without self-loop. - Configuring with --enable-optimization would reset the value of CXXFLAGS. New in spot 1.1.4 (2013-07-29) * Bug fixes: - The parser for neverclaim, updated in 1.1.3, would fail to parse guards of the form (a) || (b) output by ltl2ba or ltl3ba, and would only understand ((a) || (b)). - When used from ltlcross, the same parser would fail to parse further neverclaims after the first failure. - Add a missing newline in some error message of ltlcross. - Expressions like {SERE} were wrongly translated and simplified for SEREs that accept the empty word: they were wrongly reduced to true. Simplification and translation rules have been fixed, and the doc/tl/tl.pdf specifications have been updated to better explain that {SERE} has the semantics of a closure operator that is not exactly what one could expect after reading the PSL standard. - Various typos. New in spot 1.1.3 (2013-07-09) * New feature: - The neverclaim parser now understands the new style of output used by Spin 6.24 and later. * Bug fixes: - The scc_filter() function could abort with a BDD error. If all the acceptance sets of an SCC but the first one were useless. - The script in bench/spin13/ would not work on MacOS X because of some non-portable command. - A memory corruption in ltlcross. New in spot 1.1.2 (2013-06-09) * Bug fixes: - Uninitialized variables in ltlcross (affect the count of terminal weak, and strong SCCs). - Workaround an old GCC bug to allow compilation with g++ <= 4.5 - Fix several Doxygen comments so that they display correctly. New in spot 1.1.1 (2013-05-13): * New features: - lbtt_reachable(), the function that outputs a TGBA in LBTT's format, has a new option to indicate that the TGBA being printed is in fact a Büchi automaton. In this case it outputs an LBTT automaton with state-based acceptance. The output of the guards has also been changed in two ways: 1. atomic propositions that do not match p[0-9]+ are always double-quoted. This avoids issues where t or f were used as atomic propositions in the formula, output as-is in the automaton, and read back as true or false. Other names that correspond to LBT operators would cause problem as well. 2. formulas that label transitions are now output as irredundant-sums-of-products. - 'ltl2tgba --ba --lbtt' will now output automata with state-based acceptance. You can use 'ltl2tgba --ba --lbtt=t' to force the output of transition-based acceptance like in the previous versions. Some illustrations of this point and the previous one can be found in the man page for ltl2tgba(1). - There is a new function scc_filter_states() that removes all useless states from a TGBA. It is actually an abbridged version of scc_filter() that does not alter the acceptance conditions of the automaton. scc_filter_state() should be used when post-processing TGBAs that actually represent BAs. - simulation_sba(), cosimulation_sba(), and iterated_simulations_sba() are new functions that apply to TGBAs that actually represent BAs. 
They preserve the imporant property that if a state of the BA is is accepting, the outgoing transitions of that state are all accepting in the TGBA that represent the BA. This is something that was not preserved by functions cosimultion() and iterated_simulations() as mentionned in the bug fixes below. - ltlcross has a new option --seed, that makes it possible to change the seed used by the random graph generator. - ltlcross has a new option --products=N to check the result of each translation against N different state spaces, and everage the statistics of these N products. N default to 1; larger values increase the chances to detect inconsistencies in the translations, and also make the average size of the product built against the translated automata a more pertinent statistic. - bdd_dict::unregister_all_typed_variables() is a new function, making it easy to unregister all BDD variables of a given type owned by some object. * Bug fixes: - genltl --gh-r generated the wrong formulas due to a typo. - ltlfilt --eventual and --universal were not handled properly. - ltlfilt --stutter-invariant would trigger an assert on PSL formulas. - ltl2tgba, ltl2tgta, ltlcross, and ltlfilt, would all choke on empty lines in a file of formulas. They now ignore empty lines. - The iterated simulation applied on degeneralized TGBA was bogus for two reasons: one was that cosimulation was applied using the generic cosimulation for TGBA, and the second is that SCC-filtering, performed between iterations, was also a TGBA-based algorithm. Both of these algorithms could lose the property that if a TGBA represents a BA, all the outgoing transitions of a state should be accepting. As a consequence, some formulas where translated to incorrect Büchi automata. New in spot 1.1 (2013-04-28): Several of the new features described below are discribed in Tomáš Babiak, Thomas Badie, Alexandre Duret-Lutz, Mojmír Křetínský, Jan Strejček: Compositional Approach to Suspension and Other Improvements to LTL Translation. To appear in the proceedings of SPIN'13. * New features in the library: - The postprocessor class now takes an optional option_map argument that can be used to specify fine-tuning options, making it easier to benchmark different scenarios while developing new postprocessings. - A new translator class implements a complete translation chain, from LTL/PSL to TGBA/BA/Monitor. It performs pre- and post-processings in addition to the core translation, and offers an interface similar to that used in the postprocessor class, to specify the intent of the translation. - The degeneralization algorithm has learned three new tricks: level reset, level caching, and SCC-based ordering. The former two are enabled by default. Benchmarking has shown that the latter one does not always have a positive effect, so it is disabled by default. (See SPIN'13 paper.) - The scc_filter() function, which removes dead SCCs and also simplify acceptance conditions, has learnt how to simplify acceptance conditions in a few tricky situations that were not simplified previously. (See SPIN'13 paper.) - A new translation, called compsusp(), for "Compositional Suspension" is implemented on top of ltl_to_tgba_fm(). (See SPIN'13 paper.) - Some experimental LTL rewriting rules that trie to gather suspendable formulas are implemented and can be activated with the favor_event_univ option of ltl_simplifier. As always please check doc/tl/tl.tex for the list of rules. - An experimental "don't care" (direct) simulation has been implemented. 
This simulations consider the acceptance of out-of-SCC transitions as "don't care". It is not enabled by default because it currently is very slow. - remove_x() is a function that take a formula, and rewrite it without the X operator. The rewriting is only correct for stutter-insensitive LTL formulas (See K. Etessami's paper in IFP vol. 75(6). 2000) This algorithm is accessible from the command-line using ltlfilt's --remove-x option. - is_stutter_insensitive() takes any LTL formula, and check whether it is stutter-insensitive. This algorithm is accessible from the command-line using ltlfilt's --stutter-insensitive option. - Several functions have been introduced to check the strength of an SCC. is_inherently_weak_scc() is_weak_scc() is_syntactic_weak_scc() is_complete_scc() is_terminal_scc() is_syntactic_terminal_scc() Beware that the costly is_weak_scc() function introduced in Spot 1.0, which is based on a cycle enumeration, has been renammed to is_inherently_weak_scc() to match established vocabulary. * Command-line tools: - ltl2tgba and ltl2tgta now honor a new --extra-options (or -x) flag to fine-tune the algorithms used. The available options are documented in the spot-x (7) manpage. For instance use '-x comp-susp' to use the afore-mentioned compositional suspension. - The output format of 'ltlcross --json' has been changed slightly. In a future version we will offer some reporting script that turn such JSON output into various tables and graphs, and these change are required to make the format usable for other benchmarks (not just ltlcross). - ltlcross will now count the number of non-accepting, terminal, weak, and strong SCCs, as well as the number of terminal, weak, and strong automata produced by each tool. * Documentation: - org-mode files used to generate the documentation about command-line tools (shown at http://spot.lrde.epita.fr/tools.html) is distributed in doc/org/. The resulting html files are also in doc/userdoc/. * Web interface: - A new "Compositional Suspension" tab has been added to experiment with compositional suspension. * Benchmarks: - See bench/spin13/README for instructions to reproduce our Spin'13 benchmark for the compositional suspension. * Bug fixes: - There was a memory leak in the LTL simplification code, that could only be triggered when disabling advanced simplifications. - The translation of the PSL formula !{xxx} was incorrect when xxx simplified to false. - Various warnings triggered by new compilers. New in spot 1.0.2 (2013-03-06): * New features: - the on-line ltl2tgba.html interface can output deterministic or non-deterministic monitors. However, and unlike the ltl2tgba command-line tool, it doesn't different output formats. - the class ltl::ltl_simplifier now has an option to rewrite Boolean subformulaes as irredundante-sum-of-product during the simplification of any LTL/PSL formula. The service is also available as a method ltl_simplifier::boolean_to_isop() that applies this rewriting to a Boolean formula and implements a cache. ltlfilt as a new option --boolean-to-isop to try to apply the above rewriting from the command-line: % ltlfilt --boolean-to-isop -f 'GF((a->b)&(b->c))' GF((!a & !b) | (b & c)) This is currently not used anywhere else in the library. * Bug fixes: - 'ltl2tgba --high' is documented to be the same as 'ltl2tgba', but by default ltl2tgba forgot to enable LTL simplifications based on language containment, which --high do enable. There are now enabled by default. 
  - The on-line ltl2tgba.html interface failed to output monitors, testing automata, and generalized testing automata due to two issues with the Python bindings. It also used to display Testing Automaton Options when the desired output was set to Monitor.
  - bench/ltl2tgba would not work in a VPATH build.
  - A typo caused some .dir-locals.el configuration parameters to be silently ignored by Emacs.
  - Improved Doxygen comments for formula_to_bdd, bdd_to_formula, and bdd_dict.
  - src/tgbatest/ltl2tgba (not to be confused with src/bin/ltl2tgba) would have a memory leak when passed the conflicting options -M and -O. It probably has many other problems. Do not use src/tgbatest/ltl2tgba if you are not writing a test case for Spot. Use src/bin/ltl2tgba instead.

New in spot 1.0.1 (2013-01-23):

* Bug fixes:

  - Two executions of the simulation reductions could produce two isomorphic automata, but with transitions in a different order.
  - ltlcross did not diagnose write errors to temporary files, and certain versions of g++ would warn about it.
  - "P0.init" is parsed as an atomic proposition even without the double quotes, but it was always output with double quotes. This version will not quote this atomic proposition anymore.
  - "U", "W", "M", "R" were correctly parsed as atomic propositions (instead of binary operators) when placed in double quotes, but they were output without quotes, making the result unparsable.
  - The to_lbt_string() function would always output a trailing space. This is not the case anymore.
  - tgba_product::transition_annotation() would segfault when called in a product against a Kripke structure.

* Minor improvements:

  - Four new LTL simplification rules:
      GF(a|Xb) = GF(a|b)
      GF(a|Fb) = GF(a|b)
      FG(a&Xb) = FG(a&b)
      FG(a&Gb) = FG(a&b)
  - The on-line version of ltl2tgba now displays edge and transition counts, just as the ltlcross tool does.
  - ltlcross will display the number of timeouts at the end of its execution.
  - ltlcross will diagnose tools with missing input or output %-sequences before attempting to run any of them.
  - The parser for LBT's prefix-style LTL formulas will now read atomic propositions that are not of the form p1, p2... This makes it possible to process formulas written in ltl2dstar's syntax.

* Pruning:

  - lbtt has been removed from the distribution. A copy of the last version we distributed is still available at http://spot.lip6.fr/dl/lbtt-1.2.1a.tar.gz and our test suite will use it if it is installed, but the same tests are already performed by ltlcross.
  - The bench/ltl2tgba/ benchmark, which used lbtt to compare various LTL-to-Büchi translators, has been updated to use ltlcross. It now outputs summary tables in LaTeX. Support for Modella (no longer available online) and Wring (which requires an outdated Perl version) has been dropped.
  - The half-baked and underdocumented "Event TGBA" support in src/evtgba*/ has been removed, as it was last worked on in 2004.

New in spot 1.0 (2012-10-27):

* License change: Spot is now distributed using GPL v3+ instead of GPL v2+. This is because we started using some third-party files distributed under GPL v3+.

* Command-line tools:

  Useful command-line tools are now installed in addition to the library. Some of these tools were originally written for our test suite and had evolved organically into useful programs with crappy interfaces: they have now been rewritten with better argument parsing, saner defaults, and they come with man pages.

  - genltl: Generate LTL formulas from scalable patterns. This offers 20 patterns so far.
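    For instance, a run such as

      % genltl --gh-r=1..3

    should print the first three formulas of the gh-r family, one per line. (Invocation shown for illustration only; run genltl --help for the exact list of patterns and their range syntax.)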
  - randltl: Generate random LTL/PSL formulas.
  - ltlfilt: Filter lists of formulas according to several criteria (e.g., match only safety formulas that are larger than some given size). Besides being used as a "grep" tool for formulas, this can also be used to convert files of formulas between different syntaxes, apply some simplifications, check whether two formulas are equivalent, etc.
  - ltl2tgba: Translate LTL/PSL formulas into Büchi automata (TGBA, BA, or Monitor). A fundamental change to the interface is that you may now specify the goal of the translation: do you favor deterministic or smaller automata?
  - ltl2tgta: Translate LTL/PSL formulas into Testing Automata.
  - ltlcross: Compare the output of translators from LTL/PSL to Büchi automata, to find bugs or for benchmarking. This is essentially a Spot-based reimplementation of LBTT that supports PSL in addition to LTL, and that can output more statistics.

  An introduction to these tools can be found on-line at http://spot.lrde.lip6.fr/tools.html

  The former test versions of genltl and randltl have been removed from the source tree. The old version of ltl2tgba with its gazillion options is still in src/tgbatest/ and is meant to be used for testing only.

  Although ltlcross is meant to replace LBTT, we are still using both tools in this release; however, this is likely to be the last release of Spot that redistributes LBTT.

* New features in the Spot library:

  - Support for various flavors of Testing Automata. The flavors are:
    + "classical" Testing Automata, as used for instance by Geldenhuys and Hansen (Spin'06), using Büchi and livelock acceptance conditions.
    + Generalized Testing Automata, extending the previous with multiple Büchi acceptance sets.
    + Transition-based Generalized Testing Automata, moving Büchi acceptance to transitions, and getting rid of livelock acceptance conditions by making stuttering self-loops explicit.
    Supporting algorithms include anything required to run the automata-theoretic approach using testing automata:
    + dedicated synchronized product
    + dedicated emptiness check for TA and GTA, as these may require two passes because of the two kinds of acceptance, while a TGTA can be checked for emptiness with the same one-pass algorithm as a TGBA
    + conversion from a TGBA to any of the above kinds, with options to reduce these automata with bisimulation, and to produce a BA/GBA that requires a single pass (at the expense of determinism)
    + output in dot format for display
    A discussion of these automata, part of Ala Eddine BEN SALEM's PhD work, should appear in ToPNoC VI (LNCS 7400). The web-based interface and the aforementioned ltl2tgta tool can be used to build testing automata.
  - TGBAs can now be reduced by Reverse Simulation (in addition to the Direct Simulation introduced in 0.9). A function called iterated_simulations() will alternate direct and reverse simulations in a loop as long as it diminishes the size of the automaton.
  - The enumerate_cycles class implements the Loizou-Thanisch algorithm to enumerate elementary cycles in an SCC. As an example of use, is_weak_scc() will tell whether an SCC is inherently weak (all its cycles are accepting, or none of them are).
  - parse_lbt() will parse an LTL formula expressed in the prefix syntax used (at least) by LBT, LBTT, and Scheck. to_lbt_string() can be used to print an LTL formula using this syntax.
  - to_wring_string() can be used to print an LTL formula in Wring's syntax.
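    As a hand-written illustration of the LBT prefix syntax mentioned above (not actual tool output): the formula G(p0 -> F(p1)) is written

      G i p0 F p1

    where 'i' denotes implication and each operator precedes its arguments.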
  - The LTL/PSL parser now has a lenient mode that can be useful to interpret atomic propositions containing language-specific constructs. In lenient mode, any (...) or {...} block that cannot be parsed as a formula will be assumed to be an atomic proposition. For instance the input (a < b) U (process[2]@ok), normally flagged as a syntax error, is read as "a < b" U "process[2]@ok" in lenient mode.
  - minimize_obligation() has a new option to disable WDBA minimization in cases where it would produce a deterministic automaton that is bigger than the original TGBA. This can help choose between fewer states and more determinism.
  - New functions is_deterministic() and count_nondet_states(). (The count of nondeterministic states is now displayed on automata generated with the web interface.)
  - A new class, "postprocessor", makes it easier to apply all available simplification algorithms on a TGBA/BA/Monitor.

* Minor changes:

  - The '*' operator can (again) be used as an AND in LTL formulas. This is for compatibility with formulas written in Wring's syntax. However, inside a SERE it is interpreted as the Kleene star.
  - When printing a formula using Spin's LTL syntax, we don't double-quote complex atomic propositions (that was not valid Spin input anyway). For instance F"foo == 2" used to be output as <>"foo == 2". We now output <>(foo == 2) instead. The latter syntax is understood by Spin 6. It can be read back by Spot in lenient mode (see above).
  - The gspn-ssp benchmark has been removed.

New in spot 0.9.2 (2012-07-02):

* New features in the web interface:

  - It can run ltl3ba (Babiak et al., TACAS'12) where available.
  - "A loading logo" is displayed when the result is not instantaneous.

* Speed improvements:

  - The unicity hash table of BuDDy has been separated from the node table for better cache-friendliness. The resulting speedup is around 5% on BDD-intensive algorithms.
  - A new BDD operation, called bdd_implies(), has been added to BuDDy to check whether one BDD implies another. This mostly benefits the simulation and degeneralization algorithms of Spot.
  - A new offline implementation of the degeneralization (which had always been performed on-the-fly so far) is available. This especially helps the Safra complementation.

* Bug fixes:

  - The CGI script behind ltl2tgba.html will correctly time out after 30s when Spot's translation takes more time.
  - Applying WDBA-minimization on an automaton generated by the Couvreur/LaCIM translator could lead to an incorrect automaton due to a bug in the definition of product with symbolic automata.
  - The Makefile.am of BuDDy, LBTT, and Spot have been adjusted to accommodate Automake 1.12 (while still working with 1.11).
  - Better error recovery when parsing broken LTL formulae.
  - Fix errors and warnings reported by clang 3.1 and the upcoming g++ 4.8.

New in spot 0.9.1 (2012-05-23):

* The version of LBTT we distribute includes a patch from Tomáš Babiak to count the number of non-deterministic states, and the number of deterministic automata produced. See lbtt/NEWS for the list of other differences with the original version of LBTT 1.2.1.

* The Couvreur/FM translator has learned two new tricks. These only help to speed up the translation by not issuing states or acceptance conditions that would later be suppressed by other optimizations.

  - The translation rules used to translate subformulae of the G operator have been adjusted not to produce useless loops already implied by G. This generalizes the "GF" trick presented in Couvreur's original FM'99 paper.
  - Promises generated for formulas of the form P(a U (b U c)) are reduced into P(c), avoiding the introduction of many promises that imply each other.

* The tgba_parse() function is now available via the Python bindings.

* Bug fixes:

  - The random SERE generator was using the wrong operators for "and" and "or", mistaking And/Or for AndRat/OrRat.
  - The translation of !{r} was incorrect when this subformula was recurring (e.g. in G!{r}) and r had loops.
  - Correctly recognize ltl2tgba's option -rL.
  - Using LTL simplification rules based on syntactic implication, or based on language containment checks, caused BDD variables to be allocated in an "unnatural" order, resulting in a slower translation and a less optimal degeneralization.
  - When ltl2tgba reads a neverclaim, it now considers the resulting TGBA as a Büchi automaton, and will display double circles in the dotty output.

New in spot 0.9 (2012-05-09):

* New features:

  - Operators from the linear fragment of PSL are supported. This basically extends LTL with Sequential Extended Regular Expressions (SERE), and a couple of operators to bridge SERE and LTL. See doc/tl/tl.pdf for the list of operators and their semantics.
  - Formula rewritings have been completely revamped, and augmented with rules for PSL operators (and some new LTL rules as well). See doc/tl/tl.pdf for the list of the rewritings implemented.
  - Some of these rewritings that may produce larger formulas (for instance rewriting "{a;b;c}" into "a & X(b & Xc)") may be explicitly disabled with a new option.
  - The src/ltltest/randltl tool can now generate random SEREs and random PSL formulae.
  - Only one translator (ltl2tgba_fm) has been augmented to translate the new SERE and PSL operators. The internal translation from SERE to DFA is likely to be rewritten in a future version.
  - A new function, length_boolone(), computes the size of an LTL/PSL formula while considering that any Boolean term has length 1.
  - The LTL/PSL parser recognizes some UTF-8 characters (like ◇ or ∧) as operators, and some output routines now have a UTF-8 output mode. Tools like randltl and ltl2tgba have gained an -8 option to enable such output. See doc/tl/tl.pdf for the list of recognized codepoints.
  - A new direct simulation reduction has been implemented. It works directly on TGBAs. It is in src/tgbaalgos/simulation.hh, and it can be tested via ltl2tgba's -RDS option.
  - unabbreviate_wm() is a function that rewrites the W and M operators of LTL formulae using R and U. This is called whenever we output a formula in Spin syntax. By combining this with the aforementioned PSL rewriting rules, many PSL formulae that use simple SEREs can be converted into LTL formulae that can be fed to tools that only understand U and R. The web interface will let you do this.
  - Changes to the on-line translator:
    + SVG output is available
    + can display some properties of a formula
    + new options for direct simulation, larger rewritings, and UTF-8 output
  - configure --without-included-lbtt will prevent LBTT from being configured and built. This helps on systems (such as MinGW) where LBTT cannot be built. The test suite will skip any LBTT-based test if LBTT is missing.

* Interface changes:

  - Operators ->, <->, U, W, R, and M are now parsed as right-associative to better match the PSL standard.
  - The constructors for temporal formulae will perform some trivial simplifications based on associativity, commutativity, idempotence, and neutral elements. See doc/tl/tl.pdf for the list of such simplifications.
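    For instance, with the new right-associativity "a U b U c" is now read as "a U (b U c)", and the constructor-level rules mean that building "a & a & 1" directly yields "a" (idempotence and neutral element). These examples are only illustrative; doc/tl/tl.pdf has the authoritative list.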
  - Formula instances now have many methods to inspect their properties (membership in syntactic classes, absence of the X operator, etc.) in constant time.
  - LTL/PSL formulae are now handled everywhere as 'const formula*' and not just 'formula*'. This reflects the true nature of these (immutable) formula objects, and cleans up a lot of code. Unfortunately, it is a backward-incompatible change: you may have to add 'const' to a couple of lines in your code, and change 'ltl::visitor' into 'ltl::const_visitor' if you have written a custom visitor.
  - The new entry point for LTL/PSL simplifications is the function ltl_simplifier::simplify() declared in src/ltlvisit/simplify.hh. The ltl_simplifier class implements a cache. Functions such as reduce() or reduce_tau03() are deprecated.
  - The old game-theory-based implementations for direct and delayed simulation reductions have been removed. The old direct simulation would only work on degeneralized automata, and yet produced results inferior to the new direct simulation introduced in this release. The implementation of delayed simulation was unreliable. The function reduc_tgba_sim() has been kept for compatibility (it calls the new direct simulation whatever the type of simulation requested) and marked as deprecated. ltl2tgba's options -Rd and -RD are gone. Options -R1t, -R1s, -R2s, and -R2t are deprecated and all made equivalent to -RDS.
  - The tgba_explicit hierarchy has been reorganized in order to make room for sba_explicit classes that share most of the code. The main consequence is that the tgba_explicit type no longer exists. However, the tgba_explicit_number, tgba_explicit_formula, and tgba_explicit_string types still do.

New in spot 0.8.3 (2012-03-09):

* Support for both Python 2.x and Python 3.x. (Previous versions would only work with Python 2.x.)

* The online ltl2tgba.html now stores its state in the URL so that history is preserved, and links to particular setups can be sent.

* Bug fixes:

  - Fix a segfault in the compression code used by the -Z option of dve2check.
  - Fix a race condition in the CGI script.
  - Fix a segfault in the CGI script when computing a Büchi run.

New in spot 0.8.2 (2012-01-19):

* configure now has a --disable-python option to disable the compilation of Python bindings.

* Minor speedups in the Safra complementation.

* Better memory management for the on-the-fly degeneralization algorithm. This mostly benefits the Safra complementation.

* Bug fixes:

  - spot::ltl::length() forgot to count the '&' and '|' operators in an LTL formula.
  - minimize_wdba() could fail to mark some transient SCCs as accepting, producing an automaton that was not fully minimized.
  - minimize_dfa() could produce incorrect automata, but it is not clear whether this could have had an impact on WDBA minimization (the worst case is that some TGBAs would not have been minimized when they could have been).
  - Fix a Python syntax error in the CGI script.
  - Fix compilation with g++ 4.0.
  - Fix a make check failure when valgrind is missing.

New in spot 0.8.1 (2011-12-18):

* Only bug fixes:

  - When ltl2tgba is set to perform both WDBA minimization and degeneralization, do the latter only if the former failed. In previous versions, automata were (uselessly) degeneralized before WDBA minimization, causing important slowdowns.
  - Fix compilation with Clang 3.0.
  - Fix a Makefile setup causing a "make check" failure on MacOS X.
  - Fix an mkdir error in the CGI script.

New in spot 0.8 (2011-11-28):

* Major new features:

  - Spot can read DiVinE models. See iface/dve2/README for details.
  - The genltl tool can now output 20 different LTL formula families. It also replaces the LTLcounter Perl scripts.
  - There is a printer and parser for Kripke structures in text format.

* Major interface changes:

  - The destructor of all states is now private. Any code that looks like "delete some_state;" will cause a compile error and should be updated to "some_state->destroy();". This new syntax has been supported since version 0.7.
  - The experimental Nips interface has been removed.

* Minor changes:

  - The dotty_reachable() function has a new option "assume_sba" that can be used for rendering automata with state-based acceptance. In that case, accepting states are displayed with a double circle. ltl2tgba (both command-line and on-line) uses it to display degeneralized automata.
  - The dotty_reachable() function will also display transition annotations (as returned by tgba::transition_annotation()). This can be useful when displaying (small) state spaces.
  - Identifiers used to name atomic propositions can contain dots. E.g., X.Y is now an atomic proposition, while it was understood as X&Y in previous versions.
  - The Doxygen documentation is no longer built as a PDF file.

* Internal improvements:

  - The on-line ltl2tgba CGI script uses a cache to produce faster answers.
  - Better memory management for the states of explicit automata. Thanks to the aforementioned ->destroy() change, we can avoid cloning explicit states.
  - tgba_product has learned how to be faster when one of the operands is a Kripke structure (15% speedup).
  - The reduction rule for "a M b" has been improved: it can be reduced to "a & b" if "a" is a pure eventuality.
  - More useless acceptance conditions are removed by SCC simplifications.

* Bug fixes:

  - Safra complementation has been fixed in cases where more than one acceptance condition was needed to convert the deterministic Streett automaton into a TGBA.
  - The degeneralization is now idempotent. Previously, degeneralizing an already degeneralized automaton could add some states.
  - The degeneralization now has a deterministic behavior. Previously it was possible to obtain different outputs depending on the memory layout.
  - Spot now outputs neverclaims with fully parenthesized guards. I.e., instead of
        (!x && y) -> goto S1
    it now outputs
        ((!(x)) && (y)) -> goto S1
    This prevents problems when the model defines `x' as
        #define x flag==0
    because !x then evaluated to (!flag)==0 instead of !(flag==0).

New in spot 0.7.1 (2011-02-07):

* The LTL parser will accept operator ~ (for not) as well as --> and <--> (for implication and equivalence), allowing formulae from the Büchi Store to be read directly.

* The neverclaim parser will accept guards of the form
      :: !(...) -> goto ...
  instead of the more commonly used
      :: (!(...)) -> goto ...
  This makes it possible to read neverclaims provided by the Büchi Store.

* A new ltl2tgba option, -kt, will count the number of "sub-transitions". I.e., a transition labelled by "true" counts for 4 "sub-transitions" if the automaton uses 2 atomic propositions.

* Bugs fixed:

  - Fix segfault during WDBA minimization on automata with useless states.
  - Use the included BuDDy library if the one already installed is older than the one distributed with Spot 0.7.
  - Fix two typos in the code of the CGI scripts.

New in spot 0.7 (2011-02-01):

* Spot is now able to read an automaton expressed as a Spin neverclaim.

* The "experimental" Kripke structure introduced in Spot 0.5 has been rewritten, and is no longer experimental.
  We have a development version of checkpn using it, and it should be released shortly after Spot 0.7.

* The function to_spin_string(), which outputs an LTL formula using Spin's syntax, now takes an optional argument to request parentheses at all levels.

* src/ltltest/genltl is a new tool that generates some interesting families of LTL formulae, for testing purposes.

* bench/ltlclasses/ uses the above tool to conduct the same benchmark as in the DepCoS'09 paper by Cichoń et al. The resulting benchmark completes in 12min, while it took days (or exhausted the memory) when the paper was written (they used Spot 0.4).

* Degeneralization has again been improved in two ways:
  - It will merge degeneralized transitions that can be merged.
  - It uses a cache to speed up the improvement introduced in 0.6.

* An implementation of Dax et al.'s paper for minimizing obligation formulae has been integrated. Use ltl2tgba -Rm to enable this optimization from the command-line; it will have no effect if the property is not an obligation.

* bench/wdba/ conducts a benchmark similar to the one on Dax's webpage, comparing the size of the automata expressing obligation formulas before and after minimization. See bench/wdba/README for results.

* Using similar code, Spot can now construct deterministic monitors.

* New ltl2tgba options:
  -XN: read an input automaton as a neverclaim.
  -C, -CR: Compute (and display) a counterexample after running the emptiness check. With -CR, the counterexample will be replayed on the automaton to ensure it is correct (previous versions would always compute and replay a counterexample when the emptiness check was enabled).
  -ks: traverse the automaton to compute its number of states and transitions (this is faster than -k, which will also count SCCs and paths).
  -M: Build a deterministic monitor.
  -O: Tell whether a formula represents a safety, guarantee, or obligation property.
  -Rm: Minimize automata representing obligation properties.

* The on-line tool to translate LTL formulae into automata has been rewritten and is now at http://spot.lip6.fr/ltl2tgba.html
  It requires a JavaScript-enabled browser.

* Bug fixes:

  - Locations of the error messages in the TGBA parser were inaccurate.
  - Various warning fixes for different versions of GCC and Clang.
  - The neverclaim output with ltl2tgba -N or -NN used to ignore any automaton simplification performed after degeneralization.
  - The formula simplification based on universality and eventuality had a quadratic run-time.

New in spot 0.6 (2010-04-16):

* Several optimizations to improve some auxiliary steps of the LTL translation (not the core of the translation):

  - Better degeneralization.
  - SCC simplifications have been tuned for degeneralization (ltl2tgba now has two options -R3 and -R3f: the latter will remove every acceptance condition that used to be removed in Spot 0.5, while the former will leave useless acceptance conditions going to accepting SCCs. Experience shows that -R3 is more favorable to degeneralization).
  - ltl2tgba will perform SCC optimizations before degeneralization and not the converse.
  - We added a syntactic simplification rule to rewrite F(a)|F(b) as F(a|b). We only had a rule for the more specific FG(a)|FG(b) = F(Ga|Gb).
  - The syntactic simplification rule for F(a&GF(b)) = F(a)&GF(b) has been disabled because the latter formula is in fact harder to translate efficiently.

* New LTL operators: W (weak until) and its dual M (strong release).

  - Weak until allows many LTL specifications to be written more compactly.
  - All LTL translation algorithms have been updated to support these operators.
  - Although they do not add any expressive power, translating "a W b" is more efficient (read: smaller output automaton) than translating the equivalent form using the U operator.
  - Basic syntactic rewriting rules will automatically rewrite "a U (b | G(a))" and "(a U b)|G(a)" as "a W b", so you will benefit from the new operators even if you do not use them. Similar rewriting rules exist for R and M, although they are less used.

* New options have been added to the CGI script for
  - SVG output
  - SCC simplifications

* Bug fixes:

  - The precedence of the "->" and "<->" Boolean operators has been adjusted to better match other tools. Spot <= 0.5 used to parse "a & b -> c & d" as "a & (b -> c) & d"; Spot >= 0.6 will parse it as "(a & b) -> (c & d)".
  - The random graph generator was fixed (again!) not to produce dead states as documented.
  - Locations in the error messages of the LTL parser were off by one.

New in spot 0.5 (2010-02-01):

* We have set up two mailing lists:
  - The first is read-only and will be used to announce new releases. You may subscribe at https://www.lrde.epita.fr/mailman/listinfo/spot-announce
  - The second can be used to discuss anything related to Spot. You may subscribe at https://www.lrde.epita.fr/mailman/listinfo/spot-announce

* Two new LTL translations have been implemented:
  - eltl_to_tgba_lacim() is a symbolic translation for ELTL based on Couvreur's LaCIM'00 paper. For this translation (available with ltl2tgba's option -le), all operators are described as finite automata. A default set of operators is provided for LTL (option -lo) and users may add more automaton operators.
  - ltl_to_taa() is a translation based on Tauriainen's PhD thesis. LTL is translated to "self-loop" alternating automata and then to Transition-based Generalized Automata (ltl2tgba's option -taa).
  The "Couvreur/FM" translation remains the best LTL translation available in Spot.

* The data structures used to represent LTL formulae have been overhauled, and this resulted in a big performance improvement (in time and memory consumption) for the LTL translation.

* Two complementation algorithms for state-based Büchi automata have been implemented:
  - tgba_kv_complement is an on-the-fly implementation of the Kupferman-Vardi construction (TCS'05) for generalized acceptance conditions.
  - tgba_safra_complement is an implementation of Safra's complementation. This algorithm takes a degeneralized Büchi automaton as input, but our implementation of the Streett->Büchi step will produce a generalized automaton in the end.

* ltl2tgba has gained several options and the help text has been reorganized. Please run src/tgbatest/ltl2tgba without arguments for details. Couvreur/FM is now the default translation.

* The ltl2tgba.py CGI script can now run standalone. It also offers the Tauriainen/TAA translation, and some options for SCC-based reductions.

* Automata using a BDD-encoded transition relation can now be pruned of useless states symbolically using the delete_unaccepting_scc() function. This is ltl2tgba's -R3b option.

* The SCC-based simplification (ltl2tgba's -R3 option) has been rewritten and improved.

* The "*" symbol, previously parsed as a synonym for "&", is no longer recognized. This makes room for upcoming support of rational operators.
* More benchmarks in the bench/ directory:
  - gspn-ssp/: some benchmarks published at ACSD'07,
  - ltlcounter/: translation of a class of LTL formulae used by Rozier & Vardi at SPIN'07,
  - scc-stats/: SCC statistics after translation of LTL formulae,
  - split-product/: parallelizing gain after splitting LTL automata.

* An experimental Kripke interface has been developed to simplify the integration of third-party tools that do not use acceptance conditions and that have labels on states instead of transitions. This interface has not been used yet.

* Experimental interface with the Nips virtual machine. It is not very useful as Spot isn't able to retrieve any property information from the model. This will just check assertions.

* Distribution:
  - The Boost C++ library is now required.
  - Update to Autoconf 2.65, Automake 1.11.1, Libtool 2.2.6b, Bison 2.4.1, and Swig 1.3.40.
  - Thanks to the newest Automake, "make check" will now run in parallel if you use "make -j2 check" or more.

* Bug fixes:
  - Disable warnings from the garbage collection of BuDDy, as they could break the standard output of ltl2tgba.
  - Fix several C++ constructs to ensure Spot will build with GCC 4.3, 4.4, and older 3.x releases, as well as with Intel's ICC compiler.
  - A very old bug in the hash function for LTL formulae caused Spot to sometimes (but very rarely) consider two different LTL formulae as equal.

New in spot 0.4 (2007-07-17):

* Upgrade to Autoconf 2.61, Automake 1.10, Bison 2.3, and Swig 1.3.31.

* Better LTL simplifications.

* Don't initialize BuDDy if it has already been initialized (in case the client software is already using BuDDy).

* Lots of work in the GreatSPN interface for our ACSD'05 paper.

* Bug fixes:
  - Fix the random graph generator not to produce dead states as documented.
  - Fix the synchronized product in case both sides use acceptance conditions.
  - Fix some syntax errors with newer versions of GCC.

New in spot 0.3 (2006-01-25):

* lbtt 1.2.0.

* The CGI script for LTL translation also offers emptiness-check algorithms.

* tau03_opt_search implements the "ordering heuristic". (Submitted by Heikki Tauriainen.)

* A couple of bugs were fixed in the LTL and automata simplifications.

New in spot 0.2 (2005-04-08):

* Emptiness checks:
  - The new spot::option_map class is used to pass options to emptiness-check algorithms.
  - The new emptiness_check_instantiator class is used to turn a string such as `algorithm(option1, option2)' into an actual instance of this emptiness-check algorithm with the given options. All tools use this.
  - tau03_opt_search implements the "condition heuristic". (Suggested by Heikki Tauriainen.)

* Minor bug fixes.

New in spot 0.1 (2005-01-31):

* Emptiness checks:
  - They all follow the same interface, and gather statistical data.
  - New algorithms: gv04.hh, se05.hh, tau03.hh, tau03opt.hh.
  - New options for couvreur99: poprem and group.
  - reduce_run() tries to reduce accepting runs produced by emptiness checks.
  - replay_run() ensures that accepting runs are actually accepting.

* New testing tools:
  - ltltest/randltl: Generate random LTL formulae.
  - tgbatest/randtgba: Generate random TGBAs. Optionally multiply them against random LTL formulae. Optionally check them for emptiness with all available algorithms. Optionally gather statistics.

* bench/emptchk/: Contains scripts that benchmark emptiness checks.
* Split the degeneralization proxy in two:
  - tgba_tba_proxy uses at most max(N,1) copies,
  - tgba_sba_proxy uses at most 1+max(N,1) copies and has a state_is_accepting() method.

* tgba::transition_annotation annotates a transition with some string. This comes in handy to associate that transition with its high-level name.

* Preliminary support for Event-based GBA (the evtgba*/ directories). This might as well disappear in a future release.

* LTL formulae are now sorted using their string representation, instead of their memory address (which is still unique). This makes the output of the various functions more deterministic.

* The Doxygen documentation is now organized using modules.

New in spot 0.0x (2004-08-13):

* New atomic_prop_collect() function: collect atomic propositions in an LTL formula.

* Fix several typos in the documentation, and some warnings in the code.

* Now compiles on Darwin and Cygwin.

* Upgrade to Automake 1.9.1, and lbtt 1.1.2. (And drop support for older lbtt versions.)

* Support newer versions of Valgrind (>= 2.1.0).

New in spot 0.0v (2004-06-29):

* LTL formula simplifications using basic rewriting rules, à la Wring syntactic approximations, and Etessami's universal and existential classes.
  - Function reduce() in ltlvisit/reduce.hh is the main interface.
  - Can be tested with the CGI script.

* TGBA simplifications using direct simulation, delayed simulation, and SCC-based simplifications. This is still experimental.

* The LTL parser will now read LTL formulae written using Wring's syntax.

* ltl2tgba_fm() now has options for on-the-fly fair-loop approximations, and Modella-like branching-postponement.

* GreatSPN interface:
  - The `declarative_environment' is now part of Spot itself rather than part of the interface with GreatSPN.
  - The RG and SRG interfaces can deal with dead markings in three ways (omit deadlocks from the state graph, stutter on the deadlock and consider it as a regular behavior, or stutter and distinguish the deadlock with a property).
  - Update the SSP interface to Soheib Baarir's latest work.

* Preliminary Python bindings for BuDDy's FDD and BVEC.

* Upgrade to BuDDy 2.3.

New in spot 0.0t (2004-04-23):

* `emptiness_check':
  - fix two bugs in the computation of the counterexample,
  - revamp the interface for better customization.

* `never_claim_reachable': new function.

* Introduce anonymous BDD variables in `bdd_dict', and start to use them in `ltl_to_tgba_fm'.

* Offer never claims in the CGI script.

* Rename EESRG as SSP, and offer specialized variants of the emptiness check.

New in spot 0.0r (2004-03-08):

* In ltl_to_tgba_fm:
  - New option `exprop' to optimize determinism.
  - Make the `symbolic identification' from 0.0p optional.

* `nonacceptant_lbtt_reachable': a new function to help get accurate statistics from LBTT.

* Revamp the CGI script's user interface.

* Upgrade to lbtt 1.0.3, Swig 1.3.21, Automake 1.8.3.

New in spot 0.0p (2004-02-03):

* In ltl_to_tgba_fm:
  - identify states with identical symbolic expansions (i.e., identical continuations),
  - use Acc[b] as acceptance condition for Fb, not Acc[Fb].

* Update and speed up the CGI script.

* Improve degeneralization.

New in spot 0.0n (2004-01-13):

* emptiness_check::check2() is a variant of Couvreur's emptiness check that explores visited states first.

* Build the EESRG supporting code conditionally, as the associated GreatSPN changes have not yet been contributed to GreatSPN.

* Add a powerset algorithm (determinize TGBAs ignoring acceptance conditions, i.e., as if they were used to recognize finite languages).
* tgba_explicit::merge_transitions: merge transitions with the same source, destination, and acceptance condition.

* Run test cases within valgrind.

* Various bug fixes.

New in spot 0.0l (2003-12-01):

* Computation of prime implicants. This simplifies the output of ltl_to_tgba_fm, and allows conditions to be output as sums of products in dot output.

* Optimize translation of GFy in ltl_to_tgba_fm.

* tgba_explicit supports arbitrary binary formulae on transitions (only conjunctions were allowed before).

New in spot 0.0j (2003-11-03):

* Use hash_map's instead of map's almost everywhere.

* New emptiness check, based on Couvreur's algorithm.

* LTL propositions can be put inside double quotes to disambiguate some constructions.

* Preliminary support for GreatSPN's EESRG.

* Various bug fixes.

New in spot 0.0h (2003-08-18):

* More Python bindings:
  - "import buddy" works (see wrap/python/tests/bddnqueen.py for an example),
  - almost all the Spot API is now available via "import spot".

* wrap/python/cgi/ltl2tgba.py is an LTL-to-Büchi translator that works as a CGI script.

* Couvreur's FM'99 ltl-to-tgba translation.

New in spot 0.0f (2003-08-01):

* More Python bindings, still only for the spot::ltl:: namespace.

* Functional GSPN interface. (Enable with --with-gspn=directory.)

* The LTL scanner recognizes /\, \/, and xor.

* Upgrade to lbtt 1.0.2.

* tgba_tba_proxy is an on-the-fly degeneralizer.

* Implements the "magic search" algorithm. (Works only on a tgba_tba_proxy.)

* Tgba's output algorithms (save(), dotty()) are now non-recursive.

* During products, succ_iter will optimize its set of successors using information computed from the current product state.

* BDD dictionaries are now shared between automata, and this gets rid of all the BDD-variable translating machinery.

New in spot 0.0d (2003-07-13):

* Optimize translation of G operators occurring at the root of a formula (or of its immediate children when the root is a conjunction). This saves two BDD variables per G operator.

* Distribute lbtt, and run it during `make check'.

* First sketch of the GSPN interface.

* succ_iter_concrete::next() completely rewritten.

* Transitions are now labelled by Boolean formulae (not only conjunctions).

* Documentation:
  - Output collaboration diagrams.
  - Build and distribute a PDF manual.

* Many bug fixes.

New in spot 0.0b (2003-06-26):

* Everything.