Commit 06d5aa5e authored by Alexandre Duret-Lutz

org: several typos

* doc/org/tut01.org, doc/org/tut02.org, doc/org/tut03.org,
doc/org/tut04.org, doc/org/tut10.org, doc/org/tut20.org,
doc/org/tut21.org, doc/org/tut50.org: Fix some typos and reword some
sentences.
parent 14bee1ae
......@@ -31,8 +31,9 @@ ltlfilt --lbt-input -f "$formula" --spin -p
The reason the LBT parser has to be explicitly enabled is because of
some corner cases that have different meanings in the two syntaxes.
(For instance =t= and =f= are the true and false constants in LBT's
syntax, but they are considered as atomic propositions in all the
other syntaxes.)
* Python bindings
......@@ -94,17 +95,18 @@ After [[file:compile.org][compiling and executing]] we get:
: (p1) && (p2) && ([](p0))
Notice that, except for the =<<= operator, the different output
routines specify in their name the syntax to use for output, and the
type of formula they can output.  Here we are only using LTL formulas
for demonstration, and PSL is a superset of LTL, so those three output
functions are all OK with that.  The routine used by =<<= is
=print_psl()=, the default syntax used by Spot.
We do not recommend using the =parse_formula()= interface, because
some formulas (like =f= or =t=) have different meanings in the two
parsers that are tried.
Instead, depending on whether you want to parse formulas with infix
syntax, or formulas with prefix syntax, you should call the appropriate
parser.  Additionally, this gives you control over how to print errors.
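For the prefix syntax, a minimal sketch could look as follows; it
assumes =spot::parse_prefix_ltl()=, the prefix-syntax counterpart of
the =parse_infix_psl()= function detailed in the next section.

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/tl/print.hh>

int main()
{
  // Parse a formula written in LBT's prefix syntax...
  spot::parsed_formula pf = spot::parse_prefix_ltl("U p0 p1");
  if (pf.format_errors(std::cerr))
    return 1;
  // ...and print it back using the infix syntax.
  spot::print_psl(std::cout, pf.f) << '\n';
  return 0;
}
#+END_SRC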
** Calling the infix parser explicitly
......@@ -133,7 +135,8 @@ Here is how to call the infix parser explicitly:
Note that as its name implies, this parser can read more than LTL
formulas: the fragment of PSL we support is basically LTL extended
with regular expressions.  (Refer to the [[https://spot.lrde.epita.fr/tl.pdf][temporal logic specifications]]
for the syntax and semantics.)
The =parse_infix_psl()= function processes =input=, and returns a
=spot::parsed_formula= object. In addition to the =spot::formula= we
......@@ -141,16 +144,17 @@ desire (stored as the =spot::parsed_formula::f= attribute), the
=spot::parsed_formula= also stores any diagnostic collected during the
parsing. Those diagnostics are stored in the
=spot::parsed_formula::errors= attribute, but they can conveniently be
printed by calling the =spot::parsed_formula::format_errors()= method:
this method returns =true= if and only if a diagnostic was output, so
this is usually used to abort the program with an error status as
above.
The parser usually tries to do some error recovery, so the =f=
attribute can be non-null even if some parsing errors were returned.
For instance if you have input =(a U b))= the parser will complain
about the extra parenthesis, but it will still return a formula that
is equivalent to =a U b=.  You could decide to continue with the
"fixed" formula if you wish. Here is an example:
#+BEGIN_SRC C++ :results verbatim :exports both
......@@ -231,7 +235,7 @@ you can simply use =parse_infix_psl()= to parse LTL formulas.
There is a potential problem if you design a tool that only works with
LTL formulas, but call =parse_infix_psl()= to parse user input. In
that case, the user might input a PSL formula and cause problems
down the line.
For instance, let's see what happens if a PSL formula is passed to
......@@ -290,9 +294,9 @@ The first is to simply diagnose non-LTL formulas.
#+END_SRC
A second (but slightly weird) idea would be to try to simplify the PSL
formula, and hope that the simplifier is able to come up with an
equivalent LTL formula. This does not always work, so you need to be
prepared to reject the formula anyway.  In our example, we are lucky
(maybe because it was carefully chosen...):
#+BEGIN_SRC C++ :results verbatim :exports both
......@@ -411,7 +415,7 @@ specifiers after the second =:= (if any) are the usual [[https://docs.python.org
specifiers]] (typically alignment choices) and are applied on the string
produced from the formula.
The complete list of specifiers that apply to formulas can always be
printed with =help(spot.formula.__format__)=:
#+BEGIN_SRC python :results output :exports results
......
......@@ -4,9 +4,9 @@
#+SETUPFILE: setup.org
#+HTML_LINK_UP: tut.html
The task is to read an LTL formula, relabel all (possibly
double-quoted) atomic propositions, and provide =#define= statements
for each of these renamings, writing everything in Spin's syntax.
* Shell
......@@ -49,6 +49,11 @@ accept_all:
}
#+end_example
Aside: another way to work around syntax limitations of tools is to
use [[file:ltldo.org][=ltldo=]].  In the above example, =ltldo ltl2ba -f '"Proc@Here" U
("var > 10" | "var < 4")' -s= would produce a never claim with the
correct atomic propositions, even though =ltl2ba= cannot parse them.
* Python
The =spot.relabel= function takes an optional third parameter that
......@@ -73,8 +78,7 @@ print(g.to_str('spin', True))
* C++
The =spot::relabeling_map= is just implemented as a =std::map=.
#+BEGIN_SRC C++ :results verbatim :exports both
#include <string>
......
......@@ -7,11 +7,11 @@
This page explains how to build formulas and how to iterate over their
syntax trees.
We will first describe how to build a formula from scratch, by using
the constructors associated to each operator, and show the basic
accessor methods for formulas.  We will do that for C++ first, and
then Python.  Once these basics are covered, we will show examples for
traversing and transforming formulas (again in C++ then Python).
* Constructing formulas
......@@ -86,7 +86,7 @@ memoization.
Building a formula using these operators is quite straightforward.
The second part of the following example shows how to print some
detail of the top-level operator in the formula.
#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
......@@ -176,25 +176,26 @@ The Python equivalent is similar:
** C++
In Spot, Formula objects are immutable: this allows identical subtrees
to be shared among multiple formulas.  Algorithms that "transform"
formulas (for instance the [[file:tut02.org][relabeling function]]) actually recursively
traverse the input formula to construct the output formula.
Using the operators described in the previous section is enough to
write algorithms on formulas. However there are two special methods
that make it a lot easier: =traverse= and =map=.
=traverse= takes a function =fun=, and applies it to each subformula
of a given formula, including the formula itself.  The formula is
explored in a DFS fashion (without skipping subformulas that appear
twice).  The children of a formula are explored only if =fun= returns
=false=.  If =fun= returns =true=, that indicates to stop the
recursion.
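As a minimal illustration of that return-value convention (separate
from the =G=-counting example discussed next), the following sketch
counts the subformulas visited while pruning the recursion below
purely Boolean subformulas; it assumes the =is_boolean()= property
method of =spot::formula=.

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>

int main()
{
  spot::formula f = spot::parse_infix_psl("GFa -> ((a & b) U Xc)").f;
  int visited = 0;
  f.traverse([&visited](spot::formula s)
             {
               ++visited;
               // Returning true prunes the recursion: we do not
               // descend into purely Boolean subformulas.
               return s.is_boolean();
             });
  std::cout << "visited " << visited << " subformulas\n";
  return 0;
}
#+END_SRC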
In the following we use a lambda function to count the number of =G=
in the formula. We also print each subformula to show the recursion,
and stop the recursion as soon as we encounter a subformula without
sugar (the =is_sugar_free_ltl()= method is a constant-time operation
that tells whether a formula contains a =F= or =G= operator) to save
time by not exploring further.
......@@ -241,7 +242,7 @@ b & c & d
The other useful operation is =map=. This also takes a functional
argument, but that function should input a formula and output a
replacement formula. =f.map(fun)= applies =fun= to all children of
=f=, and assembles the result under the same top-level operator as =f=.
Here is a demonstration of how to exchange all =F= and =G= operators
in a formula:
......@@ -258,7 +259,7 @@ in a formula:
return spot::formula::G(xchg_fg(in[0]));
if (in.is(spot::op::G))
return spot::formula::F(xchg_fg(in[0]));
// No need to transform subformulas without F or G
if (in.is_sugar_free_ltl())
return in;
// Apply xchg_fg recursively on any other operator's children
......@@ -327,7 +328,7 @@ Here is the =F= and =G= exchange:
return spot.formula.G(xchg_fg(i[0]));
if i._is(spot.op_G):
return spot.formula.F(xchg_fg(i[0]));
# No need to transform subformulas without F or G
if i.is_sugar_free_ltl():
return i;
# Apply xchg_fg recursively on any other operator's children
......
......@@ -28,6 +28,11 @@ ltlfilt -c -f '(a U b) U a' --equivalent-to 'b U a'
#+RESULTS:
: 1
Or use =-q= if you only care about the exit status of =ltlfilt=: the
exit status is =0= if some formula matched, and =1= if no formula
matched.  (The effect of these =-c= and =-q= options should be
familiar to =grep= users.)
* Python
In Python, we can test this via a =language_containment_checker=
......@@ -38,10 +43,10 @@ import spot
f = spot.formula("(a U b) U a")
g = spot.formula("b U a")
c = spot.language_containment_checker()
print("Equivalent" if c.equal(f, g) else "Not equivalent")
#+END_SRC
#+RESULTS:
: Equivalent
The equivalence check is done by converting the formulas $f$ and $g$
and their negation into four automata $A_f$, $A_{\lnot f}$, $A_g$, and
......@@ -64,15 +69,15 @@ def equiv(f, g):
f = spot.formula("(a U b) U a")
g = spot.formula("b U a")
print("Equivalent" if equiv(f, g) else "Not equivalent")
#+END_SRC
#+RESULTS:
: Equivalent
The =language_containment_checker= object essentially performs the
same work, but it also implements a cache to avoid translating the
same formulas multiple times when it is used to test multiple
equivalences.
* C++
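Here is a bare-bones sketch of the same check in C++; it assumes that
the =spot::language_containment_checker= class is available from
=<spot/tl/contain.hh>= and offers an =equal()= method analogous to the
Python one used above.

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/tl/contain.hh>

int main()
{
  spot::formula f = spot::parse_infix_psl("(a U b) U a").f;
  spot::formula g = spot::parse_infix_psl("b U a").f;
  spot::language_containment_checker c;
  std::cout << (c.equal(f, g) ? "Equivalent" : "Not equivalent") << '\n';
  return 0;
}
#+END_SRC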
......
......@@ -119,13 +119,13 @@ T0_S3:
The whole translation pipeline (this includes simplifying the formula,
translating the simplified formula into an automaton, and simplifying
the resulting automaton) is handled by the =spot::translator= class.
An instance of this class can be configured by calling =set_type()= to
choose the type of automaton to output, =set_level()= to set the level
of optimization (it's high by default), and =set_pref()= to set
various preferences (like small or deterministic) or characteristics
(complete, unambiguous) for the resulting automaton.  Finally, the
output as a never claim is done via the =print_never_claim= function.
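As a rough illustration of this configuration interface (a sketch
only, assuming the =postprocessor::BA= and =postprocessor::Small=
constants and the =<spot/twaalgos/translate.hh>= and
=<spot/twaalgos/neverclaim.hh>= headers):

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/twaalgos/translate.hh>
#include <spot/twaalgos/neverclaim.hh>

int main()
{
  spot::formula f = spot::parse_infix_psl("[]<>p0 || <>[]p1").f;
  spot::translator trans;
  trans.set_type(spot::postprocessor::BA);    // build a Büchi automaton
  trans.set_pref(spot::postprocessor::Small); // favor small automata
  spot::twa_graph_ptr aut = trans.run(f);
  spot::print_never_claim(std::cout, aut);
  return 0;
}
#+END_SRC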
#+BEGIN_SRC C++ :results verbatim :exports both
#include <string>
......
......@@ -45,7 +45,7 @@ Note that the automaton parser of Spot can read automata written
either as never claims, in LBTT's format, in ltl2dstar's format or in
the HOA format, and there is no need to specify which format you
expect. Even if our example uses a never claim as input, the code we
write will read any of those formats.
* Shell
......@@ -185,18 +185,18 @@ State: 4
--END--
#+END_SRC
In the Spot representation of automata, transition guards are
represented by BDDs.  The role of the =bdd_dict= object is to keep
track of the correspondence between BDD variables and atomic
propositions such as =foo= and =bar= in the above example.  So each
automaton has a shared pointer to some =bdd_dict= (this shared pointer
is what the =bdd_dict_ptr= type is), and operations between automata
(like a product) can only work on automata that share the same
pointer.  Here, when we call the automaton parser, we supply the
=bdd_dict_ptr= that should be used to do the mapping between atomic
propositions and BDD variables.  Atomic propositions that were not
already registered will get a new BDD variable number, while existing
atomic propositions will reuse the existing variable.
In the example for [[file:tut10.org][translating LTL into BA]], we did not specify any
=bdd_dict=, because the =translator= object will create a new one by
......@@ -211,11 +211,12 @@ There are actually different C++ interfaces to the automaton parser,
depending on your use case. For instance the parser is able to read a
stream of automata stored in the same file, so that they could be
processed in a loop.  For this, you would instantiate a
=spot::automaton_stream_parser= object and call its =parse()= method
in a loop.  Each call to this method will either return one
=spot::parsed_aut_ptr=, or =nullptr= if there is no more automaton to
read.  The =parse_aut()= function is actually a simple convenience
wrapper that instantiates an =automaton_stream_parser= and calls its
=parse()= method once.
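A rough sketch of such a loop, assuming a (made-up) file =many.hoa=
holding several automata; the end-of-stream test is written
defensively so that it works whether =parse()= signals the end with a
null pointer or with a result whose =aut= field is null.

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/parseaut/public.hh>
#include <spot/twa/bdddict.hh>
#include <spot/twaalgos/hoa.hh>

int main()
{
  spot::bdd_dict_ptr dict = spot::make_bdd_dict();
  spot::automaton_stream_parser parser("many.hoa");
  for (;;)
    {
      spot::parsed_aut_ptr pa = parser.parse(dict);
      if (!pa)                      // end of input (one convention)
        break;
      if (pa->format_errors(std::cerr))
        return 1;
      if (!pa->aut)                 // end of input (other convention)
        break;
      spot::print_hoa(std::cout, pa->aut) << '\n';
    }
  return 0;
}
#+END_SRC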
In Python, you can easily iterate over a file containing multiple
......
......@@ -6,11 +6,11 @@
This example demonstrates how to iterate over an automaton in C++ and
Python. This case uses automata stored entirely in memory as a graph:
states are numbered by integers, and transitions can be seen as tuples
of the form
$(\mathit{src},\mathit{dst},\mathit{cond},\mathit{accsets})$ where
$\mathit{src}$ and $\mathit{dst}$ are integers denoting the source and
destination states, $\mathit{cond}$ is a BDD representing the label
(a.k.a. guard), and $\mathit{accsets}$ is an object of type
=acc_cond::mark_t= encoding the membership to each acceptance set
(=acc_cond::mark_t= is basically a bit vector).
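For a first taste of this representation, here is a small sketch that
loops over all edges of a graph-based automaton and prints those
quadruplets; it assumes the =edges()= range of =twa_graph= and the
=bdd_format_formula()= helper (the =custom_print()= function developed
below does the same thing state by state).

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/twaalgos/translate.hh>
#include <spot/twa/bddprint.hh>

int main()
{
  spot::twa_graph_ptr aut =
    spot::translator().run(spot::parse_infix_psl("Fa | G(Fb & Fc)").f);
  // Each edge stores the (src, dst, cond, acc) data described above.
  for (auto& e: aut->edges())
    std::cout << e.src << " -> " << e.dst << " ["
              << spot::bdd_format_formula(aut->get_dict(), e.cond)
              << "] " << e.acc << '\n';
  return 0;
}
#+END_SRC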
......@@ -18,9 +18,9 @@ of the transition, $\mathit{cond}$ is a BDD representing the label
The interface available for those graph-based automata allows random
access to any state of the graph, hence the code given below can do a
simple loop over all states of the automaton. Spot also supports a
different kind of interface (not demonstrated here) to
[[file:tut50.org][iterate over automata that are constructed
on-the-fly]] and where such a loop would be impossible.
First let's create an example automaton in HOA format. We use =-U= to
request unambiguous automata, as this allows us to demonstrate how
......@@ -38,7 +38,8 @@ Start: 0
AP: 3 "a" "b" "c"
acc-name: generalized-Buchi 2
Acceptance: 2 Inf(0)&Inf(1)
properties: trans-labels explicit-labels trans-acc unambiguous
properties: stutter-invariant
--BODY--
State: 0
[0] 1
......@@ -102,7 +103,7 @@ corresponding BDD variable number, and then use for instance
void custom_print(std::ostream& out, spot::twa_graph_ptr& aut)
{
// We need the dictionary to print the BDDs that label the edges
const spot::bdd_dict_ptr& dict = aut->get_dict();
// Some meta-data...
out << "Acceptance: " << aut->get_acceptance() << '\n';
......@@ -146,7 +147,7 @@ corresponding BDD variable number, and then use for instance
// The out(s) method returns a fake container that can be
// iterated over as if the contents was the edges going
// out of s.  Each of these edges is a quadruplet
// (src,dst,cond,acc). Note that because this returns
// a reference, the edge can also be modified.
for (auto& t: aut->out(s))
......@@ -169,7 +170,7 @@ Initial state: 0
Atomic propositions: a (=0) b (=1) c (=2)
Name: Fa | G(Fb & Fc)
Deterministic: no
Unambiguous: yes
State-Based Acc: maybe
Terminal: maybe
Weak: maybe
......
......@@ -211,10 +211,10 @@ syntax of C++ works exactly as if we had typed
#+END_SRC
In the above =example()= function, the iterators =i= and =end= are
instances of the =internal::edge_iterator<spot::twa_graph::graph_t>=
class, which redefines enough operators to act like an STL Forward
Iterator over all outgoing edges of =s=. Note that the =tmp= and =i=
objects hold a pointer to the graph, but it does not really matter
because the compiler will optimize this away.
In fact after operators are inlined and useless temporary variables
......@@ -441,9 +441,9 @@ exploration of an automaton. Subclasses of =twa= need not represent
the entire automaton in memory; if they prefer, they can compute it as
it is explored.
Naturally =twa_graph=, even if it stores the automaton graph
explicitly, is a subclass of =twa=, so it also implements the
on-the-fly interface but without computing anything.
** How this interface works
......@@ -548,23 +548,23 @@ subclass.
The interface puts few requirements on memory management: we want to be
able to write automata that can forget about their states (and
recompute them), so there is no guarantee that reaching the same state
twice will return the same pointer twice.  Even calling
=get_init_state()= twice could return two different pointers.  The
only way to decide whether two =state*= =s1= and =s2= represent the
same state is to check that ~s1->compare(s2) == 0~.
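A tiny sketch of that rule (the automaton built here is arbitrary; the
only point is the =compare()= call):

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/twaalgos/translate.hh>

int main()
{
  spot::twa_graph_ptr aut =
    spot::translator().run(spot::parse_infix_psl("a U b").f);
  const spot::state* s1 = aut->get_init_state();
  const spot::state* s2 = aut->get_init_state();
  // Whether s1 == s2 holds depends on the twa subclass, so do not
  // rely on pointer identity; compare() is the portable test.
  std::cout << ((s1->compare(s2) == 0) ? "same state" : "different states")
            << '\n';
  s1->destroy();
  s2->destroy();
  return 0;
}
#+END_SRC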
As far as memory management goes, there are roughly two types of =twa=
subclasses: those that always create new =state= instances, and those
that reuse =state= instances (either because they have a cache, or
because, as in the case of =twa_graph=, they know the entire graph).
From the user's perspective, =state= should never be passed to
=delete= (their protected destructor will prevent that).  Instead, we
should call =state::destroy()=.  Doing so allows each subclass to
override the default behavior of =destroy()= (which is to call
=delete=).  States can be cloned using the =state::clone()= method, in
which case each copy has to be destroyed.
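To make these rules concrete, here is a rough sketch that enumerates
the successors of the initial state through the on-the-fly interface,
destroying every =state= it obtained (the formula, and therefore the
automaton, is arbitrary).

#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/twaalgos/translate.hh>

int main()
{
  spot::twa_graph_ptr aut =
    spot::translator().run(spot::parse_infix_psl("GFa").f);
  const spot::state* s0 = aut->get_init_state();
  spot::twa_succ_iterator* it = aut->succ_iter(s0);
  unsigned n = 0;
  for (it->first(); !it->done(); it->next())
    {
      const spot::state* dst = it->dst();
      ++n;
      dst->destroy();   // we do not keep the destination state around
    }
  std::cout << n << " successor(s)\n";
  // The iterator can simply be deleted, or handed back to
  // aut->release_iter(it) for recycling as explained next.
  delete it;
  s0->destroy();
  return 0;
}
#+END_SRC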
=twa_succ_iterator= instances are allocated and should be deleted once
done, but to save some work, they can also be returned to the
......@@ -620,7 +620,7 @@ state, it is /usually/ known.
Let us improve the above loop. In the previous example, each of
=first()=, =done()=, =next()= is a virtual method call. So if there
are $n$ successors, there will be $1$ call to =first()=, $n$ calls to
=next()=, and $n+1$ calls to =done()=, so a total of $2n+2$ virtual
method calls.
However =first()= and =next()= also return a Boolean stating whether
......@@ -647,7 +647,7 @@ follows:
#+END_SRC
Now we have only $1$ call to =first()= and $n$ calls to =next()=,
so we halved the number of virtual calls.
Using C++11's ranged =for= loop, this example can be reduced to the
following equivalent code:
......@@ -698,7 +698,7 @@ passes the iterator back to =aut->release_iter()= for recycling.
** Recursive DFS (v1)
We can now write a recursive DFS easily. The only pain is to keep
track of the states to =destroy()= them only after we do not need them
anymore. This tracking can be done using the data structure we use to
remember what states we have already seen.
......@@ -769,7 +769,7 @@ previously (in that case the passed state is destroyed). The
=spot::state_unicity_table::is_new()= behaves similarly, but returns
=nullptr= for states that already exist.
With this class, the recursive code can be simplified down to this:
#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
......