Commit 885a5351 authored by Alexandre Duret-Lutz

Rewrite the ltl2tgba bench using ltlcross

* bench/ltl2tgba/ New file.
* bench/ltl2tgba/.gitignore, bench/ltl2tgba/,
bench/ltl2tgba/README, bench/ltl2tgba/algorithms, bench/ltl2tgba/big,
bench/ltl2tgba/, bench/ltl2tgba/known, bench/ltl2tgba/small:
Rewrite this benchmark completely.  Also drop support of Wring and
Modella, as we cannot get them to work reliably.
* bench/ltl2tgba/formulae.ltl: Rewrite in Spot's syntax.
* bench/ltl2tgba/, bench/ltl2tgba/,
bench/ltl2tgba/ Delete these scripts, no
longer needed.
* Do not output anymore.
parent e2f17f65
@@ -3,22 +3,38 @@ EXTRA_DIST = \
big \
formulae.ltl \
known \
small \
big.cfg big.log big.txt \
small.cfg small.log small.txt \
known.cfg known.log known.txt
OUTPUTS = known small big
results.pdf results.aux results.log results.tex
.PHONY = run
run: small.txt big.txt known.txt
deps = $(srcdir)/algorithms $(top_srcdir)/ $(top_builddir)/src/tgbatest/ltl2tgba
run: $(OUTJSON)
deps = $(srcdir)/algorithms \
$(top_srcdir)/ \
small.txt: $(srcdir)/small $(deps)
small.json: $(srcdir)/small $(deps)
big.txt: $(srcdir)/big $(deps)
big.json: $(srcdir)/big $(deps)
known.txt: $(srcdir)/known $(srcdir)/formulae.ltl $(deps)
known.json: $(srcdir)/known $(srcdir)/formulae.ltl $(deps)
results.tex: $(srcdir)/ $(OUTJSON)
v=`git describe --always --dirty 2>/dev/null || \
echo $(PACKAGE_STRING)`; \
$(srcdir)/ $(OUTJSON) >$(@:.tex=.tmp) \
--intro "Results assembled on `LANG=C date` with $$v."; \
mv $(@:.tex=.tmp) $@
results.pdf: results.tex
pdflatex results.tex
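The version string computed in the results.tex rule above is a common
shell idiom: try `git describe`, and fall back to a fixed string when
the sources are not a git checkout. Here is a minimal standalone
sketch of that idiom (the fallback string is a made-up stand-in for
$(PACKAGE_STRING), not the real value):

```shell
# Use git metadata when available, otherwise a static version string.
# "spot X.Y (no git)" is a placeholder for $(PACKAGE_STRING).
v=$(git describe --always --dirty 2>/dev/null || echo "spot X.Y (no git)")
echo "Results assembled on $(LANG=C date) with $v."
```

Either branch leaves a non-empty version string in $v, so the
generated LaTeX intro is always well-formed.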
This directory contains benchmark scripts for LTL-to-Büchi translators.
They are all based on lbtt.
* algorithms
The lbtt configuration of all the algorithms. More about these below.
* small
* big
* known
Run lbtt on, respectively:
small formulae (size 10, 4 propositions)
big formulae (size 12..15, 8 propositions)
known formulae (96 formulae from formulae.ltl)
Each script generates 3 files:
xxxx.cfg: the configuration file for lbtt
xxxx.log: the log of lbtt's execution (also output when the script runs)
xxxx.txt: the summary of the test (also output at the end of the script)
* is generated from by configure. This perl
script converts the intermediate generalized automata computed by
ltl2ba into a form usable by lbtt.
* formulae.ltl
A list of LTL formulae used by the `known' check.
See ../emptchk/README for the sources.
This script is used to create *.txt files from *.log files.
The page explains
all the keys used and the different tools involved in the benchmark.
Spot's configure script checks for the tools needed in the
benchmark, and the script in this directory should omit the tools
that are not available.
Running the scripts
1) Install all the third-party translators that you want to check.
They must all be in $PATH and work from there.
Two difficulties come from Modella and wring2lbtt:
Running 'make run' should run three benchmarks (you may want to use
'make run -j3' if you have three cores or more), then summarize these
into a LaTeX document that is then compiled to 'results.pdf'.
* Modella 1.5.7 produces automata that are not readable by lbtt 1.1.2.
You have to fix the former as follows:
The summary script requires python3 to be installed, and the LaTeX
compilation obviously needs some LaTeX distribution.
--- modella1.5.7/modella_automa.c 2004-08-30 17:19:47.000000000 +0200
+++ modella1.5.7.fixed/modella_automa.c 2005-04-14 15:07:46.632785000 +0200
@@ -618,6 +618,7 @@ void print_LBA(LBA* b,FILE* output){
fprintf(output,"%d ",b->R[j]->dest);
+ fputc('\n',output);
fprintf(output,"-1 ");
The three benchmarks feature respectively 200, 200, and 184 formulae,
to be translated (when the tools are available) by spin, ltl2ba, ltl3ba (4
configurations) and ltl2tgba (2 configurations). Each translation has
a timeout set to 10 minutes, but with a total of 4672 translations to
perform it can take a long time. If you want to speed things up, you
may edit the file 'algorithms' to remove tools or lower the timeout.
* The automata produced by Wring are translated to the syntax
understood by lbtt using `wring2lbtt' (by the same author of
Modella). wring2lbtt suffers from the same "lbtt syntax"
problem described above; you have to fix this too before it
can be used.
Also wring2lbtt requires a Wring directory in the directory
where it is run; that makes it harder to use as a normal tool
from $PATH. I use the following wrapper in my $PATH to work
around this.
Here are the different scripts used, in case you want to customize
this benchmark.
cd ~/src/wring2lbtt && ./wring2lbtt "$@"
* algorithms
This is OK because the filenames supplied by lbtt are absolute.
The configuration of all the translators. This is merely a script
that builds the command-line of ltlcross, to be run by the next
three scripts. Most of the $TOOL variables are defined by the
'defs' file, which is output by 'configure' after checking for
the presence of the said tools.
2) ./configure Spot, and build it.
If you want to add your own tool to the mix, simply modify this file.
During the configure process you should see lines such as
The timeout value, common to the three benchmarks, is also set here.
checking for lbt... lbt
checking for ltl2ba... ltl2ba
checking for modella... modella
checking for
checking for spin... spin
checking for wring2lbtt... wring2lbtt
* small
* big
* known
If some tool is not found, the corresponding tests will be disabled.
You can also use variables such as LBT, LTL2BA, MODELLA, LTL2NBA,
SPIN, and WRING2LBTT to specify the location of any of these scripts
if it is not in $PATH. For instance
Three scripts that run ltlcross on, respectively:
100 small formulae (size 10, 4 propositions) and their negations
100 big formulae (size 12..15, 8 propositions) and their negations
92 known formulae (from formulae.ltl) and their negations
./configure LTL2NBA=/home/adl/src/ltlnba/
Each script generates 3 files:
xxxx.log: the log of ltlcross' execution, updated as the script goes
xxxx.csv: the results in CSV format
xxxx.json: the results in JSON format
3) Run `make' to build Spot.
The last two files are only output when ltlcross terminates, so if
you kill a script before it terminates only the xxxx.log file will
have been overwritten.
4) cd into bench/ltl2tgba/ and execute any of ./small, ./big,
or ./known
* formulae.ltl
Alternatively running `make run' (in that directory) will run all
three scripts. If you have a multicore processor, you may want
to run `make -j3 run' to run these three scripts in parallel.
None of the tested translators use more than one core.
A list of LTL formulae used by the `known' check. They come
from three sources:
@InProceedings{ dwyer.98.fmsp,
  author    = {Matthew B. Dwyer and George S. Avrunin and James C.
               Corbett},
  title     = {Property Specification Patterns for Finite-state
               Verification},
  booktitle = {Proceedings of the 2nd Workshop on Formal Methods in
               Software Practice (FMSP'98)},
  publisher = {ACM Press},
  address   = {New York},
  editor    = {Mark Ardis},
  month     = mar,
  year      = {1998},
  pages     = {7--15}
}
@InProceedings{ etessami.00.concur,
  author    = {Kousha Etessami and Gerard J. Holzmann},
  title     = {Optimizing {B\"u}chi Automata},
  booktitle = {Proceedings of the 11th International Conference on
               Concurrency Theory (Concur'00)},
  pages     = {153--167},
  year      = {2000},
  editor    = {C. Palamidessi},
  volume    = {1877},
  series    = {Lecture Notes in Computer Science},
  address   = {Pennsylvania, USA},
  publisher = {Springer-Verlag}
}
@InProceedings{ somenzi.00.cav,
  author    = {Fabio Somenzi and Roderick Bloem},
  title     = {Efficient {B\"u}chi Automata for {LTL} Formul{\ae}},
  booktitle = {Proceedings of the 12th International Conference on
               Computer Aided Verification (CAV'00)},
  pages     = {247--263},
  year      = {2000},
  volume    = {1855},
  series    = {Lecture Notes in Computer Science},
  address   = {Chicago, Illinois, USA},
  publisher = {Springer-Verlag}
}
In the known benchmark, we use both positive and negated versions
of these formulae.
This script reads all the *.json files, and writes out a LaTeX file
with summary tables.
5) Wait...
Reading the summaries
The files small.txt, big.txt, and known.txt contain a summary of the
results. Each algorithm is described on two lines formatted as
10: Spot FM (degen)
831 2422 188 | 521 157 | 3.01 | 165971 8723693 (188)
The first line presents the name of the algorithm ("Spot FM (degen)")
and its number for lbtt (10). The number is useless. In this
example, "FM (degen)" means that the Couvreur/FM algorithm is used to
translate LTL formula into a TGBA that is then DEGENeralized, so you
effectively get a Büchi automaton, which you can compare with the
automata produced by other tools. You may want to look in the file
`algorithms' to see which options are used for each name, if the
naming is unclear.
The second line displays 9 values:
1. the total number of states of all generated automata (831)
2. the total number of transitions of all generated automata (2422)
3. the total number of acceptance conditions of all generated automata (188)
4. the total number of nondeterministic states in these automata (521)
5. the total number of automata with some nondeterministic state (157)
6. the cumulated translation time in seconds (3.01)
7. the total number of states in the synchronized products (165971)
8. the total number of transitions in the synchronized products (8723693)
9. the number of translated formulae (188)
For all these values (except the last!), the smaller the number, the better.
* Small translation times are not accurate because:
a) most of the translators are run through scripts that translate
their input from, and their output to, the format understood by
lbtt. For fast translators, most of the time is spent in
these wrappers. (For instance Spot's ltl2tgba is run through
lbtt-translate, and depending on how Spot has been configured
w.r.t. to dynamic libraries, ltl2tgba itself is often a shell
script that runs the real binary with the locally built libraries.)
b) LBTT uses the time() function to measure time, which usually
has only a 0.01s resolution. Multiply this 0.01s by the number
of formulae to get the possible error (e.g. for the above example
188*0.01 = 1.88s, so the 3.01s should be interpreted as "between
3.01-1.88 and 3.01+1.88").
* Some tools will appear to have translated fewer automata than the
others. This normally indicates bugs in the translator (incorrect
output) or timeouts. In that case it is harder to compare the
results. (Normalizing the other values accordingly may not be
fair: maybe the translator precisely failed to translate the
largest automata.)
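The error estimate in (b) above is simple arithmetic; as a sanity
check it can be recomputed in the shell (the 188 and 0.01 figures are
the ones from the example summary):

```shell
# Worst-case timing error: number of formulae times the 0.01 s
# clock resolution (figures taken from the example summary above).
err=$(awk 'BEGIN { printf "%.2f", 188 * 0.01 }')
echo "cumulated time 3.01 s is accurate to +/- $err s"
```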
The various outputs (CSV, JSON, our LaTeX) may use the following
column headers:
* input: formula translated (as a string in the CSV output, and as
an index into the input table in the JSON output)
* tool: tool used to translate this formula (idem)
* states: number of states of the resulting automaton
* edges: number of physical arcs between these states
* transitions: number of logical transitions from one state to the other
(for instance if the atomic propositions are 'a' and 'b', an
edge labeled by 'a' represents two transitions labeled by
'a&b' and 'a&!b')
* acc: number of acceptance sets used; it should always be one
in this automaton since we are producing (degeneralized)
Büchi automata.
* scc: number of strongly connected components in the produced automaton
* nondetstates: number of nondeterministic states
* nondeterministic: 0 if the automaton is deterministic
(no nondeterministic state), 1 otherwise
* time: time required by the translation (although this is measured with
the highest-resolution clock available, it is "wall time", so it
can be affected by the machine's load).
* product_states: number of states in the product of the automaton with some
random Kripke structure (one unique Kripke structure is generated
for each formula and used with automata produced by all tools)
* product_transitions: number of transitions in that product
* product_scc: number of strongly connected components in that product
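As a sketch of how these columns can be consumed, the following sums
the 'states' column of a CSV extract with awk. The header uses the
column names documented above, but the two data rows are fabricated
for the sake of the example:

```shell
# Sum the "states" column (third field) of an ltlcross-style CSV.
# The header matches the documented columns; the data is made up.
cat >sample.csv <<'EOF'
"input","tool","states","edges","transitions","acc","scc","nondetstates","nondeterministic","time"
"F(p0)","toolA",2,3,6,1,2,0,0,0.01
"F(p0)","toolB",3,5,9,1,2,1,1,0.02
EOF
total=$(awk -F, 'NR > 1 { s += $3 } END { print s }' sample.csv)
echo "total states: $total"
rm -f sample.csv
```

A real small.csv or big.csv can be processed the same way, tool by
tool, to rebuild the totals shown in the summary tables.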
The summary tables (built from the *.json files) accumulate all these results for
all formulae, tool by tool. They display an additional column, called
'count', giving the number of formulae successfully translated (the
missing formulae correspond to timeouts).
For all these values (except count), the smaller the number, the better.
More details about ltlcross (used to produce these outputs) can be
found in its man page, and at
cat >"$conffile" <<EOF
Name = "Spin"
Parameters = "--spin $SPIN"
Enabled = $HAVE_SPIN
# Fill "$@" with the list of translators we want to benchmark.
Name = "ltl2ba"
Parameters = "--spin $LTL2BA"
Enabled = $HAVE_LTL2BA
# Add a dummy initial argument to clear "$@" and also in case one of
# the tools starts with "-".
set dummy
Name = "ltl3ba"
Path = "lbtt-translate"
Parameters = "--spin $LTL3BA"
Enabled = $HAVE_LTL3BA
# Add third-party tools if they are available
test -n "$SPIN" && set "$@" "$SPIN -f %s >%N"
test -n "$LTL2BA" && set "$@" "$LTL2BA -f %s >%N"
test -n "$LTL3BA" && set "$@" "$LTL3BA -f %s >%N" \
"$LTL3BA -M -f %s >%N" \
"$LTL3BA -S -f %s >%N" \
"$LTL3BA -S -M -f %s >%N"
Name = "ltl3ba -M"
Path = "lbtt-translate"
Parameters = "--spin '$LTL3BA -M'"
Enabled = $HAVE_LTL3BA
# Use -s to output a neverclaim, like the other tools.
set "$@" "$LTL2TGBA --det -s %s >%N" \
"$LTL2TGBA --small -s %s >%N"
Name = "ltl3ba -S"
Path = "lbtt-translate"
Parameters = "--spin '$LTL3BA -S'"
Enabled = $HAVE_LTL3BA
# If you want to add your own tool, you can add it here.
# See 'man ltlcross' for the list of %-escape you may use
# to specify input formula and output automaton.
# set "$@" "tool options %... > %..."
Name = "ltl3ba -M -S"
Path = "lbtt-translate"
Parameters = "--spin '$LTL3BA -M -S'"
Enabled = $HAVE_LTL3BA
Name = "Modella"
Path = "$MODELLA"
Parameters = "-o1 -g -e -r12"
# Set the timeout to 5 minutes
set "$@" --timeout=300
Name = "Wring (RewRule+BoolOpt+AutSempl), degen"
Path = "$WRING2LBTT"
Parameters = "-d --5"
Name = "Wring (RewRule+BoolOpt+AutSempl)"
Path = "$WRING2LBTT"
Parameters = "--5"
Name = "NBA"
Path = "$LTL2NBA"
Enabled = $HAVE_LTL2NBA
for type in tgba ba; do
for pref in any deterministic small; do
for level in high; do
cat >>$conffile <<EOF
Name = "Spot ($type $pref $level)"
Parameters = "--spot '$LTL2TGBA --$type --$pref --$level --lbtt -F'"
Enabled = yes
# Finally remove the dummy initial argument
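The "set dummy" idiom used in this script deserves a standalone
illustration. The sketch below builds a command list in "$@" and then
drops the dummy argument with `shift` (presumably what the elided
line does); the echo commands are placeholders for real translator
invocations:

```shell
# Start "$@" with a dummy word so that appending is safe even if a
# command begins with "-".
set dummy
set "$@" "echo first-tool"
set "$@" "echo second-tool"
# Remove the dummy initial argument.
shift
echo "number of commands: $#"
for cmd in "$@"; do
  echo "cmd: $cmd"
done
```

After the shift, "$@" holds exactly the translator command lines, in
order, ready to be handed to ltlcross.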
# -*- shell-script -*-
# Copyright (C) 2011 Laboratoire de Recherche et Developpement de
# Copyright (C) 2012, 2013 Laboratoire de Recherche et Developpement de
# l'Epita (LRDE)
# Copyright (C) 2005 Laboratoire d'Informatique de Paris 6 (LIP6),
# département Systèmes Répartis Coopératifs (SRC), Université Pierre
# et Marie Curie.
# This file is part of Spot, a model checking library.
@@ -22,58 +19,7 @@
# along with this program. If not, see <>.
. ./defs
. "$srcdir/algorithms"
cat >>"$conffile" <<EOF
Rounds = 100
Interactive = Never
# Verbosity = 5
# ComparisonCheck = no
ConsistencyCheck = no
# IntersectionCheck = no
TranslatorTimeout = 15min
Size = 200
Propositions = 8
Size = 15...20
Propositions = 8
AbbreviatedOperators = No
GenerateMode = Normal
OutputMode = NNF
PropositionPriority = 50
TruePriority = 1
FalsePriority = 1
AndPriority = 10
OrPriority = 10
XorPriority = 0
EquivalencePriority = 0
BeforePriority = 0
StrongReleasePriority = 0
WeakUntilPriority = 0
UntilPriority = 30
DefaultOperatorPriority = 15
"$LBTT" --configfile="$conffile" | tee "$logfile"
"$srcdir"/ "$logfile" | tee "$sumfile"
$RANDLTL -n 100 --tree-size=15..20 p1 p2 p3 p4 p5 p6 p7 p8 | ltlfilt --nnf |
$LTLCROSS "$@" --csv=big.csv --json=big.json 2>&1 | tee big.log
# -*- mode: shell-script; coding: utf-8 -*-
# Copyright (C) 2012 Laboratoire de Recherche et Développement
# Copyright (C) 2012, 2013 Laboratoire de Recherche et Développement
# de l'Epita (LRDE).
# Copyright (C) 2005 Laboratoire d'Informatique de Paris 6 (LIP6),
# département Systèmes Répartis Coopératifs (SRC), Université Pierre
@@ -36,8 +36,8 @@ test -f "$srcdir/" || {
@@ -45,17 +45,3 @@ LTL2TGBA="@top_builddir@/src/bin/ltl2tgba@EXEEXT@"
($LBTT --version) >/dev/null 2>&1 || {
echo "$LBTT not available. Try configuring with --with-included-lbtt."
exit 77
if eval 'test -z "$'$var'"'; then
eval HAVE_$var=no
eval HAVE_$var=yes
U p0 p1
U p0 U p1 p2
V ! p0 V ! p1 ! p2
| U t V f ! p0 V f U t p1
U U t p0 V f p1
U V f p0 p1
& & U t U t p0 V f ! p0 & U t p0 V f V f ! p0
& V f U t p0 U t V f ! p1
| & V f U t p0 U t V f ! p1 & V f U t p1 U t V f ! p0
V p0 | p0 p1
| U X p0 X p1 X V ! p0 ! p1
| U X p0 p1 X V ! p0 | ! p0 ! p1
& V f | ! p0 U t p1 | U X p0 p1 X V ! p0 | ! p0 ! p1
& V f | ! p0 U t p1 | U X p0 X p1 X V ! p0 ! p1
V f | ! p0 U t p1
U t & p0 X U ! p1 ! p2
& U t V f ! p0 V f U t ! p1
V f & U t p0 U t p1
& U t p0 U t ! p0
V & X p1 p2 X U V U p3 p0 p2 V p3 p2
| | & V f | p1 V f U t p0 V f | p2 V f U t ! p0 V f p1 V f p2
| | & V f | p1 U t V f p0 V f | p2 U t V f ! p0 V f p1 V f p2
& & | U t & ! p1 U t V f ! p0 U t & ! p2 U t V f p0 U t ! p1 U t ! p2
& & | U t & ! p1 V f U t ! p0 U t & ! p2 V f U t p0 U t ! p1 U t ! p2
& V f | p1 X V f p0 V f | p2 X V f ! p0
V f | p1 & X p0 X ! p0
| U p0 p0 U p1 p0
U p0 & p1 V f p2
U p0 & p1 X U p2 p3
U p0 & p1 X & p2 U t & p3 X U t & p4 X U t & p5 X U t p6
U t & p0 X V f p1
U t & p0 X & p1 X U t p2
U t & p0 X U p1 p2
| U t V f p0 U t V f p1
V f | ! p0 U p1 p2
U t & p0 X U t & p1 X U t & p2 X U t p3
& & & & V f U t p0 V f U t p1 V f U t p2 V f U t p3 V f U t p4
| | U p0 U p1 p2 U p1 U p2 p0 U p2 U p0 p1
V f | ! p0 U p1 | V f p2 V f p3
<>p1 -> (!p0 U p1)
[](p2 -> [](!p0))
@@ -92,3 +53,40 @@ V f | ! p0 U p1 | V f p2 V f p3
[] (p2 -> (p0 -> (!p1 U (p3 & !p1 & !p5 & X((!p1 & !p5) U p4)))) U (p1 | [] (p0 -> (p3 & !p5 & X(!p5 U p4)))))
!p0 U ((p0 U ((!p0 U ((p0 U ([]!p0 | []p0)) | []!p0)) | []!p0)) | []!p0)
<>p2 -> (!p2 U (p2 & (!p0 U ((p0 U ((!p0 U ((p0 U ([]!p0 | []p0)) | []!p0)) | []!p0)) | []!p0))))
p0 U p1
p0 U (p1 U p2)