diff --git a/content/introduction.tex b/content/introduction.tex index b88252d..9167d59 100644 --- a/content/introduction.tex +++ b/content/introduction.tex @@ -456,10 +456,9 @@ This thesis is split into two parts. The chapters can be read in isolation. However, the chapters do get more technical and mathematical -- especially in \in{Part}[part:nominal]. -Detailed discussion on the contributions, relations to other research, and future directions of research are presented in each chapter. -\todo{Expand w/ a list of contributions for each chapter.} +Detailed discussions of related work and future directions of research are presented in each chapter. -\startdescription[title={\in{Chapter}[chap:test-methods]: FSM-based test methods.}] +\startcontribution[title={\in{Chapter}[chap:test-methods]: FSM-based test methods.}] This chapter introduces test generation methods which can be used for learning. The methods are presented in a uniform way, which allows us to give a single proof of completeness for all these methods. Moreover, the uniform presentation gives room to develop new test generation methods. @@ -467,73 +466,99 @@ One particular contribution (besides the theoretical framework) is the new \emph The main contributions are: \startitemize[before, after] \item Uniform description of known methods: \in{Theorem}[thm:completeness] (p. \at[thm:completeness]) -\item New algorithm with implementation: \in{Section}[sec:hybrid-ads-impementation] (p. \at[sec:hybrid-ads-impementation]) +\item A new proof of completeness: \in{Section}[sec:completeness] (p. \at[sec:completeness]) +\item New algorithm and its implementation: \in{Section}[sec:hybrid-ads-impementation] (p.
\at[sec:hybrid-ads-impementation]) \stopitemize -\stopdescription +\stopcontribution -\startdescription[title={\in{Chapter}[chap:applying-automata-learning]: Applying automata learning to embedded control software.}] +\startcontribution[title={\in{Chapter}[chap:applying-automata-learning]: Applying automata learning to embedded control software.}] In this chapter we will apply model learning to an industrial case study. It is a unique benchmark as it is much bigger than any of the applications seen before (3410 states and 77 inputs). This makes it challenging to learn a model and the main obstacle is finding counterexamples. Here, my main contribution is the new test generation method from \in{Chapter}[chap:test-methods]. -The main contributions are: +The main contribution is: \startitemize[before, after] -\item Bla bla +\item Successfully learning a large-scale system: \in{Section}[sec:learnResults] (p. \at[sec:learnResults]) \stopitemize -\noindent This is based on the following publication: +This is based on the following publication: -\noindent \cite[entry][DBLP:conf/icfem/SmeenkMVJ15]. -\stopdescription +\cite[entry][DBLP:conf/icfem/SmeenkMVJ15]. +\stopcontribution -\startdescription[title={\in{Chapter}[chap:minimal-separating-sequences]: Minimal separating sequences for all pairs of states.}] +\startcontribution[title={\in{Chapter}[chap:minimal-separating-sequences]: Minimal separating sequences for all pairs of states.}] Continuing on test generation methods, this chapter presents an efficient algorithm to construct separating sequences. Not only is the algorithm efficient -- it runs in $\bigO(n \log n)$ time -- it also constructs minimal-length sequences. The algorithm is inspired by a minimisation algorithm by \citet[Hopcroft71], but extending it to construct witnesses is non-trivial. +The main contributions are: +\startitemize[before, after] +\item Efficient algorithm for separating sequences: \in{Algorithms}[alg:minimaltree] \& \in{}[alg:increase-k-v3] (p.
\at[alg:minimaltree] \& \at[alg:increase-k-v3]) +\item Applications to black-box testing: \in{Section}[sec:appl-conformance-testing] (p. \at[sec:appl-conformance-testing]) +\item Implementation: \in{Section}[sec:implementation-partition] (p. \at[sec:implementation-partition]) +\stopitemize This is based on the following publication: \cite[entry][DBLP:conf/lata/SmetsersMJ16]. -\stopdescription +\stopcontribution -\startdescription[title={\in{Chapter}[chap:learning-nominal-automata]: Learning Nominal Automata.}] +\startcontribution[title={\in{Chapter}[chap:learning-nominal-automata]: Learning nominal automata.}] In this chapter, we show how to learn automata over infinite alphabets. We do this by translating the \LStar{} algorithm directly to a nominal version, \nLStar{}. The correctness proofs mimic the original proofs by \citet[Angluin87]. Since our new algorithm is close to the original, we are able to translate variants of the \LStar{} algorithm as well. In particular, we provide a learning algorithm for nominal non-deterministic automata. +The main contributions are: +\startitemize[before, after] +\item \LStar{}-algorithm for nominal automata: \in{Section}[sec:nangluin] (p. \at[sec:nangluin]) +\item Its correctness and complexity: \in{Theorem}[thm:termination] \& \in{Corollary}[cor:complexity-nom-lstar] (p. \at[thm:termination] \& \at[cor:complexity-nom-lstar]) +\item Generalisation to non-deterministic automata: \in{Section}[sec:nominal-nlstar] (p. \at[sec:nominal-nlstar]) +\item Implementation in \NLambda{}: \in{Section}[sec:implementation-nominal-lstar] (p. \at[sec:implementation-nominal-lstar]) +\stopitemize This is based on the following publication: \cite[entry][MoermanS0KS17].
-\stopdescription +\stopcontribution -\startdescription[title={\in{Chapter}[chap:ordered-nominal-sets]: Fast computations on ordered nominal sets.}] +\startcontribution[title={\in{Chapter}[chap:ordered-nominal-sets]: Fast computations on ordered nominal sets.}] In this chapter, we provide a library to compute with nominal sets. We restrict our attention to nominal sets over the total order symmetry. This symmetry allows for a rather easy characterisation of orbits, and hence an easy implementation. We experimentally show that it is competitive with existing tools, which are based on SMT solvers. +The main contributions are: +\startitemize[before, after] +\item Characterisation theorem of orbits: \in{Table}[tab:overview] (p. \at[tab:overview]) +\item Complexity results: \in{Theorems}[thmcomplexity] \& \in{}[thm:moore] (p. \at[thmcomplexity] and \at[thm:moore]) +\item Implementation: \in{Section}[sec:implementation-ons] (p. \at[sec:implementation-ons]) +\stopitemize This is based on the following publication: \cite[entry][VenhoekMR18]. -\stopdescription +\stopcontribution -\startdescription[title={\in{Chapter}[chap:separated-nominal-automata]: Separated Nominal Automata.}] +\startcontribution[title={\in{Chapter}[chap:separated-nominal-automata]: Separated nominal automata.}] We investigate how to reduce the size of certain nominal automata. This is based on the observation that some languages (with outputs) are not just invariant under symmetries, but invariant under arbitrary \emph{transformations}, or \emph{renamings}. We define a new type of automaton, the \emph{separated nominal automaton}, and show that they exactly accept those languages which are closed under renamings. All of this is shown by using a theoretical framework: we establish a strong relationship between nominal sets on one hand, and nominal renaming sets on the other. 
+The main contributions are: +\startitemize[before, after] +\item Adjunction between nominal sets and renaming sets: \in{Theorem}[thm:adjunction] (p. \at[thm:adjunction]) +\item This adjunction is monoidal: \in{Theorem}[thm:monoidal] (p. \at[thm:monoidal]) +\item Separated automata have reduced state space: \in{Example}[ex:sep-aut-fifo] (p. \at[ex:sep-aut-fifo]) +\stopitemize This is based on a paper under submission: \cite[entry][MoermanR19]. -\stopdescription +\stopcontribution Besides these chapters in this thesis, the author has published the following papers. These are not included in this thesis, but a short summary of those papers is presented below. -\startdescription[title={Complementing Model Learning with Mutation-Based Fuzzing.}] +\startcontribution[title={Complementing Model Learning with Mutation-Based Fuzzing.}] Our group at the Radboud University participated in the RERS challenge 2016. This is a challenge where reactive software is provided and researchers have to assess the validity of certain properties (given as LTL specifications). We approached this with model learning: @@ -543,9 +568,9 @@ Our results were presented at the RERS workshop (ISOLA 2016). The report can be found on ar$\Chi$iv: \cite[entry][SmetsersMJV16]. -\stopdescription +\stopcontribution -\startdescription[title={$n$-Complete test suites for IOCO.}] +\startcontribution[title={$n$-Complete test suites for IOCO.}] In this paper, we investigate complete test suites for labelled transition systems (LTSs), instead of deterministic Mealy machines. This is a much harder problem than conformance testing of deterministic systems. The system may adversarially avoid certain states the tester wishes to test. @@ -558,9 +583,9 @@ The conference paper was presented at ICTSS: An extended version will appear soon: \cite[entry][BosJM18].
-\stopdescription +\stopcontribution -\startdescription[title={Learning Product Automata.}] +\startcontribution[title={Learning Product Automata.}] In this article we consider Moore machines with multiple outputs. These machines can be decomposed by projecting on each output, resulting in smaller components that can be learned with fewer queries. @@ -569,7 +594,7 @@ This is all motivated by the idea that compositional methods are widely used thr This work was presented at ICGI 2018: \cite[entry][Moerman18]. -\stopdescription +\stopcontribution \stopsection diff --git a/content/learning-nominal-automata.tex b/content/learning-nominal-automata.tex index ba61511..53490f6 100644 --- a/content/learning-nominal-automata.tex +++ b/content/learning-nominal-automata.tex @@ -1072,7 +1072,7 @@ Let $p$ and $l$ denote respectively the dimension and the number of orbits of $A Let $C_{n,k,m} = (n + m E_{n,k}) (l f_\EAlph(p(n+m), p) + 1) n (k + k \log k + 1)$ be the maximal number of cells in the table. We note that this number is polynomial in $k, l, m$ and $n$ but it is not polynomial in $p$. -\startcorollary +\startcorollary[reference=cor:complexity-nom-lstar] The number of membership queries is bounded by \startformula C_{n,k,m} f_\EAlph(p (n + m), p n (k + k \log k + 1)). @@ -1226,7 +1226,8 @@ It is a well-defined nominal NFA accepting $\lang$. \stopsubsection \startsubsection - [title={\nNLStar{}}] + [title={\nNLStar{}}, + reference=sec:nominal-nlstar] Our nominal version of \NLStar{} again makes use of an observation table $(S, E)$ where $S$ and $E$ are equivariant subsets of $A^{\ast}$ and $row$ is an equivariant function. As in the basic algorithm, we equip $(S, E)$ with a union operation $\rowunion$ and row containment relation $\rowincl$, defined as in \in{Definition}[def:rows-jsl]. @@ -1545,7 +1546,8 @@ The library itself can be downloaded from \citeurl[nlambda-code]. 
\stopsubsection \startsubsection - [title={Implementation of \nLStar\ and \nNLStar}] + [title={Implementation of \nLStar\ and \nNLStar}, + reference=sec:implementation-nominal-lstar] Using NLambda we implemented the algorithms from \in{Sections}[sec:nangluin] and \in{}[sec:nondet]. We note that the internal representation is slightly different than the one discussed in \in{Section}[sec:nangluin]. @@ -1565,9 +1567,6 @@ Having an approximate teacher is a minor issue since in many applications no com For the experiments listed here the bound was chosen large enough for the learner to terminate with the correct automaton. The code can be found at \citeurl[nominal-lstar-code]. -%We remark that our algorithms seamlessly merge with teachers written in NLambda, but the current version of the library does not allow generating concrete membership queries for external teachers. -%We are currently working on a new version of the library in which this will be possible. -%\todo{Dit kan nu al} \stopsubsection diff --git a/content/minimal-separating-sequences.tex b/content/minimal-separating-sequences.tex index a0f3ece..2c15d5a 100644 --- a/content/minimal-separating-sequences.tex +++ b/content/minimal-separating-sequences.tex @@ -588,7 +588,8 @@ The operation on \inline[line:prepend] is done in constant time by using a linke \stopsection \startsection - [title=Application in Conformance Testing] + [title=Application in Conformance Testing, + reference=sec:appl-conformance-testing] A splitting tree can be used to extract relevant information for two classical test generation methods: a \emph{characterisation set} for the W-method and a \emph{separating family} for the HSI-method. @@ -651,7 +652,8 @@ This can be done in $\bigO(n^2)$ time using a trie data structure. 
\stopsection \startsection - [title={Experimental Results}] + [title={Experimental Results}, + reference=sec:implementation-partition] We have implemented \in{Algorithms}[alg:increase-k, alg:increase-k-v3] in Go, and we have compared their running time on two sets of FSMs. \footnote{Available at \citeurl[minimal-separating-sequences-code].} diff --git a/content/ordered-nominal-sets.tex b/content/ordered-nominal-sets.tex index eff9768..5afdd41 100644 --- a/content/ordered-nominal-sets.tex +++ b/content/ordered-nominal-sets.tex @@ -80,7 +80,7 @@ In automata learning, the logical structure is not available a-priori, and \ONS{ The structure of this chapter is as follows. \in{Section}[sec:nomsets] contains background on nominal sets and their representation. \in{Section}[sec:total-order] describes the concrete representation of nominal sets, equivariant maps and products in the total order symmetry. -\in{Section}[sec:impl] describes the implementation \ONS{} with complexity results, and \in{Section}[sec:appl] the evaluation of \ONS{} on algorithms for nominal automata. +\in{Section}[sec:implementation-ons] describes the implementation \ONS{} with complexity results, and \in{Section}[sec:appl] the evaluation of \ONS{} on algorithms for nominal automata. Related work is discussed in \in{Section}[sec:related], and future work in \in{Section}[sec:fw]. @@ -414,7 +414,7 @@ As a base case we can represent single orbits by their dimension. \stopsection \startsection [title={Implementation and Complexity of ONS}, - reference=sec:impl] + reference=sec:implementation-ons] The ideas outlined above have been implemented in the \cpp{} library \ONS{} \footnote{\ONS{} can be found at \citeurl[ons-code].} @@ -641,7 +641,7 @@ This guarantees the existence of unique minimal automata. We say an automaton is \emph{minimal} if its set of states has the least number of orbits and each orbit has the smallest dimension possible. 
\footnote{Abstractly, an automaton is minimal if it has no proper quotients. Minimal deterministic automata are unique up to isomorphism.} -We generalise Moore's minimisation algorithm to nominal DFAs (\in{Algorithm}[alg:moore]) and analyse its time complexity using the bounds from \in{Section}[sec:impl]. +We generalise Moore's minimisation algorithm to nominal DFAs (\in{Algorithm}[alg:moore]) and analyse its time complexity using the bounds from \in{Section}[sec:implementation-ons]. \startplacealgorithm [title={Moore's minimisation algorithm for nominal DFAs.}, @@ -856,7 +856,7 @@ As stated in the introduction, \NLambda{} by \citet[KlinS16] and \LOIS{} by \cit This makes both libraries very flexible and they indeed implement the equality symmetry as well as the total order symmetry. As their representation is not unique, the efficiency depends on how the logical formulas are constructed. As such, they do not provide complexity results. -In contrast, our direct representation allows for complexity results (\in{Section}[sec:impl]) and leads to different performance characteristics (\in{Section}[sec:appl]). +In contrast, our direct representation allows for complexity results (\in{Section}[sec:implementation-ons]) and leads to different performance characteristics (\in{Section}[sec:appl]). A second big difference is that both \NLambda{} and \LOIS{} implement a \quotation{programming paradigm} instead of just a library. This means that they overload natural programming constructs in their host languages (\haskell{} and \cpp{} respectively). 
diff --git a/environment.tex b/environment.tex index 9df4e3a..0dee3f8 100644 --- a/environment.tex +++ b/environment.tex @@ -36,6 +36,8 @@ This chapter is based on the following publication: \definedescription[description] [headstyle=bold, alternative=serried, width=fit, indenting=yes] +\definedescription[contribution] + [headstyle=bold, alternative=serried, width=fit, indenting=no, after={\blank[2*big]}] \definedescription[step] [headstyle=bold, alternative=serried, width=fit, indenting=yes, text={Step }]