|
|
|
|
|
|
[title={Introduction},
reference=chap:introduction]
|
|
|
|
|
|
|
|
|
|
|
|
When I was younger, I often learned how to play with new toys by messing around with them, i.e., by pressing buttons at random, observing their behaviour, pressing more buttons, and so on.
I resorted to the manual -- or asked \quotation{experts} -- only to confirm my beliefs on how the toy worked.
Now that I am older, I do mostly the same with new devices, new tools, and new software.
However, now I know that this is an established computer science technique, called \emph{model learning}.
|
|
|
|
|
|
|
|
|
|
|
|
Model learning
\footnote{The generic name \quotation{model learning} is chosen as a counterpoint to model checking.}
is an automated technique to construct a model -- often a type of \emph{automaton} -- from a black box system.
|
|
|
|
The goals of this technique are manifold:
it can be used to reverse-engineer a system, to find bugs in it, to verify properties of the system, or to understand the system in one way or another.
It is \emph{not} just random testing: the information learned during the interaction with the system is actively used to guide subsequent interactions.
Additionally, the information learned can be inspected and analysed.
|
|
|
|
|
|
|
|
|
|
|
For this reason, it makes sense to look at \emph{words}, or \emph{traces}.
For an alphabet $\Sigma$, we denote the set of words by $\Sigma^{*}$.
|
|
|
|
|
|
|
|
|
|
|
|
The learning problem is defined as follows.
There is some fixed, but unknown, language $\lang \subseteq \Sigma^{*}$.
This language may define the behaviour of a software component, a property in model checking, a set of traces from a protocol, etc.
We wish to infer a description of $\lang$ by only having observed a small part of this language.
Note that this is not an easy task, as $\lang$ is often infinite.
For example, we may have seen a hundred words belonging to the language and a few which do not.
Then concluding with a good description of $\lang$ is difficult, as we are missing information about infinitely many words.
|
|
|
|
|
|
|
|
|
|
|
|
Such a learning problem can be stated and solved in a variety of ways.
In the applications considered in our research group, we often try to infer a model of a software component.
|
|
|
In her framework, the learning algorithm may pose two types of queries to a \emph{teacher}:
|
|
|
|
\description{Membership queries (MQs)}
The learner poses such a query by providing a word $w \in \Sigma^{*}$ to the teacher.
The teacher will then reply whether $w \in \lang$ or not.
This type of query is often generalised to more general output, so the teacher replies with $\lang(w)$.
In some papers, such a query is then called an \emph{output query}.
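To make the interaction concrete, here is a small Python sketch (hypothetical code, not from this thesis) of a teacher answering membership queries for a fixed target language, namely the words over $\{a, b\}$ with an even number of $a$'s:

```python
# A toy teacher for the fixed target language
# L = {words over {a, b} with an even number of a's}.
# All names here are illustrative, not from the thesis.
TARGET_DFA = {
    # state -> {input symbol: next state}; state 0 is initial
    0: {"a": 1, "b": 0},
    1: {"a": 0, "b": 1},
}
ACCEPTING = {0}

def membership_query(word):
    """Answer a membership query: is `word` in the target language L?"""
    state = 0
    for symbol in word:
        state = TARGET_DFA[state][symbol]
    return state in ACCEPTING
```

A learner would call `membership_query` repeatedly, using the answers to refine its hypothesis.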
|
|
|
|
|
|
|
|
|
|
|
|
\description{Equivalence queries (EQs)}
|
|
|
Here, the algorithm can again use MQs as before, but the EQs are replaced by random sampling.
|
|
|
|
So the allowed query is:
|
|
|
|
|
|
|
|
|
|
|
|
\description{Random sample queries (EXs)}
If the learner poses this query (there are no parameters), the teacher responds with a random word $w$ together with its label (i.e., whether $w \in \lang$ or not).
|
|
|
|
|
|
|
|
|
|
|
|
Instead of requiring that the learner exactly learns the model, we only require the following.
The learner should \emph{probably} return a model which is a good \emph{approximation} of the target.
|
|
|
Note that there are two uncertainties: the probable and the approximate part.
Both parts are bounded by parameters, so one can control the confidence.
|
|
|
|
|
|
|
|
|
|
|
|
As with many problems in computer science, we are also interested in the \emph{efficiency} of learning algorithms.
Instead of measuring time or space, we analyse the number of queries posed by an algorithm.
Efficiency often means that we require a polynomial number of queries.
But polynomial in what?
The learner has no input, other than access to a teacher.
|
|
|
The transformation from exact learning to PAC learning is implemented by simply replacing each EQ with a number of random sample queries.
This can be postponed until we actually implement a learning algorithm and apply it.
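As a sketch of this transformation (hypothetical Python with illustrative names such as `draw_word`, not the thesis's implementation), an equivalence query can be approximated by a batch of random sample queries:

```python
import random

def approximate_equivalence_query(hypothesis, label, draw_word, num_samples):
    """Approximate an EQ by a batch of random sample queries (EXs).

    `hypothesis` and `label` map a word to its (claimed resp. true) label;
    `draw_word` draws a random word. Returns a counterexample, or None."""
    for _ in range(num_samples):
        w = draw_word()                  # EX: a random word ...
        if hypothesis(w) != label(w):    # ... compared with its true label
            return w                     # counterexample found
    return None  # hypothesis is probably approximately correct

# Hypothetical word distribution: words over {a, b} of length at most 8.
def draw_word():
    n = random.randint(0, 8)
    return "".join(random.choice("ab") for _ in range(n))
```

The number of samples per EQ is what the PAC parameters bound: more samples mean higher confidence and better approximation.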
|
|
|
|
|
|
|
|
|
|
|
|
When using only EQs, only MQs, or only EXs, there are hardness results for learning DFAs.
So the combinations MQs + EQs and MQs + EXs have been carefully picked:
they provide a minimal basis for efficient learning.
See the book by \citet[KearnsV94] for such hardness results and more information on PAC learning.
|
|
|
|
|
|
|
|
|
|
|
|
So far, all the queries are assumed to be \emph{just there}.
|
|
|
Nevertheless, at some point in time one has to implement them.
|
|
|
|
In our case of learning software behaviour, membership queries are easily implemented:
simply provide the word $w$ to a running instance of the software and observe the output.
\footnote{In reality, it is a bit harder than this.
There are plenty of challenges to solve, such as timing, choosing your alphabet, choosing the kind of observations to make, and being able to reliably reset the software.}
|
|
|
|
Equivalence queries, however, are in general not doable.
Even if we have the (machine) code, it is often way too complicated to check equivalence.
That is why we resort to testing with EX queries.
|
|
|
Yet, as applications show, this is a useful way of learning software behaviour.
|
|
|
|
\startsection
[title={Research challenges}]
|
|
|
|
|
|
|
|
|
|
|
|
% Model learning is far from a
|
|
|
|
In this thesis, we will mostly see learning of DFAs or Mealy machines.
Although this is restrictive, as many pieces of software require richer models, it has been successfully applied in many different areas.
The restrictions include the following.
|
|
|
|
|
|
|
\startitemize
\item The system behaves deterministically.
\item One can reliably reset the system.
\item The system can be modelled with a finite state space.
This also means that the model does not incorporate time or data.
\item The set of input actions is finite.
\item One knows when the target is reached.
\stopitemize
|
|
|
|
|
|
|
|
|
|
|
|
\description{Research challenge 1: Confidence in the hypothesis.}
\todo{Approximating equivalence queries.}
|
|
|
|
Having confidence in a learned model is difficult.
We have PAC guarantees (as discussed before), but sometimes we may want to draw other conclusions.
For example, we may require the hypothesis to be correct, provided that the real system is implemented with a certain number of states.
Efficiency is important here:
we want to obtain those guarantees fast and we want to quickly find counterexamples when the hypothesis is wrong.
Test generation methods are the topic of the first part of this thesis.
We will review existing algorithms and discuss new algorithms for test generation.
|
|
|
|
|
|
|
|
|
|
|
|
\startdescription[title={Research challenge 2: Generalisation to infinite alphabets.}]
Automata over infinite alphabets are very useful for modelling protocols which involve identifiers or timestamps.
Not only is the alphabet infinite in these cases, the state-space is as well, since the values have to be remembered.
In the second part of this thesis, we will see how nominal techniques can be used to tackle this challenge.
|
|
|
|
|
|
|
|
|
|
|
|
Being able to learn automata over an infinite alphabet is not new.
|
|
|
These abstractions are automatically refined when a counterexample is presented.
|
|
|
|
In the thesis by \citet[Cassel15], another approach is taken.
The queries are changed to tree queries.
The approach in my thesis will be based on symmetries, which gives yet another insight into the problem of learning such automata.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\todo{Also cite people like Bernhard.}
|
|
|
|
\stopdescription
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This gave developers opportunities to solve problems before replacing the old code.
|
|
|
|
|
|
|
|
|
|
|
|
An important step in automata learning is equivalence checking.
Normally, this is abstracted away and done by an oracle, but we intend to implement such an oracle ourselves for our applications.
Concretely, the problem we need to solve is that of \emph{conformance checking}\footnote{Also known as \emph{machine verification} or \emph{fault detection}.} and this was first described by \citet[Moore56].
|
|
|
|
|
|
|
|
|
|
|
|
The problem is as follows:
given the description of a finite state machine and a black box system, does the system behave exactly as described?
We wish to determine this by running experiments on the system (as it is a black box).
It should be clear that this is a hopelessly difficult task, as an error can be hidden arbitrarily deep in the system.
That is why we often assume some knowledge of the system.
|
|
|
These techniques will be discussed in detail in \in{Chapter}[chap:test-methods].
|
|
|
|
More background and other related problems, as well as their complexity results, are well covered in the survey by \citet[LeeY94].
|
|
|
|
|
|
|
|
|
|
|
|
In the following example we model a record player as a finite state machine.
We will not model the audible output -- that would depend not only on the device, but also on the record one chooses to play.\footnote{In particular, we would have to add time to the model, as one side of a record only lasts for roughly 25 minutes -- unless we take a record with sound on the locked groove, such as the \emph{Sgt. Pepper's Lonely Hearts Club Band} album by \emph{The Beatles}.}
Instead, the only observation we can make is how fast the turntable spins.
The device has two buttons: a start-stop button (\playstop) and a speed button (\spin) which toggles between $33 \frac{1}{3}$ rpm and $45$ rpm.
When turned on, the system starts playing immediately at $33 \frac{1}{3}$ rpm -- this is useful for DJing.
|
|
|
The intended behaviour of the record player has four states, as depicted in the figure below.
|
|
|
|
\stopplacefigure
|
|
|
|
|
|
|
|
|
|
|
|
Let us consider some faults which could be present in an implementation with four states.
In \in{Figure}[fig:faulty-record-players], two flawed record players are given.
In the first (\in{Figure}{a}[fig:faulty-record-players]), the sequence \playstop\,\spin\,\spin\, leads us to the wrong state.
However, this is not immediately observable: the turntable is in a non-spinning state, as it should be.
The fault is only visible when we press \playstop\, once more: now the turntable is spinning fast instead of slow.
The sequence \playstop\,\spin\,\spin\,\playstop\, is a \emph{counterexample}.
In the second example (\in{Figure}{b}[fig:faulty-record-players]), the fault is again not immediately obvious: after pressing \spin\,\playstop\, we are in the wrong state, as observable by pressing \playstop.
Here, the counterexample is \spin\,\playstop\,\playstop.
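The intended record player can be sketched in a few lines of Python (an illustrative model, not part of the thesis); running a test sequence then simply means pressing the buttons and observing the turntable:

```python
# Illustrative model of the intended record player.
# A state is (playing, speed); the observation after each button press
# is the turntable's behaviour: "stopped", "33" (slow) or "45" (fast).
def step(state, button):
    playing, speed = state
    if button == "playstop":
        playing = not playing                    # toggle play/stop
    else:  # "spin" toggles the speed, even while stopped
        speed = "45" if speed == "33" else "33"
    return (playing, speed)

def observe(state):
    playing, speed = state
    return speed if playing else "stopped"

def run(word, state=(True, "33")):  # initially playing at 33 1/3 rpm
    """Return the sequence of observations for a word of button presses."""
    out = []
    for button in word:
        state = step(state, button)
        out.append(observe(state))
    return out
```

On the counterexample above, the intended player ends up spinning slowly; a faulty implementation such as the first one in the figure would produce a different final observation on the same word.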
|
|
|
|
|
|
|
|
|
|
|
In this example, a test suite should at least include the sequences \playstop\,\spin\,\spin\,\playstop\, and \spin\,\playstop\,\playstop.
|
|
|
|
\stopplacefigure
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\todo{I want to move some related work from the second chapter to here.}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\stopsection
\startsection
[title={Nominal Techniques}]
|
|
|
We will only informally discuss its semantics for now.
|
|
|
|
To model the behaviour nicely, we want the domain of passwords to be infinite.
After all, one should allow arbitrarily long passwords to be secure.
This means that a register automaton is actually an automaton over an infinite alphabet.
|
|
|
|
|
|
|
|
\todo{Remark by Frits.}
|
|
|
|
|
|
|
|
|
|
|
|
Common algorithms for automata, such as learning, will not work with an infinite alphabet.
Any loop which iterates over the alphabet will diverge.
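As an illustration (hypothetical Python, not from the thesis), the password example can be sketched as a register automaton: the input alphabet -- all pairs of an action and a password -- is infinite, yet the automaton only ever stores values and compares them for equality.

```python
# Illustrative register automaton for a login protocol over the infinite
# alphabet of all possible passwords. The names are hypothetical.
def make_login_automaton():
    state = {"phase": "init", "register": None}

    def step(action, value):
        if state["phase"] == "init" and action == "register":
            state["register"] = value       # store the password in a register
            state["phase"] = "registered"
            return "ok"
        if state["phase"] == "registered" and action == "login":
            # Equality test against the register: the one stored value is
            # accepted, each of the infinitely many others is rejected.
            return "granted" if value == state["register"] else "denied"
        return "error"

    return step
```

The symmetry exploited by nominal techniques is visible here: the behaviour does not depend on which password is chosen, only on whether later inputs equal the stored one.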
|
|
|
The following chapters are split into two parts.
|
|
|
|
\in{Part}[part:testing] contains material about testing techniques, while \in{Part}[part:nominal] is about nominal techniques.
Each chapter can be read in isolation.
However, the chapters do get more technical and mathematical -- especially in \in{Part}[part:nominal].
|
|
|
|
\todo{Should I be more concrete about my contribution?}
|
|
|
|
|
|
|
|
|
|
|
|
Detailed discussions of the contributions, relations to other research, and future directions of research are presented in each chapter.
|
|
|
|
|
|
|
|
|
|
|
|
\startdescription[title={\in{Chapter}[chap:test-methods]: FSM-based test methods.}]
This chapter introduces test generation methods which can be used for learning.
|
|
|
One particular contribution (besides the theoretical framework) is a new test generation method.
|
|
|
|
\startdescription[title={\in{Chapter}[chap:applying-automata-learning]: Applying automata learning to embedded control software.}]
In this chapter we will apply model learning to an industrial case study.
It is a unique benchmark as it is much bigger than any of the applications seen before (3410 states and 77 inputs).
This makes it challenging to learn a model, and the main obstacle is finding counterexamples.
Here, my main contribution is the new test generation method from \in{Chapter}[chap:test-methods].
|
|
|
|
|
|
|
|
|
|
|
|
This is based on the following publication:
|
|
|
|
\startdescription[title={\in{Chapter}[chap:minimal-separating-sequences]: Minimal separating sequences for all pairs of states.}]
Continuing on test generation methods, this chapter presents an efficient algorithm to construct separating sequences.
Not only is the algorithm efficient -- it runs in $\bigO(n \log n)$ time -- it also constructs minimal length sequences.
The algorithm is inspired by a minimisation algorithm by \citet[Hopcroft71], but extending it to construct witnesses is non-trivial.
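To illustrate what a separating sequence is, a minimal-length witness for a pair of states of a Mealy machine can be found by breadth-first search over state pairs; the sketch below (a naive quadratic search, not the chapter's algorithm) is the baseline such an algorithm improves on:

```python
from collections import deque

def separating_sequence(trans, out, s, t, alphabet):
    """Find a minimal-length input word on which states s and t of a Mealy
    machine produce different outputs, or None if they are equivalent.
    `trans` and `out` map state -> {input: next state / output}."""
    queue = deque([(s, t, [])])
    seen = {(s, t)}
    while queue:
        p, q, word = queue.popleft()
        for a in alphabet:
            if out[p][a] != out[q][a]:
                return word + [a]              # outputs differ: witness found
            pair = (trans[p][a], trans[q][a])
            if pair not in seen:
                seen.add(pair)
                queue.append((pair[0], pair[1], word + [a]))
    return None                                # the states are equivalent
```

Running this for all pairs of states costs far more than $\bigO(n \log n)$; doing better requires the partition-refinement machinery discussed in the chapter.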
|
|
|
|
|
|
|
|
|
|
|
|
This is based on the following publication:
|
|
|
|
|
|
|
|
|
|
|
This is based on a paper under submission:
|
|
|
|
\cite[entry][MoermanR19].
\stopdescription
|
|
|
|
|
|
|
|
|
|
|
|
Besides the chapters in this thesis, the author has published the following papers.
These are not included in this thesis, but a short summary of each is presented below.
|
|
|
|
|
|
|
|
|
|
|
|
\startdescription[title={Complementing Model Learning with Mutation-Based Fuzzing.}]
Our group at the Radboud University participated in the RERS challenge 2016.
This is a challenge where reactive software is provided and researchers have to assess the validity of certain properties (given as LTL specifications).
We approached this with model learning:
instead of analysing the source code, we simply learned the external behaviour, and then used model checking on the learned model.
|
|
|
In this paper, we investigate complete test suites for labelled transition systems.
|
|
|
|
This is a much harder problem than conformance testing of deterministic systems.
The system may adversarially avoid certain states the tester wishes to test.
We provide a test suite which is $n$-complete (provided the implementation is a suspension automaton).
My main personal contribution here is the proof of completeness, which closely resembles the proof presented in \in{Chapter}[chap:test-methods].
The conference paper was presented at ICTSS:
|
|
|
|
|
|
|
|
|
|
|
|
\cite[entry][BosJM17].
|
|
|
These machines can be decomposed by projecting on each output,
|
|
|
|
resulting in smaller components that can be learned with fewer queries.
We give experimental evidence that this is a useful technique which can reduce the number of queries substantially.
This is all motivated by the idea that compositional methods are widely used throughout engineering and that we should use them in model learning.
This work was presented at ICGI:
|
|
|
|
|
|
|
|
|
|
|
|
\cite[entry][Moerman18].
\stopdescription
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\stopsection
|
|
|
|
|
|
|
|
\startsection
|
|
|
|
|
|
|
|
[title=Conclusion and Outlook]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\todo{Two roads. Do they come together again? The usefulness of nominal techniques.}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\stopsection
\referencesifcomponent
\stopchapter
|
|
|
|