US -> UK English

This commit is contained in:
parent a8f38d4e03
commit a9bbb8e5d2

5 changed files with 58 additions and 58 deletions
@@ -40,7 +40,7 @@ The core of the teacher is a \emph{System Under Test (SUT),} a reactive system t
 The learner interacts with the SUT to infer a model by sending inputs and observing the resulting outputs (\quotation{membership queries}).
 In order to find out whether an inferred model is correct, the learner may pose an \quotation{equivalence query}.
 The teacher uses a model-based testing (MBT) tool to try and answer such queries:
-Given a hypothesized model, an MBT tool generates a long test sequence using some conformance testing method.
+Given a hypothesised model, an MBT tool generates a long test sequence using some conformance testing method.
 If the SUT passes this test, then the teacher informs the learner that the model is deemed correct.
 If the outputs of the SUT and the model differ, this constitutes a counterexample, which is returned to the learner.
 Based on such a counterexample, the learner may then construct an improved hypothesis.
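The query loop described in the hunk above (membership queries, plus equivalence queries approximated by model-based testing) can be sketched as follows. This is a minimal illustration, not the tooling used in the case study; `sut` and `model` are hypothetical callables mapping an input word to an output.

```python
import random

def equivalence_query(sut, model, alphabet, num_tests=10_000, max_len=20, seed=0):
    """Approximate an equivalence query by random testing, as the
    MBT-based teacher does: search for a word on which the SUT and the
    hypothesised model produce different outputs."""
    rng = random.Random(seed)
    for _ in range(num_tests):
        word = tuple(rng.choice(alphabet) for _ in range(rng.randint(1, max_len)))
        if sut(word) != model(word):
            return word  # outputs differ: a counterexample for the learner
    return None          # all tests passed: the model is deemed correct
```

Note that a `None` answer only means no counterexample was found, not that the model is truly equivalent; this limitation is exactly what the surrounding text addresses with conformance testing methods.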
@@ -158,8 +158,8 @@ The ACM requests the engine to switch between running and standby status.
 The other clients request transitions between the other statuses, such as idle, sleep, standby and low power.
 All the other clients have the same lowest priority.
 %
-The Top Capsule instantiates the ESM and communicates with it during the initialization of the ESM.
-The Information Manager provides some parameters during the initialization.
+The Top Capsule instantiates the ESM and communicates with it during the initialisation of the ESM.
+The Information Manager provides some parameters during the initialisation.
 
 There are more managers connected to the ESM but they are of less importance and are thus not mentioned here.
 
@@ -270,7 +270,7 @@ Based on domain knowledge and discussions with the Oc\'{e} engineers, we could g
 When learning the ESM using one function, 83 concrete inputs are grouped into four abstract inputs.
 When using two functions, 126 concrete inputs can be grouped.
 When an abstract input needs to be sent to the ESM, one concrete input of the represented group is randomly selected, as in the approach of \citet[DBLP:journals/fmsd/AartsJUV15].
-This is a valid abstraction because all the inputs in the group have exactly the same behavior in any state of the ESM.
+This is a valid abstraction because all the inputs in the group have exactly the same behaviour in any state of the ESM.
 This has been verified by doing code inspection.
 No other abstractions were found during the research.
 After the inputs are grouped a total of 77 inputs remain when learning the ESM using 1 function, and 105 inputs remain when using 2 functions.
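The input abstraction in the hunk above amounts to a simple mapping; the group names and members below are invented for illustration (the real ESM alphabet groups 83 concrete inputs into four abstract ones):

```python
import random

# Hypothetical grouping; in the ESM case study each group was verified
# (by code inspection) to behave identically in every state.
GROUPS = {
    "request_run":     ["run_cold", "run_warm", "run_hot"],
    "request_standby": ["standby_soft", "standby_hard"],
}

def concretise(abstract_input, rng=random):
    """When an abstract input must be sent to the SUT, pick a random
    concrete representative of its group."""
    return rng.choice(GROUPS[abstract_input])
```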
@@ -316,8 +316,8 @@ An average test query takes around 1 ms, so it would take about 7 years to execu
 In order to reduce the number of tests, \citet[DBLP:journals/tse/Chow78] and
 \citet[Vasilevskii73] pioneered the so called W-method.
 In their framework a test query consists of a prefix $p$ bringing the SUT to a specific state, a (random) middle part $m$ and a suffix $s$ assuring that the SUT is in the appropriate state.
-This results in a test suite of the form $P I^{\leq k} W$, where $P$ is a set of (shortest) access sequences, $I^{\leq k}$ the set of all sequences of length at most $k$, and $W$ is a characterization set.
-Classically, this characterization set is constructed by taking the set of all (pairwise) separating sequences.
+This results in a test suite of the form $P I^{\leq k} W$, where $P$ is a set of (shortest) access sequences, $I^{\leq k}$ the set of all sequences of length at most $k$, and $W$ is a characterisation set.
+Classically, this characterisation set is constructed by taking the set of all (pairwise) separating sequences.
 For $k=1$ this test suite is complete in the sense that if the SUT passes all tests, then either the SUT is equivalent to the specification or the SUT has strictly more states than the specification.
 By increasing $k$ we can check additional states.
 
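The test suite $P \, I^{\leq k} \, W$ from the hunk above can be generated directly. A small sketch, for illustration only, with words represented as tuples:

```python
from itertools import product

def w_method_suite(access_seqs, inputs, charset, k):
    """Build the W-method test suite P . I^{<=k} . W: an access sequence,
    a middle part of length at most k, and a characterisation suffix."""
    middles = [()]
    for length in range(1, k + 1):
        middles.extend(product(inputs, repeat=length))
    return {tuple(p) + tuple(m) + tuple(w)
            for p in access_seqs for m in middles for w in charset}
```

The suite grows on the order of $|P| \cdot |I|^{k} \cdot |W|$, which is why the surrounding text reports that a plain W-method suite was infeasible for the ESM.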
@@ -327,7 +327,7 @@ The generated test suite, however, was still too big in our learning context.
 This allows us to only take a subset of $W$ which is relevant for a specific state.
 This slightly reduces the test suite without losing the power of the full test suite.
 This method is known as the Wp-method.
-More importantly, this observation allows for generalizations where we can carefully pick the suffixes.
+More importantly, this observation allows for generalisations where we can carefully pick the suffixes.
 
 In the presence of an (adaptive) distinguishing sequence one can take $W$ to be a single suffix, greatly reducing the test suite.
 \citet[DBLP:journals/tc/LeeY94] describe an algorithm (which we will refer to as the LY algorithm) to efficiently construct this sequence, if it exists.
@@ -362,11 +362,11 @@ If this does not provide a counterexample, we will randomly pick test queries fr
 
 Using the above method the algorithm still failed to learn the ESM.
 By looking at the RRRT-based model we were able to see why the algorithm failed to learn.
-In the initialization phase, the controller gives exceptional behavior when providing a certain input eight times consecutively.
+In the initialisation phase, the controller gives exceptional behaviour when providing a certain input eight times consecutively.
 Of course such a sequence is hard to find in the above testing method.
 With this knowledge we could construct a single counterexample by hand by which means the algorithm was able to learn the ESM.
 
-In order to automate this process, we defined a sub-alphabet of actions that are important during the initialization phase of the controller.
+In order to automate this process, we defined a sub-alphabet of actions that are important during the initialisation phase of the controller.
 This sub-alphabet will be used a bit more often than the full alphabet. We do this as follows.
 %
 We start testing with the alphabet which provided a counterexample for the previous hypothesis (for the first hypothesis we take the subalphabet).
@@ -549,7 +549,7 @@ This makes the HEFSM input enabled without having to specify all the transitions
 In RRRT it is possible to write the guard and assignment in \cpp{} code.
 It is thus possible that the value of a variable changes while an input signal is processed.
 In the HEFSM however all the assignments only take effect after the input signal is processed.
-In order to simulate this behavior the \emph{next} function is used.
+In order to simulate this behaviour the \emph{next} function is used.
 This function takes a variable name and evaluates to the value of this variable after the transition.
 \stopdescription
 
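The post-transition assignment semantics described above, and the role of the next function, can be illustrated with a tiny interpreter; the variable names in the test are invented:

```python
def fire_transition(state, assignments):
    """Apply all assignments of a transition simultaneously: every
    right-hand side is evaluated against the pre-transition state, and
    the effects become visible only afterwards -- the value that the
    next function would report."""
    return {**state,
            **{var: rhs(state) for var, rhs in assignments.items()}}
```

With sequential C++-style updates, `x = y; y = x` would leave both variables equal; with the simultaneous semantics sketched here, they are swapped.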
@@ -558,15 +558,15 @@ This function takes a variable name and evaluates to the value of this variable
 \startsubsection
 [title={Results}]
 
-\in{Figure}[fig:esra-model] shows a visualization of the learned model that was generated using Gephi \citep[DBLP:conf/icwsm/BastianHJ09].
+\in{Figure}[fig:esra-model] shows a visualisation of the learned model that was generated using Gephi \citep[DBLP:conf/icwsm/BastianHJ09].
 States are coloured according to the strongly connected components.
 The number of transitions between two states is represented by the thickness of the edge.
-The large number of states (3.410) and transitions (262.570) makes it hard to visualize this model.
-Nevertheless, the visualization does provide insight in the behaviour of the ESM.
+The large number of states (3.410) and transitions (262.570) makes it hard to visualise this model.
+Nevertheless, the visualisation does provide insight in the behaviour of the ESM.
 The three protrusions at the bottom of \in{Figure}[fig:esra-model] correspond to deadlocks in the model.
 These deadlocks are \quotation{error} states that are present in the ESM by design.
 According to the Oc\'{e} engineers, the sequences of inputs that are needed to drive the ESM into these deadlock states will always be followed by a system power reset.
-The protrusion at the top right of the figure corresponds to the initialization phase of the ESM.
+The protrusion at the top right of the figure corresponds to the initialisation phase of the ESM.
 This phase is performed only once and thus only transitions from the initialisation cluster to the main body of states are present.
 
 \startplacefigure
@@ -577,7 +577,7 @@ This phase is performed only once and thus only transitions from the initialisat
 
 During the construction of the RRRT-based model, the ESM code was thoroughly inspected.
 This resulted in the discovery of missing behaviour in one transition of the ESM code.
-An Oc\'{e} software engineer confirmed that this behavior is a (minor) bug, which will be fixed.
+An Oc\'{e} software engineer confirmed that this behaviour is a (minor) bug, which will be fixed.
 We have verified the equivalence of the learned model and the RRRT-based model by using CADP \citep[DBLP:conf/tacas/GaravelLMS11].
 
 
@@ -19,7 +19,7 @@ An implementation using a recently developed Haskell library for nominal computa
 [title={Introduction}]
 
 Automata are a well established computational abstraction with a wide range of applications, including modelling and verification of (security) protocols, hardware, and software systems.
-In an ideal world, a model would be available before a system or protocol is deployed in order to provide ample opportunity for checking important properties that must hold and only then the actual system would be synthesized from the verified model.
+In an ideal world, a model would be available before a system or protocol is deployed in order to provide ample opportunity for checking important properties that must hold and only then the actual system would be synthesised from the verified model.
 Unfortunately, this is not at all the reality:
 Systems and protocols are developed and coded in short spans of time and if mistakes occur they are most likely found after deployment.
 In this context, it has become popular to infer or learn a model from a given system just by observing its behaviour or response to certain queries.
@@ -43,20 +43,20 @@ We make crucial use of this feature in the development of a learning algorithm.
 
 Our main contributions are the following.
 \startitemize
-\item A generalization of Angluin's original algorithm to nominal automata.
-The generalization follows a generic pattern for transporting computation models from finite sets to nominal sets, which leads to simple correctness proofs and opens the door to further generalizations.
-The use of nominal sets with different symmetries also creates potential for generalization, e.g., to languages with time features \cite[DBLP:conf/icalp/BojanczykL12] or data dependencies represented as graphs \cite[DBLP:journals/tcs/MontanariS14].
+\item A generalisation of Angluin's original algorithm to nominal automata.
+The generalisation follows a generic pattern for transporting computation models from finite sets to nominal sets, which leads to simple correctness proofs and opens the door to further generalisations.
+The use of nominal sets with different symmetries also creates potential for generalisation, e.g., to languages with time features \cite[DBLP:conf/icalp/BojanczykL12] or data dependencies represented as graphs \cite[DBLP:journals/tcs/MontanariS14].
 \item An extension of the algorithm to nominal non-deterministic automata (nominal NFAs).
 To the best of our knowledge, this is the first learning algorithm for non-deterministic automata over infinite alphabets.
 It is important to note that, in the nominal setting, NFAs are strictly more expressive than DFAs.
 We learn a subclass of the languages accepted by nominal NFAs, which includes all the languages accepted by nominal DFAs.
 The main advantage of learning NFAs directly is that they can provide exponentially smaller automata when compared to their deterministic counterpart.
-This can be seen both as a generalization and as an optimization of the algorithm.
+This can be seen both as a generalisation and as an optimisation of the algorithm.
 \item An implementation using our recently developed Haskell library tailored to nominal computation -- NLambda, or \NLambda{}, by \cite[authoryears][DBLP:journals/corr/KlinS16].
 Our implementation is the first non-trivial application of a novel programming paradigm of functional programming over infinite structures, which allows the programmer to rely on convenient intuitions of searching through infinite sets in finite time.
 \stopitemize
 
-The paper is organized as follows.
+The paper is organised as follows.
 In \in{Section}[sec:overview], we present an overview of our contributions (and the original algorithm) highlighting the challenges we faced in the various steps.
 In \in{Section}[sec:prelim], we revise some basic concepts of nominal sets and automata.
 \in{Section}[sec:nangluin] contains the core technical contributions of our paper: The new algorithm and proof of correctness.
@@ -360,7 +360,7 @@ One more step brings us to the correct hypothesis $\autom_4$ (details are omitte
 
 Consider now an infinite alphabet $A = \{a,b,c,d,\dots\}$.
 The language $\lang_1$ becomes $\{aa,bb,cc,dd,\dots\}$.
-Classical theory of finite automata does not apply to this kind of languages, but one may draw an infinite deterministic automaton that recognizes $\lang_1$ in the standard sense:
+Classical theory of finite automata does not apply to this kind of languages, but one may draw an infinite deterministic automaton that recognises $\lang_1$ in the standard sense:
 
 \starttikzpicture
 \node[initial,state,initial text={$\autom_5 = $}] (q0) {$q_0$};
@@ -402,9 +402,9 @@ This automaton is infinite, but it can be finitely presented in a variety of way
 (q4) edge[loop right] node[right] {$A$} (q4);
 \stoptikzpicture
 
-One can formalize the quantifier notation above (or indeed the \quotation{dots} notation above that) in several ways.
+One can formalise the quantifier notation above (or indeed the \quotation{dots} notation above that) in several ways.
 A popular solution is to consider finite {\em register automata} \cite[DBLP:journals/tcs/KaminskiF94, DBLP:journals/tocl/DemriL09], i.e., finite automata equipped with a finite number of registers where alphabet letters can be stored and later compared for equality.
-Our language $\lang_1$ is recognized by a simple automaton with four states and one register.
+Our language $\lang_1$ is recognised by a simple automaton with four states and one register.
 The problem of learning register automata has been successfully attacked before
 \cite[DBLP:conf/vmcai/HowarSJC12].
 
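The four-state, one-register automaton for $\lang_1$ mentioned in the hunk above can be simulated directly; the state names are invented:

```python
def accepts_l1(word):
    """One-register acceptor for L1 = {aa, bb, cc, ...} over an infinite
    alphabet: store the first letter in the register, accept exactly the
    two-letter words whose second letter equals the register."""
    STORE, CHECK, ACCEPT, SINK = range(4)
    state, register = STORE, None
    for letter in word:
        if state == STORE:
            register, state = letter, CHECK   # remember the first letter
        elif state == CHECK and letter == register:
            state = ACCEPT                    # second letter matches
        else:
            state = SINK                      # wrong letter or too long
    return state == ACCEPT
```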
@@ -439,7 +439,7 @@ In general, such a table has {\em infinitely many rows and columns}, so the foll
 \todo{Is hetzelfde als die hierboven. Doe maar line 4 en 9: oneindige checks. Extra clause over $S$ en $E$ oneindig?}
 
 \description{\inline{line}[line:addrow-ex]} every counterexample $t$ has only finitely many prefixes, so it is not clear how one would construct an infinite set $S$ in finite time.
-However, an infinite $S$ is necessary for the algorithm to ever succeed, because no finite automaton recognizes $\lang_1$.
+However, an infinite $S$ is necessary for the algorithm to ever succeed, because no finite automaton recognises $\lang_1$.
 
 At this stage, we need to observe that due to equivariance of $S$, $E$ and $\lang_1$, the following crucial properties hold:
 
@@ -555,7 +555,7 @@ This is again incorrect, but one additional step will give the correct hypothesi
 
 \stopsubsection
 \startsubsection
-[title={Generalization to Non-Deterministic Automata}]
+[title={Generalisation to Non-Deterministic Automata}]
 
 Since our extension of Angluin's \LStar{} algorithm stays close to her original development, exploring extensions of other variations of \LStar{} to the nominal setting can be done in a systematic way.
 We will show how to extend the algorithm \NLStar{} for learning NFAs by \cite[authoryears][DBLP:conf/ijcai/BolligHKL09].
@@ -638,7 +638,7 @@ Equivariance here can be rephrased as requiring $\delta(\pi \cdot q, \pi \cdot a
 In most examples we take the alphabet to be $A = \EAlph$, but it can be any orbit-finite nominal set.
 For instance, $A = Act \times \EAlph$, where $Act$ is a finite set of actions, represents actions $act(x)$ with one parameter $x \in \EAlph$ (actions with arity $n$ can be represented via $n$-fold products of $\EAlph$).
 
-A language $\lang$ is \emph{nominal regular} if it is recognized by a nominal DFA.
+A language $\lang$ is \emph{nominal regular} if it is recognised by a nominal DFA.
 The theory of nominal regular languages recasts the classical one using nominal concepts.
 A nominal Myhill-Nerode-style \emph{syntactic congruence} is defined: $w, w' \in A^{\ast}$ are \emph{equivalent} w.r.t. $\lang$, written $w \equiv_\lang w'$, whenever
 \startformula
|
@ -653,7 +653,7 @@ Let $\lang$ be a regular nominal language.
|
||||||
The following conditions are equivalent:
|
The following conditions are equivalent:
|
||||||
\startitemize[m]
|
\startitemize[m]
|
||||||
\item the set of equivalence classes of $\equiv_\lang$ is orbit-finite;
|
\item the set of equivalence classes of $\equiv_\lang$ is orbit-finite;
|
||||||
\item $\lang$ is recognized by a nominal DFA.
|
\item $\lang$ is recognised by a nominal DFA.
|
||||||
\stopitemize
|
\stopitemize
|
||||||
\stoptheorem
|
\stoptheorem
|
||||||
|
|
||||||
|
@@ -677,11 +677,11 @@ The theory of nominal automata remains similar, and an example nominal language
 \startformula
 \left\{ a_1 \ldots a_n \mid a_i \leq a_j, \text{for some } i < j \in \left\{1, \ldots, n\right\} \right\}
 \stopformula
-which is recognized by a nominal DFA over those atoms.
+which is recognised by a nominal DFA over those atoms.
 
 To simplify the presentation, in this paper we concentrate on the \quotation{equality atoms} only.
 Also our implementation of nominal learning algorithms is restricted to equality atoms.
-However, both the theory and the implementation can be generalized to other atom structures, with the \quotation{ordered atoms} $(\Q, <)$ as the simplest other example.
+However, both the theory and the implementation can be generalised to other atom structures, with the \quotation{ordered atoms} $(\Q, <)$ as the simplest other example.
 We leave the details of this for a future extended version of this paper.
 
 \stopsubsection
@@ -874,7 +874,7 @@ Second, the new hypothesis does not necessarily have more states, again it might
 
 From the proof of \in{Theorem}[thm:termination] we observe moreover that the way we handle counterexamples is not crucial.
 Any other method which ensures a nonequivalent hypothesis will work.
-In particular our algorithm is easily adapted to include optimizations such as the ones by \cite[authoryears][DBLP:journals/iandc/RivestS93, DBLP:journals/iandc/MalerP95], where counterexamples are added as columns.
+In particular our algorithm is easily adapted to include optimisations such as the ones by \cite[authoryears][DBLP:journals/iandc/RivestS93, DBLP:journals/iandc/MalerP95], where counterexamples are added as columns.
 
 \stopsubsection
 \startsubsection
@@ -1124,7 +1124,7 @@ The termination argument is more involved than that of \LStar{}, but still it re
 Developing an algorithm to learn nominal NFAs is not an obvious extension of \NLStar:
 Non-deterministic nominal languages strictly contain nominal regular languages, so it is not clear what the developed algorithm should be able to learn.
 To deal with this, we introduce a nominal notion of RFSAs.
-They are a \emph{proper subclass} of nominal NFAs, because they recognize nominal regular languages.
+They are a \emph{proper subclass} of nominal NFAs, because they recognise nominal regular languages.
 Nonetheless, they are more succinct than nominal DFAs.
 
 \startsubsection
@@ -1149,7 +1149,7 @@ Given a state $q$ of an automaton, we write $\lang(q)$ for the set of words lead
 A \emph{nominal residual finite-state automaton} (nominal RFSA) is a nominal NFA $\autom$ such that $\lang(q) \in R(\lang(\autom))$, for all $q \in Q_\autom$.
 \stopdefinition
 
-Intuitively, all states of a nominal RSFA recognize residuals, but not all residuals are recognized by a single state:
+Intuitively, all states of a nominal RFSA recognise residuals, but not all residuals are recognised by a single state:
 There may be a residual $\lang'$ and a set of states $Q'$ such that $\lang' = \bigcup_{q \in Q'} \lang(q)$, but no state $q'$ is such that $\lang(q') = \lang'$.
 A residual $\lang'$ is called \emph{composed} if it is equal to the union of the components it strictly contains, explicitly
 \startformula
||||||
|
@ -1161,7 +1161,7 @@ This is not the case in a nominal RFSA.

However, the set of components of $\lang'$ always has a finite support, namely $\supp(\lang')$.

The set of prime residuals $PR(\lang)$ is an orbit-finite nominal set, and can be used to define a \emph{canonical} nominal RFSA for $\lang$, which has the minimal number of states and the maximal number of transitions.
This can be regarded as being obtained from the minimal nominal DFA by removing composed states and adding all initial states and transitions that do not change the recognised language.
This automaton is necessarily unique.

\startlemma[reference=lem:can-rfsa]
@ -1505,7 +1505,7 @@ Equivalence queries are implemented by constructing a bisimulation (recall that

where a counterexample is obtained when two DFAs are not bisimilar.
For nominal NFAs, however, we cannot implement a complete equivalence query, as their language equivalence is undecidable.
We approximated the equivalence by bounding the depth of the bisimulation for nominal NFAs.
As an optimisation, we use bisimulation up to congruence as described by \cite[authoryears][DBLP:journals/cacm/BonchiP15].
Having an approximate teacher is a minor issue, since in many applications no complete teacher can be implemented and one relies on testing \cite[DBLP:conf/ictac/AartsFKV15, DBLP:conf/dlt/BolligHLM13].
For the experiments listed here, the bound was chosen large enough for the learner to terminate with the correct automaton.

@ -1630,14 +1630,14 @@ It has to be rerun at every stage, and each symbol is treated in isolation, whic

Our algorithm \nLStar{}, instead, works with the whole alphabet from the very start, and it exploits its symmetry.
An example is given in \in{Sections}[sec:execution_example_original] and \in{}[ssec:nom-learning]:
The ordinary learner uses four equivalence queries, whereas the nominal one, using the symmetry, only needs three.
Moreover, our algorithm is easier to generalise to other alphabets and computational models, such as non-determinism.

More recently, papers appeared on learning register automata by \cite[authoryears][DBLP:conf/vmcai/HowarSJC12, DBLP:journals/fac/CasselHJS16].
Their register automata are as expressive as our deterministic nominal automata.
The state-space is similar to our orbit-wise representation:
It is formed by finitely many locations with registers.
Transitions are defined symbolically using propositional logic.
We remark that the most recent paper \cite[DBLP:journals/fac/CasselHJS16] generalises the algorithm to alphabets with different structures (which correspond to different atom symmetries in our work), but at the cost of changing Angluin's framework.
Instead of membership queries, the algorithm requires more sophisticated tree queries.
In our approach, using a different symmetry affects neither the algorithm nor its correctness proof.
Tree queries can be reduced to membership queries by enumerating all $n$-types for some $n$ ($n$-types in logic correspond to orbits in the set of $n$-tuples).
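For the equality symmetry this enumeration is concrete: orbits of $n$-tuples of atoms correspond to partitions of the $n$ positions into equality classes. A small illustrative sketch (the function name is ours) lists one canonical representative per orbit, using atoms $0, 1, 2, \ldots$ in order of first occurrence:

```python
def orbit_reps(n):
    """One representative n-tuple per orbit of atom tuples under the
    equality symmetry; the number of orbits is the n-th Bell number."""
    reps = [()]
    for _ in range(n):
        # extend each tuple with an already-used atom or one fresh atom
        reps = [t + (a,) for t in reps for a in range(max(t, default=-1) + 2)]
    return reps
```

For instance, `orbit_reps(2)` yields `(0, 0)` and `(0, 1)`: a pair of atoms is either equal or distinct, and these are the only two orbits.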
@ -1648,7 +1648,7 @@ Another class of learning algorithms for systems with large alphabets is based o

\cite[authoryears][DBLP:conf/ictac/AartsFKV15] reduce the alphabet to a finite alphabet of abstractions, and \LStar{} for ordinary DFAs over such a finite alphabet is used.
Abstractions are refined by counterexamples.
Other similar approaches are \cite[DBLP:conf/vmcai/HowarSM11, DBLP:conf/nfm/IsbernerHS13], where global and local per-state abstractions of the alphabet are used, and \cite[DBLP:phd/hal/Mens17], where the alphabet can also have additional structure (e.g., an ordering relation).
We can also mention \cite[DBLP:conf/popl/BotincanB13], a framework for learning symbolic models of software behaviour.

In \cite[DBLP:conf/fase/BergJR06, DBLP:conf/fase/BergJR08], the authors cope with an infinite alphabet by running \LStar{} (adapted to Mealy machines) using a finite approximation of the alphabet, which may be augmented when equivalence queries are answered.
A smaller symbolic model is derived subsequently.
@ -1675,7 +1675,7 @@ In this paper we defined and implemented extensions of several versions of \LSta

We highlight two features of our approach:
\startitemize
\item it has strong theoretical foundations:
The \emph{theory of nominal languages}, covering different alphabets and symmetries (see \in{Section}[sec:other-symms]); \emph{category theory}, where nominal automata have been characterised as \emph{coalgebras} \cite[DBLP:conf/icalp/KozenMP015, DBLP:journals/iandc/CianciaM10] and many properties and algorithms (e.g., minimisation) have been studied at this abstract level.
\item it follows a generic pattern for transporting computation models and algorithms from finite sets to nominal sets, which leads to simple correctness proofs.
\stopitemize

@ -1689,7 +1689,7 @@ A state is composed whenever it is obtained from other states via an algebraic o

Our algorithm \nNLStar{} is based on the \emph{powerset} monad, representing classical non-determinism.
We are currently investigating a \emph{substitution} monad, where the operation is \quotation{applying a (possibly non-injective) substitution of atoms in the support}.
A minimal automaton over this monad, akin to an RFSA, will have states that can generate all the states of the associated minimal DFA via a substitution, but cannot be generated by other states (they are prime).
For instance, we can give an automaton over the substitution monad that recognises $\lang_2$ from \in{Section}[sec:overview]:

\placefigure[force, none]{}{
\hbox{\starttikzpicture
@ -1720,7 +1720,7 @@ Details, such as precise assumptions on the underlying structure of atoms necess

For an implementation of automata learning over other kinds of atoms without compromising the generic approach, an extension of NLambda to those atoms will be needed, as the current version of the library only supports equality and totally ordered atoms.

The efficiency of our current implementation, as measured in \in{Section}[sec:tests], leaves much to be desired.
There is plenty of potential for running time optimisation, ranging from improvements in the learning algorithm itself, to optimisations in the NLambda library (such as replacing the external and general-purpose SMT solver with a purpose-built, internal one, or a tighter integration of nominal mechanisms with the underlying Haskell language as was done by \citenp[DBLP:journals/entcs/Shinwell06]), to giving up the functional programming paradigm for an imperative language such as LOIS \cite[DBLP:conf/cade/KopczynskiT16, DBLP:conf/popl/KopczynskiT17].

\stopsection

@ -513,7 +513,7 @@ We should not touch these states to keep the theoretical time bound.

Therefore, we construct a new parent node $p$ that will \quotation{adopt} the children in $C'(u)$ together with $u$ (\inline[line:setptou]).

We will now explain why considering all but the largest child of each node lowers the algorithm's time complexity.
Let $T$ be a splitting tree in which we colour all children of each node blue, except for the largest one.
Then:

\startlemma[reference=lem:even]
@ -226,7 +226,7 @@ An ads is of special interest since they can identify a state using a single wor

These notions are again related.
We obtain a characterisation set by taking the union of state identifiers for each state.
For every machine we can construct a set of harmonised state identifiers, as will be shown in \in{Chapter}[chap:separating-sequences], and hence every machine has a characterisation set.
However, an adaptive distinguishing sequence may not exist.

\startexample
@ -320,7 +320,7 @@ Possibly one of the earliest $m$-complete test methods.

\startdefinition
[reference=w-method]
Let $W$ be a characterisation set.
The \defn{W test suite} is defined as
\startformula
(P \cup Q) \cdot I^{\leq k} \cdot W .
@ -343,7 +343,7 @@ defined as $P \cdot I^{\leq k} \cdot \bigcup \Fam{W} \,\cup\, Q \cdot I^{\leq k}
\odot \Fam{W}$.
\stopdefinition

Note that $\bigcup \Fam{W}$ is a characterisation set as defined for the W-method.
It is needed for completeness to test states with the whole set $\bigcup \Fam{W}$.
Once states are tested as such, we can use the smaller sets $W_s$ for testing transitions.

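As an illustration, the W test suite is a plain concatenation of sets of words. A sketch, assuming words are represented as tuples of inputs ($P$ and $Q$ as above; the function names are ours):

```python
from itertools import product

def middles(inputs, k):
    """All words over the input alphabet of length at most k."""
    return [w for n in range(k + 1) for w in product(inputs, repeat=n)]

def w_suite(P, Q, inputs, W, k):
    """The W test suite (P u Q) . I^{<=k} . W as a set of input words."""
    return {p + m + w
            for p in set(P) | set(Q)
            for m in middles(inputs, k)
            for w in W}
```

The Wp variant differs only in that the middle part is followed by the per-state sets $W_s$ (via $\odot$) instead of the full set $W$ for the $Q$ part.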
@ -354,7 +354,7 @@ Once states are tested as such, we can use the smaller sets $W_s$ for testing tr

reference=sec:hsi]

The Wp-method in turn was refined by Yevtushenko and Petrenko.
They make use of so-called \emph{harmonised} state identifiers (which are called separating families in \cite[DBLP:journals/tc/LeeY94] and in the present paper).
By having this global property of the family, fewer tests need to be executed when testing a state.

\startdefinition
@ -366,7 +366,7 @@ $(P \cup Q) \cdot I^{\leq k} \odot \Fam{H}$.

Our newly described test method is an instance of the HSI-method.
However, \cite[LuoPB95, YevtushenkoP90] describe the HSI-method together with a specific way of generating the separating families, namely the set obtained by a splitting tree with shortest witnesses.
In the present paper this is generalised, allowing our extension to be an instance.

\stopsubsection
@ -392,7 +392,7 @@ $(P \cup Q) \cdot I^{\leq k} \odot \Fam{Z}$.

Some Mealy machines which do not admit an adaptive distinguishing sequence may still admit state identifiers which are singletons.
These are exactly UIO sequences and give rise to the UIOv-method.
In a way this is a generalisation of the ADS-method, since the requirement that state identifiers are harmonised is dropped.

\startdefinition
[reference={uio, uiov-method}]
@ -541,7 +541,7 @@ By \in{Lemma}[lemma:refinement-props] we note the following: in the best case th

\REQUIRE{A Mealy machine $M$}
\ENSURE{A separating family $Z'$}
\startlinenumbering
\STATE $T_1 \gets$ splitting tree for Moore's minimisation algorithm
\STATE $\Fam{H} \gets$ separating family extracted from $T_1$
\STATE $T_2 \gets$ splitting tree with valid splits (see \cite[DBLP:journals/tc/LeeY94])
\STATE $\Fam{Z'} \gets$ (incomplete) family as constructed from $T_2$
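The splitting tree in the first step refines a partition of the states in the same way as Moore's minimisation algorithm. A minimal sketch of that refinement (a flat partition instead of a tree, without the witnesses; the representation and names are ours), assuming the Mealy machine is given by output and transition functions:

```python
def moore_refine(states, inputs, out, nxt):
    """Moore-style partition refinement for a Mealy machine.
    `out(s, i)` and `nxt(s, i)` give the output and successor of state s
    on input i. Returns a dict mapping each state to a block label;
    two states get the same label iff they are equivalent."""
    # initial split: states must agree on the output for every input
    label = {s: tuple(out(s, i) for i in inputs) for s in states}
    while True:
        # refine: additionally require successors to lie in equal blocks
        new = {s: (label[s],) + tuple(label[nxt(s, i)] for i in inputs)
               for s in states}
        if len(set(new.values())) == len(set(label.values())):
            return label  # stable: no block was split this round
        label = new
```

Each round either splits a block or terminates, so the loop runs at most $n - 1$ times; the splitting tree additionally records which input word caused each split.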
@ -639,7 +639,7 @@ These families and the refinement $\Fam{Z'};\Fam{H}$ are given below:

We give an overview of the aforementioned test methods.
We classify them in two directions:
\startitemize
\item whether they use harmonised state identifiers or not and
\item whether they use singleton state identifiers or not.
\stopitemize

@ -648,7 +648,7 @@ We classify them in two directions,

The following test suites are all $n+k$-complete:

\starttabulate[|c|c|c|]
\NC \NC Arbitrary \NC Harmonised
\NR \HL %----------------------------------------
\NC Many / pairwise \NC Wp \NC HSI
\NR
@ -754,7 +754,7 @@ This proof is very similar to the completeness proof in \cite[DBLP:journals/tse/

In the first part we argue that all states are visited, using a counting and reachability argument.
Then in the second part we show the actual equivalence.
To the best of the author's knowledge, this is the first $m$-completeness proof which explicitly uses the concept of a bisimulation.
Using a bisimulation allows us to slightly generalise and use bisimulation up-to equivalence, dropping the often-assumed requirement that $M$ is minimal.

\startlemma
Let $\Fam{W'}$ be a family of state identifiers for $M$.
@ -803,9 +803,9 @@ Additionally, they included the P, H an SPY methods.

Unfortunately, the P and H methods do not fit naturally in the overview presented in \in{Section}[sec:overview].
The P method is not able to test for extra states, making it less usable.
And the H method \todo{?}
The SPY method builds upon the HSI-method and changes the prefixes in order to minimise the size of a test suite, exploiting overlap in test sequences.
We believe that this technique is independent of the HSI-method and can in fact be applied to all methods presented in this paper.
As such, the SPY method should be considered an optimisation technique, orthogonal to the work in this paper.

More recently, a novel test method was devised which uses incomplete distinguishing sequences \cite[DBLP:journals/cj/HieronsT15].
They use sequences which can be considered to be adaptive distinguishing sequences on a subset of the state space.