In computer science, a trace is an equivalence class of strings, wherein certain letters in the string are allowed to commute, but others are not. Traces generalize the concept of strings by relaxing the requirement for all the letters to have a definite order, instead allowing for indefinite orderings in which certain reshufflings could take place. Conversely, traces generalize multisets (sets with multiplicities) by allowing a partial ordering of the letters to be specified, rather than requiring complete equivalence under all reorderings. The trace monoid or free partially commutative monoid is a monoid of traces.

Traces were introduced by Pierre Cartier and Dominique Foata in 1969 to give a combinatorial proof of MacMahon's master theorem. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one another, while non-commuting letters stand for locks, synchronization points or thread joins.[1]

The trace monoid is constructed from the free monoid (the set of all strings of finite length) as follows. First, sets of commuting letters are given by an independency relation. This relation induces an equivalence relation on strings; the equivalence classes are the traces. The equivalence relation partitions the elements of the free monoid into equivalence classes, and because it is a congruence the result is still a monoid; this quotient monoid is called the trace monoid. The trace monoid is universal, in that every dependency-homomorphic monoid (see below) is in fact isomorphic to it.

Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs, thus allowing algebraic techniques to be applied to graphs, and vice versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.

Trace


Let $\Sigma^*$ denote the free monoid on a set of generators $\Sigma$, that is, the set of all finite strings written in the alphabet $\Sigma$. The asterisk is a standard notation for the Kleene star. An independency relation $I$ on the alphabet $\Sigma$ then induces a symmetric binary relation $\sim$ on the set of strings $\Sigma^*$: two strings $u, v \in \Sigma^*$ are related, $u \sim v$, if and only if there exist $x, y \in \Sigma^*$ and a pair $(a,b) \in I$ such that $u = xaby$ and $v = xbay$. Here, $x$ and $y$ are understood to be strings (elements of $\Sigma^*$), while $a$ and $b$ are letters (elements of $\Sigma$).

The trace is defined as the reflexive transitive closure of $\sim$. The trace is thus an equivalence relation on $\Sigma^*$ and is denoted by $\equiv_D$, where $D$ is the dependency relation corresponding to $I$, that is, $D = (\Sigma \times \Sigma) \setminus I$ and, conversely, $I = (\Sigma \times \Sigma) \setminus D$. Different independencies or dependencies will give different equivalence relations.

The transitive closure implies that $u \equiv v$ if and only if there exists a sequence of strings $(w_0, w_1, \ldots, w_n)$ such that $u \sim w_0$, $v \sim w_n$, and $w_i \sim w_{i+1}$ for all $0 \le i < n$. The trace is stable under the monoid operation on $\Sigma^*$, i.e., concatenation, and $\equiv_D$ is therefore a congruence relation on $\Sigma^*$.
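
For concrete strings this equivalence can be checked mechanically. The following Python sketch (the function name `trace_equivalent` and the sample independency are illustrative, not taken from the literature) decides whether two strings are trace-equivalent by breadth-first search over swaps of adjacent independent letters; the search terminates because an equivalence class contains only permutations of the same letters.

```python
from collections import deque

def trace_equivalent(u, v, independency):
    """Return True if v can be reached from u by repeatedly swapping
    adjacent letters (a, b) that form a pair in the independency relation."""
    if sorted(u) != sorted(v):
        return False                      # equivalent strings are permutations of one another
    seen = {u}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return True
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return False

# Only b and c are independent here; a depends on every letter.
I = {("b", "c"), ("c", "b")}
print(trace_equivalent("abcb", "acbb", I))   # True: the adjacent pair bc can be swapped
print(trace_equivalent("abcb", "bacb", I))   # False: a and b do not commute
```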

The trace monoid, commonly denoted as $\mathbb{M}(D)$, is defined as the quotient monoid

$$\mathbb{M}(D) = \Sigma^* / \equiv_D$$

The homomorphism

$$\phi_D : \Sigma^* \to \mathbb{M}(D)$$

is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms natural or canonical are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.

One will also find the trace monoid denoted as $\mathbb{M}(\Sigma, I)$, where $I$ is the independency relation. One can also find the commutation relation used instead of the independency relation; it differs from the independency relation by also including all the diagonal elements of $\Sigma \times \Sigma$, since letters "commute with themselves" in a free monoid of strings of those letters.

Examples


Consider the alphabet $\Sigma = \{a,b,c\}$. A possible dependency relation is

$$D = \{a,b\} \times \{a,b\} \;\cup\; \{a,c\} \times \{a,c\} = \{a,b\}^2 \cup \{a,c\}^2 = \{(a,a),(a,b),(b,a),(b,b),(a,c),(c,a),(c,c)\}$$

The corresponding independency is

$$I_D = \{(b,c),\,(c,b)\}$$

Therefore, the letters $b$ and $c$ commute. Thus, for example, a trace equivalence class for the string $abababbca$ would be

$$[abababbca]_D = \{abababbca,\; abababcba,\; ababacbba\}$$

and the equivalence class $[abababbca]_D$ would be an element of the trace monoid.
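
A minimal Python sketch (the helper name `trace_class` is illustrative) can enumerate this class directly, using the same breadth-first search over adjacent swaps as in the earlier sketch:

```python
from collections import deque

def trace_class(w, independency):
    """Enumerate the (finite) equivalence class of w by breadth-first search,
    swapping adjacent letters that form an independent pair."""
    seen = {w}
    queue = deque([w])
    while queue:
        s = queue.popleft()
        for i in range(len(s) - 1):
            if (s[i], s[i + 1]) in independency:
                t = s[:i] + s[i + 1] + s[i] + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return seen

I = {("b", "c"), ("c", "b")}       # only b and c commute, as in the example above
print(sorted(trace_class("abababbca", I)))
# ['abababbca', 'abababcba', 'ababacbba']
```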

Properties


The cancellation property states that equivalence is maintained under right cancellation. That is, if $w \equiv v$, then $(w \div a) \equiv (v \div a)$. Here, the notation $w \div a$ denotes right cancellation: the removal of the first occurrence of the letter $a$ from the string $w$, starting from the right-hand side. Equivalence is also maintained by left cancellation. Several corollaries follow:

  • Embedding: $w \equiv v$ if and only if $xwy \equiv xvy$ for strings $x$ and $y$. Thus, the trace monoid is a syntactic monoid.
  • Independence: if $ua \equiv vb$ and $a \ne b$, then the letter $a$ is independent of the letter $b$; that is, $(a,b) \in I_D$. Furthermore, there exists a string $w$ such that $u \equiv wb$ and $v \equiv wa$.
  • Projection rule: equivalence is maintained under string projection, so that if $w \equiv v$, then $\pi_A(w) \equiv \pi_A(v)$ for a subalphabet $A \subseteq \Sigma$ (the cancellation and projection operations are sketched below).
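
Both operations are easy to make concrete. The following Python sketch (function names are illustrative, not standard API) shows right cancellation and string projection on the running example, where only $b$ and $c$ are independent:

```python
def right_cancel(w, a):
    """w ÷ a: delete the first occurrence of the letter a in w, scanning from the right."""
    i = w.rfind(a)
    return w if i < 0 else w[:i] + w[i + 1:]

def project(w, subalphabet):
    """String projection: erase every letter of w that is not in the subalphabet."""
    return "".join(c for c in w if c in subalphabet)

# With only b and c independent, abcb ≡ acbb; both operations preserve the equivalence:
print(right_cancel("abcb", "b"), right_cancel("acbb", "b"))       # abc acb  (abc ≡ acb)
print(project("abcb", {"b", "c"}), project("acbb", {"b", "c"}))   # bcb cbb  (bcb ≡ cbb)
```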

A strong form of Levi's lemma holds for traces. Specifically, if $uv \equiv xy$ for strings $u, v, x, y$, then there exist strings $z_1, z_2, z_3$ and $z_4$ such that $(w_2, w_3) \in I_D$ for all letters $w_2 \in \Sigma$ and $w_3 \in \Sigma$ such that $w_2$ occurs in $z_2$ and $w_3$ occurs in $z_3$, and

$$u \equiv z_1 z_2, \qquad v \equiv z_3 z_4,$$
$$x \equiv z_1 z_3, \qquad y \equiv z_2 z_4.$$[2]

Universal property


A dependency morphism (with respect to a dependency $D$) is a morphism

$$\psi : \Sigma^* \to M$$

to some monoid $M$, such that the "usual" trace properties hold, namely:

1. $\psi(w) = \psi(\varepsilon)$ implies that $w = \varepsilon$
2. $(a,b) \in I_D$ implies that $\psi(ab) = \psi(ba)$
3. $\psi(ua) = \psi(v)$ implies that $\psi(u) = \psi(v \div a)$
4. $\psi(ua) = \psi(vb)$ and $a \ne b$ imply that $(a,b) \in I_D$

Dependency morphisms are universal, in the sense that for a given, fixed dependency $D$, if $\psi : \Sigma^* \to M$ is a dependency morphism to a monoid $M$, then $M$ is isomorphic to the trace monoid $\mathbb{M}(D)$. In particular, the natural homomorphism is a dependency morphism.

Normal forms


There are two well-known normal forms for words in trace monoids. One is the lexicographic normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the Foata normal form due to Pierre Cartier and Dominique Foata who studied the trace monoid for its combinatorics in the 1960s.[3]

Unicode's Normalization Form Canonical Decomposition (NFD) is an example of a lexicographic normal form: consecutive characters with non-zero canonical combining class are sorted by that class.
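
As a brute-force illustration (not an efficient algorithm), the lexicographic normal form of a trace can be obtained by enumerating its equivalence class and taking the lexicographically least representative with respect to a fixed ordering of the alphabet. A minimal Python sketch, with illustrative names:

```python
from collections import deque

def lex_normal_form(w, independency):
    """The lexicographically smallest member of the trace class of w,
    found here by enumerating the whole class (practical algorithms avoid this)."""
    seen = {w}
    queue = deque([w])
    while queue:
        s = queue.popleft()
        for i in range(len(s) - 1):
            if (s[i], s[i + 1]) in independency:
                t = s[:i] + s[i + 1] + s[i] + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return min(seen)

I = {("b", "c"), ("c", "b")}
print(lex_normal_form("acbb", I))   # 'abbc': the least representative of {acbb, abcb, abbc}
```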

Trace languages


Just as a formal language can be regarded as a subset of $\Sigma^*$, the set of all possible strings, so a trace language is defined as a subset of $\mathbb{M}(D)$, the set of all possible traces.

Alternatively, but equivalently, a language $L \subseteq \Sigma^*$ is a trace language, or is said to be consistent with dependency $D$, if

$$L = [L]_D$$

where

$$[L]_D = \bigcup_{w \in L} [w]_D$$

is the trace closure of a set of strings.
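
For a finite set of strings the trace closure, and hence consistency with the dependency, can be computed directly. A minimal Python sketch (function names are illustrative):

```python
from collections import deque

def trace_closure(language, independency):
    """Close a finite set of strings under swaps of adjacent independent letters."""
    closure = set(language)
    queue = deque(language)
    while queue:
        s = queue.popleft()
        for i in range(len(s) - 1):
            if (s[i], s[i + 1]) in independency:
                t = s[:i] + s[i + 1] + s[i] + s[i + 2:]
                if t not in closure:
                    closure.add(t)
                    queue.append(t)
    return closure

def is_trace_language(language, independency):
    """L is consistent with the dependency exactly when L equals its trace closure."""
    return set(language) == trace_closure(language, independency)

I = {("b", "c"), ("c", "b")}
print(is_trace_language({"abc"}, I))          # False: abc ≡ acb, but acb is not in the set
print(is_trace_language({"abc", "acb"}, I))   # True
```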


Notes

  1. ^ Sándor & Crstici (2004) p.161
  2. ^ Proposition 2.2, Diekert and Métivier 1997.
  3. ^ Section 2.3, Diekert and Métivier 1997.

References


General references

  • Diekert, Volker; Métivier, Yves (1997), "Partial Commutation and Traces", in Rozenberg, G.; Salomaa, A. (eds.), Handbook of Formal Languages Vol. 3; Beyond Words, Springer-Verlag, Berlin, pp. 457–534, ISBN 3-540-60649-1
  • Lothaire, M. (2011), Algebraic combinatorics on words, Encyclopedia of Mathematics and Its Applications, vol. 90, With preface by Jean Berstel and Dominique Perrin (Reprint of the 2002 hardback ed.), Cambridge University Press, ISBN 978-0-521-18071-9, Zbl 1221.68183
  • Mazurkiewicz, Antoni (1995), "Introduction to Trace Theory", in Diekert, V.; Rozenberg, G. (eds.), The Book of Traces, World Scientific, Singapore, pp. 3–41, ISBN 981-02-2058-8
  • Diekert, Volker (1990), Combinatorics on traces, Lecture Notes in Computer Science, vol. 454, Springer, pp. 9–29, ISBN 3-540-53031-2
  • Sándor, Jozsef; Crstici, Borislav (2004), Handbook of number theory II, Dordrecht: Kluwer Academic, pp. 32–36, ISBN 1-4020-2546-7, Zbl 1079.11001

Seminal publications

  • Pierre Cartier and Dominique Foata, Problèmes combinatoires de commutation et réarrangements, Lecture Notes in Mathematics 85, Springer-Verlag, Berlin, 1969, Free 2006 reprint with new appendixes
  • Antoni Mazurkiewicz, Concurrent program schemes and their interpretations, DAIMI Report PB 78, Aarhus University, 1977