Synchronous programming language

A synchronous programming language is a computer programming language optimized for programming reactive systems.

Computer systems can be sorted in three main classes:

  1. Transformational systems take some inputs, process them, deliver their outputs, and terminate their execution. A typical example is a compiler.
  2. Interactive systems interact continuously with their environment, at their own speed. A typical example is a web browser.
  3. Reactive systems interact continuously with their environment, at a speed imposed by the environment. A typical example is the automatic flight control system of modern airplanes. Reactive systems must therefore react to stimuli from the environment within strict time bounds. For this reason they are often also called real-time systems, and are often found in embedded systems.

Synchronous programming, also called synchronous reactive programming (SRP), is a computer programming paradigm supported by synchronous programming languages. The principle of SRP is to make the same abstraction for programming languages as the synchronous abstraction in digital circuits. Synchronous circuits are indeed designed at a high level of abstraction where the timing characteristics of the electronic transistors are neglected. Each gate of the circuit (or, and, ...) is therefore assumed to compute its result instantaneously, and each wire is assumed to transmit its signal instantaneously. A synchronous circuit is clocked: at each tick of its clock, it instantaneously computes its output values and the new values of its memory cells (latches) from its input values and the current values of its memory cells. In other words, the circuit behaves as if the electrons were flowing infinitely fast. The first synchronous programming languages were invented in France in the 1980s: Esterel, Lustre, and SIGNAL. Since then, many other synchronous languages have emerged.
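
As an informal illustration of this clocked model (a sketch in ordinary Go, not code in any actual synchronous language), a reactive program can be viewed as a step function that is called once per tick and computes its outputs and next state from its inputs and the current state, much as a synchronous circuit updates its latches:

```go
package main

import "fmt"

// State plays the role of the circuit's memory cells (latches).
type State struct {
	count int
}

// step is executed once per clock tick: it reads the inputs valid at that
// tick and the current state, and produces the outputs and the next state
// "instantaneously", i.e., before the next tick.
func step(s State, reset bool, inc bool) (out int, next State) {
	switch {
	case reset:
		next.count = 0
	case inc:
		next.count = s.count + 1
	default:
		next.count = s.count
	}
	return next.count, next
}

func main() {
	s := State{}
	// Drive the "circuit" for a few ticks with sample inputs.
	inputs := []struct{ reset, inc bool }{
		{false, true}, {false, true}, {true, false}, {false, true},
	}
	for tick, in := range inputs {
		var out int
		out, s = step(s, in.reset, in.inc)
		fmt.Printf("tick %d: count = %d\n", tick, out)
	}
}
```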

The synchronous abstraction makes reasoning about time in a synchronous program much easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous, i.e., as if the processor executing them were infinitely fast. The statement "a||b" is therefore abstracted as the package "ab", where "a" and "b" are simultaneous. To take a concrete example, the Esterel statement "every 60 second emit minute" specifies that the signal "minute" is exactly synchronous with the 60th occurrence of the signal "second". At a more fundamental level, the synchronous abstraction eliminates the non-determinism resulting from the interleaving of concurrent behaviors. This allows deterministic semantics, therefore making synchronous programs amenable to formal analysis, verification and certified code generation, and usable as formal specification formalisms.
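
As a hedged sketch of how such a specification might be realised in ordinary sequential code (an illustration of the intended semantics, not the output of an Esterel compiler), the following Go loop executes one iteration per logical tick and emits "minute" in the very tick of the 60th "second", not one tick later:

```go
package main

import "fmt"

func main() {
	count := 0
	// One loop iteration per logical tick; for simplicity the signal
	// "second" is assumed to be present at every tick.
	for tick := 1; tick <= 180; tick++ {
		second := true // input signal present this tick
		minute := false
		if second {
			count++
			if count == 60 {
				// "minute" is synchronous with the 60th "second":
				// it is present in the same tick, not the next one.
				minute = true
				count = 0
			}
		}
		if minute {
			fmt.Printf("tick %d: minute emitted\n", tick)
		}
	}
}
```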

In contrast, in the asynchronous model of computation, on a sequential processor, the statement "a||b" can be implemented either as "a;b" or as "b;a". This is known as interleaving-based non-determinism. The drawback of an asynchronous model is that it intrinsically forbids deterministic semantics (allowing, for example, race conditions), which makes formal reasoning such as analysis and verification more complex. Nonetheless, asynchronous formalisms are very useful for modelling, designing and verifying distributed systems, because such systems are intrinsically asynchronous.
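
A small Go sketch of interleaving-based non-determinism: the two goroutines stand for "a" and "b", and because nothing constrains their relative order, different executions may print "a" before "b" or "b" before "a":

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	// "a || b" under asynchronous execution: the scheduler may interleave
	// the two branches in either order, so the observable behavior is not
	// deterministic across runs.
	go func() { defer wg.Done(); fmt.Println("a") }()
	go func() { defer wg.Done(); fmt.Println("b") }()
	wg.Wait()
}
```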

Also in contrast are systems whose processes interact through synchronous communication, such as systems based on the Communicating Sequential Processes (CSP) model, which allows both deterministic (external) and nondeterministic (internal) choice.
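
Go's channels and select statement are loosely inspired by CSP; as a small illustrative sketch, the select below expresses a choice between two possible communications, and when both are ready the branch taken is not determined by the program text:

```go
package main

import "fmt"

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "from a"
	b <- "from b"
	// A CSP-like choice between two communications: both channels are
	// ready, so which branch runs is not determined by the program
	// (the Go runtime picks one pseudo-randomly).
	select {
	case msg := <-a:
		fmt.Println(msg)
	case msg := <-b:
		fmt.Println(msg)
	}
}
```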

Synchronous languages


See also

