Comparison of dataflow models

A comparison of Flow-Based Programming, the Actor model and Communicating Sequential Processes in a scientific manner

An 11-minute read. Published on June 3, 2012

Mathematical model of choice

Comparison criteria and analysis methods should be chosen before performing the actual analysis.

We will treat the evaluation of parallel computation models as a multiple-criteria choice problem. To do this, a set of particular criteria is outlined, and for each of them a mapping to the set of real numbers is defined:

yk = fk(x), fk : X → R, k ∈ [1, K]    (1)

where x is an alternative choice (a model); yk(x) is a particular factor value; R is the set of real numbers; k is the particular criterion number, k ∈ [1, K]; and K is the total number of particular criteria.

The following relations between possible particular criterion values yi are used:

  1. yi ≈ yj - equivalence relation. Two values are equivalent if neither is more preferable than the other. Every value is equivalent to itself.
  2. yi < yj, i ≠ j - preference relation. Value yj is more preferable than yi if it offers benefits over it to the decision maker.
  3. yi ≪ yj, i ≠ j - strong preference relation. Value yj is much more preferable than yi if it offers qualitatively new benefits over it to the decision maker. Strong preference is a special case of the preference relation.
  4. yi ≤ yj - non-strict preference relation. Value yj is more preferable than, or equivalent to, yi.

Then the k'th criterion's numeric evaluation function (1) can be written as the following recursive formula:

fk(ymin) = 0;
fk(yj) = fk(yi),     if yi ≈ yj;
fk(yj) = fk(yi) + P, if yi < yj;
fk(yj) = fk(yi) + Q, if yi ≪ yj.    (2)

Here P and Q are constant weights set by the decision maker which determine the extent of the difference between particular choices. The most trivial example is P = 1, Q = 2.
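This scale can be sketched in code. The Go snippet below is an illustrative sketch (the helper name is hypothetical); it assumes each scale value is scored by its relation to the least preferable value of the criterion, so that the three marks used later come out as 0, P and Q:

```go
package main

import "fmt"

// Relation between a scale value and the least preferable ("base") value.
type Relation int

const (
	Equivalent        Relation = iota // yi ≈ yj
	Preferred                         // yi < yj
	StronglyPreferred                 // yi ≪ yj
)

// Scale constants set by the decision maker (the trivial example from the text).
const (
	P = 1
	Q = 2
)

// ScaleValue assigns a numeric value to a scale point given its relation
// to the base value: equivalence keeps the base score, preference adds P,
// strong preference adds Q.
func ScaleValue(base int, r Relation) int {
	switch r {
	case Preferred:
		return base + P
	case StronglyPreferred:
		return base + Q
	default: // Equivalent
		return base
	}
}

func main() {
	// e.g. a criterion with values "not emphasized" (base),
	// a preferred value, and a strongly preferred value:
	fmt.Println(ScaleValue(0, Equivalent), ScaleValue(0, Preferred), ScaleValue(0, StronglyPreferred))
}
```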

For multi-criteria decision making we will use a simple weighted sum model (WSM). We use the linear convolution method to get a scalar quality estimate Y of a model x, because the importance of each particular criterion is defined by the decision maker. The generalized quality factor then looks like:

Y(x) = Σi=1..K αi·yi(x)    (3)

where αi > 0 is the significance (weight) of the i'th criterion, defined by the decision maker. If a normalized estimate is being calculated, the following constraint applies:

Σi=1..K αi = 1
We will group the particular criteria into several semantic groups. We can then calculate denormalized factor values for each group and sum the partial sums to get the generalized factor, as shown in formula (4):

Y(x) = Σm=1..M βm · Σi∈Gm αi·yi(x)    (4)

where M is the number of semantic groups of criteria; βm > 0 is the significance (weight) of the m'th group; and Gm is the set of criteria belonging to the m'th group.
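As a worked sketch of the weighted sum model, the Go snippet below computes the generalized factor as a weighted sum of per-group weighted sums. The type and function names are illustrative, not taken from any framework:

```go
package main

import "fmt"

// Criterion holds a particular criterion's weight and its numeric value
// for the model under evaluation.
type Criterion struct {
	Weight float64 // alpha > 0, set by the decision maker
	Value  float64 // y(x), a point on the 0/P/Q scale
}

// Group is a semantic group of criteria with its own weight (beta).
type Group struct {
	Weight   float64
	Criteria []Criterion
}

// GeneralizedFactor computes Y(x) as the weighted sum of per-group
// weighted sums of particular criterion values.
func GeneralizedFactor(groups []Group) float64 {
	var total float64
	for _, g := range groups {
		var groupSum float64
		for _, c := range g.Criteria {
			groupSum += c.Weight * c.Value
		}
		total += g.Weight * groupSum
	}
	return total
}

func main() {
	const P, Q = 1, 2 // the trivial scale constants from the text
	// Hypothetical evaluation of one model: all weights set to 1.
	groups := []Group{
		{Weight: 1, Criteria: []Criterion{{1, Q}, {1, Q}, {1, P}}}, // e.g. ease of development
		{Weight: 1, Criteria: []Criterion{{1, P}, {1, Q}}},         // e.g. performance
	}
	fmt.Println(GeneralizedFactor(groups))
}
```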

Particular model evaluation factors

Below is a multilevel list of the evaluated properties of the dataflow models. At the top level are the groups of criteria; nested inside them are the factors themselves, each with a list of its possible values. Numeric scale constants calculated using formula (2) are given in parentheses.

  1. Ease of development — these factors relate to how easy, quick and convenient application development is.

    1. Structure decomposition — means explicit application structure definition as a graph, which allows the use of methods and utilities of structural analysis and significantly simplifies perception of the model by both specialists and non-specialists:

      1. not emphasized (0) — the structure is not represented explicitly, which is usual for models supporting only text notation;

      2. 2-3 levels (P) — models with a fixed number of decomposition levels are relatively simple but not as powerful for complex system design as multilevel models;

      3. multilevel (Q) — especially effective to design large and complex software systems.

    2. Notation support — determines which kind of a language the structure and behavior of the system can be defined in:

      1. text (0) — textual languages are usually effective for behavior definition but much less so for application structure definition and message routing;

      2. visual (0) — visual notation is very convenient for structural analysis and drawing of communication graphs but may be too cumbersome for behavior definition;

      3. double (Q) — usually means that the structure is represented visually while behavior of particular components is defined with a textual algorithmic language which is the most practical solution.

    3. Iteration facilities — presence of special elements and tools to implement iteration and recursion which simplifies application development:

      1. not emphasized (0) — iteration is implemented manually using complex constructs with back-coupling loops;

      2. recursion (P) — iteration can only be implemented using recursion;

      3. control structures (P) — iteration can be implemented using control structures within component blocks;

      4. specific elements (Q) — the language has specific elements to implement iteration easily as well as recursion and built-in control structures.

  2. Performance and resource consumption — this group of factors affects application scalability, response time, throughput, CPU time consumption, RAM consumption and other performance characteristics.

    1. Message passing — depending on whether a handshake from both sides is required to finish the message transfer successfully:

      1. synchronous (0) — in this case the sender waits until the receiver receives the whole message or until the message is buffered. This mode is easier to control but may result in higher latency;

      2. asynchronous (P) — the sender does not wait until the message is received in full and proceeds with its further actions. This mode tends to use CPU time more effectively, especially in the presence of long input/output operations.

    2. Message buffering — defines whether the runtime buffers the messages:

      1. none (0) — if the message is passed from the sender to the receiver directly and both processes are blocked during message transfer;

      2. limited (P) — if non-blocking message passing is supported using bounded buffers; if the queue length reaches the buffer capacity limit, then the sender is blocked;

      3. unlimited (Q) — if the system provides unbounded buffer size in an explicit or an implicit way. This may result in higher memory consumption but guarantees that the message is eventually delivered while the sender can act without being blocked.

    3. Process persistence — this aspect concerns whether application components stay resident in memory for the application's whole lifetime or not:

      1. non-resident (0) — in such systems processes are spawned to process a new incoming portion of data and terminated when the result of processing is sent along the communication channel. This protects against memory leaks and saves memory for otherwise rarely used subnets;

      2. resident (0) — such processes reside in memory for the whole time the application runs, which lets them respond faster (they don't need to spend time on initialization), but this increases memory consumption and leak risk (if a garbage collector is not used);

      3. by choice (P) — the paradigm and the framework allow a user to develop programs consisting either of resident or of non-resident components, depending on what the particular application requires;

      4. combined (Q) — in this case some of the components can be made resident (e.g. if a fast response is required from them) while other components within the same network can be non-resident (for example if they are rarely used and their resources can be effectively allocated on demand).

    4. Parallelism facilities — which mechanism of parallelism is used at the operating system (OS) level:

      1. processes (0) — if there are multiple OS processes using interprocess communication (IPC) signals;

      2. threads (0) — OS threads communicating via shared memory;

      3. coroutines (0) — lightweight processes, “green” threads or coroutines are effectively multiplexed within one OS thread/process;

      4. combined (Q) — a combination of the facilities above is used, e.g. threads + lightweight processes.

  3. Flexibility and extensibility — these factors affect the variety of tasks that can be covered by a particular model and how easily the existing programs which have been built using that model can be extended in future.

    1. Structure changes — the possibility to create and modify subnets in the application graph at runtime:

      1. static (0) — such a structure is defined by an engineer once during design/implementation stage and requires the application to be recompiled and restarted in case of a change;

      2. dynamic (Q) — such a structure allows the system to change during runtime, up to continuous evolution of the system.

    2. Feedback loops — defines whether the model supports feedback loops, i.e. whether signals can be passed from ports downstream back to ports upstream of the regular data flow:

      1. no (0) — some models forbid feedback loops explicitly to avoid possible deadlocks a priori. This results in a smaller application domain and complicates implementation of iterative algorithms;

      2. yes (P) — feedback loops may cause deadlocks in some cases but allow an engineer to solve a wider variety of problems in a simpler way.

    3. Control signals — whether control signals are emphasized as a separate signal type:

      1. not emphasized (0) — in this case control parameter values are passed directly to regular input and a component should distinguish them and ensure a correct response itself;

      2. initial (0) — in this case control parameter values are entered only once, during application initialization, which allows a developer to modify the system's behavior from one run to another;

      3. dynamic (P) — in this case control signals can arrive at a component from both the system's environment and other higher-level components during runtime, which provides a higher level of flexibility and parametrization.

    4. Non-determinism — on the one hand it affects the spectrum of solvable tasks, and on the other it has an ambiguous effect on the system's reliability [32]:

      1. deterministic (0) — deterministic systems are simple and make it easy to detect errors, but they don't ensure guaranteed service and refuse requests more often;

      2. bounded nondeterminism (P) — systems with bounded nondeterminism solve a wider spectrum of tasks and have a lower fault probability (usually by means of message queues);

      3. unbounded nondeterminism (Q) — models supporting unbounded nondeterminism in theory solve the widest variety of tasks possible while being able to guarantee that a request will be served even if the system load continues to grow.
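Two of the criteria above — message passing mode and message buffering — map directly onto Go channels, the CSP implementation discussed below: an unbuffered channel gives synchronous passing with no buffering, while a buffered channel gives asynchronous passing with limited buffering. A minimal sketch:

```go
package main

import "fmt"

func main() {
	// Synchronous, "none" buffering: send and receive must rendezvous,
	// so the send must happen in another goroutine or the program deadlocks.
	sync := make(chan string)
	go func() { sync <- "hello" }()
	fmt.Println(<-sync)

	// Asynchronous, "limited" buffering: up to 2 messages are queued;
	// the sender blocks only once the buffer is full.
	bounded := make(chan int, 2)
	bounded <- 1
	bounded <- 2 // still non-blocking; a third send would block
	fmt.Println(<-bounded, <-bounded)
}
```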

Comparative analysis of interaction models based on data flows

Below are tables representing the results of comparing Flow-Based Programming (FBP), the Actor model (Actor) and Communicating Sequential Processes (CSP) as theoretical component interaction models and practical frameworks. The "Ideal" column shows what a hypothetical perfect model would provide.

The marks specified in parentheses are the point values of the scale:

  • 0 (not specified) - no certain advantage;
  • P - definite advantage;
  • Q - higher-order advantage which gives more possibilities than 0 and P.

FBP is considered primarily as Morrison's JavaFBP/C#FBP, the Actor model is mostly analyzed through its Erlang implementation, and the CSP example is the Go language's default runtime.

Table 1 — Comparison by ease of development criteria

Table 2 — Performance and resource consumption criteria comparison

Table 3 — Flexibility and extensibility criteria comparison

Note that the Parallelism facilities criterion is implementation-specific, so we exclude it from the Total sums. Most of the other criteria are model-specific.

Now we can sum the group values and get the final scalar estimates. To get them as simple integers we can fix the P and Q scale constants. In this simple example we set all weights (Wt.) to 1, P = 1 and Q = 2.

Table 4 — Consolidated comparison results table

By customizing the weight values and the P and Q constants you can get different estimates which better match your own priorities.

There is no single "best" model; all these models are in the Pareto set.

The FBP model is the most comfortable to use at the software design stage and then during implementation, because it supports structural analysis and decomposition and a double notation (a textual description of component behavior plus a graphical representation of the overall application). But the Actor and CSP models, which are very close to each other, can handle a wider class of tasks and support dynamic restructuring at run-time, up to unbounded nondeterminism. The difference between the CSP and Actor models lies in message passing semantics details, but they are almost identical when large buffers are used in CSP.

Let’s outline the weak points and the ways each of these models or platforms can be improved.

FBP model:

  • Static structure. For the model to be really flexible and universal, it needs to support dynamic restructuring so that an application would be capable of changing entire subnets at run-time. Another major problem is that such a dynamic structure (especially if changes are frequent and structurally varied themselves) is hard to display on graphical diagrams, which are among the strongest points of this model.

  • As a consequence, this model supports only bounded nondeterminism. In short, this means that the system cannot guarantee normal functioning beyond the boundaries within which its behavior is determined.

  • Facilities for recursion and iteration handling would also improve this model, making it clear and easy to implement cyclic and iterative structures.

Actor model:

  • The main disadvantage of most existing actor model implementations is that the only notation supported is textual. Historically this model has been utilized in functional programming languages, which rarely have a visual representation. This makes it hard to apply structural decomposition techniques and to maintain the system. A possible solution is to split the system into two levels using different notations: a high-level coordination language (visual or textual) and a low-level textual description of component behavior.

CSP has, in general, the same problems as the actor model. But another thing a CSP developer has to care about is the capacity of each buffer in the communication channels, which in practice depends on the exact application and the current inbound data intensity.
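As a small illustration of this concern, in Go a channel's capacity is fixed at creation time, and a `select` with a `default` branch exposes the moment the buffer runs out. The `trySend` helper here is hypothetical, purely for demonstration:

```go
package main

import "fmt"

// trySend attempts a non-blocking send and reports whether the
// channel had spare buffer capacity.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default: // buffer full: a plain blocking send would stall here
		return false
	}
}

func main() {
	// Capacity must be chosen up front, yet the "right" value depends on
	// the application and its current inbound data intensity.
	ch := make(chan int, 1)
	fmt.Println(trySend(ch, 1)) // buffer has room
	fmt.Println(trySend(ch, 2)) // buffer is full, a blocking send would wait
}
```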