
Table 1 Glossary of most important terms used

From: High-throughput bioinformatics with the Cyrille2 pipeline system


Pipeline
A pipeline is the definition of a series of computational analyses to be performed on a set of data. A pipeline can be described as a graph composed of nodes connected by edges.


Node
A node represents a single analysis in the context of a pipeline. A node is associated with a tool and is responsible for the execution of one or more jobs. A node specifies how the data from a preceding node is organized for execution.


Tool
A single application embedded in the pipeline, for example BLAST.


Tool wrapper
A tool wrapper is a script that frames and embeds a tool within the pipeline. It enables execution of the tool through communication with the pipeline software, from which it receives the tool's parameter settings (for example, which BLAST database to use). It translates data flowing in and out of the pipeline into the format required by the tool.
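A tool wrapper can be sketched as follows. This is a hypothetical illustration, not the actual Cyrille2 API: the function names, file names, and parameter dictionary are assumptions. The wrapper converts incoming pipeline objects to the tool's input format (here, FASTA for BLAST), runs the tool with the pipeline-supplied parameters, and parses the tool's output back into objects the pipeline can track.

```python
# Hypothetical tool wrapper sketch for BLAST (names are illustrative,
# not the Cyrille2 implementation).
import subprocess

def to_fasta(sequences):
    """Convert pipeline objects (name -> sequence) into BLAST's FASTA input."""
    return "".join(f">{name}\n{seq}\n" for name, seq in sequences.items())

def parse_tabular(text):
    """Parse BLAST tabular output (-outfmt 6) into lists of fields."""
    return [line.split("\t") for line in text.splitlines() if line]

def run_blast(sequences, params):
    """Wrap one BLAST execution using parameters received from the pipeline."""
    with open("query.fa", "w") as fh:
        fh.write(to_fasta(sequences))
    cmd = ["blastn", "-query", "query.fa",
           "-db", params["database"],   # e.g. which BLAST database to use
           "-outfmt", "6", "-out", "hits.tsv"]
    subprocess.run(cmd, check=True)
    with open("hits.tsv") as fh:
        return parse_tabular(fh.read())
```

The format-translation helpers are kept separate from the tool invocation, mirroring the wrapper's two roles: communicating with the pipeline and speaking the tool's own data formats.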


Job
A job is a single execution of a node, for example a single gene prediction performed on a DNA sequence loaded into the pipeline.


Edge
An edge connects two nodes and describes the stream of objects that flows between them. To allow complex pipeline structures, a node can define multiple incoming and outgoing edges.


Object
An object is the most granular element of data traversing the pipeline. Each object is tracked by the Cyrille2 system.


Stream
A stream is a series of objects traversing an edge between two nodes.
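The relationships between these terms can be sketched as a minimal data model. The class and attribute names below are illustrative assumptions, not the actual Cyrille2 classes: a pipeline is a graph of nodes connected by edges, each edge carries a stream of objects, and each execution of a node is a job.

```python
# Minimal sketch of the glossary's data model (illustrative, not the
# Cyrille2 implementation).
from dataclasses import dataclass, field

@dataclass
class Node:
    tool: str                                   # the tool this node runs, e.g. "blast"
    jobs: list = field(default_factory=list)    # one entry per execution (job)

@dataclass
class Edge:
    source: Node
    target: Node
    stream: list = field(default_factory=list)  # objects flowing source -> target

@dataclass
class Pipeline:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_node(self, tool):
        node = Node(tool)
        self.nodes.append(node)
        return node

    def connect(self, source, target):
        # A node may take part in multiple in- and output edges.
        edge = Edge(source, target)
        self.edges.append(edge)
        return edge
```

For example, a two-node pipeline that loads sequences and BLASTs them would be built with `p.connect(p.add_node("sequence_loader"), p.add_node("blast"))`, with each loaded sequence appended as an object to the edge's stream.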