Build Flow Plugin


Plugin Information

Plugin ID: build-flow-plugin
Latest release: 0.18 (archives), released Jun 10, 2015
Dependencies: buildgraph-view (version 1.0, optional)
Maintainer: n/a (id: damien)
Links: Source Code, Issue Tracking, Pull Requests, Open Issues

Usage (installations):

2014-Nov 4739
2014-Dec 4825
2015-Jan 5122
2015-Feb 5257
2015-Mar 5839
2015-Apr 5939
2015-May 5986
2015-Jun 6351
2015-Jul 6689
2015-Aug 6750
2015-Sep 7042
2015-Oct 7307

This plugin allows managing Jenkins job orchestration using a dedicated DSL, extracting the flow logic from the jobs themselves.

You may be interested in the Pipeline plugin as a possible alternative.


This plugin is designed to handle complex build workflows (aka build pipelines) as a dedicated entity in Jenkins. Without such a plugin, managing job orchestration requires combining the parameterized-build, join, downstream-ext and a few more plugins, polluting the job configurations. The build process is then scattered across all those jobs and very complex to maintain. Build Flow enables you to define an upper-level Flow item to manage job orchestration and link-up rules, using a dedicated DSL. This DSL makes the flow definition very concise and readable.


After installing the plugin, you'll get a new entry in the job creation wizard to create a Flow. Use the DSL editor to define the flow.


The DSL defines the sequence of jobs to be built:

build( "job1" )
build( "job2" )
build( "job3" )

You can pass parameters to jobs, and get the resulting AbstractBuild when required:

b = build( "job1", param1: "foo", param2: "bar" )
build( "job2", param1: b.build.number )

Environment variables from a job can be obtained using the following, which is especially useful for getting things like the checkout revision used by the SCM plugin ('P4_CHANGELIST', 'GIT_REVISION', etc.):

def revision = b.environment.get( "GIT_REVISION" )
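Such values can then feed downstream builds. A minimal sketch (the job names and the GIT_REVISION variable are illustrative; the exact variable names depend on your SCM plugin):

```groovy
// build a job, then pass its checkout revision to a downstream job
b = build( "compile" )
def revision = b.environment.get( "GIT_REVISION" )
build( "deploy", revision: revision )
```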

You can also access some pre-defined variables in the DSL:

  • "build" the current flow execution
  • "out" the flow build console
  • "env" the flow environment, as a Map
  • "params" triggered parameters
  • "upstream" the upstream job, assuming the flow has been triggered as a downstream job for another job.

For example:

// output values
out.println 'Triggered Parameters Map:'
out.println params
out.println 'Build Object Properties:'
build.properties.each { out.println "$it.key -> $it.value" }

// use it in the flow
build("job1", parent_param1: params["param1"])
build("job2", parent_workspace:build.workspace)
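The upstream binding from the list above can be inspected in the same way. A sketch (assumes the flow was actually triggered as a downstream job; otherwise upstream is null):

```groovy
// print which job triggered this flow, if any
if (upstream != null) {
    out.println "Triggered by upstream job: ${upstream}"
}
```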

Guard / Rescue

You may need to run a cleanup job after a job (or set of jobs) whether they succeeded or not. The guard/rescue structure is designed for this use-case. It works mostly like a try/finally block in the Java language:

guard {
    build( "this_job_may_fail" )
} rescue {
    build( "cleanup" )
}
The flow result will then be the worst of the guarded jobs' results and the rescue ones.


You may also want to simply ignore the result of some jobs that are optional for your build flow. You can use an ignore block for this purpose:

ignore(FAILURE) {
    build( "send_twitter_notification" )
}
The flow will disregard the triggered build's status as long as it is no worse than the configured result. This lets you choose the threshold to ignore, following the result ordering UNSTABLE < FAILURE < ABORTED.
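For instance, to tolerate unstable test results while still failing the flow on an actual failure, an ignore(UNSTABLE) block can be used (a sketch; the job name is illustrative):

```groovy
ignore(UNSTABLE) {
    // an UNSTABLE result here won't degrade the flow result,
    // but a FAILURE or ABORTED still will
    build( "flaky_integration_tests" )
}
```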


You can ask the flow to retry a job a few times until it succeeds. This is equivalent to the retry-failed-job plugin:

retry ( 3 ) {
    build( "this_job_may_fail" )
}

The flow is strictly sequential by default, but lets you run a set of jobs in parallel and wait for their completion. This is equivalent to the join plugin:

parallel (
    // job 1, 2 and 3 will be scheduled in parallel.
    { build("job1") },
    { build("job2") },
    { build("job3") }
)
// job4 will be triggered after jobs 1, 2 and 3 complete
build("job4")
Compared to the join plugin, parallel can be used for more complex workflows, where the parallel branches can sequentially chain multiple jobs:

parallel (
    {
        build("job1")
        build("job2")
    },
    {
        build("job3")
        build("job4")
    }
)
You can also "name" the parallel executions, so you can later use the reference to extract parameters / status:

join = parallel ([
        first:  { build("job1") },
        second: { build("job2") },
        third:  { build("job3") }
])

// now, use results from parallel execution
build("job4",
      param1: join.first.result.name,
      param2: join.second.lastBuild.parent.name)

and this can be combined with the other orchestration keywords:

parallel (
    {
        guard {
            build("job1")
        } rescue {
            build("job2")
        }
    },
    {
        retry 3, {
            build("job3")
        }
    }
)

Extension Point

Other plugins that expose themselves to the build flow can be accessed with extension.'plugin-name'.

So a plugin exposing the extension name my-plugin-name might be accessed like:

def x = extension.'my-plugin-name'

Plugins implementing extension points

(search GitHub for "BuildFlowDSLExtension" to find them)

Implementing Extension

Write the extension in your plugin:

@Extension(optional = true)
public class MyBuildFlowDslExtension extends BuildFlowDSLExtension {

    /**
     * The extensionName to use for the extension.
     */
    public static final String EXTENSION_NAME = "my-plugin-name";

    @Override
    public Object createExtension(String extensionName, FlowDelegate dsl) {
        if (EXTENSION_NAME.equals(extensionName)) {
            return new MyBuildFlowDsl(dsl);
        }
        return null;
    }
}
Write the actual extension:

public class MyBuildFlowDsl {
    private FlowDelegate dsl;

    /**
     * Standard constructor.
     * @param dsl the delegate.
     */
    public MyBuildFlowDsl(FlowDelegate dsl) {
        this.dsl = dsl;
    }

    /**
     * Hello World.
     */
    public void hello() {
        ((PrintStream) dsl.getOut()).println("Hello World");
    }
}
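With such an extension installed, the flow DSL can then invoke it. A sketch (assumes the extension name my-plugin-name from the example above):

```groovy
// inside a build flow DSL script: look up the DSL extension by name
def myDsl = extension.'my-plugin-name'
myDsl.hello()   // writes "Hello World" to the flow console
```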


And more ...

Future releases may introduce support for more features and DSL syntax for advanced job orchestration.


Like any job, the Flow is executed by a trigger, and the Cause is exposed to the flow DSL. If you want to implement a build pipeline that runs after a commit to your SCM, you can configure the flow to be triggered when the first scm-polling job runs, but you can just as well use any other trigger (manual trigger, XTrigger plugin, ...) for your flow to integrate into a larger process.


JENKINS-16980 is the major issue preventing a 1.0 release: build-flow requires the RUN_SCRIPTS permission, as it allows running arbitrary code on the master (bypassing security). Some sandboxing / security manager needs to be set up to allow editing the DSL with lower permissions, possibly using the groovy sandbox, or maybe a more general-purpose SecurityManager. Such a component would be useful for all other groovy-related plugins.

JENKINS-19118 make DSL resumable: the implementation will probably have a significant impact on the DSL execution engine. The current plan is to allow the DSL to be executed multiple times, with previously-executed "build()" invocations becoming no-ops. This assumes the groovy scripting in the DSL is strictly idempotent.

With those two issues in mind, the preferred way to add new keywords to the DSL is to define an extension in a dedicated plugin.

Need help?

Please don't enlarge this page's comment list even further; we can hardly follow it. Join the jenkins-user mailing list and explain your use-case there.


Work in Progress

0.14 (released Sep. 09, 2014)

  • enable test-jar for plugins leveraging the extension point.
  • use build.displayName in JobInvocation.toString.

0.13 (released Sep. 09, 2014)

  • read DSL from a file
  • fix buildgraph when using 2nd level flows.
  • swap dependency with buildgraph-view.

0.12 (release May 14, 2014)

  • wait for build to be finalized
  • fixed-width font in DSL box
  • print stack traces when parallel builds fail
  • restore ability to use a workspace, as a checkbox in flow configuration (useful for SCM using workspace)


  • no changes (added the compatibility warning to update center)

0.11 (released Apr. 8, 2014)

  • plugin re-licensed to MIT
  • build flow no longer has a workspace
  • Validation of the DSL when configuring the flow
  • If a build could not be scheduled show the underlying cause
  • extensions can contribute to dsl help
  • aborting a flow causes all jobs scheduled and started by the flow to be aborted
  • retry is configurable
  • misc tweaks to UI and small fixes


  • add support for SimpleParameters (parameter that can be set from a String)
  • mechanism to define DSL extensions
  • visualization moved to build-graph-view plugin
  • minor fixes

0.8 (released Feb. 11, 2013)

  • Fix folder support
  • Basic flow visualization support (thanks to ~gregory144)
  • Alternative map-style way to define parallel executions (thanks to Jeremy Voorhis)

0.7 (released Jan. 11, 2013)

  • Add support for ignore(Result)

0.6 (released November 24, 2012)

  • Enable "read job" permissions for Anonymous (JENKINS-14027)
  • Print errors as .. errors
  • Better failed test reporting
  • Use transient ref to Job/Build …
  • Fix a NullPointer to render FlowCause after jenkins restart
  • Use futures for synchronization plus publisher support plus console println cleanup (Pull request #14 from coreyoconnor)
  • Call to Parametrized jobs correctly use their default values (Pull request #16 from dbaeli)

0.5 (released September 03, 2012)

  • fixed support for publishers (JENKINS-14411)
  • improved job configuration UI (dedicated section, help, prepare code mirror integration)
  • improved error message

0.4 (released June 28, 2012)

  • fixed cast error parsing DSL ('Collections$UnmodifiableRandomAccessList' to class 'long') on some versions of Jenkins
  • add groovy bindings for current build as "build", console output as "out", environment variables Map as "env", and triggered parameters of current build as "params"
  • fixed bug when many "Parameters" links were shown for each triggered parameter on build page

0.3 (released April 12, 2012)

  • add support for hierarchical project structure (typically : cloudbees folders plugin)

0.2 (released April 9, 2012)

  • changed parallel syntax to support nested flows concurrent execution
  • fixed serialization issues

0.1 (released April 3, 2012)

  • initial release with DSL-based job orchestration

