"If" statement in Orquesta

Given the following input:

hosts = [{
  "name": "serverA",
  "os": "windows"
}, {
  "name": "serverB",
  "os": "linux"
}]

I wish to do the equivalent of the following pseudocode in my Orquesta workflow:

for host in hosts:
  if host["os"] == "windows":
    run_cmd_on_windows(host["name"])
  elif host["os"] == "linux":
    run_cmd_on_linux(host["name"])
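
In runnable Python, that pseudocode looks like this (the two runner functions are stand-ins for the real remote-command actions, not actual StackStorm calls):

```python
# Placeholder runners standing in for the real remote-command actions.
def run_cmd_on_windows(name):
    return f"ran on windows host {name}"

def run_cmd_on_linux(name):
    return f"ran on linux host {name}"

def dispatch(hosts):
    # Dispatch each host to the runner matching its "os" field.
    results = []
    for host in hosts:
        if host["os"] == "windows":
            results.append(run_cmd_on_windows(host["name"]))
        elif host["os"] == "linux":
            results.append(run_cmd_on_linux(host["name"]))
    return results

hosts = [
    {"name": "serverA", "os": "windows"},
    {"name": "serverB", "os": "linux"},
]
```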

However, I could not find a way to do an if statement with Orquesta. I thought I could do it with the when directive, like so:

input:
  - hosts

tasks:
  determine_os:
    with:
      items: "{{ ctx('hosts') }}"
    action: "core.noop"
    next:
      - when: "{{ item()['os'] == 'windows' }}"
        publish: currentHostName="{{ item()['name'] }}"
        do: run_cmd_on_windows
      - when: "{{ item()['os'] == 'linux' }}"
        publish: currentHostName="{{ item()['name'] }}"
        do: run_cmd_on_linux

  run_cmd_on_windows:
    action: "core.winrm_ps_cmd"
    input:
      host: "{{ ctx('currentHostName') }}"
      cmd: "my-command --arg"

  run_cmd_on_linux:
    action: "core.remote_sudo"
    input:
      host: "{{ ctx('currentHostName') }}"
      cmd: "my-command --arg"

However, it seems that in the next block, the item() function returns None, making the comparison impossible.

Is it possible to do such an if statement?

Your with.items only applies to the determine_os task in which it is specified; it does not also apply to the run_cmd_on_windows and run_cmd_on_linux tasks.

There are obtuse ways to iterate over lists and switch between tasks based on item data, but I don’t think that’s necessary in this case.

A cleaner approach is to implement this as two separate tasks, one for Linux hosts and one for Windows hosts, and filter the master hosts list in each task's with.items. I'm more familiar with using YAQL to filter lists, but you could also use Jinja if you wanted to (with the appropriate Jinja syntax).
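
In plain Python terms, the YAQL filtering in each with.items amounts to partitioning the hosts list by the "os" field (the data here is the example list from the question):

```python
hosts = [
    {"name": "serverA", "os": "windows"},
    {"name": "serverB", "os": "linux"},
]

def hosts_with_os(hosts, os_name):
    # Keep only the hosts whose "os" field matches os_name,
    # mirroring what the YAQL filter in with.items does.
    return [h for h in hosts if h.get("os") == os_name]

windows_hosts = hosts_with_os(hosts, "windows")
linux_hosts = hosts_with_os(hosts, "linux")
```

Each task then iterates over only its own sublist, so no per-item branching is needed.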

Here’s how I would write that:

input:
  - hosts

tasks:
  # This task isn't technically needed, but makes the parallel
  # execution of the run_cmd tasks more apparent to the next
  # person who has to modify this workflow
  init:
    action: core.noop
    next:
      # Run both tasks in parallel
      - do:
          - run_cmd_on_windows_hosts
          - run_cmd_on_linux_hosts

  run_cmd_on_windows_hosts:
    with:
      items: '<% ctx(hosts).where($.os = "windows") %>'
    ...
    next:
      - do: wait_for_all_cmds

  run_cmd_on_linux_hosts:
    with:
      items: '<% ctx(hosts).where($.os = "linux") %>'
    ...
    next:
      - do: wait_for_all_cmds

  wait_for_all_cmds:
    join: all  # wait for both inbound parallel tasks to finish
    ...

You can use http://yaqluator.com to throw in some dummy data and test out the YAQL expressions in your with.items. The full YAQL standard library is available: https://yaql.readthedocs.io/en/latest/standard_library.html


We’ve also fixed some Orquesta bugs dealing with parallel executions and joining in the upcoming StackStorm version 3.2.

However, if you’re currently running a stable production release, you’re still on ST2 v3.1, so I would instead run those tasks in series:

input:
  - hosts

tasks:
  run_cmd_on_windows_hosts:
    with:
      items: '<% ctx(hosts).where($.os = "windows") %>'
    ...
    next:
      - do: run_cmd_on_linux_hosts

  run_cmd_on_linux_hosts:
    with:
      items: '<% ctx(hosts).where($.os = "linux") %>'
    ...
    next:
      - do: ...

Thanks, this works fine!

For future reference, what do you have in mind with “There are obtuse ways to iterate over lists and switch between tasks based on item data, but I don’t think that’s necessary in this case” ?

And it seems I had indeed stumbled upon a bug with parallel executions: my rule returned an error if I ran it with both a Linux and a Windows host, even though it worked fine with each of them separately. Putting the tasks in series with next solved the issue :smiley:

Also: for some reason, with an unconditional next, the workflow reported success even when a Windows host failed, as long as all the Linux hosts ran fine. Adding when: "{{ succeeded() }}" to the transition solved the problem.
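
A toy model of why that happens (this is a sketch of transition semantics, not Orquesta's actual engine code): an unguarded transition fires regardless of the task's state, so the workflow marches on past the failure, while a succeeded() guard leaves a failed task's transitions unfired and the failure surfaces.

```python
def fired_transitions(task_state, transitions):
    """Return the targets of the transitions that fire for a task state.

    Each transition is a (guard, target) pair; a guard of None means the
    transition is unconditional.  Toy model only, not Orquesta code.
    """
    return [
        target
        for guard, target in transitions
        if guard is None or guard(task_state)
    ]

def succeeded(state):
    return state == "succeeded"

# Unconditional transition: fires even when the task failed, so the
# workflow keeps going and can still report overall success.
unconditional = [(None, "run_cmd_on_linux_hosts")]
# Guarded transition: does not fire on failure, so the failure surfaces.
guarded = [(succeeded, "run_cmd_on_linux_hosts")]
```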

This is the “obtuse” solution:

input:
  - hosts

vars:
  - loop_iteration_count: 0

tasks:
  # Necessary so Orquesta knows where to start
  init:
    action: core.noop
    next:
      - do: start_loop

  start_loop:
    action: core.noop
    next:
      - when: <% ctx().loop_iteration_count < ctx().hosts.len() %>
        do: switch_on_host_os
      - when: <% ctx().loop_iteration_count >= ctx().hosts.len() %>
        do: end_loop

  switch_on_host_os:
    action: core.noop
    next:
      - when: <% ctx().hosts[ctx().loop_iteration_count]?.os = "windows" %>
        do: run_cmd_on_windows_host
      - when: <% ctx().hosts[ctx().loop_iteration_count]?.os = "linux" %>
        do: run_cmd_on_linux_host
      - when: <% not (ctx().hosts[ctx().loop_iteration_count]?.os in ["windows", "linux"]) %>
        do: ...  # How to handle non-Windows and non-Linux hosts is application-specific

  run_cmd_on_windows_host:
    action: ...
    input:
      # host: <% ctx().hosts[ctx().loop_iteration_count] %>
      ...
    next:
      - when: <% succeeded() and ctx().loop_iteration_count < ctx().hosts.len() %>
        publish:
          - loop_iteration_count: <% ctx().loop_iteration_count + 1 %>
        do: start_loop
      - when: <% failed() %>
        do: ...  # How to handle per-item failures is application-specific

  run_cmd_on_linux_host:
    action: ...
    input:
      # host: <% ctx().hosts[ctx().loop_iteration_count] %>
      ...
    next:
      - when: <% succeeded() and ctx().loop_iteration_count < ctx().hosts.len() %>
        publish:
          - loop_iteration_count: <% ctx().loop_iteration_count + 1 %>
        do: start_loop
      - when: <% failed() %>
        do: ...  # How to handle per-item failures is application-specific

  end_loop:
    action: core.noop
    next:
      - do: ...

This version forces users to explicitly specify how to handle error conditions, and it will probably also run a lot slower than the with.items version, especially in ST2 v3.2 (once that is out) and later versions of ST2.
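
The control flow of that explicit loop, sketched in plain Python (the runner and fallback callables are placeholders for whatever your application needs):

```python
def run_loop(hosts, run_windows, run_linux, on_other_os):
    # Walk the list by index, mirroring the loop_iteration_count counter
    # in the workflow, and dispatch on each host's "os" field.
    loop_iteration_count = 0
    while loop_iteration_count < len(hosts):
        host = hosts[loop_iteration_count]
        os_name = host.get("os")
        if os_name == "windows":
            run_windows(host["name"])
        elif os_name == "linux":
            run_linux(host["name"])
        else:
            on_other_os(host)
        loop_iteration_count += 1

calls = []
run_loop(
    [
        {"name": "serverA", "os": "windows"},
        {"name": "serverB", "os": "linux"},
        {"name": "serverC", "os": "macos"},
    ],
    run_windows=lambda name: calls.append(("windows", name)),
    run_linux=lambda name: calls.append(("linux", name)),
    on_other_os=lambda host: calls.append(("other", host["name"])),
)
```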


I’ve included this example in the st2 wiki on GitHub.

I just started that page, and really that content should live in our documentation, but it's a good place to start collecting Orquesta implementation patterns. Feel free to contribute to it!