Restrict a particular run to a particular node within an HA deployment?

How can I restrict a particular action run so that it executes only on a particular node within a high availability deployment?

My use case is that I want to install a StackStorm pack on both HA nodes. The problem is that when I run `pack install`, the API puts an execution request on RabbitMQ. One of the two HA nodes then picks it up and runs it, and not necessarily the node I want it to run on. This installs the pack (and therefore the pack’s files) on one of the nodes, but not necessarily on both.

I want to be able to put an action on the queue and force StackStorm to run it on a particular node, so that I can force the pack to install correctly on both.
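In the meantime, a manual per-node workaround is possible: install the pack content on each node directly instead of going through the execution queue. The sketch below is hypothetical and only prints the commands it would run (a dry run); the hostnames, the example pack repository, and the assumption of SSH access are all placeholders, not anything from the thread.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder hostnames for the two HA nodes and an example pack repo;
# adjust these for your own deployment.
NODES="st2-node-1 st2-node-2"
PACK_REPO="https://github.com/StackStorm-Exchange/stackstorm-sensu.git"

for node in $NODES; do
  # Dry run: echo the per-node commands instead of executing them.
  # On each node, clone the pack into the packs directory, then
  # re-register content so st2 picks it up.
  echo "ssh $node \"git clone $PACK_REPO /opt/stackstorm/packs/sensu\""
  echo "ssh $node \"st2ctl reload --register-all\""
done
```

Remove the `echo` wrappers to actually run the commands. Note that cloning a pack by hand skips the pack's Python requirements; `st2 pack install` normally handles virtualenv setup as well.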

@armab informs me this is a future StackStorm feature, and points me to this feature request:

Yes, there is a feature request (Feature req: Partitioning actions to execute on separate nodes · Issue #3096 · StackStorm/st2 · GitHub) for partitioning executions so they can be scheduled on specific nodes (similar to Partitioning Sensors — StackStorm 2.10.4 documentation). However, scheduling an execution on a specific instance vs. running an execution on all st2actionrunner instances are two different stories.
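For comparison, sensor partitioning (the feature the action-partitioning request is modeled on) is configured in `st2.conf`. This is an illustrative sketch based on the Partitioning Sensors documentation; the node name and provider value are example values, and the exact option syntax should be checked against the docs for your StackStorm version:

```ini
# /etc/st2/st2.conf (sketch) -- per-node sensor partitioning
[sensorcontainer]
# Unique name identifying this node (example value)
sensor_node_name = st2-node-1
# Partition provider deciding which sensors this node runs
partition_provider = name:kvstore
```

No equivalent exists yet for action runners, which is what issue #3096 asks for.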

What you’re trying to do with `st2 pack install` in an HA environment, running it on all nodes, is not encouraged.

Imagine you scaled out your HA setup to hundreds of st2actionrunners. Running `st2 pack install` on all nodes not only generates an excessive, unneeded load spike; there is also a good chance that only some of those `st2 pack install ...` executions will succeed. A big chunk of executions may fail for random reasons: resources, disk, memory, failed networking, environment differences (e.g. cloud), remote content changes, the pack version deployed, etc. In theory this sounds like an idempotent operation; in practice, because of the many moving parts, it is not (cc @kami).
This means some of your actionrunners will be in an unsynced state, which will result in a new set of problems with other operations.

For an HA environment, you currently have a few options:

These approaches guarantee that all st2actionrunners would work with the same content.
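The specific options aren't quoted above, but one approach commonly described in the StackStorm HA documentation is sharing pack content across nodes via shared storage, so every runner sees identical files. A hypothetical `/etc/fstab` sketch (the NFS server address and export paths are placeholders):

```
# /etc/fstab on each StackStorm node (hypothetical NFS server):
# mount pack content and virtualenvs from shared storage so all
# st2actionrunners work with the same files.
nfs.example.com:/export/st2/packs        /opt/stackstorm/packs        nfs  defaults  0 0
nfs.example.com:/export/st2/virtualenvs  /opt/stackstorm/virtualenvs  nfs  defaults  0 0
```

With shared storage, a single `st2 pack install` updates the content once, and every node picks it up.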

That’s very helpful, thanks @armab