How can I manage pack versions in an HA st2packs sidecar?


In my scenario, I’m deploying st2 workflows to an HA cluster. Because it’s HA, st2 pack install doesn’t work; instead, I’m supposed to create an st2packs image that is loaded as a sidecar by the pods that need it.

My question is about versioning. I was hoping to build a single ‘repository of packs’ into the Docker image (as a git repo with all tags, and therefore all versions, available), and then delay selecting the specific pack version to use until the last moment.

This would be more in line with the idea of a pack repository.

Can somebody confirm that this versioning approach would not work, and that in fact the only way to get HA StackStorm to function is to copy the exact version of each pack into the custom st2packs Docker image? That would mean that if the customer wanted to change, upgrade, or downgrade a pack, I would have to build a whole new image with the correct version copied in and deploy that whole image. And can you confirm the consequence: a single pack requiring an upgrade means an entire rebuild of that image and a redeployment of the custom st2packs image to the HA cluster?
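For context, the single-version image build I’m describing would look roughly like this. This is a minimal sketch following the custom-packs-image pattern from the stackstorm-ha docs; the base image name follows the st2packs-dockerfiles convention, the `pack=version` pinning syntax mirrors `st2 pack install`, and `my_pack` / `1.2.0` are placeholders, so adjust all of these to your setup:

```dockerfile
# Sketch: a custom st2packs image with one exact, pinned pack version baked in.
# Rebuilding with a different PACK_VERSION is the only way to change versions.
FROM stackstorm/st2packs:runtime

# Placeholder pack name and version -- pinned so the image is reproducible.
ARG PACK=my_pack
ARG PACK_VERSION=1.2.0

# st2-pack-install fetches the pack and pre-generates its virtualenv at build time,
# so the image needs network access during `docker build`, not at run time.
RUN st2-pack-install ${PACK}=${PACK_VERSION}
```

The resulting image is then tagged (e.g. with the bundle version) and referenced from the Helm values.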

If this is the case, is this by design, or are there plans to improve it in future? While I understand an HA deployment might need to have fixed versions, I don’t understand why a selection of versions can’t be provided to choose from at deploy time. That would at least allow me to say “here’s a repo with all the versions, pick one when you deploy”, and I could reuse the same repo image somewhere else.

It just seems that an HA cluster combined with an air-gapped environment makes the whole ‘you can use a git repo to version your packs’ idea redundant.

The main reason for asking is that if I have several deployments of StackStorm, all with the same packs but each with different versions of them, I’m going to have to generate and manage a custom st2packs Docker image for every installation. It’s not impossible to manage, but it’s one hell of a trade-off just for HA.

Yes, the requirement is to package all the custom st2 packs in a Docker image. Version that image, deploy it, and change the reference to it in the Helm values.yaml. This way you can switch to a new pack content bundle with minimal downtime and roll back to a previous Docker image version quickly, if needed.
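Concretely, switching bundles is just a values change. This is a sketch of the relevant values.yaml fragment; the exact key layout depends on your stackstorm-ha chart version, and the registry, image name, and tag here are placeholders:

```yaml
# Hypothetical values.yaml fragment for the stackstorm-ha Helm chart.
# Bump the tag to roll pack content forward; revert it to roll back.
st2:
  packs:
    images:
      - repository: registry.example.com   # placeholder registry
        name: my-st2packs                  # placeholder custom pack bundle image
        tag: "1.4.0"                       # the pack content bundle version
        pullPolicy: IfNotPresent
```

After changing the tag, a `helm upgrade` rolls the pods so they pick up the new sidecar; `helm rollback` (or reverting the tag) restores the previous bundle.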

It looks like you already have a good understanding of every detail of the question you were asking.

Installing packs from git at run time is not acceptable for an HA deployment for a few reasons. First, st2 pack install is slow. Second, pack install is not fully idempotent and can fail if something goes wrong with the upstream, networking, pip, repositories, versions, etc.
An HA deployment, by contrast, should be fast: you want the cluster to be available as soon as possible, and you want it to be immutable, reproducible, and able to roll back, for overall deployment stability in an HA environment. We found this strategy the best fit for a Docker-like environment and model.

So the way pack content management works in K8s HA is just a trade-off. The alternative is to use some shared, RW-many, NFS-like storage for managing pack content, which has its own set of pros and cons.

A bit more context:

As for bundling several versions of st2 integration packs in a single Docker image so you could switch between them “on the fly”: that is something that would need to be supported in StackStorm core first, with easy mechanisms to manage it.
While this sounds interesting, it also means there would be a technical need to run st2 pack install and pre-generate virtualenvs for every possible version of every pack you want to ship, as different versions may have different pip dependencies.


Thanks for confirming. I didn’t want to commit to different versions of the image unless it was absolutely necessary, and it looks like it is.

Yes, I noticed that the virtualenv is mentioned in the docs. Do I understand correctly that the st2-pack-install tool generates the pip environments automatically from each pack’s requirements.txt and installs them into the same Docker image (with the /virtualenv mounted as a sidecar too), and so must have access to the internet when the image is built?

I was actually planning to iterate through each pack, using its requirements.txt to pre-generate the virtualenvs. But I guess with the single-version approach only one virtualenv is required per pack, so maybe that’s a good thing.
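The per-pack iteration I had in mind would look something like this. It is only a sketch: st2-pack-install normally does this for you, the directory arguments mirror the standard /opt/stackstorm/packs and /opt/stackstorm/virtualenvs layout, and it uses `python3 -m venv` where st2 itself uses virtualenv:

```shell
# Sketch: pre-generate one virtualenv per pack from each pack's requirements.txt.
# make_pack_venvs PACKS_DIR VENVS_DIR
make_pack_venvs() {
  packs_dir="$1"
  venvs_dir="$2"
  for pack in "$packs_dir"/*/; do
    name=$(basename "$pack")
    # One isolated environment per pack, named after the pack.
    python3 -m venv "$venvs_dir/$name"
    # Install the pack's pip dependencies into its own venv, if it declares any.
    if [ -f "$pack/requirements.txt" ]; then
      "$venvs_dir/$name/bin/pip" install -r "$pack/requirements.txt"
    fi
  done
}
```

Run at image build time (not at pod start), this keeps the internet dependency confined to `docker build`, which fits the immutable-image model described above.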