StackStorm WebUI throws a 400 (The plain HTTP request was sent to HTTPS port)

(Hnanchahal) #1

We are configuring the st2-docker/runtime/kubernetes-1ppc at master · StackStorm/st2-docker · GitHub deployment for StackStorm in our sandbox.

We have our nginx-ingress controller, where we terminate the SSL connection. It looks like the WebUI uses nginx internally, and when we access the WebUI we receive the error "The plain HTTP request was sent to HTTPS port".

Can anyone help us with a workaround? We will ultimately move to Helm, but we need our internal team to get started with StackStorm quickly, and we have already invested a couple of days in setting this up.

(Lindsay Hill) #2

The web UI nginx config does an HTTP -> HTTPS redirect and SSL termination. It then reverse proxies to the various st2 services on ports 9100, 9101, etc.

If you want to use your own SSL termination frontend, that’s fine. You could completely replace the existing nginx config and reverse proxy directly to the st2 services. Or, if you want to use plaintext from your nginx container to the st2 nginx instance, you could change the st2 nginx config to not redirect HTTP.
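The second approach could look something like the sketch below: a plain-HTTP server block in the st2 nginx config with the redirect removed. The upstream hostnames (`st2auth`, `st2api`, `st2stream`) and the webui root path are assumptions for a typical container setup; adjust them to your environment.

```nginx
# Hypothetical sketch, not the shipped config: serve the WebUI and
# proxy the st2 services over plain HTTP, with no HTTPS redirect.
server {
    listen 80;

    # Service hostnames below are placeholders; in a single-container
    # setup these would be 127.0.0.1.
    location /auth/ {
        proxy_pass http://st2auth:9100/;
    }
    location /api/ {
        proxy_pass http://st2api:9101/;
    }
    location /stream/ {
        proxy_pass http://st2stream:9102/;
        proxy_buffering off;   # needed for the event stream
    }
    location / {
        root  /opt/stackstorm/static/webui/;
        index index.html;
    }
}
```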

(Lindsay Hill) #3

Also, if you just need to stand up a quick dev/test environment, it might be easier to do the scripted install on an Ubuntu or RHEL VM.

That removes some container-related complications and distractions at this stage.

(Hnanchahal) #4

Can you point me to where I can find the st2 nginx config?

(Lindsay Hill) #5

/etc/nginx/conf.d/st2.conf I think. Might depend a little on the OS & setup.

(Hnanchahal) #6

I mean, I used the official StackStorm Docker image to run on Kubernetes. Are you saying I should update the configs as they exist on the containers? Or do I update the configs, package the contents, and then deploy?

(Lindsay Hill) #7

Are you using the Helm charts, or something custom?

(Hnanchahal) #8

I am using GitHub - StackStorm/st2-docker: Official docker container for StackStorm.

(Lindsay Hill) #9

If you’re running on Kubernetes, you’re better off using the helm charts at stackstorm-ha.

You can of course do your own thing using those other containers; it’s up to you. If you want to modify the nginx config in those containers, you’d need to build a custom container.

But if you’re doing your own external SSL termination, then there’s a couple of slightly easier options:
1/ Do SSL from your external SSL termination point to the st2web container. Don’t switch to plaintext.
2/ Look at the default nginx config here: st2/st2.conf at master · StackStorm/st2 · GitHub - note the reverse proxy config in there. You could implement the same thing on your external SSL termination point, so traffic goes <external SSL termination/load balancer> -> direct to the st2auth and st2api containers, rather than going through the st2web nginx in between.
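Option 2/ might look roughly like this on your external termination point: terminate TLS there and mirror the routing from the upstream st2.conf, proxying straight to the service ports. Hostnames, certificate paths, and the st2web upstream are placeholders, not something the containers define for you.

```nginx
# Hypothetical sketch: TLS terminated here, requests proxied
# directly to the st2 services (no st2web nginx in the path).
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/st2.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/st2.key;

    location /auth/ {
        rewrite ^/auth/(.*) /$1 break;
        proxy_pass http://st2auth:9100;
    }
    location /api/ {
        rewrite ^/api/(.*) /$1 break;
        proxy_pass http://st2api:9101;
    }
    location /stream/ {
        rewrite ^/stream/(.*) /$1 break;
        proxy_pass http://st2stream:9102;
        proxy_buffering off;   # event stream needs unbuffered responses
    }
    location / {
        proxy_pass http://st2web;   # or serve the webui static files here
    }
}
```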

(Eugen C.) #10

@hnanchahal Just be aware that kubernetes-1ppc is deprecated and will be removed soon in favor of stackstorm-ha, according to the st2-docker/runtime/kubernetes-1ppc at master · StackStorm/st2-docker · GitHub notes.

As for Helm/Kubernetes, the Ingress controller, and HTTP vs HTTPS: per several discussions (K8s Ingress Controller · Issue #6 · StackStorm/stackstorm-ha · GitHub and Add support to specify images name and to enable HTTP for st2web by GGabriele · Pull Request #44 · StackStorm/stackstorm-ha · GitHub), what you want will be supported in the future. Our plan is the following:

  1. Expose Ingress controller settings via Helm values.yaml to allow users to configure the SSL/TLS negotiation layer on their own (optional).
  2. Change st2web Docker image so it will respond on HTTP by default (currently HTTPS).

^^ That will cover your case and also follows K8s/Helm best practices, giving some more flexibility.

As a workaround for your current situation, you can simply configure your infra to work like this:

HTTPS (your ingress controller or load balancer or whatever) <-> HTTPS (st2web nginx)

e.g. place your HTTPS load balancer or Ingress in front of the existing st2web HTTPS listener, and point it at the HTTPS endpoint, not the HTTP one. That adds some overhead from the double encryption, but overall it should be fine as a temporary solution.
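With the NGINX ingress controller, re-encrypting to the backend is typically done with the `backend-protocol` annotation (older controller versions used `secure-backends` instead). A hedged sketch, assuming a Service named `st2web` exposing port 443 and a placeholder hostname:

```yaml
# Hypothetical sketch: ingress terminates client TLS, then talks
# HTTPS to st2web's existing listener. Names/ports are placeholders.
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: st2web
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: st2.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: st2web
              servicePort: 443
```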

That’s also what @lhill suggested as option #1, and we even use something similar in parts of our internal infrastructure.