StackStorm with HA on Kubernetes - persistentVolumeClaims configuration issue

Hi,
I am deploying StackStorm (currently the non-enterprise version; eventually we also plan to use EWC) on Kubernetes. I followed the documentation (StackStorm HA Cluster in Kubernetes - BETA — StackStorm 2.10.4 documentation) but was not successful in installing StackStorm HA:

Used Command: helm install stackstorm/stackstorm-ha
 Error: timed out waiting for the condition

To investigate further, I checked the status of the pods that were installed and found many pods in ‘CrashLoopBackOff’ or ‘Error’ status; they are listed below:

[root@k8s-master ~]# kubectl get pod | grep Crash
NAME                                                   READY   STATUS             RESTARTS   AGE
existing-cricket-st2actionrunner-5f4654fcc7-5nn4z       0/1     CrashLoopBackOff   6          16m
existing-cricket-st2actionrunner-5f4654fcc7-8gqfs       0/1     CrashLoopBackOff   6          16m
existing-cricket-st2actionrunner-5f4654fcc7-bn4xp       0/1     CrashLoopBackOff   6          16m
existing-cricket-st2actionrunner-5f4654fcc7-hc6nh       0/1     CrashLoopBackOff   6          16m
existing-cricket-st2actionrunner-5f4654fcc7-lsp25       0/1     CrashLoopBackOff   6          16m
existing-cricket-st2api-5b57cdf65-4jmvx                 0/1     CrashLoopBackOff   6          16m
existing-cricket-st2api-5b57cdf65-x2z8p                 0/1     CrashLoopBackOff   6          16m
existing-cricket-st2auth-c44788bf-q6gvm                 0/1     CrashLoopBackOff   6          16m
existing-cricket-st2auth-c44788bf-tw5fz                 0/1     CrashLoopBackOff   6          16m
existing-cricket-st2garbagecollector-6577497dff-64fmp   0/1     CrashLoopBackOff   6          16m
existing-cricket-st2notifier-84dfd666c-hn64k            0/1     CrashLoopBackOff   6          16m
existing-cricket-st2notifier-84dfd666c-nwt5q            0/1     CrashLoopBackOff   6          16m
existing-cricket-st2rulesengine-6b5f86744d-fs58d        0/1     CrashLoopBackOff   6          16m
existing-cricket-st2rulesengine-6b5f86744d-fwmdg        0/1     CrashLoopBackOff   6          16m
existing-cricket-st2scheduler-c5cfd546d-5ztsg           0/1     CrashLoopBackOff   6          16m
existing-cricket-st2scheduler-c5cfd546d-spmg5           0/1     CrashLoopBackOff   6          16m
existing-cricket-st2sensorcontainer-6776b97975-hcn8h    0/1     CrashLoopBackOff   6          16m
existing-cricket-st2stream-c6ddc754d-lp8w2              0/1     CrashLoopBackOff   6          16m
existing-cricket-st2stream-c6ddc754d-pmkhz              0/1     CrashLoopBackOff   6          16m
existing-cricket-st2timersengine-5bbc866d5-5s657        0/1     CrashLoopBackOff   6          16m
existing-cricket-st2workflowengine-6747fcbd4b-dvf6j     0/1     CrashLoopBackOff   6          16m
existing-cricket-st2workflowengine-6747fcbd4b-gbsfc     0/1     CrashLoopBackOff   6          16m
[root@k8s-master ~]# kubectl get pod | grep Error
existing-cricket-st2actionrunner-5f4654fcc7-5nn4z       0/1     Error              24         117m
existing-cricket-st2scheduler-c5cfd546d-spmg5           0/1     Error              24         117m
existing-cricket-st2stream-c6ddc754d-lp8w2              0/1     Error              24         117m

The rest of the pods were in ‘Running’ status. I do not know what went wrong. Is there anything I should do before installation, or any prerequisites I am missing? I would appreciate your advice.

Thanks!

Hello. I’m curious about the output of, for example, kubectl describe po/existing-cricket-st2api-5b57cdf65-4jmvx… that should give you a better idea of what is causing the crash.
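
If it helps, the usual pair of commands for a pod stuck in CrashLoopBackOff looks roughly like this (pod name taken from your listing); the --previous flag prints the logs of the last crashed container rather than the current restart attempt:

```
# Events section shows scheduling problems, failed probes, OOM kills, etc.
kubectl describe pod existing-cricket-st2api-5b57cdf65-4jmvx

# Logs from the previous (crashed) container instance
kubectl logs existing-cricket-st2api-5b57cdf65-4jmvx --previous
```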


Hi warrenvw, thanks very much for your interest. Here is the output I obtained for that command. I am not able to interpret the entire output, but I can see that the st2api container has restarted many times and has finally terminated.

    kubectl describe po/existing-cricket-st2api-5b57cdf65-4jmvx
    Name:               existing-cricket-st2api-5b57cdf65-4jmvx
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               k8s-worker-1/192.168.1.54
    Start Time:         Fri, 18 Jan 2019 08:14:01 +0530
    Labels:             app=st2api
                        chart=stackstorm-ha-0.8.3
                        heritage=Tiller
                        pod-template-hash=5b57cdf65
                        release=existing-cricket
                        support=community
                        tier=backend
                        vendor=stackstorm
    Annotations:        checksum/config: 43c786c09d83c8a43d59cba9c956fa7edb840b0d18f165867d3278b6ef1fc254
    Status:             Running
    IP:                 10.36.0.9
    Controlled By:      ReplicaSet/existing-cricket-st2api-5b57cdf65
    Containers:
      st2api:
        Container ID:   docker://e52b260955c7e856f65a059089261e25a381c39d2bb5de0f41c3e99873ce93e1
        Image:          stackstorm/st2api:2.10dev
        Image ID:       docker-pullable://stackstorm/st2api@sha256:994e7733bc1d76f7a6b9ec0a5aeee4d604f61c2eca5f68f0d0d845f316f9ac0f
        Port:           9101/TCP
        Host Port:      0/TCP
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Sat, 19 Jan 2019 02:42:21 +0530
          Finished:     Sat, 19 Jan 2019 02:42:52 +0530
        Ready:          False
        Restart Count:  198
        Environment Variables from:
          existing-cricket-st2-urls  ConfigMap  Optional: false
        Environment:                 <none>
        Mounts:
          /etc/st2/st2.docker.conf from st2-config-vol (rw)
          /etc/st2/st2.user.conf from st2-config-vol (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      st2-config-vol:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      existing-cricket-st2-config
        Optional:  false
      default-token-cr842:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-cr842
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason   Age                         From                   Message
      ----     ------   ----                        ----                   -------
      Warning  BackOff  <invalid> (x4543 over 11h)  kubelet, k8s-worker-1  Back-off restarting failed container

Let’s check why that service actually exited with code 1:

kubectl logs po/existing-cricket-st2api-5b57cdf65-4jmvx

Going one step further: StackStorm relies on MongoDB and RabbitMQ clusters as backends. Please also check whether those pods are running as part of the entire Helm release. If the st2 services can’t connect to them, they’ll keep exiting until a connection is established.
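
For example, something like this should show whether the backend pods are up and what they logged (the pod names below are assumptions based on the <release>-mongodb-ha-N / <release>-rabbitmq-ha-N naming these subcharts use, and release= is the standard label they apply):

```
# Backend pods that belong to the release
kubectl get pods -l release=existing-cricket | grep -E 'mongodb|rabbitmq'

# Their logs, if they exist but are not Ready
kubectl logs existing-cricket-mongodb-ha-0
kubectl logs existing-cricket-rabbitmq-ha-0
```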

Additionally, is there anything custom you’ve configured in the Helm values.yaml that would override the defaults?
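
If you’re not sure what was overridden, Helm can show the values supplied for the release versus the chart defaults, roughly (Helm 2 syntax):

```
# User-supplied values for the release (empty output means pure defaults)
helm get values existing-cricket

# All computed values, including chart defaults
helm get values existing-cricket --all

# Defaults shipped with the chart itself
helm inspect values stackstorm/stackstorm-ha
```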

P.S. Please enclose your code/errors/output with ``` for better formatting and syntax highlighting.

Hi Armab,
Thanks for your comments. We redeployed with the following command:
“helm install stackstorm/stackstorm-ha --wait --timeout 5000”
The result was the same when I ran “kubectl get pods”. I did check the logs of st2api and found that it could not connect to MongoDB. The MongoDB and RabbitMQ pods had no logs.

I wanted to explore the description of the RabbitMQ and MongoDB pods further and observed that their status was alternating between Pending and CrashLoopBackOff; I have added the output of their respective describe commands below. Please guide us on how to fix the RabbitMQ and MongoDB pods (with replicas configured):

kubectl get pods
NAME                                                READY   STATUS             RESTARTS   AGE
fancy-lizard-etcd-78964d45d7-qn5jd                  1/1     Running            0          147m
fancy-lizard-mongodb-ha-0                           0/1     Pending            0          147m
fancy-lizard-rabbitmq-ha-0                          0/1     Pending            0          147m
fancy-lizard-st2actionrunner-9f7bdb447-mhsv4        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2actionrunner-9f7bdb447-nt8hw        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2actionrunner-9f7bdb447-s6wbx        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2actionrunner-9f7bdb447-th7lw        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2actionrunner-9f7bdb447-zncbm        1/1     Running            30         147m
fancy-lizard-st2api-555cb7db85-d66k6                0/1     CrashLoopBackOff   30         147m
fancy-lizard-st2api-555cb7db85-s4crl                1/1     Running            30         147m
fancy-lizard-st2auth-8dfcbb44-9hktq                 0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2auth-8dfcbb44-db6gb                 0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2client-778b496fcf-pbjgl             1/1     Running            0          147m
fancy-lizard-st2garbagecollector-7d5887665f-q9mm7   1/1     Running            30         147m
fancy-lizard-st2notifier-7648c659d7-cn6fr           0/1     Error              30         147m
fancy-lizard-st2notifier-7648c659d7-dmsgx           1/1     Running            30         147m
fancy-lizard-st2rulesengine-677685f6c6-qvttc        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2rulesengine-677685f6c6-tx844        0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2scheduler-cd8578bb4-47cdf           0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2scheduler-cd8578bb4-qg4wh           1/1     Running            30         147m
fancy-lizard-st2sensorcontainer-76f5f77697-gbf9h    0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2stream-7874dfdf4d-766mk             0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2stream-7874dfdf4d-8knlh             0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2timersengine-66f57f88fc-d8njp       1/1     Running            30         147m
fancy-lizard-st2web-7d6c8c5d7c-7n244                1/1     Running            0          147m
fancy-lizard-st2web-7d6c8c5d7c-wgcpq                1/1     Running            0          147m
fancy-lizard-st2workflowengine-559b946546-k8tx7     0/1     CrashLoopBackOff   29         147m
fancy-lizard-st2workflowengine-559b946546-pzk6j     0/1     CrashLoopBackOff   29         147m
[root@k8s-master ~]# kubectl logs po/fancy-lizard-st2api-555cb7db85-d66k6
2019-01-19 04:55:19,757 DEBUG [-] Using Python: 2.7.12 (/opt/stackstorm/st2/bin/python)
2019-01-19 04:55:19,758 DEBUG [-] Using config files: /etc/st2/st2.conf,/etc/st2/st2.docker.conf,/etc/st2/st2.user.conf
2019-01-19 04:55:19,758 DEBUG [-] Using logging config: /etc/st2/logging.api.conf
2019-01-19 04:55:19,765 INFO [-] Connecting to database "st2" @ "fancy-lizard-mongodb-ha:27017" as user "None".
2019-01-19 04:55:49,783 ERROR [-] Failed to connect to database "st2" @ "fancy-lizard-mongodb-ha:27017" as user "None": No replica set members found yet
2019-01-19 04:55:49,783 ERROR [-] (PID=1) ST2 API quit due to exception.
Traceback (most recent call last):
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2api/cmd/api.py", line 73, in main
    _setup()
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2api/cmd/api.py", line 47, in _setup
    register_signal_handlers=True, register_internal_trigger_types=True)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/service_setup.py", line 130, in setup
    db_setup()
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/database_setup.py", line 57, in db_setup
    connection = db_init.db_setup_with_retry(**db_cfg)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/persistence/db_init.py", line 76, in db_setup_with_retry
    ssl_match_hostname=ssl_match_hostname)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/persistence/db_init.py", line 59, in db_func_with_retry
    return retrying_obj.call(db_func, *args, **kwargs)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/retrying.py", line 206, in call
    return attempt.get(self._wrap_exception)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/db/__init__.py", line 159, in db_setup
    ssl_match_hostname=ssl_match_hostname)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/db/__init__.py", line 141, in _db_connect
    raise e
ServerSelectionTimeoutError: No replica set members found yet
[root@k8s-master ~]# kubectl logs po/fancy-lizard-mongodb-ha-0
[root@k8s-master ~]# kubectl logs po/fancy-lizard-rabbitmq-ha-0 
        [root@k8s-master ~]# kubectl describe po/fancy-lizard-rabbitmq-ha-0
        Name:               fancy-lizard-rabbitmq-ha-0
        Namespace:          default
        Priority:           0
        PriorityClassName:  <none>
        Node:               <none>
        Labels:             app=rabbitmq-ha
                            controller-revision-hash=fancy-lizard-rabbitmq-ha-578c4f5c6
                            release=fancy-lizard
                            statefulset.kubernetes.io/pod-name=fancy-lizard-rabbitmq-ha-0
        Annotations:        checksum/config: f0e6787b570ecb396db600fa333ea81dd16f84792e9fed699e6c9d8529affab7
        Status:             Pending
        IP:                 
        Controlled By:      StatefulSet/fancy-lizard-rabbitmq-ha
        Init Containers:
          copy-rabbitmq-config:
            Image:      busybox
            Port:       <none>
            Host Port:  <none>
            Command:
              sh
              -c
              cp /configmap/* /etc/rabbitmq; rm -f /var/lib/rabbitmq/.erlang.cookie
            Environment:  <none>
            Mounts:
              /configmap from configmap (rw)
              /etc/rabbitmq from config (rw)
              /var/lib/rabbitmq from data (rw)
              /var/run/secrets/kubernetes.io/serviceaccount from fancy-lizard-rabbitmq-ha-token-pw7zg (ro)
        Containers:
          rabbitmq-ha:
            Image:       rabbitmq:3.7-alpine
            Ports:       4369/TCP, 5672/TCP, 15672/TCP
            Host Ports:  0/TCP, 0/TCP, 0/TCP
            Liveness:    exec [rabbitmqctl status] delay=120s timeout=5s period=10s #success=1 #failure=6
            Readiness:   exec [rabbitmqctl status] delay=10s timeout=3s period=5s #success=1 #failure=3 
            Environment:
              MY_POD_NAME:             fancy-lizard-rabbitmq-ha-0 (v1:metadata.name)
              RABBITMQ_USE_LONGNAME:   true
              RABBITMQ_NODENAME:       rabbit@$(MY_POD_NAME).fancy-lizard-rabbitmq-ha-discovery.default.svc.cluster.local
              K8S_HOSTNAME_SUFFIX:     .fancy-lizard-rabbitmq-ha-discovery.default.svc.cluster.local
              K8S_SERVICE_NAME:        fancy-lizard-rabbitmq-ha-discovery
              RABBITMQ_ERLANG_COOKIE:  <set to the key 'rabbitmq-erlang-cookie' in secret 'fancy-lizard-rabbitmq-ha'>  Optional: false
              RABBITMQ_DEFAULT_USER:   admin
              RABBITMQ_DEFAULT_PASS:   <set to the key 'rabbitmq-password' in secret 'fancy-lizard-rabbitmq-ha'>  Optional: false
              RABBITMQ_DEFAULT_VHOST:  /
            Mounts:
              /etc/rabbitmq from config (rw)
              /var/lib/rabbitmq from data (rw)
              /var/run/secrets/kubernetes.io/serviceaccount from fancy-lizard-rabbitmq-ha-token-pw7zg (ro)
        Conditions:
          Type           Status
          PodScheduled   False 
        Volumes:
          data:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  data-fancy-lizard-rabbitmq-ha-0
            ReadOnly:   false
          config:
            Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
            Medium:  
          configmap:
            Type:      ConfigMap (a volume populated by a ConfigMap)
            Name:      fancy-lizard-rabbitmq-ha
            Optional:  false
          fancy-lizard-rabbitmq-ha-token-pw7zg:
            Type:        Secret (a volume populated by a Secret)
            SecretName:  fancy-lizard-rabbitmq-ha-token-pw7zg
            Optional:    false
        QoS Class:       BestEffort
        Node-Selectors:  <none>
        Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                         node.kubernetes.io/unreachable:NoExecute for 300s
        Events:
          Type     Reason            Age                From               Message
          Warning  FailedScheduling  29m (x2 over 29m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

kubectl describe po/fancy-lizard-mongodb-ha-0
Name:               fancy-lizard-mongodb-ha-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=mongodb-ha
                    controller-revision-hash=fancy-lizard-mongodb-ha-864d5f6749
                    release=fancy-lizard
                    statefulset.kubernetes.io/pod-name=fancy-lizard-mongodb-ha-0
Annotations:        prometheus.io/path: /metrics
                    prometheus.io/port: 9216
                    prometheus.io/scrape: true
Status:             Pending
IP:                 
Controlled By:      StatefulSet/fancy-lizard-mongodb-ha
Init Containers:
  copy-config:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
    Args:
      -c
      set -e
      set -x
      
      cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
      
    Environment:  <none>
    Mounts:
      /configdb-readonly from config (rw)
      /data/configdb from configdir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
      /work-dir from workdir (rw)

        install:
        Image:      k8s.gcr.io/mongodb-install:0.6
        Port:       <none>
        Host Port:  <none>
        Args:
          --work-dir=/work-dir
        Environment:  <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
          /work-dir from workdir (rw)
      bootstrap:
        Image:      mongo:3.4
        Port:       <none>
        Host Port:  <none>
        Command:
          /work-dir/peer-finder
        Args:
          -on-start=/init/on-start.sh
          -service=fancy-lizard-mongodb-ha
        Environment:
          POD_NAMESPACE:  default (v1:metadata.namespace)
          REPLICA_SET:    rs0
        Mounts:
          /data/configdb from configdir (rw)
          /data/db from datadir (rw)
          /init from init (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
          /work-dir from workdir (rw)
    Containers:
      mongodb-ha:
        Image:      mongo:3.4
        Port:       27017/TCP
        Host Port:  0/TCP
        Command:
          mongod
        Args:
          --config=/data/configdb/mongod.conf
          --dbpath=/data/db
          --replSet=rs0
          --port=27017
          --bind_ip=0.0.0.0
        Liveness:     exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
        Readiness:    exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
        Environment:  <none>
        Mounts:
          /data/configdb from configdir (rw)
          /data/db from datadir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
          /work-dir from workdir (rw)
    Conditions:
      Type           Status
      PodScheduled   False 
    Volumes:
      datadir:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  datadir-fancy-lizard-mongodb-ha-0
        ReadOnly:   false
      config:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      fancy-lizard-mongodb-ha-mongodb
        Optional:  false
      init:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      fancy-lizard-mongodb-ha-init
        Optional:  false
      workdir:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:  
      configdir:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:  
      default-token-cr842:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-cr842
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:          <none>

Right, so both MongoDB and RabbitMQ clusters can’t start.
Under the hood, the StackStorm HA Helm chart uses charts/stable/mongodb-replicaset at master · helm/charts · GitHub and charts/stable/rabbitmq-ha at master · helm/charts · GitHub as dependencies to configure the MongoDB ReplicaSet and RabbitMQ HA. You can investigate further based on that.
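
If you want to see which knobs those subcharts expose (persistence size, storage class, replica counts and so on), their defaults can be dumped with something like:

```
helm inspect values stable/mongodb-replicaset
helm inspect values stable/rabbitmq-ha
```

Overrides for them then go under the corresponding dependency keys in the stackstorm-ha values.yaml (see the chart README for the exact key names).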

The relevant error message from your output is:

Events:
  Type     Reason            Age                From               Message
  Warning  FailedScheduling  29m (x2 over 29m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

It means the K8s cluster can’t acquire a Persistent Volume for either RabbitMQ or MongoDB. It sounds like your K8s cluster is either not configured properly to work with persistent volumes or doesn’t have enough resources.
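
A quick way to confirm that, roughly:

```
# Claims created by the MongoDB/RabbitMQ StatefulSets will be stuck in Pending
kubectl get pvc
kubectl describe pvc data-fancy-lizard-rabbitmq-ha-0

# There should be at least one StorageClass (ideally marked "(default)")
# and/or pre-created PersistentVolumes for those claims to bind to
kubectl get storageclass
kubectl get pv
```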

What’s the environment you’re running the K8s cluster in? Minikube, bare-metal, a cloud, PaaS, etc.?

Thanks Armab and Eugen C for your comments. Yes, we saw that the RabbitMQ and MongoDB pods are facing the same kind of error. We are running K8s on a CentOS VM. We just tried installing stable/mongodb with Helm and it had the same issue. Do we need to configure a StorageClass YAML file? We are unable to figure out how to ensure persistent storage for these two important pods.


    kubectl describe pods
    Name:               right-wolverine-mongodb-8688b987b6-ljl58
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               <none>
    Labels:             app=mongodb
                        chart=mongodb-5.2.0
                        pod-template-hash=8688b987b6
                        release=right-wolverine
    Annotations:        <none>
    Status:             Pending
    IP:
    Controlled By:      ReplicaSet/right-wolverine-mongodb-8688b987b6
    Containers:
      right-wolverine-mongodb:
        Image:       docker.io/bitnami/mongodb:4.0.3
        Port:        27017/TCP
        Host Port:   0/TCP
        Liveness:    exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
        Readiness:   exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
        Environment:
          MONGODB_ROOT_PASSWORD:         <set to the key 'mongodb-root-password' in secret 'right-wolverine-mongodb'>  Optional: false
          MONGODB_SYSTEM_LOG_VERBOSITY:  0
          MONGODB_DISABLE_SYSTEM_LOG:    no
          MONGODB_ENABLE_IPV6:           yes
        Mounts:
          /bitnami/mongodb from data (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr842 (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      data:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  right-wolverine-mongodb
        ReadOnly:   false
      default-token-cr842:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-cr842
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  18m (x2 over 18m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
[root@k8s-master .helm]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 33G 11G 21G 34% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 18M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 783M 12K 783M 1% /run/user/42
tmpfs 783M 0 783M 0% /run/user/0
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/3fe262cd0521e1af5f917f63636f304be3216c9de3ab7582f5d4bf3fe3a01b20/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/a59e0bf1fd448deee1b7f6b4002596f5371a3b9ec79bb30bffc76284d8700768/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/b5820dbf53a3f1820564ffd25c436cc8ecd5e07188a7d3d4c159512f3c923fa5/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/7f1181f98174967afe3d7fa34f1587946784ada788b53a6b3dfaba7be7c09a40/merged
shm 64M 0 64M 0% /var/lib/docker/containers/efed76df5fc404c6a3663398fa09bee4dff6b8192cdcaa0dd1e5c47ff5a3bde8/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/973eb7173c011d1e82a8d425973c45e2d55d38a291144166337508af4f4d69ed/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/f448bf3bdbefbf5a17ef9958f5ef00daf5b31bbaa895b34f73fd96a380eb7b31/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/348681f907ddd2c69f8bc009de6bb669d61bbb7f6496d97b64324cd6e8bd66ff/mounts/shm
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/c46993a8cb0d4ce00bdb547c7760dbf0755712446841ca7e49730c1eebe5c93f/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/c89007190885aa2ff4218db8d5ca2585bfa0fc0aecabffb5e22bd2207c3734db/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/83d527cad80655e08b2ffe13a3de8cfa37232ddea5896c7d7474cca625268516/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/bf091f40afe1652c6ae6150dc9dcb77999de0abaef3d23b477c3bd9d1b78be73/merged
tmpfs 3.9G 12K 3.9G 1% /var/lib/kubelet/pods/7b1a0c6f-1a8c-11e9-807f-cec43fe4b10d/volumes/kubernetes.io~secret/kube-proxy-token-4wpms
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/9d0f27dc87ce3c256407f312d1404ab9ac3dc476e4ff4c8cf08866aed364d748/merged
shm 64M 0 64M 0% /var/lib/docker/containers/19131287f8879271d414e88bd601ea2b70bf0a95f44853d24edb7274b05c93c3/mounts/shm
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/d223376b4151dc02ed9d14dc4f984b59c642b81d2642e1ec6b0921fbaf8566d6/merged
tmpfs 3.9G 12K 3.9G 1% /var/lib/kubelet/pods/303c22de-1a8d-11e9-807f-cec43fe4b10d/volumes/kubernetes.io~secret/weave-net-token-lbskt
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/8d5db0d46b8a5deddf07821a1a3678e512a4650726d2e3c804f6b7c1d2c1d8b4/merged
shm 64M 0 64M 0% /var/lib/docker/containers/fa73f60056a1cb22f8abf6fb1b717dfd0dbbe0b54a52e5714c933afa673e8f99/mounts/shm
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/063d574fa32b4efdaf655c584b86daff625d49e2a61ea662d17e1bb773e8e3bc/merged
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/d5acf71d6ace5a84e3c900fe0bbc106918e0e7667af511d9ee8aeebc1ded465a/merged
tmpfs 3.9G 12K 3.9G 1% /var/lib/kubelet/pods/2e0dd5dd-1a8d-11e9-807f-cec43fe4b10d/volumes/kubernetes.io~secret/coredns-token-tt4n6
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/1fbd57bd557c9617b53d59d9be04156db06e2a83ee88c14a375b521afc39ca86/merged
shm 64M 0 64M 0% /var/lib/docker/containers/0628ae3b6197afd472fd425e5b7b5a14f4b43b4391f5508eb2bee330682596ce/mounts/shm
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/3dd037c9f58a5a21440bfbd97e5f675c50cbf2b0eed16194da4de6e9f686458c/merged
tmpfs 3.9G 12K 3.9G 1% /var/lib/kubelet/pods/2e047b57-1a8d-11e9-807f-cec43fe4b10d/volumes/kubernetes.io~secret/coredns-token-tt4n6
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/97ffb5419af534dfa8d3df1f4106077ee88f39815a849e600be093e57181466b/merged
shm 64M 0 64M 0% /var/lib/docker/containers/65670963b6182567aa9a08fabea3b6a1e7f594e069dee7072e3118c36862881e/mounts/shm
overlay 33G 11G 21G 34% /var/lib/docker/overlay2/10537e0b45d9cd5adb349ff637975e3c7f948a4fd0a24c85e892634dbde59b3f/merged
[root@k8s-master .helm]#
[root@k8s-master .helm]# kubectl get storageclass
No resources found.
[root@k8s-master .helm]# kubectl top pod
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
[root@k8s-master .helm]# kubectl top pod --all-namespaces --containers=true
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
[root@k8s-master .helm]#

How did you install/deploy K8s in a CentOS VM? Which K8s version is that? How many nodes/masters does your cluster utilize?

Here are some examples from my local minikube just to give an idea:

$ kubectl get storageclass --all-namespaces
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   115d
$ kubectl get persistentvolumeclaims
NAME                                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-nobby-nightingale-rabbitmq-ha-0     Bound     pvc-8b3e25c4-c7fe-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-nobby-nightingale-rabbitmq-ha-1     Bound     pvc-ff0c7a07-c7fe-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-nobby-nightingale-rabbitmq-ha-2     Bound     pvc-2f72b329-c7ff-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-quieting-koala-rabbitmq-ha-0        Bound     pvc-e61deddc-c811-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-quieting-koala-rabbitmq-ha-1        Bound     pvc-10b23131-c812-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-quieting-koala-rabbitmq-ha-2        Bound     pvc-3dd9327b-c812-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-zealous-vulture-rabbitmq-ha-0       Bound     pvc-0eaaba57-c807-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-zealous-vulture-rabbitmq-ha-1       Bound     pvc-3ace30ae-c807-11e8-a489-08002707f083   8Gi        RWO            standard       115d
data-zealous-vulture-rabbitmq-ha-2       Bound     pvc-6223a898-c807-11e8-a489-08002707f083   8Gi        RWO            standard       115d
datadir-nobby-nightingale-mongodb-ha-0   Bound     pvc-8b362625-c7fe-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-nobby-nightingale-mongodb-ha-1   Bound     pvc-2fea465d-c7ff-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-nobby-nightingale-mongodb-ha-2   Bound     pvc-46248e25-c7ff-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-quieting-koala-mongodb-ha-0      Bound     pvc-e6191f58-c811-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-quieting-koala-mongodb-ha-1      Bound     pvc-121f1b38-c812-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-quieting-koala-mongodb-ha-2      Bound     pvc-475432ad-c812-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-zealous-vulture-mongodb-ha-0     Bound     pvc-0ea4627a-c807-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-zealous-vulture-mongodb-ha-1     Bound     pvc-5fe06fea-c807-11e8-a489-08002707f083   10Gi       RWO            standard       115d
datadir-zealous-vulture-mongodb-ha-2     Bound     pvc-72878283-c807-11e8-a489-08002707f083   10Gi       RWO            standard       115d
$ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                            STORAGECLASS   REASON    AGE
pvc-0ea4627a-c807-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-zealous-vulture-mongodb-ha-0     standard                 115d
pvc-0eaaba57-c807-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-zealous-vulture-rabbitmq-ha-0       standard                 115d
pvc-10b23131-c812-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-quieting-koala-rabbitmq-ha-1        standard                 115d
pvc-121f1b38-c812-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-quieting-koala-mongodb-ha-1      standard                 115d
pvc-2f72b329-c7ff-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-nobby-nightingale-rabbitmq-ha-2     standard                 115d
pvc-2fea465d-c7ff-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-nobby-nightingale-mongodb-ha-1   standard                 115d
pvc-3ace30ae-c807-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-zealous-vulture-rabbitmq-ha-1       standard                 115d
pvc-3dd9327b-c812-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-quieting-koala-rabbitmq-ha-2        standard                 115d
pvc-46248e25-c7ff-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-nobby-nightingale-mongodb-ha-2   standard                 115d
pvc-475432ad-c812-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-quieting-koala-mongodb-ha-2      standard                 115d
pvc-5fe06fea-c807-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-zealous-vulture-mongodb-ha-1     standard                 115d
pvc-6223a898-c807-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-zealous-vulture-rabbitmq-ha-2       standard                 115d
pvc-72878283-c807-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-zealous-vulture-mongodb-ha-2     standard                 115d
pvc-8b362625-c7fe-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-nobby-nightingale-mongodb-ha-0   standard                 115d
pvc-8b3e25c4-c7fe-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-nobby-nightingale-rabbitmq-ha-0     standard                 115d
pvc-e6191f58-c811-11e8-a489-08002707f083   10Gi       RWO            Delete           Bound     default/datadir-quieting-koala-mongodb-ha-0      standard                 115d
pvc-e61deddc-c811-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-quieting-koala-rabbitmq-ha-0        standard                 115d
pvc-ff0c7a07-c7fe-11e8-a489-08002707f083   8Gi        RWO            Delete           Bound     default/data-nobby-nightingale-rabbitmq-ha-1     standard                 115d

It looks like your StorageClass is missing (https://kubernetes.io/docs/concepts/storage/storage-classes/), which is a sign that the K8s cluster is not fully configured.

I’d suggest referring to your K8s cluster installation/configuration method and searching for a solution there, as it’s something unrelated to StackStorm itself. Ex: [1] [2] [3].
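
If it helps, on a bare kubeadm-style cluster with no storage provisioner, one common stop-gap (just a sketch, not the only option; the paths, sizes and node name below are assumptions to adjust for your cluster) is a StorageClass with no provisioner plus hand-made local PersistentVolumes big enough for the MongoDB and RabbitMQ claims, marked as the default class so the charts’ PVCs pick it up:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    # make it the default so PVCs without an explicit storageClassName bind here
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-local-pv-0          # create one PV per pending claim
spec:
  capacity:
    storage: 10Gi                   # >= the size requested by the claim
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mongodb-0      # directory must already exist on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-worker-1      # node that holds the directory
```

Alternatively, installing any dynamic provisioner (NFS, Ceph/Rook, a hostPath/local-path provisioner, a cloud volume plugin, etc.) and exposing it as the default StorageClass achieves the same thing with less manual bookkeeping.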