StackStorm and napalm-logs integration not working

I'm running StackStorm 3.4.1 on Python 3.6.8 on CentOS 7 (kernel 3.10.0-1160.2.2.el7) and am trying to get syslog messages to trigger a rule. Currently I have napalm-logs set up to ingest syslogs and publish them to ZeroMQ. This part works correctly, verified with a ZMQ client.

I have now connected a ZMQ client via the napalm-logs pack. It appears to ingest the syslog message, but processing fails due to validation errors. Here is the st2rulesengine.log output:

2021-03-29 13:25:29,407 139692157875616 ERROR consumers [-] StagedQueueConsumer failed to process message: {'trigger': 'napalm_logs.log', 'payload': {b'host': b'vmx01', b'yang_message': {b'bgp': {b'neighbors': {b'neighbor': {b'192.168.140.254': {b'state': {b'peer_as': b'4230'}, b'afi_safis': {b'afi_safi': {b'inet4': {b'state': {b'prefixes': {b'received': 141}}, b'ipv4_unicast': {b'prefix_limit': {b'state': {b'max_prefixes': 140}}}}}}}}}}}, b'message_details': {b'processId': b'2902', b'facility': 3, b'hostPrefix': None, b'pri': b'28', b'processName': b'rpd', b'host': b'vmx01', b'tag': b'BGP_PREFIX_THRESH_EXCEEDED', b'time': b'14:03:12', b'date': b'Jun 30', b'message': b'192.168.140.254 (External AS 4230): Configured maximum prefix-limit threshold(140) exceeded for inet4-unicast nlri: 141 (instance master)', b'severity': 4}, b'timestamp': 1593525792, b'error': b'BGP_PREFIX_THRESH_EXCEEDED', b'ip': b'192.168.1.183', b'facility': 3, b'os': b'junos', b'yang_model': b'openconfig-bgp', b'severity': 4}, 'trace_context': None}
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/transport/consumers.py", line 86, in process
response = self._handler.pre_ack_process(body)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2reactor/rules/worker.py", line 57, in pre_ack_process
raise_on_no_trigger=True)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2reactor/container/utils.py", line 55, in create_trigger_instance
return TriggerInstance.add_or_update(trigger_instance)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/persistence/base.py", line 174, in add_or_update
model_object = cls._get_impl().add_or_update(model_object, validate=True)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/models/db/__init__.py", line 466, in add_or_update
instance.save(validate=validate)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/mongoengine/document.py", line 369, in save
self.validate(clean=clean)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/mongoengine/base/document.py", line 413, in validate
raise ValidationError(message, errors=errors)
mongoengine.errors.ValidationError: ValidationError (TriggerInstanceDB:None) (Invalid dictionary key - documents must have only string keys: ['payload'])

Any ideas why this data is failing MongoDB validation?

The error message looks pretty clear to me. Something is trying to store a dictionary that uses bytes as dictionary keys, and MongoDB can only handle dictionaries whose keys are strings.

Either your payload source is sending those keys as bytes, or your ZMQ client is decoding them as bytes instead of strings.

Have you verified either or both of those possibilities?
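As a sketch of the workaround (a hypothetical helper, not something provided by the napalm-logs pack), the payload could be normalized recursively so every key and value is a str before it reaches MongoDB:

```python
def decode_bytes(obj):
    # Recursively turn bytes into str so MongoDB sees only string keys.
    if isinstance(obj, bytes):
        return obj.decode("utf-8")
    if isinstance(obj, dict):
        return {decode_bytes(k): decode_bytes(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [decode_bytes(item) for item in obj]
    return obj

payload = {b"host": b"vmx01", b"severity": 4}
print(decode_bytes(payload))  # {'host': 'vmx01', 'severity': 4}
```

A sensor could run the dispatch payload through a helper like this, though fixing the decoding at the source is cleaner.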

Thank you! Thanks for pointing that out… the dictionary keys and values are indeed all byte strings. That's strange. The client I'm using is the napalm-logs pack from here: GitHub - lampwins/stackstorm-napalm-logs. It registers a sensor and connects to the napalm-logs ZMQ server. I'm unsure why the dict is in byte strings. Newbie here, so please excuse me. The syslog data source is a Python script using sockets to send the data to the napalm-logs syslog port. I've also run a virtual MX router configured to send syslog to napalm-logs and got the same result (byte strings). Thoughts?
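One common way this happens (an assumption here, since it depends on how the pack's client deserializes the ZMQ messages): napalm-logs publishes msgpack-serialized messages, and on Python 3 older msgpack-python defaulted to returning bytes for string fields unless `raw=False` is passed:

```python
import msgpack

msg = {"host": "vmx01", "error": "BGP_PREFIX_THRESH_EXCEEDED"}
packed = msgpack.packb(msg)

# raw=True mimics the old Python 3 behavior: strings come back as bytes,
# which MongoDB then rejects as dictionary keys.
print(msgpack.unpackb(packed, raw=True))

# raw=False decodes msgpack strings to str, which MongoDB accepts as keys.
print(msgpack.unpackb(packed, raw=False))
```

If the pack's sensor unpacks without `raw=False` (or was written against Python 2, where bytes and str were the same type), every key in the payload arrives as bytes.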

The only help I can really give you is to configure everything to use strings instead of bytes.

What version of ST2 are you using?

st2 --version

This turned out to be a Python version issue. Resolved.

Can you explain a bit more?