DCP Observability
Logging
Logs are currently captured from the pods in the EKS cluster by the core-paas-sumologic solution, which is managed by the Core PaaS Team, and are shipped to a MonC instance of Splunk.
Splunk queries should be structured as follows:
index=mulesoft
kubernetes.namespace_name=deploy    (or whichever namespace you're debugging)
kenv=kdeploy.dev    (or whichever environment you're debugging)
For further information on Splunk search syntax, see this doc.
Metrics
RDS Metrics
The refinery controller gathers Postgres metrics and forwards them to CloudWatch. For refinery to work, a monc_ro_user service account needs to be created on each managed RDS instance. To create the monc_ro_user, do the following:
- Exec into a container in a cluster that has access to your RDS instance.
kubectl exec -it $POD -- /bin/bash
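If you don't already know which pod to use, a minimal sketch for locating one and confirming it has the psql client (the deploy namespace below is only an example; use whichever namespace applies in your cluster):
# List pods in the target namespace and pick one with network access to the RDS instance.
kubectl get pods -n deploy
# Confirm the chosen pod has the psql client installed.
kubectl exec -it $POD -n deploy -- psql --version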
- Connect to the RDS instance. In this example we're using dcp_database; update the FQDN as needed.
psql -h dcp-database.cfunxlmfcmct.us-west-2.rds.amazonaws.com -p 5432 -U dcp_user -d dcp_database
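psql will prompt for the dcp_user password interactively. To skip the prompt you can use the standard PGPASSWORD environment variable; the placeholder below is illustrative, as this doc does not specify where the dcp_user password is stored:
PGPASSWORD='<dcp_user password>' psql -h dcp-database.cfunxlmfcmct.us-west-2.rds.amazonaws.com -p 5432 -U dcp_user -d dcp_database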
- Create the monc_ro_user. See the password in kilonova-envs-config.
create user monc_ro_user login password '$RO_PASS' valid until 'infinity'; grant select on pg_stat_database to monc_ro_user;
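As an optional sanity check, connect as the new user and read a few rows from pg_stat_database. This sketch assumes $RO_PASS holds the monc_ro_user password from kilonova-envs-config:
# Should return one row per database if the grant worked.
PGPASSWORD="$RO_PASS" psql -h dcp-database.cfunxlmfcmct.us-west-2.rds.amazonaws.com -p 5432 -U monc_ro_user -d dcp_database -c "select datname, numbackends, xact_commit from pg_stat_database limit 5;"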
- Validate metrics and the observability service. Redeploy the core-paas-observability-service via the core-paas-deploy Jenkins job.
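To spot-check that metrics are actually arriving, you can list them with the AWS CLI. The namespace below is a placeholder assumption; check the refinery documentation linked below for the namespace refinery actually publishes to:
# Replace the --namespace value with the namespace refinery publishes to.
aws cloudwatch list-metrics --namespace "Refinery/RDS" --region us-west-2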
Links:
- Core PaaS Observability: https://github.com/mulesoft/core-paas-observability-service
- Refinery Documentation: https://salesforce.quip.com/9seNA5ce5a7Y
Last Updated: 2024-07-01T19:32:00+0000